Reliability practices have evolved significantly under digital transformation initiatives. Smart sensors applied to the thousands of assets in plants and manufacturing facilities are collecting unprecedented amounts of data. But collecting data doesn’t automatically translate to getting value out of it.
Though analytics adoption is growing rapidly, until the information from disparate analytical applications can be consolidated into a functioning advisory system, it’s just more noise in the control room.
To drive operational excellence via reliability, organizations need more than the ability to spot all the problems. Equally important is ensuring they’re putting resources against the right problem. The best yardstick by which to prioritize is the financial value, or risk, associated with each.
Model Vulnerabilities Continually at the Systems Level, Including Their Likely Financial Impact
In many plants, problems all seem to arise at the same time: one signal or alarm followed by another. Each may be a genuinely independent fault or part of an alarm cascade (domino effect). Without an understanding of their financial impact at the systems level, it's nearly impossible to be sure you're tackling the highest-value problems first.
Managers and executives need better financial information to justify decision-making, a step that can introduce significant delays and hold up course corrections that would prevent asset failures or degradation. The same information is needed by engineers and operators, who are often on the receiving end of the noise, facing a stack of seemingly urgent issues.
Identify the (Obscured) Downstream Implications
Most disruption events are far-reaching, beyond the immediate asset or short term. Visibility into those system-wide implications is key to making financially optimal decisions.
A perfect example is a refinery we work with that had a failed pump. After running Monte Carlo simulations to explore alternatives, they elected to replace the single pump with a dual-pump configuration. While this doubled the CAPEX, the simulations showed that every failure of the single pump forced a cold start.
With dual pumps, a failure only diminished the production rate; they avoided the cold start and the associated increase in emissions and energy consumption. The company had been paying emissions penalties, and given how many times the failure occurred per year, the cost of the second pump was easily justified because cold starts could be avoided entirely.
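The pump trade-off above can be sketched as a simple Monte Carlo comparison. All figures here (failure rate, cold-start cost, reduced-rate cost, annualized CAPEX for the second pump) are hypothetical placeholders, not the refinery's actual numbers:

```python
import random

def expected_annual_cost(dual_pumps, n_trials=10_000,
                         failures_per_year=4,       # hypothetical pump failure rate
                         cold_start_cost=250_000,   # lost production + energy + emissions penalties
                         reduced_rate_cost=40_000,  # margin lost while running on one pump
                         extra_pump_capex=150_000): # annualized cost of the second pump
    """Estimate the expected annual disruption cost for one configuration."""
    total = 0.0
    for _ in range(n_trials):
        # Draw the number of failures this year (one Bernoulli trial per month)
        failures = sum(1 for _ in range(12)
                       if random.random() < failures_per_year / 12)
        if dual_pumps:
            # A failure only reduces throughput; no cold start occurs
            total += failures * reduced_rate_cost + extra_pump_capex
        else:
            # Every failure of the single pump forces a cold start
            total += failures * cold_start_cost
    return total / n_trials

random.seed(42)
single = expected_annual_cost(dual_pumps=False)
dual = expected_annual_cost(dual_pumps=True)
print(f"single-pump expected annual cost: ${single:,.0f}")
print(f"dual-pump expected annual cost:   ${dual:,.0f}")
```

Even with the extra CAPEX folded in, the dual-pump configuration wins whenever the avoided cold-start costs exceed the annualized price of the second pump, which is exactly the comparison the refinery's simulations made.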
Justification needs to be based on the total impact to the organization — over a significant operating period — not just maintenance costs of a single failure. Without the capability to look at the entire system, it’s easy to miss the significant downstream implications and make short-sighted decisions.
Align the Spend With the Problems
In reliability, probability times impact should drive priorities: the likelihood of a failure multiplied by its financial impact on the business overall. With that, decision-makers can practice "lean management" — avoiding waste and aligning resources and spend to priority problems, based on a financial hierarchy of issues and opportunities to improve operations and, as a result, production.
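Building that financial hierarchy amounts to ranking open issues by expected loss. A minimal sketch, with entirely hypothetical issues and figures:

```python
# Rank open issues by expected financial loss = failure probability x impact.
# All names and figures below are hypothetical, for illustration only.
issues = [
    {"name": "compressor seal leak",   "prob": 0.30, "impact": 500_000},
    {"name": "pump bearing wear",      "prob": 0.60, "impact": 120_000},
    {"name": "heat exchanger fouling", "prob": 0.10, "impact": 900_000},
]

for issue in issues:
    issue["expected_loss"] = issue["prob"] * issue["impact"]

# Highest expected loss first: this is the priority order for spend
ranked = sorted(issues, key=lambda i: i["expected_loss"], reverse=True)
for i in ranked:
    print(f'{i["name"]:25s} expected loss ${i["expected_loss"]:>10,.0f}')
```

Note how the ranking differs from sorting by either factor alone: the highest-impact issue (fouling) and the most likely one (bearing wear) both fall below the issue whose combination of the two is worst.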
For continuous improvement to work, the culture must be financially minded. It needs to use asset data and a system-wide view to challenge the way Operations uses assets and the way Maintenance spends money. Only then can you ensure every decision helps effect change and improvement in system-wide processes and contributes toward the ultimate goal: operational excellence.
To learn more about how companies are leveraging predictive analytics to improve their decision-making, take a look at my recent white paper, Low-Touch Machine Learning is Fulfilling the Promise of Asset Performance Management.