Data analytics offers a lot of short-term promise in the downstream arena.
CERAWeek 2017, held in March in Houston, was a signature event that brought together energy industry leaders — from ExxonMobil’s CEO to the Russian and Saudi oil ministers to market forecast gurus — all in one place in a swirling maelstrom of predictions, pronouncements and thought-provoking discussion. One surprising headline emerged from statements by several energy CEOs: that stepwise advances in operational excellence would be achieved through the adoption of “data analytics.” Figures as high as 30 percent improvement were thrown about.
Data and Reliability
Refineries generate a wealth of data. Data on equipment. On maintenance frequency. On unit performance. On process parameters. On costs. I spoke with one senior manager at a large refinery operating company over the summer who told me, “We are now swimming in data about our units. But we are struggling to know what to use it for.”
Refiners talk about reliability, and improving reliability. How do we define reliability, and how do we measure it in terms that are meaningful to a refinery? Especially as refiners look to shift product mixes by closely integrating petrochemicals, how do you do that without taking a reliability hit?
Ultimately, there are two meaningful measures of improvement in a refinery: financial returns and safe operations. Both of these metrics are dependent on the entire system of the refinery. Measuring the reliability of a component, an item of equipment, a process unit, is good and meaningful, but it translates into the overall financial improvement of a refinery only when the entire system of the refinery is accounted for.
To manage the refinery and its reliability, we break down reliability into key performance indicators (KPIs) that enable us to understand and improve components. What is the reliability of a particular pump? How often does it require maintenance? Under what conditions does its performance degrade or does it break down? What is its lifespan? What is the overall availability of the refinery, percentage-wise?
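The component-level KPIs above reduce to a few standard quantities. A minimal sketch, using entirely hypothetical pump numbers, of the classic steady-state availability ratio built from mean time between failures (MTBF) and mean time to repair (MTTR):

```python
# Sketch of a component-level reliability KPI: steady-state availability.
# All figures are hypothetical, chosen only for illustration.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = uptime / (uptime + repair time)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical pump: fails on average every 4,000 operating hours,
# and each repair takes 48 hours.
pump_availability = availability(4000, 48)
print(f"Pump availability: {pump_availability:.3%}")
```

The same ratio, rolled up across equipment and weighted by how each item constrains the process, is what feeds the plant-wide availability percentage.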
Higher total plant uptime flows from equipment and system reliability: reducing the need for maintenance downtime, improving maintenance effectiveness, planning and performance, optimizing the maintenance and operating plans, and enabling faster response to disruptions. This can begin during design, if the operator sees the value in considering maintainability and reliability.
Higher production capacity, or production yield, from the asset can come from better operating strategies, reconfiguration of the process, process technology breakthroughs, debottlenecking, plant expansion and better control strategies.
Safer operation — including design-for-safety, hazard and risk analysis, and asset integrity — results in fewer incidents, lower risk, regulatory compliance and a better societal “license to operate.”
A New System-Wide Approach to Analyzing Reliability
The responsibility for different asset performance metrics is often split among different business executives, with no one individual accountable for the optimization of an asset. The refiner needs a way to look at the entire asset, considering uptime, production yields and safety, and cutting across the multiple metrics.
Consultants have devised an approach to looking at reliability called “RAM” (reliability, availability and maintainability). These methods take an item-by-item approach to reliability. What is the inherent reliability of each element of a refinery? What will cause that item to fail? And how can the system be designed and operated to minimize the risk and impact of that failure?
The problem with that approach is that it is being applied to a complex chemical and physical system, the refinery of 2018, in which the equipment and processes are integrally related. A process-wide modeling approach is needed to understand which elements of the system risk the refinery’s uptime the most. And it all needs to be related back to cost. What is the cost of reducing risk in each aspect of the refinery, and how do those risks relate to each other? And therefore, what is the optimal way to spend available capital to minimize production and financial risk?
In other words, by using a process system-based reliability model, the best capital decisions can be made quickly, and executives, financiers and insurers can understand the risk quantitatively. And by considering risk probability together with the systems view, the universe of outcomes and their likelihood are all considered.
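To make the idea concrete, here is a minimal Monte Carlo sketch of a system-wide uptime model — not Aspen Fidelis, just an illustration of combining risk probability with a systems view. Three hypothetical units operate in series (any one down takes the plant down); each simulated year samples failure times and repair outages, and the distribution across thousands of runs gives the "universe of outcomes and their likelihood":

```python
import random

HOURS_PER_YEAR = 8760

def simulate_year(units, rng):
    """Fraction of the year a series system is up. `units` is a list of
    (MTBF hours, MTTR hours) pairs; outages are assumed not to overlap,
    which is a crude but conservative simplification."""
    downtime = 0.0
    for mtbf, mttr in units:
        t = rng.expovariate(1.0 / mtbf)       # first failure time
        while t < HOURS_PER_YEAR:
            downtime += mttr                  # series system: unit down -> plant down
            t += mttr + rng.expovariate(1.0 / mtbf)
    return max(0.0, 1.0 - downtime / HOURS_PER_YEAR)

def run(units, trials=10_000, seed=42):
    rng = random.Random(seed)
    uptimes = sorted(simulate_year(units, rng) for _ in range(trials))
    return {
        "mean": sum(uptimes) / trials,
        "p10": uptimes[trials // 10],         # a pessimistic year
        "p90": uptimes[9 * trials // 10],     # an optimistic year
    }

# Hypothetical (MTBF, MTTR) hours for three process units
units = [(6000, 72), (3500, 120), (8000, 48)]
print(run(units))
```

Multiplying each outcome by the margin per barrel-hour lost turns this uptime distribution directly into the quantitative financial risk picture that executives, financiers and insurers can act on.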
This approach is very practical, and not difficult. The Aspen Fidelis Reliability™ system, which has been applied successfully and effectively on some of the largest refineries and petrochemical complexes, represents the convergence of process modeling, probability analysis and asset management data.
How Does This Take Us Back to Data?
Equipment and process units are increasingly instrumented. Cheaper sensors and the desire of equipment operators for more monitoring data are fueling an explosion in data, and much of this data relates directly back to equipment performance and reliability. It provides the fuel to run the system-wide reliability model, which in turn identifies the low-hanging fruit for margin improvement.
To date, Aspen Fidelis Reliability has helped refiners to:
Streamline turnaround events to maintain those items with the highest uptime risk, and space out turnaround events.
Make better CAPEX decisions, to allocate redundant systems and spares to where they will have the biggest financial impact.
Stage the startup of large facilities to reduce the risk of behind-schedule startup, thereby reducing revenue risk and cash flow risk.
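The CAPEX decision in particular reduces to a comparison any reliability engineer can sketch. Using hypothetical figures throughout, the example below weighs the cost of an installed spare pump against the margin recovered by parallel redundancy (where the system fails only if both units fail):

```python
# Hedged sketch of a spare-allocation CAPEX decision. All costs,
# margins and failure rates are hypothetical, for illustration only.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

def lost_margin(avail: float, margin_per_hour: float, hours: int = 8760) -> float:
    """Annual margin lost to unavailability."""
    return (1.0 - avail) * hours * margin_per_hour

a_single = availability(3000, 96)
a_redundant = 1.0 - (1.0 - a_single) ** 2   # independent parallel spare

margin_per_hour = 25_000      # hypothetical margin at risk per down hour
spare_capex = 1_200_000       # hypothetical installed cost of the spare

savings = (lost_margin(a_single, margin_per_hour)
           - lost_margin(a_redundant, margin_per_hour))
print(f"Annual downtime savings: ${savings:,.0f} vs. CAPEX ${spare_capex:,}")
```

Ranking every candidate spare or redundant system by this savings-to-CAPEX ratio, within the full system model rather than item by item, is what directs capital to where it has the biggest financial impact.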
Left on the Table
Refineries today still leave over $10 billion of profit opportunity on the table in this area, according to a recent analysis by AspenTech.
Energy and chemical companies collectively have many trillions of dollars of capital investment tied up in their process plants. In the refining industry segment alone, Oil and Gas Journal reported in January 2015 the worldwide inventory of assets to include 643 refinery sites, able to process an estimated 88 million barrels per calendar day (b/cd). ExxonMobil alone, the largest refining asset holder, controls a capacity of close to 5.5 million b/cd.
The chemical industry asset capacity is many multiples of that. These plants range in age from the world’s oldest chemical plant — the Hoechst (now Celanese) site founded in 1863 near Frankfurt, Germany — to new facilities currently coming online. Many of these assets are operating well beyond their original design lifetime, and their operators expect them to continue to operate and improve.
Data analytics, applied to refinery-wide reliability and uptime, will fuel that improvement over the next several years.