Machine Learning and Artificial Intelligence Complement Condition Monitoring

Despite a recent industry emphasis on its effective use, data remains a largely untapped asset for operators around the world. One application with significant potential is the remote condition monitoring (CM) of production machinery and the use of CM to support a more proactive, even predictive, condition-based-maintenance (CBM) model. The complete paper discusses a remote-CM-enabled CBM model that uses artificial intelligence (AI) and machine learning (ML) on a North Sea platform operating in the Ivar Aasen field offshore Norway.

The proposed approach involves the creation of high-quality data by time-stamping data events. This data-sampling technique is driven by anomalies in equipment-performance-monitoring data. These data are sourced through the CM system’s linkages with the electrical, instrumentation, control, and telecom (EICT) systems built into the platform’s topsides structure. When events are time stamped in the CM data streams, control-room staff can identify the sequences of events more precisely and diagnose issues faster and more accurately.

Notably, in the pre-front-end engineering and design (FEED) stage, the platform’s owner/operator decided to sole-source the EICT. The EICT partner then provided the means to employ a digital-twin concept, essentially a virtual proxy of the platform’s physical assets. By employing digital twins across the remaining FEED, construction, commissioning, and early-operation stages that preceded first oil (December 2016), the operator accelerated time to first oil and, consequently, improved cash flow by $90 million.

Using the Digital-Twin Concept To Achieve First Oil Faster

By combining advanced software tools with traditional computer-aided design and manufacturing applications, the EICT supplier’s engineering groups linked the project’s FEED and commissioning stages more closely, performing some of the latter virtually using the digital twin of a physical asset such as rotating equipment. This allowed them to address key issues months sooner than normal. Beyond saving time and enhancing project quality, the digital-twin approach also provided considerable cost savings through enhanced coordination of system interfaces.

The platform’s fully integrated EICT package was built using a proven process-control system and programming-code libraries that were refined and drawn from years of use in oil and gas applications. Included were electrical distribution systems, field instrumentation, and telecommunications. With the digital-twin approach, the operator attained platform stability and first oil sooner than it would have with a traditional multivendor method. Weeks, if not months, were saved.

The sole-source EICT approach made training much simpler as well. Operations and maintenance personnel learned the different interfaces from one vendor only. Given this platform’s complexities—globally sourced design, engineering, construction, deployment, and commissioning—actual time savings are hard to measure. Conservatively, at least 30 days were saved compared with a traditional project approach that neither sole-sourced the EICT nor used the digital-twin concept.

AI- and ML-Enabled CM and CBM

In January 2019, the platform’s onboard control room was deactivated and the operator’s onshore control room assumed exclusive responsibility for full-time operation of the offshore assets. Technical experts use visual dashboards on desktop computers that graphically display detailed information about CM parameters. Depending on their role-based access privileges, various company stakeholders can view these displays and provide engineering support and oversight to the CBM model.

Analytics software platforms, powered by AI/ML pattern-recognition capabilities, are used to uncover deviations. For example, Fig. 1 shows a data graph illustrating the functionality of a deep LSTM (long short-term memory) autoencoder. This network has been trained to detect anomalies in a machine monitored by 29 analogue sensors. Because the model is trained on healthy data only, the autoencoder can differentiate between anomalies and healthy signals.

Fig. 1—Data graph from an AI-enabled LSTM autoencoder algorithm used for CM/CBM.


One of the main benefits of using an LSTM autoencoder over other anomaly-detection algorithms is that it not only detects anomalies but also specifies the location of the anomaly and indicates what the healthy state should be. An explanation of the graphic subplots, from top to bottom, follows (a code sketch of the approach appears after the list):

  • Subplot 1: Reference and anomaly signal. The reference signal is the original, healthy signal. The anomaly signal is the same signal with an artificial anomaly injected to validate the AI/ML model; this is the signal fed into the model.

  • Subplot 2: Model input and output. This subplot shows how the model affects the signal. Note that the output of the model is very close to the original healthy version of the signal. In other words, the model is only able to reproduce the healthy representation of the signal.

  • Subplot 3: Reference and output. This plot shows how accurately the model can reproduce the healthy state of the signal. The reference plot is the original, healthy version of the signal. Note that this signal is not fed into the model in this case. The predicted plot is the output of the model when it is fed the anomaly signal. Note that the difference between these two is small.

  • Subplot 4: Difference/delta. The difference between the input and output of this signal (and the other 28 signals) can be used to calculate a healthy key performance indicator (KPI) profile for a machine’s parameters. An increase in this value will indicate a faulty state of the given signal and can be used to trigger an alert to human operators for further investigation.
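
The following is a minimal sketch of the approach described above, using TensorFlow/Keras as an assumed framework. The window length, layer sizes, training settings, and placeholder data are illustrative only; the paper does not publish its actual architecture or hyperparameters.

# Minimal sketch of an LSTM-autoencoder anomaly detector for multichannel
# condition-monitoring data (assumes TensorFlow/Keras is installed).
# Window length, layer sizes, and thresholds are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

N_SENSORS = 29     # analogue channels mentioned in the paper
WINDOW = 64        # assumed sliding-window length (samples)

def build_autoencoder():
    """Encoder compresses each window to a bottleneck; decoder reconstructs it."""
    inputs = layers.Input(shape=(WINDOW, N_SENSORS))
    encoded = layers.LSTM(32, return_sequences=False)(inputs)          # bottleneck
    repeated = layers.RepeatVector(WINDOW)(encoded)
    decoded = layers.LSTM(32, return_sequences=True)(repeated)
    outputs = layers.TimeDistributed(layers.Dense(N_SENSORS))(decoded)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Train on healthy data only, so the model learns to reproduce
# only the healthy representation of each signal.
healthy_windows = np.random.rand(200, WINDOW, N_SENSORS).astype("float32")  # placeholder data
model = build_autoencoder()
model.fit(healthy_windows, healthy_windows, epochs=3, batch_size=32, verbose=0)

# At run time, the per-channel reconstruction error (the "difference/delta"
# of Subplot 4) serves as the health KPI: a large delta flags the faulty channel.
live_windows = np.random.rand(8, WINDOW, N_SENSORS).astype("float32")       # placeholder data
reconstruction = model.predict(live_windows, verbose=0)
delta = np.mean(np.abs(live_windows - reconstruction), axis=1)  # shape: (batch, N_SENSORS)
alerts = delta > 3 * delta.std(axis=0)                          # illustrative alert threshold

Because the network has never seen anomalous behavior, any input it cannot reconstruct accurately stands out, which is what allows the delta in Subplot 4 to act as a per-signal health KPI.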

Data-Collection Technology and Time Stamping

To gain greater, more-precise insights into performance issues, the platform employs an efficient on-change, event-driven data-collection technology that provides time stamping of performance-related events. This is an exclusive feature of the EICT contractor’s monitoring software and is used to support the platform’s CM-based CBM model. As such, the feature ensures data quality and helps control-room staff in their analyses of abnormalities to establish correlations spanning different equipment groups.

Time stamping is a sophisticated data-acquisition protocol that contextualizes sensor data in controller software. Rather than polling a sensor’s data, the equipment’s programmable logic controller (PLC) pushes the data out through a gateway to the data-storage database—on change only. Because the PLC features a deadband, it can assess whether any data changes are substantial enough to be time stamped.
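
As a hedged illustration of this on-change, deadband-filtered mechanism (not the vendor’s actual protocol or interfaces), the following sketch shows how a source-side publisher might stamp and forward only significant changes:

# Illustrative sketch of on-change, deadband-filtered data collection with
# source-side time stamping. Class names and the deadband value are
# assumptions for illustration, not the EICT contractor's software.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TimestampedEvent:
    tag: str          # sensor/instrument identifier
    value: float
    stamp: datetime   # applied as close to the data source as possible

class OnChangePublisher:
    def __init__(self, tag: str, deadband: float):
        self.tag = tag
        self.deadband = deadband      # minimum change worth reporting
        self.last_reported = None

    def sample(self, value: float):
        """Return a time-stamped event only when the change exceeds the deadband."""
        if self.last_reported is None or abs(value - self.last_reported) >= self.deadband:
            self.last_reported = value
            return TimestampedEvent(self.tag, value, datetime.now(timezone.utc))
        return None   # within deadband: nothing is pushed to the gateway

# Usage: with a 0.5 deadband, only the first and last readings produce events.
publisher = OnChangePublisher("PT-1001", deadband=0.5)
for reading in (20.0, 20.1, 20.6):
    event = publisher.sample(reading)
    if event is not None:
        print(event)

The design choice is the same one the paper describes: instead of a poller pulling every value on a fixed cycle, the source decides which changes matter and stamps them at origin, so downstream storage holds fewer, higher-quality, precisely sequenced events.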

Additional advantages of time stamping data:

  • Data values are time stamped as near to their source as possible.

  • Specific platform-equipment data can be sent through the gateway for processing and storage, rather than the preselected data required by traditional polling.

  • Time synchronization exists across all sensing units.

  • Operators can diagnose problems more quickly.

  • Operators can make better decisions about whether to mitigate or remediate an issue.

  • Companies are able to comply with Norwegian regulations governing the maintenance of the oil-and-gas metering system aboard the platform.

Because the system has 280 instruments, manual calibration once took almost a year of one engineer’s time. Today, with deployment of CM/CBM, regulators have signed off on the extra precision of the metering system’s remotely monitored instrumentation, allowing the company to perform manual calibration every 3 years instead of annually. The resource savings might be small when set against overall operating expenses, but when an offshore operator needs to drive production costs under $7 per bbl, savings of any kind are appreciated.

CM/CBM in Remote Diagnostic Services for Rotary Equipment

An offshore (and onshore) CM/CBM model can incorporate remote diagnostic services for rotating equipment. These assets are among the most-complex and expensive in the E&P toolkit and include compressor trains—either single units or entire fleets—in different locales. Because of their complexity, usage rates and availability are critical to production performance. Though compressor trains are designed, engineered, and built to be reliable and fault tolerant, downtime can cause costly process disruptions and potentially undermine health, safety, and environmental compliance.

Implementing a CM/CBM model for a compressor begins with programming the KPI signatures of a healthy profile into its digital twin, along with all required algorithms to support that healthy status. Then, the KPI operating data can be monitored and compared with the digital twin’s baseline signature to identify deviations that could develop into problems. By preventing trips and forced outages through early detection of potential faults and preventive remediation, compressor availability could be boosted by as much as 3% annually, equivalent to approximately 11 days each year.
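
A simplified sketch of that baseline comparison follows; the KPI names, baseline values, and tolerances are illustrative assumptions, not values from the paper.

# Hedged sketch of comparing live compressor KPIs against the digital twin's
# healthy baseline signature. All numbers and KPI names are illustrative.
baseline = {"discharge_temp_C": 92.0, "vibration_mm_s": 2.1, "efficiency_pct": 78.0}
tolerance = {"discharge_temp_C": 5.0, "vibration_mm_s": 0.6, "efficiency_pct": 3.0}

def deviations(live: dict) -> dict:
    """Return the KPIs whose deviation from the baseline exceeds its tolerance."""
    return {
        kpi: round(live[kpi] - baseline[kpi], 2)
        for kpi in baseline
        if abs(live[kpi] - baseline[kpi]) > tolerance[kpi]
    }

# A high discharge temperature would be flagged for early remediation,
# before it develops into a trip or forced outage.
print(deviations({"discharge_temp_C": 99.5, "vibration_mm_s": 2.3, "efficiency_pct": 77.2}))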

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 194590, “Machine Learning and Artificial Intelligence as a Complement to Condition Monitoring in a Predictive-Maintenance Setting,” by Stig Settemsdal, Siemens, prepared for the 2019 SPE Oil and Gas India Conference and Exhibition, Mumbai, 9–11 April. The paper has not been peer reviewed.
