Artificial Intelligence Transforms Offshore Analog Fields Into Digital Fields
Currently, no cost-effective method exists to capture analog meter readings efficiently, reliably, and accurately in digital form. This paper details how artificial intelligence was used to capture analog field-gauge data at dramatically reduced cost and with increased reliability. The solution was implemented in the Cheleken oil field in the Caspian Sea offshore Turkmenistan. During the field trial, operators took pictures of the gauges at given intervals and uploaded the photos to the application. After an innovative calibration process, the acquired images were processed using artificial-intelligence and deep-learning computer-vision techniques.
Digitizing an oil field is an exciting but costly exercise that requires close supervision to avoid inefficiency. Best results are achieved when priorities and objectives are defined early in the project. For instance, if the objective is cost savings, then digitization should be applied in a manner different from that used in production optimization. Regardless of the objective, implementation of the digital oil field is a constant battle against cost and time.
The greatest value is achieved without 100% digitization. This fact is often overlooked, and extra cost is incurred during digitization of the field without tangible improvement in value, because the oil field is not homogeneous and not all parameters need to be monitored or optimized with digital devices. For instance, in most fields, 90% of production comes from 10 to 20% of the wells. By digitizing in order of priority, early value can be delivered to the project, and stakeholders are encouraged to grow the digitization effort from simple monitoring toward value maximization.
Automating Input Data
A user able to predict production has the ability to maximize future revenue. However, continuous metering of production is very costly. Installing a multiphase meter on each well is a luxury afforded in only a few fields with very high production. Instead, most fields need to share metering facilities by using a test manifold. Wells are then tested with a certain frequency and the daily rate is estimated on the basis of these tests. Commonly, well production between tests is assumed to be constant until a new test is conducted. This assumption can be improved with the use of a calculated value from a virtual flowmeter (VFM).
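The constant-between-tests assumption described above can be made concrete with a short sketch. The dates, rates, and function names below are hypothetical, chosen only to illustrate why periodic well tests leave gaps that a VFM can fill:

```python
from datetime import date

# Hypothetical well-test history: (test date, measured rate in STB/D).
tests = [(date(2019, 1, 1), 1200.0), (date(2019, 2, 1), 1100.0)]

def rate_step_hold(day, tests):
    """Conventional assumption: the rate stays constant at the most
    recent test value until the next test is conducted."""
    rate = tests[0][1]
    for t_day, t_rate in tests:
        if t_day <= day:
            rate = t_rate
    return rate

# Mid-January: the step-hold estimate still reports the 1 January rate,
# even if the well has actually declined. A VFM would instead compute a
# continuous estimate from measured inputs such as wellhead pressure.
print(rate_step_hold(date(2019, 1, 15), tests))  # 1200.0
```

Any real VFM replaces the step-hold function with a physics- or data-driven model of the well; the sketch only shows the baseline it improves on.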
One of the more sensitive input parameters for this VFM is the wellhead pressure. In some cases, when the wellhead pressure is measured manually by observing analog gauges, human errors are incurred (in terms of visual, typographical, and duplication errors).
The methodology described in this paper eliminates these human errors, and reduces the time and cost of acquisition, by reading pressure and temperature values from analog gauges with computer vision. These improvements in the input parameters resulted in more-reliable prediction of production rates from the VFM.
Reading Analog Gauges Using Computer Vision
Recent advances in computer vision have enabled the identification and processing of irregular shapes and patterns present in industrial applications. In order to process a gauge reading effectively, the gauge, dial, and dial position must be identified. After this process is complete, the metadata of the gauge (make and model) will be used to decode the location of the dial relative to the numbering scheme.
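Decoding the dial position against the numbering scheme reduces, in the simplest case, to a linear mapping from needle angle to scale value. The function below is a minimal sketch of that mapping; the angular span and scale range would come from the gauge metadata, and the specific numbers are invented for illustration:

```python
def dial_angle_to_value(angle_deg, angle_min, angle_max, value_min, value_max):
    """Linearly map a detected needle angle to a gauge reading, using the
    angular sweep and scale range recorded in the gauge metadata."""
    fraction = (angle_deg - angle_min) / (angle_max - angle_min)
    return value_min + fraction * (value_max - value_min)

# Hypothetical 0-100-psi gauge whose scale sweeps from 45 to 315 degrees:
print(dial_angle_to_value(180.0, 45.0, 315.0, 0.0, 100.0))  # 50.0
```

Gauges with nonlinear scales would need a per-model calibration table instead of a single linear map, which is presumably part of the calibration process the paper mentions.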
The process of extracting information from a photograph follows four steps.
- The photograph is converted to black and white to reduce noise in the original image, which lowers the probability of identifying false contours.
- The major features are extracted using contour-identification algorithms. This particular sequence and the results are illustrated in Figs. 1 and 2.
- The gauge itself is identified by selecting the contour with the highest probability of being the circle that maps to the gauge face. This is often a complex process because many different circles can be found in the contour-identification step, and there is a tradeoff between the number of contours identified and the accuracy of the circle identification.
- The line that corresponds to the gauge dial is identified.
The solution also used quick-response (QR) codes that acted as a mechanism for recording tag information of specific meters (serial number, location, meter type, and manufacturer) along with specific calibration indications referenced by the computer-vision module. Thus the operator does not have to identify the gauge.
The next step in the rollout of the system is the configuration phase. Once a camera is attached to a local computer, it is automatically detected, and files are transferred securely to a repository that processes the images and the subsequent image metadata.
Data and Results
Forty gauges on three platforms were used in this experiment. Of these 40 gauges, 33 were successfully read and processed. Seven gauges were not correctly detected or processed because the quality of the photo was poor. Of the 33 that were detected and processed, average errors were as follows:
- Platform 1—Pressure: 0.9%
- Platform 2—Temperature: 0.8%
- Platform 2—Pressure: 0.7%
Quality of the photos can be improved by simply training the operators in basic photographic techniques.
Across all devices and platforms this resulted in the following overall metrics related to accuracy:
- Mean of error = −0.4%
- Standard deviation of error = 1.1%
- 95% confidence interval = ±0.4%
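These summary statistics are straightforward to reproduce. The error values below are invented for illustration (the paper does not publish the per-gauge errors); the computation uses a normal-approximation confidence interval on the mean:

```python
import math
import statistics

# Hypothetical per-gauge relative errors (%) between the computer-vision
# reading and the manual reference reading.
errors = [-0.4, 0.7, -1.2, 0.1, -0.9, 0.3, -1.5, 0.2, -0.6, -0.7]

mean = statistics.mean(errors)
stdev = statistics.stdev(errors)  # sample standard deviation
# Normal-approximation 95% confidence half-width on the mean:
half_width = 1.96 * stdev / math.sqrt(len(errors))
print(f"mean={mean:.2f}%  stdev={stdev:.2f}%  95% CI=+/-{half_width:.2f}%")
```

With only a few dozen gauges, a t-distribution interval would be slightly wider and arguably more appropriate than the 1.96 factor used here.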
High accuracy was obtained with this method, and both acquisition time and reliability improved.
During the test, photos of the gauges were acquired at the same regular pre-established times that manual analog readings were recorded. In this way, results from the two methods (photos and manual reading) were available for comparison. As previously outlined, the test was successful. However, the biggest difference between the two methods, and most value gained, was identified in the procedural improvements during acquisition and storage of the data.
During manual gauge-reading, the operator must go to each gauge, observe the value, and enter the information manually along with the gauge identification and the date and time. This has several disadvantages:
- Distracted or inexperienced operators frequently read incorrect values or make typographical errors.
- Values from one gauge can be assigned to another device by mistake.
- The date and time may be incorrect. This is by far the most common mistake. Operators commonly round the time; for instance, readings taken from several gauges over several minutes may all be assigned the same date and time. In addition, if a preassigned recording or reporting time exists, operators may be tempted to force the time of acquisition to match it.
- There may be no proof that the reported value is real or that it was acquired at the reported date and time.
- The time required for reading, typing, and checking can be extensive. Manually gathered data are subject to approvals, reviews, and quality control, which can consume a considerable amount of time and resources.
In contrast, data acquired automatically from a photograph using computer vision improve the process significantly.
- Little opportunity for error in the value exists. Computer vision was demonstrated to have an error of less than 1%, far better than human accuracy.
- The device is positively identified by the QR code in the photograph.
- No error with regard to date or time occurs because of electronic stamping. This is one of the primary improvements—the correct time of data acquisition without the possibility of inaccuracy.
- The picture itself is an auditable proof of the value.
- No typing is needed, making the process rapid. Data-acquisition time improved significantly because no on-site verification was required.
Improvements were also observed in the process of entering the values obtained while in the field. Data automatically acquired from a photograph improved the process significantly because no action was necessary aside from returning the camera to the control room. During charging, the camera synchronizes and uploads all pictures to a folder automatically. From this folder, the computer-vision process is triggered, and obtained values are input directly into the database. This results in several other improvements:
- No typing at all is needed, because charging and synchronization are automatic.
- Copying values is not possible, because each value has an auditable picture.
- Forcing of data is not possible.
- A supervisor only has to confirm that the process is running; no quality control of the obtained values is needed.
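The sync-and-trigger pipeline described above can be sketched with standard-library tools. The folder layout, table schema, and placeholder vision function are all assumptions; any real deployment would replace `vision_read` with the gauge reader and add the security and audit layers the paper describes:

```python
import sqlite3
from pathlib import Path

# Hypothetical pipeline: the camera sync drops photos into UPLOAD_DIR; each
# new file is run through the vision step and written to a database.
UPLOAD_DIR = Path("uploads")

def vision_read(photo: Path) -> float:
    """Placeholder for the computer-vision gauge reader."""
    return 42.0  # hypothetical reading

def process_new_photos(conn, seen):
    cur = conn.cursor()
    for photo in sorted(UPLOAD_DIR.glob("*.jpg")):
        if photo.name in seen:
            continue
        value = vision_read(photo)
        # The electronic timestamp comes from the file, not the operator.
        taken_at = photo.stat().st_mtime
        cur.execute(
            "INSERT INTO readings (photo, value, taken_at) VALUES (?, ?, ?)",
            (photo.name, value, taken_at))
        seen.add(photo.name)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (photo TEXT, value REAL, taken_at REAL)")
UPLOAD_DIR.mkdir(exist_ok=True)
(UPLOAD_DIR / "gauge_001.jpg").write_bytes(b"")  # stand-in photo
process_new_photos(conn, set())
print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # 1
```

Keeping the source photo's filename alongside each value is what makes every reading auditable back to its picture, as noted in the list above.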
01 January 2020