Machine Learning Enhances Evaluation of Oil and Gas Assets

In surface-condition evaluation, visual inspection is a favored nondestructive examination (NDE) method because experienced personnel can identify defects or damage quickly. Drawbacks of this method, however, include its lack of objective quantification and description of defects. Using recent developments in machine learning (ML), image recognition, and object detection, the authors have investigated the feasibility of using ML algorithms to recognize objects and describe their condition.

Theory and Definitions

The authors used the TensorFlow framework (an open-source machine-learning platform [www.tensorflow.org]) as a tool for training a convolutional neural network (CNN) for object detection, without changing the original code. The purpose was to present an algorithm with images, thereby teaching it to detect objects of interest through ML. TensorFlow supports the user through all of the data-management and programming steps needed to optimize the algorithm. Different algorithms can be used for this purpose, but the CNN type has been found to be very efficient.

In this work, a later development of the CNN algorithm was implemented to detect and classify objects. This version is known as Faster R-CNN with Inception ResNet v2. This CNN algorithm was developed to enhance speed of detection, but it also achieves high accuracy. The complete code and mathematical composition of this model are beyond the scope of this paper, but the algorithm comprises four steps.

  1. When detecting objects, the algorithm uses an image as input.

  2. The image is decomposed into regions of interest where the algorithm predicts that an object is likely to be located. Up to 2,000 bounding boxes are inserted at these regions.

  3. The regions inside the bounding boxes are used as input to a CNN, which will recognize distinct features of objects. The pixels within each bounding box are transformed into a value.

  4. The output is a probability distribution over known object classes. The class with the highest probability is given to the corresponding bounding box.

The algorithm may perform these steps for hundreds of bounding boxes per image, depending on the result of Step 2. To optimize Steps 3 and 4, the CNN must be given examples of the object classes it will learn to detect, which is achieved by labeling thousands of images, marking each region of interest with a bounding box around the object to be detected. Each manually applied bounding box is known as a ground truth. These labeled images are stored in a database and form the basis for the ML process through which the CNN learns each object class. The learning is verified by presenting the CNN with unlabeled images and evaluating how well it detects objects in them.
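The four steps above can be sketched in simplified Python. This is a schematic stand-in, not the Faster R-CNN implementation: the region proposer and per-box classifier below are hypothetical placeholders that only illustrate the flow from input image to classified bounding boxes.

```python
# Schematic sketch of Steps 1-4 (illustrative stand-ins, not the real model).

def propose_regions(image, max_boxes=2000):
    # Step 2: propose candidate boxes (x, y, w, h) where objects may sit.
    # Stand-in proposer: two fixed boxes instead of a learned region network.
    return [(0, 0, 4, 4), (4, 4, 4, 4)][:max_boxes]

def classify_region(pixels, classes):
    # Steps 3-4: map the pixels inside one box to a probability
    # distribution over the known classes. Stand-in scoring from the
    # mean pixel value, in place of the CNN feature extractor.
    mean = sum(pixels) / len(pixels)
    scores = [abs(mean - i) for i in range(len(classes))]
    total = sum(scores) or 1.0
    raw = [1.0 - s / total for s in scores]
    norm = sum(raw)
    return {c: r / norm for c, r in zip(classes, raw)}

def detect(image, classes):
    # Step 1: the image (a 2-D grid of pixel values) is the input.
    detections = []
    for x, y, w, h in propose_regions(image):
        pixels = [image[j][i] for j in range(y, y + h) for i in range(x, x + w)]
        probs = classify_region(pixels, classes)
        best = max(probs, key=probs.get)  # Step 4: highest-probability class
        detections.append(((x, y, w, h), best, probs[best]))
    return detections
```

Each detection carries its box, its winning class, and that class's probability, mirroring the output described in Step 4.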

During training of the CNN, the manually labeled images are used. The CNN will perform Step 2 and place its own bounding boxes on the image. The pixels from the image inside each predicted bounding box are transformed into numerical values and used as input.

The CNN combines millions of such equations and is therefore tunable through millions of parameters. When the values from a predicted bounding box are processed through the CNN, the output probability and determined object class are compared with the ground-truth bounding box applied manually in the training images. The deviation between prediction and ground truth is used to strengthen the weights of equations that give a high probability of predicting the correct class and to weaken the weights of equations that lead to incorrect predictions. This process is repeated for hundreds of thousands of iterations until the CNN begins to converge. This is a simplified description of the process, but it allows efficient presentation of the authors' results.
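The weight-adjustment idea can be illustrated with a far smaller model. The sketch below is a single-layer, perceptron-style update in plain Python, chosen purely for illustration; the actual CNN tunes millions of parameters by backpropagation on the total loss.

```python
# Toy illustration of the weight-update idea: predictions matching the
# ground-truth class pull their weights up; mismatches push them down.

def train(samples, lr=0.1, iterations=1000):
    # samples: list of (feature_vector, true_class), with class 0 or 1.
    weights = [0.0] * len(samples[0][0])
    for _ in range(iterations):
        for features, truth in samples:
            score = sum(w * f for w, f in zip(weights, features))
            pred = 1 if score > 0 else 0
            error = truth - pred  # deviation between prediction and truth
            # Strengthen or weaken each weight in proportion to its input.
            weights = [w + lr * error * f for w, f in zip(weights, features)]
    return weights
```

Repeated over many iterations, the weights settle so that the training samples are scored correctly, the same convergence behavior described above at a vastly smaller scale.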

Two parameters must be described to allow evaluation of CNN performance: the total loss function and the average precision. Total loss should converge to zero if the CNN performs perfectly. Training of the CNN is tied to this function; the weights are adjusted to minimize the total loss.

Another parameter used to determine CNN performance is the average precision (AP), given as the fraction of successful predictions among all predictions of a class. Higher values mean that the CNN predicts objects well on the provided training data set. During training, a prediction counts as successful if the CNN places a bounding box that overlaps by at least 50% with one of the manually inserted ground truths for the given class.
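The 50% overlap criterion is conventionally computed as intersection over union (IoU) of the two boxes; assuming that interpretation, a minimal sketch:

```python
def iou(box_a, box_b):
    # Boxes given as (x1, y1, x2, y2) corner coordinates.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_successful(predicted_box, ground_truths, threshold=0.5):
    # A prediction is successful if it overlaps at least one of the
    # ground-truth boxes for its class by the threshold.
    return any(iou(predicted_box, gt) >= threshold for gt in ground_truths)
```

With this criterion decided per prediction, AP for a class is the count of successful predictions divided by the total number of predictions of that class.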

Equipment and processes are detailed in the complete paper.

Data and Results

From the previously described data sets, the CNN was trained with 90% of the data per object class, and 10% was used for validation. The main goal of this work was to verify that the CNN could perform object detection according to class definitions relevant to the inspection of oil and gas process equipment. The main results were images with bounding boxes provided by the CNN around relevant objects according to the class defined. In addition, a test was performed to investigate the effect of data-set size. The data sets were reduced by 50%, and training was performed until the total loss converged toward a stable value. The authors present the average precision per class in Table 2 of the complete paper to compare the 100% and 50% data sets.
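The per-class 90/10 split can be sketched as follows; the dictionary layout, function name, and fixed seed are assumptions for illustration, not the authors' data format.

```python
import random

def split_per_class(dataset, train_frac=0.9, seed=0):
    # dataset: {class_name: [labeled images]}; each class is shuffled
    # and split into training and validation subsets independently,
    # so every class keeps the same 90/10 proportion.
    rng = random.Random(seed)
    train, val = {}, {}
    for cls, items in dataset.items():
        shuffled = items[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        train[cls], val[cls] = shuffled[:cut], shuffled[cut:]
    return train, val
```

Splitting per class (rather than over the pooled data) keeps rare classes represented in both the training and validation sets.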

For AP values, an increase in performance was expected when the CNN was trained with the larger data set. This was not the case in this study; several classes show poorer performance in the results from the larger data set. At the time of writing, the authors had been unable to investigate whether this is a result of the quality or size of the data sets, or whether continued training could have improved the results. In general, the data sets for several of the object classes are still considered small. The values for the total loss do not converge but show peaks of increased values; this is the result of small data sets for several of the object classes, which generate noise in the adjustment of the CNN parameters.

The AP values in Table 2 of the complete paper vary from 0.041 to 0.582, close to the best performance reported in the literature for multiclass object detection. In the authors' work, the process of tuning parameters in the CNN and the TensorFlow ML framework to investigate the possibility of better performance has yet to begin. For several of the authors' established classes, the data sets are small, which affects the convergence of the CNN. For low-scoring classes, evaluation of the data set and fine-tuning of the parameters in the CNN and ML framework would yield improvements to overall performance.

Figs. 1 and 2 feature images with bounding boxes predicted by the CNN shown in white, while the manually inserted ground-truth bounding boxes are shown in black.

Fig. 1—CNN showing multiclass detection but failing to detect all classes available in the image.

Fig. 2—CNN showing detection of welds in good visual condition. It fails to detect 100% of the welds but provides correct definitions for selected areas.

Conclusion

If the industry wishes to introduce robotics for inspection or maintenance in the future, it is necessary to provide these machines with an understanding of their surroundings. To that end, on the basis of the results shown in the evaluated images and AP values, the CNN algorithm and ML framework have been trained to identify object classes relevant to the inspection of oil and gas process equipment. AP values and total loss indicate that the data sets should be increased in size to improve performance. ML and CNNs can be used to improve the inspection of process equipment; however, the performance needs to be improved from that achieved in this work. It is not satisfactory to identify 1 out of 10 defects with an automated system. The precision must be higher.

A trained CNN with good performance would open several new possibilities for performing inspection and data gathering for evaluation of the technical condition of oil and gas assets. This could be achieved by increasing the efficiency of inspection performed by personnel with assistance from enhanced software solutions. The technology presented in this paper is intended to enable evaluation of images in real time on tablets, filling in information automatically and reducing the amount of text to be typed. The study aims to increase the time inspectors spend in the field.

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 190895, “Machine Learning for Evaluation of External and Internal Surface Conditions,” by Ole-Erich Haas, Axess AS, and Per Sandved Hustad, Axbit AS, prepared for the 2018 SPE International Oilfield Corrosion Conference and Exhibition, Aberdeen, 18–19 June. The paper has not been peer reviewed.
