Machine-Learning Approach Determines Spatial Variation in Shale Decline Curves

The two most common techniques for forecasting production performance for a new shale well are decline-curve (type-curve) analysis and machine learning. The complete paper describes an automated machine-learning approach to determining the spatial variation in decline type curves for shale gas production, based on existing production, completion, and geological data. The methodology allows the user to decide whether the focus should be purely on forecast quality or on a combination of forecast and clustering quality. The resulting model enables the prediction and uncertainty quantification of production profiles for new target wells or areas in the basin.

Methods of Forecasting Production Performance

Decline-curve analysis is the most-popular technique for forecasting production performance in shale formations because of the need for fast decisions. The technique involves borrowing decline curves from the closest wells or from wells with similar geological, completion, or fluid properties.

Decline-curve analysis rests on the expectation that the closest wells, or wells with similar properties, will show similar production profiles. However, the process is often manual and highly subjective. In this approach, each existing well is assigned to a particular cluster of decline curves, each cluster having a typical decline curve. Clusters can be spatial or can be represented in completion-variable space. To obtain a production forecast for a new well, the decline curve is taken from the cluster to which the new well is believed to belong, usually by sampling from a cluster map.

The second approach to forecasting shale production performance is machine learning, which focuses on statistical correlations. A statistical model is created that connects decline curves with the same well parameters used in decline-curve analysis. Typically, the result is a trained machine-learning model that, for a new well, provides production performance with or without an uncertainty range. Additionally, maps can be created of forecasted production profiles or of total recovered fluid.

The objective of the project described in the complete paper was to provide a methodology that creates clusters of decline curves and estimates decline curves for a particular location or set of variables. The intent was to limit the manual aspect of clustering and create a robust work flow.

The project was based on publicly available monthly production data from most of the producing wells of the Duvernay formation. The K-means technique was used to cluster 273 wells using geological parameters such as thickness and porosity; completion parameters such as horizontal-section length and proppant volume; spatial location; fluid window; and production curves. A machine-learning classification based on the clustering results was used to draw distinct geographic regions within which the combinations of geological, completion, and production factors were fairly similar. A support-vector-machine (SVM) approach was used to create maps of clusters and quantify uncertainty. Additionally, a functional classification-and-regression-trees (CART) approach was used to indicate the most important, or most sensitive, factors to use for clustering.

The results of the K-means clustering were compared with those of the CART technique, which does not create clusters explicitly but produces a well-performance forecast directly. Also, because CART provides the sensitivity of variables with respect to production, the variable ranking can be used to decide which variables to use in the clustering.

The results show that the unsupervised K-means method performs as well as the supervised CART method. The methodology is flexible and allows for quick changes in the variables used in clustering, and the transfer to another dataset or basin is straightforward.

Methodology

K-Means Clustering. K-means is a widely used machine-learning technique. It separates observations into clusters in which each data point belongs to the cluster with the closest mean, which represents the core of the cluster. For this project, the same completion and geological parameters were used in both the CART and K-means approaches. The difference lies in the way the production profiles were handled.
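As a sketch of this step, the clustering can be reproduced with scikit-learn as below. The well count matches the paper's 273 wells, but the feature names, values, and number of clusters are illustrative assumptions, not the paper's actual Duvernay data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical well features: thickness (m), porosity (fraction),
# horizontal-section length (m), proppant volume (t). These stand in
# for the paper's completion and geological parameters.
X = np.column_stack([
    rng.normal(40, 5, 273),       # thickness
    rng.normal(0.06, 0.01, 273),  # porosity
    rng.normal(2000, 300, 273),   # horizontal-section length
    rng.normal(5000, 800, 273),   # proppant volume
])

# Standardize so no single variable dominates the Euclidean distance.
X_std = StandardScaler().fit_transform(X)

# Cluster the wells; each well is assigned to the nearest cluster mean.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_std)
labels = km.labels_
```

Standardizing before K-means matters here: the raw features span very different scales (fractions to thousands of meters), and an unscaled distance would be dominated by the largest-magnitude variable.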

SVM. K-means clustering assigns each well to a cluster. Assigning a cluster to a new well requires creating a map of clusters. The SVM approach uses algorithms that find hyperplanes separating the training data set in a particular way. Specifically, the hyperplane should be constructed such that the distance to the nearest training points (the margin) is maximized. This is called a hard margin. In complicated data sets where variables overlap and the data set is not linearly separable, a soft margin is used. Soft margins allow errors in fitting the hyperplanes and thus make the separation between clusters more generalized and less prone to noise and outliers. Using a soft margin also enables computing a maximum posterior probability, which yields the most-likely cluster for a particular location as well as the probability of that cluster being forecast at that location. This is how the production uncertainty is estimated for a particular location. After finding the hyperplanes, the fitted model is applied at every spatial point, producing a map of clusters.
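A minimal sketch of this mapping step, assuming a toy two-cluster data set and hypothetical well coordinates (the paper's actual clusters and locations are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical training data: well coordinates and their cluster labels
# (as would come from the K-means step).
coords = rng.uniform(0, 1, size=(200, 2))
labels = (coords[:, 0] + coords[:, 1] > 1).astype(int)  # toy 2-cluster split

# Soft-margin SVM (C controls the error tolerance); probability=True
# enables posterior cluster probabilities for the uncertainty estimate.
svm = SVC(C=1.0, kernel="rbf", probability=True, random_state=0)
svm.fit(coords, labels)

# Apply the fitted model at every point of a spatial grid to obtain a
# cluster map plus, per point, the probability of the most-likely cluster.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
cluster_map = svm.predict(grid).reshape(gx.shape)
prob_map = svm.predict_proba(grid).max(axis=1).reshape(gx.shape)
```

`prob_map` drops toward 0.5 near the cluster boundary, which is exactly where the production forecast for a new well is most uncertain.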

Regression Trees. The machine-learning approach used in the complete paper for shale gas production is CART, which involves two steps: first, the regression model is trained; next, it is used to generate production profiles based on the chosen parameters of a new well. The input for machine learning consists of the completion and geology parameters. The output is the predicted production profile.

Tree-based methodology comprises a large class of machine-learning methods, with the main concept based on dividing the space of predictor variables into subspaces corresponding to the greatest influence on the response variable. The process of subspace separation is sequential; thus, the final choice of the separation variable sequence can be represented as a tree with branches. Variables that result in the optimal split of variable space can be considered as having more influence on the response variable. Therefore, tree-based methods can be used not only for forecasting, but also for determination of variable importance.
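The variable-importance idea can be illustrated with a single regression tree in scikit-learn; the predictors and the synthetic response below are hypothetical stand-ins for the paper's completion and geology parameters:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)

# Four hypothetical standardized predictors (e.g., depth, lateral length,
# proppant, stages) for 273 wells.
X = rng.normal(size=(273, 4))

# Toy response: cumulative gas driven mostly by the first two variables;
# the last two are pure noise.
y = 3 * X[:, 0] + 2 * X[:, 1] + 0.1 * rng.normal(size=273)

tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)

# Variables chosen for the best splits carry the largest importance,
# which is the basis for ranking them before K-means clustering.
importance = tree.feature_importances_
ranking = np.argsort(importance)[::-1]
```

With this construction the tree concentrates nearly all of its splits on the two informative variables, so the ranking recovers them ahead of the noise variables, mirroring how CART importance is used to screen inputs for clustering.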

Model-Quality Measures. Four parameters—root-mean-square error (RMSE), mean absolute deviation, misclassification rate, and silhouette index—were used to compare the quality of the models’ predictions.
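Assuming these measures follow their standard definitions, the forecast-error and clustering-quality parts can be computed as follows on toy data (scikit-learn names shown; the paper's exact formulations may differ):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             silhouette_score)

rng = np.random.default_rng(3)

# Toy forecast-vs-actual profiles for the forecast-quality measures.
actual = rng.uniform(1, 10, size=100)
forecast = actual + rng.normal(0, 0.5, size=100)

rmse = mean_squared_error(actual, forecast) ** 0.5
mad = mean_absolute_error(actual, forecast)

# Silhouette index measures clustering quality: values near 1 indicate
# compact, well-separated clusters; values near 0 indicate overlap.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
cluster_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
sil = silhouette_score(X, cluster_labels)
```

Note that RMSE penalizes large errors more heavily than mean absolute deviation, so RMSE is always at least as large as the mean absolute deviation on the same data.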

Variables

The team chose 273 horizontal producers with 20 months of production history so that the project would represent the most economically important time period. Months with average flow rates of less than 20,000 scf/D were disregarded. The paper presents the results for gas production as a response variable. The authors emphasize that reporting of liquid/gas separation is ambiguous in the Duvernay formation and that additional uncertainty must be accepted. The complete paper presents the modeling parameters for the producing wells in text and tabular form.

Results and Discussion

A sensitivity analysis using CART produced a chart of variable importance (Fig. 1). Because the basin dips in the southwest direction, the sensitivity analysis showed that latitude and depth are very important. Completion parameters, such as the number of stages, horizontal-section length, the amount of proppant, and the fluid volume per meter, are important as well. This information was used to choose the most-sensitive parameters to fine-tune the K-means clustering.

Fig. 1—Sensitivity analysis from CART for gas production.

 

For predictions, the wells were separated into a training set (90%) and a test set (10%). The CART model was trained on the training set, and the quality of the prediction was checked with the test set. The wells were clustered according to the chosen parameters.
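The 90/10 split and the forecast-quality check might look as follows; the feature matrix and response are synthetic placeholders, not the Duvernay data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)

# Synthetic stand-ins for the 273 wells' parameters and response.
X = rng.normal(size=(273, 4))
y = 2 * X[:, 0] - X[:, 1] + 0.2 * rng.normal(size=273)

# 90/10 split of the wells, mirroring the paper's setup.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1,
                                          random_state=0)

# Train CART on the training wells; score the forecast on held-out wells.
model = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
```

Holding out a test set is what makes the RMSE comparison between CART and the clustering technique meaningful: both are scored on wells neither model has seen.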

On the basis of the clusters, a map was created using SVM. Each point of the grid is assigned the most-probable cluster. Then, for a new well, one can simply sample the cluster at the well location and obtain a production profile from the cluster distribution.

SVM also made it possible to check the quality of the mapping. In terms of RMSE, CART fares better than the clustering technique. Increasing the number of clusters improves the quality of the forecast, as expected. Adding production data to the clustering parameters improves the quality of the prediction dramatically; in that scenario, the clustering technique is more accurate than CART. In general, using all the parameters, including production, and a high number of clusters provides a highly accurate forecast, surpassing that of CART. The complete paper contains a detailed and illustrated discussion of the test results.

Using only the six best variables improves the quality of the forecast and creates synergy between CART and K-means: a sensitivity analysis can be conducted with CART first, and the important variables can then be used in the K-means predictions.

Conclusions

The approach presented in the complete paper provides a work flow to create clusters of spatially varying shale decline curves. The results show that decline-curve clustering can outperform CART if important variables and production data are selected for the model. At the same time, the synergy between CART and decline-curve clustering is evident because the sensitivity analysis is carried out with the help of CART. Implementing a higher number of clusters leads to a more-robust production forecast. Using a small number of clusters facilitates easier, more-intuitive interpretation.

The ideas presented in the paper can be used for different data sets in the same basin or for other shale plays.

This article, written by JPT Technology Editor Judy Feder, contains highlights of paper SPE 196110, “Machine Learning of Spatially Varying Decline Curves for the Duvernay Formation,” by Aleksandr Bakay, Jef Caers, and Tapan Mukerji, SPE, Stanford University, et al., prepared for the 2019 SPE Annual Technical Conference and Exhibition, Calgary, 30 September–2 October. The paper has not been peer reviewed.

01 October 2020

Volume: 72 | Issue: 10


