Heavy-Oil Steamflood Validates Machine-Learning-Assisted Model

A physics-based model augmented by machine learning proved its ability to optimize a steam-injection plan in a shallow, heavy-oil field in the San Joaquin Basin of California. The model and a case study to validate its predictive capabilities were described in the previously published paper SPE 185507. Paper SPE 193680 updates the previous case study and presents the results of actual implementation of an optimized steam-injection plan based on the model framework.

Introduction

The goal of steamflood modeling and optimization is to determine the optimal spatial and temporal distribution of steam injection to maximize future recovery or field economics. Accurate modeling of thermodynamic and fluid-flow mechanisms in the wellbore, reservoir layers, and overburden can be prohibitively resource-intensive for operators, who instead often default to simple decline-curve analysis and operational rules of thumb. The physics-based model described in this paper allows operators to leverage readily available field data to infer reservoir dynamics from first principles. Production, injection, temperature, steam quality, completion, and other engineering data from an active steamflood are continuously assimilated into the model using an ensemble Kalman filter (EnKF). The model is then used to optimize steam-injection rates against multiple objectives, such as maximizing net present value (NPV) or minimizing injection cost, using large-scale evolutionary optimization algorithms. The solutions are low-order and continuous rather than discretized, so modeling, forecasting, and optimization are significantly faster than with traditional simulation.

Although steamfloods in the shallow, heavy-oil fields of the San Joaquin Valley have been very successful, the scale of many of these steamfloods also provides optimization opportunities. The basin contains hundreds or even thousands of wells with significant lateral and vertical reservoir heterogeneity, spatial variation of steam quality, varying historical completion quality and methods, and different levels of pattern maturity across the fields. As such, steamflood operations have multiple control variables that can be optimized.
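
The complete paper does not give implementation details of the EnKF; the following is a minimal sketch of one stochastic EnKF analysis step in Python, assuming Gaussian measurement noise and a fast forward model. All function and variable names (enkf_update, obs_operator, and so on) are illustrative, not taken from the paper.

```python
import numpy as np

def enkf_update(ensemble, observations, obs_operator, obs_noise_std, rng):
    """One stochastic EnKF analysis step: nudge each model realization toward the data.

    ensemble      : (n_members, n_params) array of model-parameter realizations
    observations  : (n_obs,) measured data (e.g., oil rates, temperatures)
    obs_operator  : callable mapping one parameter vector to predicted observations
    obs_noise_std : (n_obs,) measurement-error standard deviations
    """
    n_members = ensemble.shape[0]

    # Forecast step: run the fast physics-based model for every ensemble member.
    predicted = np.array([obs_operator(m) for m in ensemble])      # (n_members, n_obs)

    # Anomalies (deviations from the ensemble means).
    A = ensemble - ensemble.mean(axis=0)
    D = predicted - predicted.mean(axis=0)

    # Kalman gain from ensemble covariances plus measurement-noise covariance R.
    R = np.diag(obs_noise_std ** 2)
    K = A.T @ D @ np.linalg.inv(D.T @ D + (n_members - 1) * R)     # (n_params, n_obs)

    # Analysis step: perturb the observations and correct each member.
    perturbed = observations + rng.normal(0.0, obs_noise_std,
                                          size=(n_members, observations.size))
    return ensemble + (perturbed - predicted) @ K.T
```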

Redistribution of steam among existing injectors is one of the most important control variables to optimize in a steamflood to account for the factors mentioned previously. A successful redistribution optimization produces a list of time-varying injection-rate changes for each injector that maximizes the overall effect of steam on incremental oil production. Additionally, optimizing steam cut across already-mature patterns can reduce operational costs.
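
For illustration only, a redistribution plan of this kind could be represented as per-injector, time-stamped rate targets; the injector names, dates, and rates below are hypothetical, not field data.

```python
# Hypothetical shape of an optimized steam-redistribution plan:
# each injector gets a schedule of (month, target steam rate in B/D) changes.
target_plan = {
    "INJ-07": [("2018-01", 250.0), ("2018-07", 310.0), ("2019-01", 280.0)],
    "INJ-12": [("2018-01", 180.0), ("2018-07", 140.0), ("2019-01", 120.0)],
}
```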

For these crucial decisions, operators typically rely on semiquantitative approaches such as decline-curve analysis or simple analytical models. These methods can use only a small portion of the available data and function as rules of thumb, resulting, at best, in qualitatively optimized reservoir-management decisions. At the other extreme, some oil companies use sophisticated predictive modeling tools. Reservoir simulation is the most advanced technique available today in that it can integrate disparate data sources and predict over long time horizons. While reservoir simulation is an excellent tool for field studies and long-term planning, certain limitations prevent operators from leveraging simulation for day-to-day decision-making.

In essence, reservoir simulation enables operators to model the physics of a small number of scenarios precisely. However, it is not designed to produce thousands or millions of scenarios and to automatically update those scenarios on the basis of real-time data. Even for operators with very sophisticated simulators, there is a need for faster models to leverage real-time data, achieve prescriptive analytics, and optimize day-to-day operations.

Purely data-driven models are at the opposite end of the spectrum from physics-based reservoir simulation. While simulation models require months to set up and days to run, machine-learning models can be built in days and run across a full field in real time to explore thousands of scenarios rapidly and identify optimal solutions. Although such models offer a significant speed advantage and enable predictive analytics in domains such as unconventionals, where the reservoir physics are more difficult to model, the absence of underlying physics prevents the use of machine learning for quantitative optimization.

Physics-based reservoir simulation and data-driven machine learning offer complementary strengths. An ideal predictive model would combine the speed and flexibility of machine learning with the predictive accuracy of reservoir simulation so that operators could integrate data in real time and quantitatively optimize reservoir-management decisions continuously. Such a model can be used to quantitatively optimize any future performance indicator, such as NPV, within a closed-loop reservoir-management/optimization framework in which the physics-based models are continuously updated to reflect new data, and the updated models are then used to continuously optimize reservoir-management decisions such as steam or water redistribution, infill drilling, and other operations throughout the life of the reservoir. With such a system, the reservoir model is rerun and retuned constantly to remain true to actual field conditions at all times and does not fall out of date. However, such a closed-loop process has not been possible previously in the absence of fast-running, physically accurate models that provide quantitative production estimates with statistical confidence.
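
The closed loop is described only conceptually in the paper; one possible structure is sketched below, reusing the enkf_update sketch shown earlier. The helpers load_new_field_data and optimize_injection_plan are hypothetical placeholders for data access and the evolutionary optimizer.

```python
import numpy as np

def closed_loop_cycle(ensemble, rng, horizon_days=365):
    """One schematic pass of closed-loop reservoir management."""
    # 1. Assimilate the latest production, injection, and temperature data into
    #    the ensemble of physics-based models so it never falls out of date.
    data = load_new_field_data()                                   # hypothetical helper
    ensemble = enkf_update(ensemble, data.values, data.obs_operator,
                           data.noise_std, rng)

    # 2. Re-optimize future controls (e.g., per-injector steam rates) against
    #    the updated models, subject to facilities and cost constraints.
    plan = optimize_injection_plan(ensemble, horizon_days)         # hypothetical helper

    # 3. The plan is implemented in the field; the data it generates are
    #    assimilated on the next pass, closing the loop.
    return ensemble, plan
```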

The primary motivation of the model described in the paper is to create a framework in which petroleum engineers can quantitatively optimize oil production from a reservoir. The goal of quantitative optimization is to predict which actions (for example, on a daily, weekly, or monthly basis) will produce the maximum quantified (predicted and risk-adjusted) return in oil production. Investigating tens of thousands of scenarios that quantitatively simulate specific sets of activities allows engineers to select operational plans that meet specific optimization criteria, such as preserving current production rates or optimizing long-term reserves growth. Often, it is necessary or desirable to optimize multiple criteria simultaneously.
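
As a toy illustration of what "quantified and risk-adjusted" screening of many scenarios could look like, the sketch below scores each candidate plan by a conservative percentile of ensemble-predicted value. The forecast_fn interface and the choice of P10 are assumptions for illustration, not details from the paper.

```python
import numpy as np

def risk_adjusted_score(plan, ensemble, forecast_fn, percentile=10):
    """Score one candidate plan by a conservative percentile of predicted value.

    forecast_fn(member, plan) -> predicted objective (e.g., NPV) for one
    ensemble member; the spread across members captures model uncertainty.
    """
    values = np.array([forecast_fn(member, plan) for member in ensemble])
    return np.percentile(values, percentile)

def rank_plans(candidate_plans, ensemble, forecast_fn, keep=20):
    """Rank a large set of candidate plans and keep the best for review."""
    scores = np.array([risk_adjusted_score(p, ensemble, forecast_fn)
                       for p in candidate_plans])
    best_first = np.argsort(scores)[::-1]
    return [candidate_plans[i] for i in best_first[:keep]]
```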

Case Study

The objective of the case study presented in the complete paper was to optimize a steam-injection plan in a shallow, heavy-oil field in the San Joaquin Basin of California. Production in the field comes from poorly consolidated sands within the Antelope Shale member of the Miocene Monterey formation, with porosity averaging 30%, permeability averaging 2,000 md, and net thicknesses typically between 50 and 300 ft. Structural dip is steep at approximately 60°. The reservoirs are shallow, with depths ranging from 200 to 600 ft true vertical depth. Oil gravity is approximately 13 °API. Reservoir pressures are well below the bubblepoint and average 50–100 psi. The field has approximately 200 producers and 30 injectors, producing approximately 2,000 B/D of oil and injecting 6,500 B/D of steam. The field has been under operation for approximately 8 years, with most of the continuous steamflood starting approximately 5.5 years ago. The workflow described in the paper and implemented by the operator in this case study was as follows.

1. Backtest

a. Fit historical training data
b. Predict historical test data (blind test)
c. Compare the forecast with test data statistically to quantify predictivity

2. Full fit—train the model on the full historical dataset
3. Optimization—explore future injection distributions
4. Set target plan—commit to a target injection plan
5. Implement target plan—implement actual steam changes in the field
6. Assess—assess actual performance

The earlier paper explored the workflow up to Step 3 and demonstrated the potential for optimization. This paper extends the workflow to Step 6 and discusses the implementation of an actual target plan and the results observed so far. Each step of the workflow is discussed and illustrated with charts and graphs.
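
A minimal sketch of the backtest in Step 1, assuming the historical data can be split at a chosen date and that fit_fn and forecast_fn wrap the model's training and forecasting routines (both names are hypothetical):

```python
import numpy as np

def backtest(history, split_date, fit_fn, forecast_fn):
    """Blind backtest: train before split_date, forecast the held-out period.

    history records are assumed to carry .date, .controls (injection rates,
    steam quality, etc.), and .oil_rate attributes.
    """
    train = [r for r in history if r.date < split_date]
    test = [r for r in history if r.date >= split_date]

    model = fit_fn(train)                                               # Step 1a: fit
    predicted = np.asarray(forecast_fn(model, [r.controls for r in test]))  # Step 1b
    actual = np.array([r.oil_rate for r in test])

    # Step 1c: simple statistics quantifying predictivity on the blind period.
    bias = float(np.mean(predicted - actual))
    mape = float(np.mean(np.abs(predicted - actual) / np.maximum(actual, 1e-9)))
    return {"bias_bopd": bias, "mape": mape}
```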

The star in Fig. 1 represents the target operating scenario, which was modified by the operating company to comply with facilities constraints. The modification placed the production forecast slightly below the efficient frontier but still well above the base case. On the basis of the model and optimization plan, the operator essentially planned to increase the steam-injection rate from approximately 6,200 to approximately 10,400 B/D throughout the field.

Fig. 1—The “efficient frontier” of future injection plans. The triangle represents the current operating plan at the time of modeling, termed the base case. Each circle represents a particular model-predicted optimal injection plan.
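The axes of Fig. 1 are not reproduced here; assuming, for illustration, that each candidate plan carries a forecast cumulative-oil value (to be maximized) and a total-steam value (to be minimized), an "efficient frontier" such as the one in Fig. 1 is simply the set of non-dominated plans:

```python
def efficient_frontier(plans):
    """Return the non-dominated ("efficient") plans from a candidate set.

    Each plan is assumed to expose two model-predicted attributes:
    plan.oil   -- forecast cumulative oil, to be maximized
    plan.steam -- total steam injected, to be minimized
    """
    frontier = []
    for p in plans:
        dominated = any(
            q.oil >= p.oil and q.steam <= p.steam and
            (q.oil > p.oil or q.steam < p.steam)
            for q in plans
        )
        if not dominated:
            frontier.append(p)
    return sorted(frontier, key=lambda plan: plan.steam)
```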

 

Because the target plan was not implemented exactly, it was not possible to use the target production forecast as a benchmark for actual production. Model predictivity was instead tested by inputting the actual injection rates, along with operational constraints and activity, into the existing model. While the model response was quite close to true production, it underpredicted slightly. Analysis showed that the underlying cause was the over-/underburden heat-loss model, which predicted greater heat loss than actually occurred. This made the predicted reservoir temperature too low, which in turn lowered the production forecast.
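
Schematically, this benchmarking step amounts to replaying the injection rates that were actually implemented through the previously trained model and comparing the response with measured production; every name below is a hypothetical placeholder rather than a detail from the paper.

```python
import numpy as np

# Replay what was actually done in the field through the existing model.
actual_controls = load_actual_injection_history()    # hypothetical data access
observed_oil = load_actual_production_history()      # hypothetical data access
predicted_oil = np.asarray(forecast_fn(trained_model, actual_controls))

# A positive value indicates the model underpredicts production, as observed here.
underprediction = float(np.mean(np.asarray(observed_oil) - predicted_oil))
print(f"Mean model underprediction: {underprediction:.1f} B/D of oil")
```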

Conclusions

  • The machine-learning-assisted, physics-based modeling technique that had been the subject of an earlier paper (SPE 185507) was applied in a heavy-oil steamflood and was the basis of a significant change in field-development strategy. The model was used to benchmark actual performance over the intervening period.
  • Fieldwide predictivity was within confidence intervals, supporting continued reliance on the model for future operating decisions.
  • Overall, this case study demonstrates that the prescriptive modeling capability of the technique is well suited to this kind of analysis and provides fast, actionable forecasts that would not be available by other means.
  • The reliability of the model is notable because it was not merely an academic exercise: it supported a steam-injection increase of more than 60% from the base case, a major operational change that involved significant extrapolation beyond historically observed conditions.
  • The ability to extrapolate beyond observed data is one of the key benefits of this modeling technique over simple machine-learning approaches.
  • The results of a 15-year, multiobjective optimization plan, calculated with a model built the year before paper SPE 193680 was written, accurately predicted significant upside from a nearly twofold injection increase. The resulting operational change corresponded to significant incremental production beyond the expected base-case decline.
This article, written by JPT Technology Editor Judy Feder, contains highlights of paper SPE 193680, “Implementation and Assessment of Production Optimization in a Steamflood Using Machine-Learning-Assisted Modeling,” by Pallav Sarma, SPE, Ken Lawrence, Yong Zhao, Stylianos Kyriacou, and Delon Saks, Tachyus, prepared for the 2018 SPE International Heavy Oil Conference and Exhibition, Kuwait City, 10–12 December.

JPT, 01 January 2020, Volume 72, Issue 1
