Business

Simulation and Uncertainty: Lessons From Other Industries

The authors discuss various methods for accounting for uncertainty in reservoir simulation and what we can learn from other industries to enhance our modeling.


Introduction

Compared with other industries, petroleum engineering has both led and lagged in developments in hardware and software. Since the emergence of modern reservoir simulators in the late 1970s, it has led in the efficient implementation of sparse matrix algebra and the solution of partial differential equations. It was also at the forefront of developing applications using high-performance computers such as the Cray.

However, the industry has been viewed from the outside world as “tribal (but friendly),” and in some areas, such as nonlinear optimization, risk and uncertainty, experimental design, and the robust application of statistics, other industries have much to teach us.

A Brief History

Commercial reservoir simulators made their appearance in the 1970s, and prototype tools for history matching and experimental design commenced in the mid-1980s. A variety of optimization techniques were applied in the following years, but there was a significant delay before the application to history matching of modern algorithms such as quasi-Newton, which had been formulated and used in other industries since the late 1970s. Steepest-descent methods remained stubbornly pervasive for some time.

The 1990s saw a growing interest in the use of evolutionary algorithms (or genetic algorithms), and commercial and internal tools for reservoir engineering started to emerge in the early 2000s. At the same time, advances were being made in other industries in the field of engineering optimization, where the function being optimized involved a time-consuming simulation. These approaches used a proxy or surrogate model, and the first history-matching tools based on these concepts were released in 2001.

Over the years, these tools have amply demonstrated their value for history matching and become part of the standard toolkit for many practicing engineers. However, uncertainty quantification for reservoir simulation is still at a relatively immature stage, and some question remains regarding the validity of currently available tools and workflows.

In recent years, several SPE meetings have been devoted to probabilistic uncertainty quantification and the design of experiments, and a series of reports has highlighted their importance and the technical challenges involved. This focus will surely increase as oil companies and regulatory bodies become increasingly interested in environmental low-probability/high-impact events.

As Andre Bouchard, manager of reservoir engineering technology at ConocoPhillips, confirms, “Embracing probabilistic analysis in the quantification of reservoir dynamics allows us to extract information vital to the planning and execution of robust development strategies. The last decade has challenged the deterministic cultural paradigm that has been prevalent across our industry. With the advent of new technological advances, there is a strong motivation and opportunity to improve how we manage decisions under uncertainty.”

Monte Carlo and Risk

Most engineers have some acquaintance with risk-analysis tools such as Crystal Ball, @Risk, and ModelRisk. These are often used by economists to evaluate reservoirs, and a typical workflow involves an engineer passing low-, medium-, and high-production profiles from simulation to the economist for risk analysis. These uncertainty techniques are now starting to be used directly by engineers as part of their simulation workflow.

These risk tools are built around Monte Carlo (MC) simulation, a numerical method for approximating integrals of functions whose analytical form is unknown. Engineers usually see the results as S curves, or cumulative probability curves. MC methods are used in many fields, from quantum chromodynamics to financial modeling. The moniker for the numerical method was coined during the Manhattan Project, and MC has been used in the nuclear and defense industries ever since. In the financial world, the new rage is for quants and analytics, and MC is being used to analyze “big data” with very large data and model dimensions to predict how we might spend our disposable income.
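
To make the idea concrete, here is a minimal sketch in Python (not from the article) that treats recoverable volume as the product of a few assumed input distributions, estimates its mean together with the MC standard error, and reads percentiles off the resulting S curve. The distributions and numbers are purely illustrative.

# Minimal MC sketch (illustrative only): estimate the distribution of a quantity
# whose analytical form we pretend is unknown, here recoverable volume built from
# assumed inputs for area, thickness, and recovery factor.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

area = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n)        # km^2 (assumed)
thickness = rng.lognormal(mean=np.log(20.0), sigma=0.2, size=n)  # m (assumed)
recovery = rng.beta(a=4, b=6, size=n)                            # fraction (assumed)

volume = area * thickness * recovery

# A point estimate (an integral approximated by an average) and its MC standard error.
mean = volume.mean()
std_err = volume.std(ddof=1) / np.sqrt(n)
print(f"mean = {mean:.2f} +/- {std_err:.2f} (1 standard error)")

# The S curve is just the empirical cumulative probability of the samples;
# here we read off its 10th, 50th, and 90th percentiles.
q10, q50, q90 = np.percentile(volume, [10, 50, 90])
print(f"10th = {q10:.2f}, 50th = {q50:.2f}, 90th = {q90:.2f}")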

MC methods, which cover a wide range of different algorithms, are not without potential pitfalls, and some have said these pitfalls contributed to the financial crash of 2008. The extent of these practical limitations may surprise some engineers. Three commonly overlooked pitfalls are

  • Failing to consider the inherent numerical error in MC
  • Selecting improper sampling or probability models
  • Incorrect assumptions regarding independence between variables

Fig. 1 shows S curves for a known solution with 32 variables, some of them correlated. An MC simulation with 100,000 samples was repeated 100 times to generate 100 S curves, and the minimum, mean, and maximum of the curves are plotted. The magnitude of the numerical errors depends on a number of factors, prominent among which are the number of uncertain variables being sampled, the number of samples used in the MC simulation, and the strictness of the history-match criteria (or the magnitude of the correlations).

Fig. 1—S curves for a known solution with 32 variables.
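
The experiment behind Fig. 1 can be imitated on a toy problem. The sketch below is an assumption-laden stand-in, not the article's 32-variable case: it repeats a modest MC run 100 times and reports how much the 90th percentile of the S curve moves from run to run, which is precisely the numerical error discussed above.

# Toy version of the Fig. 1 experiment: repeat an MC run many times and look at
# the run-to-run scatter in one percentile of the resulting S curve.
import numpy as np

rng = np.random.default_rng(seed=2)
n_samples = 10_000   # samples per MC run (the article's example used 100,000)
n_repeats = 100      # number of repeated MC runs

p90_values = []
for _ in range(n_repeats):
    # Assumed response: a nonlinear combination of two correlated inputs.
    z = rng.multivariate_normal(mean=[0, 0],
                                cov=[[1.0, 0.6], [0.6, 1.0]],
                                size=n_samples)
    response = np.exp(0.5 * z[:, 0]) * (1.0 + 0.3 * z[:, 1]) ** 2
    p90_values.append(np.percentile(response, 90))

p90_values = np.array(p90_values)
print(f"90th percentile across runs: min={p90_values.min():.3f}, "
      f"mean={p90_values.mean():.3f}, max={p90_values.max():.3f}")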

 

Fortunately, MC is an active topic of research at many major universities, and the practical control of numerical error is continually improving. What is more difficult is determining how many samples are required (repeatability is not sufficient, because it is very easy to repeat the wrong answer), and MC tools normally have to be tuned by the practitioner. This tuning is analogous to the timestep tuning that is a normal part of the simulation workflow.

The probability models for variables need to be chosen carefully in an appraisal or greenfield situation. Uniform and triangular distributions are known to be very rare in the real world, yet are often used, and the standard fallback is the Gaussian distribution, which may not always be appropriate.

When calculating prediction uncertainty following history matching, probability models are derived from some measure of fit between simulation and historical data. The choices here require a careful understanding of the reliability of the measurements at different times and for different observed quantities, and of their effect on future predictions. Good history matches tend to follow multiple narrow, curved valleys in a high-dimensional variable space.
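
One common, though by no means universal, choice is to turn a measurement-error-weighted sum-of-squares misfit into Gaussian-likelihood weights. The sketch below illustrates that choice; the observed rates, measurement errors, and candidate realizations are invented for illustration, and the exponential weighting is an assumption rather than a recommendation from the article.

# Hedged sketch: convert a history-match misfit into probability weights for
# prediction uncertainty, using an assumed Gaussian-likelihood form.
import numpy as np

def misfit(observed, simulated, sigma):
    """Measurement-error-weighted sum of squares for one model realization."""
    residual = (observed - simulated) / sigma
    return float(np.sum(residual ** 2))

# Observed rates with (assumed) measurement uncertainty, and three candidate models.
observed = np.array([500.0, 480.0, 450.0, 400.0])
sigma = np.array([25.0, 25.0, 30.0, 40.0])
simulated_models = np.array([
    [510.0, 470.0, 455.0, 410.0],   # realization 1
    [560.0, 520.0, 480.0, 430.0],   # realization 2
    [495.0, 485.0, 440.0, 390.0],   # realization 3
])

m = np.array([misfit(observed, s, sigma) for s in simulated_models])
weights = np.exp(-0.5 * (m - m.min()))   # subtract the minimum for numerical stability
weights /= weights.sum()
print("normalized likelihood weights:", np.round(weights, 3))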

An incorrect assumption of independence between MC variables always artificially adds conservatism to MC results. Not all tools used to perform MC simulation can sample distributions of dependent variables. New methods have been developed for use in the medical and pharmaceutical industries that enable sampling of very complex, high-dimensional distributions of dependent and correlated variables.

If an appraisal field has 50 faults and their transmissibilities are treated independently, the risk analysis will average out the faults and give a completely different S curve and risk profile compared with a study applying a single fault-transmissibility parameter to all the faults. The latter will give a much wider P10–P90 interval and dramatically increase the chances of a heavily compartmentalized system.
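
A toy calculation makes the point. In the sketch below the response of the field to the 50 fault multipliers is an assumed analytical function rather than a simulator, but it shows how independent sampling averages the faults out while a single shared multiplier preserves the full spread between the 10th and 90th percentiles.

# Toy illustration of the 50-fault example (assumed response model, not a simulator).
import numpy as np

rng = np.random.default_rng(seed=3)
n, n_faults = 100_000, 50

def recovery(mult):
    # Assumed response: recovery degrades as the average fault transmissibility drops.
    return 100.0 * mult.mean(axis=-1) ** 0.5

# Case 1: each fault gets its own independent log-uniform multiplier in [0.01, 1].
independent = 10 ** rng.uniform(-2, 0, size=(n, n_faults))
# Case 2: one multiplier shared by all 50 faults.
shared = np.repeat(10 ** rng.uniform(-2, 0, size=(n, 1)), n_faults, axis=1)

for name, mult in [("independent", independent), ("shared", shared)]:
    r = recovery(mult)
    lo, hi = np.percentile(r, [10, 90])
    print(f"{name:12s}: 10th-90th percentile range = {lo:.1f} to {hi:.1f}")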

Use of Surrogates

Surrogates, or proxies, are fast approximations to full-scale engineering simulations. They have been used extensively for more than a decade in many industries—including the aerospace industry (optimizing wing design with complex air-flow simulations) and weather forecasting—and are an important part of the toolkit in general engineering design, such as that for artificial knees and car components. Surrogates have been an active area of research for more than a decade at many leading universities throughout the world, and some of the original grand masters of optimization have made significant contributions to their theoretical and practical implementation. They are indeed a natural extension of the quadratic approximations first formulated for optimization in the 1970s. They have also been extensively studied by statisticians, who are very familiar with problems of fitting models to data.

In the reservoir engineering world, surrogates have been found useful by many companies for accelerating the history-matching process, and they can cope with the large number of samples required for MC simulations. As surrogates become more mature, some of the reported performance issues currently associated with them will disappear. However, the wheel has yet to turn full circle, and no engineer will rely purely on a surrogate for uncertainty quantification without having a suitable probabilistic set of reservoir simulations.
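
As a flavor of how a proxy-based workflow hangs together, the sketch below fits a simple quadratic response surface (far more modest than the surrogates used in commercial tools) to a handful of runs of a stand-in "expensive" function and then performs the MC sampling on the cheap proxy. Every function and number in it is an illustrative assumption.

# Hedged sketch of a proxy/surrogate workflow: fit a quadratic response surface to
# a few expensive runs, then do the cheap MC on the proxy.
import numpy as np

rng = np.random.default_rng(seed=4)

def expensive_simulation(x):
    """Placeholder for a full reservoir simulation (assumed toy response)."""
    return 50.0 + 8.0 * x[0] - 3.0 * x[1] + 2.0 * x[0] * x[1] - 1.5 * x[1] ** 2

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# A small set of design runs (random here; a real study would use an experimental
# design), evaluated with the expensive model.
X_design = rng.uniform(-1, 1, size=(20, 2))
y_design = np.array([expensive_simulation(x) for x in X_design])

coeffs, *_ = np.linalg.lstsq(quad_features(X_design), y_design, rcond=None)

# Cheap MC on the fitted proxy.
X_mc = rng.uniform(-1, 1, size=(100_000, 2))
y_proxy = quad_features(X_mc) @ coeffs
print("proxy-based percentiles (10th, 50th, 90th):",
      np.round(np.percentile(y_proxy, [10, 50, 90]), 2))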

High-Performance Computing

In the oil industry, Linux clusters have emerged rapidly since 2000, and most medium and large oil and gas companies have access to large clusters. For other applications, such as Internet search engines and molecular modeling, huge server facilities have been built. On a smaller scale, it is possible to purchase, for a moderate sum, a 64-processor box (four sockets, each with a 16-core processor) to sit under every engineer’s desk.

While “embarrassingly parallel” software can take full advantage of this computing power, a single reservoir simulation is limited in the amount of parallelism it can achieve. There is much current interest in the use of graphics processing units for highly parallel basic numerical operations; they are being used, for example, to increase throughput 10-fold for automated seismic interpretation. In contrast, reservoir simulation is a weaker candidate because of the sparse structure of its matrices.

Part of this changing paradigm, enabled by computing power and driven by the need for uncertainty quantification, is to shift focus from parallelizing a single simulation to running multiple simultaneous serial simulations. Controlling software that can manage multiple runs efficiently with large volumes of data while maintaining a responsive user interface is a challenge and may require radical architectural changes to current tools.
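
A bare-bones version of that pattern, using only the Python standard library, might look like the sketch below; launch_simulation and the eight-worker pool are hypothetical placeholders for whatever simulator and scheduling policy a real tool would use.

# Sketch of the "many serial runs in parallel" pattern (placeholders only).
from concurrent.futures import ProcessPoolExecutor, as_completed

def launch_simulation(case_id):
    """Run one serial simulation case; a real tool would invoke the simulator here
    (for example via a subprocess call) and collect its result files."""
    return f"case_{case_id} finished"   # placeholder result

if __name__ == "__main__":
    cases = range(64)                   # e.g., one case per sampled parameter set
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(launch_simulation, c) for c in cases]
        for future in as_completed(futures):
            print(future.result())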


The Curse of Dimensionality

It is well known that intuition breaks down as the number of dimensions increases. There is a classic problem in physics, Olbers’ paradox (“Why is the sky dark at night?”), which observes that the night sky should be bright because, as we look farther from the Earth, the increase in the number of stars is balanced by the decrease in the light received from each individual star.

In a simulation world of 200 unknown variables, the number of acceptable history matches explodes and this issue becomes acute. Any probabilistic forecast based on some sum-of-squares weighting is doomed to failure. Only a likelihood-based approach and sophisticated robust MC methods can overcome these basic laws of mathematics.
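
A standard illustration of this breakdown (not from the article, but the geometry is the same) is how little of a hypercube lies inside its inscribed ball once the dimension grows: random samples end up in the corners, which is one way to see why acceptable history matches become so hard to hit by naive sampling.

# How much of a unit hypercube lies inside its inscribed ball, by dimension.
import numpy as np

rng = np.random.default_rng(seed=5)
n = 200_000

for dim in (2, 5, 10, 20):
    points = rng.uniform(-0.5, 0.5, size=(n, dim))
    inside = np.sum(np.sum(points ** 2, axis=1) <= 0.25)   # ball of radius 0.5
    print(f"dim={dim:3d}: fraction inside inscribed ball = {inside / n:.5f}")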

Validation

Although the term “probabilistic uncertainty quantification” is becoming commonly used in our industry, few workflows and software products are formulated in a fully probabilistic manner. The reality is that most workflows generate a set of possible simulation cases without any probabilistic quantification. Even where attempts are made at a fully probabilistic quantification, validation of the results has limited visibility, and there is little industry agreement on suitable probability models.

Several important and illuminating uncertainty quantification studies have been based on the PUNQ-S3 problem, and these have demonstrated the range of uncertainty results provided by different approaches. It is less clear what the underlying cause of this wide range is.

In the realm of nonlinear and global optimization, an extensive test suite is used by researchers to test their latest algorithms. In the reservoir simulation world, a set of SPE models is used for validation. Any reputable simulation vendor can be asked for their results against these tests prior to purchase.

No similar test suite exists to validate uncertainty tools being used. The current situation tends to be “trust me,” or “let’s compare apples and oranges and compare incorrect answers with incorrect answers.” The reality is that limitations in algorithms or errors in implementation can very easily give rise to completely invalid and wildly inaccurate results, as has been experienced frequently in other industries.

The oil industry needs to validate the uncertainty tools being used, starting from very basic tests of different degrees of complexity and size where the S curve is known analytically. The next stage would be to compare results against an S curve generated by running simple simulation models (including PUNQ-S3) tens of millions of times to MC convergence, including modeling of some of the difficult S-curve behaviors, such as a maximum limit on cumulative oil. This robust, scientific approach to validation, and to understanding algorithm limitations, is necessary before we use these techniques in our field decision making. Without it, we may find that our predictions fit the future only by chance, in the same way that one sometimes wins at Monte Carlo without any increase in confidence in the fairness of the dice being thrown.
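
The first rung of such a test suite could be as simple as the sketch below: a weighted sum of independent normal variables has an analytically known S curve, so the MC percentiles can be checked against the exact answer. The weights and the use of scipy here are illustrative choices, not a proposed standard.

# Validate an MC S curve against a case with a known analytical answer.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=6)
weights = np.array([2.0, -1.0, 0.5])
means = np.array([10.0, 4.0, 7.0])
sigmas = np.array([1.0, 2.0, 0.5])

# Exact distribution of the weighted sum of independent normals.
exact_mean = float(weights @ means)
exact_sigma = float(np.sqrt(np.sum((weights * sigmas) ** 2)))

# MC estimate of the same S curve.
samples = rng.normal(means, sigmas, size=(1_000_000, 3)) @ weights

for p in (0.1, 0.5, 0.9):
    exact_q = norm.ppf(p, loc=exact_mean, scale=exact_sigma)
    mc_q = np.quantile(samples, p)
    print(f"p={p:.1f}: exact={exact_q:.4f}  monte-carlo={mc_q:.4f}")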

Use and Abuse of Probability

Many examples exist where well-meaning practitioners in probability apply all the correct formulas and tools but get the answer completely wrong. Classic cases include the Monty Hall problem—of which it has been said “No other statistical puzzle comes as close to fooling all the people all the time”—and a case of an appeal against a conviction of a mother whose two babies died of sudden infant death syndrome. Another example, which has been analyzed incorrectly in most publications and discussion groups, is the “I have a boy born on a Tuesday” question.
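
For the skeptical reader, a few lines of simulation settle the Monty Hall question. This sketch is an illustration only; the always-switch strategy wins about two-thirds of the time, which is exactly where unaided intuition goes wrong.

# Quick Monte Carlo check of the Monty Hall problem.
import random

def play(switch, rng=random):
    prize = rng.randrange(3)
    choice = rng.randrange(3)
    # The host opens a door that is neither the player's choice nor the prize.
    opened = next(d for d in range(3) if d != choice and d != prize)
    if switch:
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == prize

trials = 100_000
wins_switch = sum(play(switch=True) for _ in range(trials))
wins_stay = sum(play(switch=False) for _ in range(trials))
print(f"switch wins: {wins_switch / trials:.3f}, stay wins: {wins_stay / trials:.3f}")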

Similar dangers exist in the realm of reservoir simulation. First, we often take a single parameterized simulation model out of many possible geological realizations—this immediately shifts (biases) the mean and reduces our uncertainty—and we then compound the abuse by making untested assumptions about the correlations between parameters, which affects downside risk. It may not be until we have carried out an expensive but ineffective water-injection plan that we realize the faults are mainly sealed, and that the assumed independence had effectively excluded this possibility from the original economic evaluation by pushing it to the extreme tail of the S curve.

Conclusions

Given limitations in our current software tools, limitations in our knowledge of the static properties of the reservoir, uncertainty about parameter correlations, and unpredictability of future operational decisions, it is tempting to give up and go back to the old ways—“best history-matching case ±10%.”

However, in the last decade, we have seen major advances in computing power, software, and algorithms, and, if the industry continues to invest in training young engineers in the skills required, particularly in probability and statistics, it will be able to quantify uncertainty with increasing understanding, rigor, and validity. This will then enable the engineering alchemist’s dream—optimization and decision making under uncertainty.


Nigel Goodwin has 35 years’ experience in the upstream and downstream oil and gas business, working both with software vendors and with oil companies. He was a cofounder of Energy Scitech, which led to the development of EnABLE, a commercial history-matching and uncertainty tool subsequently acquired by Roxar. He currently works as an independent consultant for Essence Products and Services. Goodwin specializes in optimization, statistics, and software design. He studied theoretical physics and holds BA and MMath degrees from Cambridge University and a PhD in theoretical physics from Manchester University.
Mark Powell founded Attwater Consulting in 1999 to provide project management, systems engineering, and risk assessment services to a diverse set of industries after a long, successful career in the aerospace, defense, and energy sectors. His faculty affiliations have included graduate schools at Stevens Institute of Technology, the University of Idaho, and the University of Houston. His current research interests are in the application of advanced Monte Carlo methods from biostatistics to solve complex problems in engineering, risk assessment, and project decisions. Powell holds a BS degree in physics and an MSE degree in aerospace engineering, both from The University of Texas at Austin.