Technology

Some Challenges for Monte Carlo Simulation



In this issue of TWA, we have attempted to shed light on the risks that surround our industry. Oil and gas companies face many types of risk, and those risks play an important role in how decisions are made. A company evaluating the viability of a project must identify the risks and then determine whether those risks are worth the potential reward. A person choosing a career path must determine the risk associated with that decision. Our first article in this section delves into Monte Carlo simulation, which is used to estimate the range of key variables when analyzing a project. The second article looks at evaluating a career change.—Tim Morrison, Pillars of the Industry Editor


Although Monte Carlo simulation has become much more widely accepted in the past 10 years (Murtha 1997), its applications in the oil and gas industry often lack imagination, focusing on volumetric estimates of resources and reserves. Monte Carlo simulation is the principal analytical tool of risk analysis. Its direct objective is always to estimate the range of something [e.g., reserves, project cost, business unit annual production, net present value (NPV), rate of return]. Often, these estimates play key roles in management decisions regarding investment projects. Some underdeveloped areas are presented below, in hopes that younger engineers and scientists with strong spreadsheet skills, some sense of model building, and a desire to solve useful problems can help enlighten their managers and colleagues and have some fun along the way.

Production Forecasts

Building a production forecast, whether deterministic or probabilistic, can range in complexity from an exponential decline curve (two parameters: initial rate and decline rate) to a full-scale reservoir simulation that includes schedules of wells, pipelines, facilities, and attendant constraints. I will outline three types of models (well by well, cluster of wells, and reservoir-simulator output) and how each can be made probabilistic.

1. Using decline curves for each well and building up field production as new wells come on line.

Starting with the simple exponential decline for each well, q = qi·exp(−at), we can add features of a plateau or constrained production, a ramp-up and multiple-stage decline, or hyperbolic decline. Thus, we can represent each well by one of these shapes. The number of parameters varies from two to six, and each can be represented by a probability distribution. Specifically, we treat initial or peak rate (qi), time to reach peak (t1), duration on peak (t2), duration of first decline (t3), and decline rates (a1 and a2 if necessary) as input distributions to a Monte Carlo model. Fig. 1 illustrates the most complex pattern, with three realizations obtained by altering some of the parameters.

Fig. 1—Three realizations of a production-forecast model.
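
To make the idea concrete, here is a minimal sketch of such a model in Python, using a simplified ramp-up/plateau/single-decline shape. The triangular distributions, parameter ranges, and iteration count are illustrative assumptions, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(7)

def well_profile(qi, t1, t2, a1, months=120):
    """Monthly rate profile: linear ramp-up over t1 months, t2 months
    on plateau, then exponential decline at nominal rate a1 (1/month)."""
    t = np.arange(months)
    rate = np.where(t < t1, qi * (t + 1) / t1, qi)               # ramp-up
    declining = t >= t1 + t2
    rate = np.where(declining, qi * np.exp(-a1 * (t - t1 - t2)), rate)
    return rate

# One realization per iteration: sample each shape parameter from an
# input distribution (all choices below are illustrative).
realizations = np.array([
    well_profile(qi=rng.triangular(800, 1000, 1300),    # peak rate, B/D
                 t1=int(rng.integers(3, 7)),            # months to peak
                 t2=int(rng.integers(6, 18)),           # months on peak
                 a1=rng.triangular(0.02, 0.03, 0.05))   # decline, 1/month
    for _ in range(1000)
])

# Rate-months as a simple cumulative proxy; report P10/P50/P90.
p10, p50, p90 = np.percentile(realizations.sum(axis=1), [10, 50, 90])
print(f"P10/P50/P90 of cumulative (rate-months): {p10:.0f}/{p50:.0f}/{p90:.0f}")
```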


Next, we incorporate a drilling and completion schedule, bringing subsequent wells on line in a timely fashion. Finally, we must consider constraints, at both the well level and the system level. The timing of new wells may require that the timestep size in the model be changed. Suppose, for example, that for one well an annual timestep is adequate, but the second well takes approximately 90 days to drill. Then the simplest thing is to change to a 3-month timestep, so that subsequent wells come on line in each new time period. Similarly, if the drilling and completion time is roughly 1 month, a 1-month timestep will make the model easier to build.

Of course, reducing the timestep size increases the number of timesteps and makes the model more unwieldy. When the drilling and completion time is not exactly a month or a quarter, one can usually find a way to make the first-period production some appropriate fraction of that period and to compensate for that in the second period by appealing to the rate-cumulative curve (get the cumulative for the partial first period and solve for the corresponding rate). This is easy for exponential decline, where the rate-vs.-time curve is linear on a semilog graph and the rate-vs.-cumulative curve is linear on Cartesian coordinates. For general hyperbolic decline, the transition is more complex.
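For exponential decline, that bookkeeping is only a few lines. Here is a sketch in Python; the numbers in the example call are arbitrary.

```python
import numpy as np

def partial_first_period(qi, a, frac, dt=1.0):
    """Exponential decline q = qi*exp(-a*t).  For a well producing only
    a fraction `frac` of its first timestep (length dt), return the
    cumulative for the partial period and the rate entering period 2,
    via the linear rate-vs.-cumulative relation q = qi - a*Np."""
    np_partial = (qi / a) * (1.0 - np.exp(-a * frac * dt))  # partial cumulative
    q_next = qi - a * np_partial            # equals qi*exp(-a*frac*dt)
    return np_partial, q_next

cum, q2 = partial_first_period(qi=1000.0, a=0.3, frac=0.5)  # half a period
print(f"partial-period cumulative {cum:.0f}; rate entering period 2 {q2:.0f}")
```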

Constraints can be even more of a challenge. A constraint on a single well is handled just like the partial month above, namely by appealing to the rate-cumulative curve once the constraint is no longer binding. For a group of wells, the constraint requires delaying new wells until there is excess capacity. So at each timestep, one has to check which wells are available and how much excess capacity exists.
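One way to organize that bookkeeping is a greedy schedule: at each timestep, compute the rate of the wells already on line, bring on waiting wells only while spare capacity exists, and cap the total. The Python sketch below assumes the unconstrained per-well profiles are given; it is one illustrative scheme, not the only one.

```python
import numpy as np

def field_forecast(well_rates, capacity, start_order):
    """Greedy capacity-constrained schedule.  `well_rates` is an
    (n_wells, n_steps) array of unconstrained per-well profiles; each
    candidate well comes on line at the first timestep with spare
    system capacity, and the field total is capped at `capacity`."""
    n_wells, n_steps = well_rates.shape
    start = np.full(n_wells, -1)                  # -1 = not yet on line
    field = np.zeros(n_steps)
    for t in range(n_steps):
        rate = sum(well_rates[w, t - start[w]] for w in range(n_wells)
                   if start[w] >= 0 and t - start[w] < n_steps)
        for w in start_order:                     # fill spare capacity
            if start[w] < 0 and rate + well_rates[w, 0] <= capacity:
                start[w] = t
                rate += well_rates[w, 0]
        field[t] = min(rate, capacity)
    return field, start

# Example: three identical declining wells against a 1500-B/D system limit;
# wells 2 and 3 are delayed automatically until capacity frees up.
rates = np.tile(1000 * np.exp(-0.05 * np.arange(36)), (3, 1))
total, starts = field_forecast(rates, capacity=1500.0, start_order=[0, 1, 2])
print(starts)   # start timestep of each well
```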

2. Using a pattern forecast for a field.

A second method of generating production forecasts is to model a field or a group of wells with a curve that has the general ramp-up/plateau/decline shape; in fact, we can think of Fig. 1 again. This curve may be developed heuristically by a group of experts who estimate how long it will take to drill the wells necessary to reach peak rate, how much of the estimated ultimate recovery (EUR) would be produced while on peak rate, and so on.

Thus, the curve would be determined by estimating four probability distributions (a sketch follows the list):

  • Peak rate (possibly from facilities design).
  • Time to reach that rate (based perhaps on estimates of individual well productivities and drilling schedules).
  • Duration on or near peak rate (possibly from rules of thumb about the percentage of EUR obtained before depletion sets in).
  • Type of decline.
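
Here is a hedged Python sketch of the heuristic. The plateau length is derived from the fraction of EUR produced on peak, as suggested above; every distribution is an illustrative placeholder for what the expert group would actually supply.

```python
import numpy as np

rng = np.random.default_rng(11)

def field_pattern(months=240):
    """One realization of a ramp-up/plateau/decline field profile, with
    plateau length derived from the fraction of EUR produced on peak."""
    eur = rng.triangular(80, 100, 140) * 1e6           # bbl, illustrative
    peak = rng.triangular(40, 50, 65) * 1e3            # B/D, from facilities
    t_ramp = int(rng.integers(12, 36))                 # months to reach peak
    frac_on_peak = rng.triangular(0.30, 0.40, 0.55)    # share of EUR on peak
    a = rng.triangular(0.010, 0.015, 0.025)            # decline, 1/month
    t_plateau = int(frac_on_peak * eur / (peak * 30.4))  # months on peak
    t = np.arange(months)
    q = np.where(t < t_ramp, peak * (t + 1) / t_ramp, peak)
    off = t >= t_ramp + t_plateau
    return np.where(off, peak * np.exp(-a * (t - t_ramp - t_plateau)), q)

profiles = np.array([field_pattern() for _ in range(1000)])
p10, p50, p90 = np.percentile(profiles, [10, 50, 90], axis=0)  # bands vs. time
```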

3. Reservoir-simulator-generated forecast.

An example of a complex group-of-wells forecast would be one generated by one or more numerical reservoir simulators along with associated geologic models. Other examples would be analytical or simulation models of various types. One client has a complex program that generates the response to a steamflood. Engineers can alter the inputs and create multiple production forecasts.

Although the simulator generates a deterministic production forecast, it has become popular in recent years to use experimental design methods to create multiple runs of the simulator(s), thereby generating a spectrum of realizations for the production forecast.

Among the parameters that might change from one run to the next are pore volume (sometimes separated into porosity and gross pay), trapped-gas saturation, water-influx capability, permeability multiplier, and relative permeability to gas or oil. The list can be long, but should be shortened by simple and obvious criteria:

  • The perturbations of the inputs are relatively easy to implement.
  • Experts agree that the candidate variables are uncertain.
  • Perturbing the variables would probably make a difference in the forecast.

In a case where there is production history, a fourth criterion becomes apparent:

  • A new realization (with some variables perturbed) survives the history match.

When the simulator is coarse, it may be possible to actually run a Monte Carlo simulation in which the reservoir simulator is executed on each iteration (Billiter and Dandona 1999). More often, a surrogate model (sometimes called a response surface or a proxy equation) is created and used for the Monte Carlo simulation (King et al. 2005; Narahara et al. 2004; Yeten et al. 2005). This subject is still in its infancy, but already there are three commercial packages and many proprietary processes to handle the busywork. One must also be wary of requests for a P10 and P90 production forecast, which are nonsense concepts until one gives them clear definitions (Murtha 2001).
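To illustrate the proxy idea (and not any particular commercial package), the Python sketch below runs a two-level factorial design against a stand-in "simulator," fits a response surface with main effects and two-way interactions, and then runs the Monte Carlo on the cheap proxy. The stand-in function and factor ranges are invented for the example.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

def simulator(pv, kmult, aquifer):
    """Stand-in for an expensive reservoir-simulator run; the formula
    is entirely made up for illustration."""
    return 50.0 * pv * (1.0 + 0.4 * np.log(kmult)) + 8.0 * aquifer

# Two-level full factorial design in coded units (-1/+1), three factors.
ranges = {"pv": (0.8, 1.2), "kmult": (0.5, 2.0), "aquifer": (0.0, 1.0)}
runs, y = [], []
for coded in product([-1, 1], repeat=3):
    actual = [lo + (c + 1) / 2 * (hi - lo)
              for c, (lo, hi) in zip(coded, ranges.values())]
    runs.append(coded)
    y.append(simulator(*actual))

# Fit the proxy (response surface): main effects + two-way interactions.
design = np.array([[1, a, b, c, a*b, a*c, b*c] for a, b, c in runs], float)
beta, *_ = np.linalg.lstsq(design, np.array(y), rcond=None)

# Monte Carlo on the cheap proxy instead of on the simulator itself.
u = rng.uniform(-1, 1, size=(10_000, 3))                 # coded-unit samples
X = np.column_stack([np.ones(len(u)), u[:, 0], u[:, 1], u[:, 2],
                     u[:, 0] * u[:, 1], u[:, 0] * u[:, 2], u[:, 1] * u[:, 2]])
print(np.percentile(X @ beta, [10, 50, 90]))             # forecast spread
```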
Before leaving this subject, I urge anyone building production forecasts to find a way to account for the possibility of delays. It is relatively easy to include a delay of the start date for any forecast: use a discrete distribution with values of, say, 0-, 3-, 6-, and 9-month delays and probabilities adding up to 1. One client, who had heretofore assumed that all projects would begin on time, found this a very useful device when soliciting production forecasts for new projects in each business unit. Simply requiring each business unit to assign the probabilities made a noticeable difference in the annual production estimates. Delays for individual wells and for facilities usually can be handled within the distributions for the other parameters.
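A delay distribution of this kind takes only a few lines of Python; the probabilities below are placeholders for what each business unit would assign.

```python
import numpy as np

rng = np.random.default_rng(5)

# Discrete start-delay distribution: months of delay, probabilities sum to 1.
delays = np.array([0, 3, 6, 9])
probs = np.array([0.50, 0.25, 0.15, 0.10])           # elicited per project
delay = rng.choice(delays, size=10_000, p=probs)     # one draw per iteration

# Effect on first-calendar-year production for an arbitrary base forecast.
base = 1000 * np.exp(-0.03 * np.arange(120))         # monthly rates
year1 = np.array([base[:12 - d].sum() for d in delay])
print(f"mean first-year production {year1.mean():.0f} "
      f"vs. no-delay case {base[:12].sum():.0f}")
```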

Teaching Managers How To Ask Smart Questions

While this may seem a bit condescending from your point of view, one important role you can play is to help your boss ask the right questions. By presenting results sensibly, you will help the good questions emerge. To illustrate the point, consider this short tale. A few years ago, we were building a cash-flow model for a company about to embark on a risky subsea completion plan to drill four deepwater gas prospects in the Gulf of Mexico. Each discovery would be drained by one well, which would tie into an existing pipeline. The wells were expensive, and the mobilization/demobilization times were long. Each prospect had a chance of failure and a considerable range of reserves for the success case.

At each stage in our model development, we presented to a small group headed by a vice president who was highly skeptical of probabilistic methods. One day, we presented a comparison between two strategies that differed essentially in drilling sequence. Our main illustration was an overlay of the two cumulative distributions of NPV, one for each strategy. All of a sudden, the vice president stood up, pointed at the projector screen, and asked, “Does this say that Strategy A has a 22% chance of losing money, whereas Strategy B has only a 6% chance of losing money?” We said, “Yes; even though B has a bigger upside potential, its chance of losing money is quite small.” It was an epiphany! From that moment, the vice president was our best proponent of uncertainty analysis and had a knack for asking us to consider various options to run.

Here are some good questions that managers might ask.

  • What is the chance of finishing the drilling and completion before 15 March?
  • What is the chance that the cost of the pipeline will exceed $1.25 billion?
  • Which of these strategies is more likely to meet our target rate of return?

And here are some poor, but unfortunately real, requests.

  • Give me a business-unit oil-production estimate for 2006 that will be within 5% of the actual production. (The appropriate distribution for this quantity may have too large a coefficient of variation.) A better request would be for a value that we are 90% confident of reaching.
  • Give me a budget number that we will not exceed for the cost of this workover project. (Of course we can, but then we would be depriving other projects of the opportunity to be executed.) A better request would be for an estimate that has only a 15% chance of being exceeded.
  • Is the cost of this 15-well workover project less than the benefit? (Both are distributions, which are likely to overlap.)

You get the idea: Good questions are based on the distribution of some key output (NPV, rate of return, total cost, or field production). Most good questions start like this: What is the chance (probability, odds) that X, the key variable, is less than a (or greater than a, or between a and b)?
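Answering such a question from a simulation takes one line once the output samples are in hand. A tiny sketch, with a made-up NPV sample standing in for a real model's output:

```python
import numpy as np

rng = np.random.default_rng(2)
npv = rng.normal(loc=40.0, scale=55.0, size=10_000)   # $MM; stand-in output

print(f"P(NPV < 0)          = {np.mean(npv < 0):.1%}")
print(f"P(20 <= NPV <= 100) = {np.mean((npv >= 20) & (npv <= 100)):.1%}")
```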

Cost Estimates

Aside from the popular volumetric estimate of resources and reserves (the product of area, net pay, porosity, hydrocarbon saturation, shrinkage factor, and recovery efficiency), there is one conceptually simple Monte Carlo model that has been greatly underused: the probabilistic cost estimate. One can understand how complex production forecasts may have been too daunting at times, but what excuse can we have for not adding probabilistic line items to estimate the cost of drilling a well or building a pipeline? In the past 4 years, some of the large engineering and construction firms have begun to offer probabilistic estimates to their clients, and the smaller ones are at least talking about it. The American Assn. of Cost Engineers has held several seminars to teach this technique. Several majors have begun initiatives for best practices in probabilistic cost estimating (Peterson et al. 2005), and vendors have begun to market compiled programs. So there is hope yet!

Drilling cost is a good place to start because the stages are essentially sequential, which means that rig times and delays are additive. You can model both cost and time. More-complex projects (platforms, pipelines, gas plants, and the like) require project-scheduling software to estimate time because multiple activities run concurrently, but even these scheduling programs have probabilistic versions. There are a few precautions about cost estimates: in particular, one must handle problems (or unlikely events) properly, and one must account for correlation.

The base probabilistic cost estimate should address only planned costs and time. Stuck pipe, lost circulation, and other events that cause delays and may lead to worse things should not be included in the distributions for the line items. Rather, one should estimate their likelihood of occurrence and the consequence when they do occur. For instance, you could have a 25% chance of stuck pipe (based perhaps on empirical data and discussion with experts) and a range of 5 to 20 days to fix the problem (again based on experience and judgment). On each iteration, a 0–1 variable is sampled, with a 25% chance of a 1. When a 1 does occur, a sample from the consequence distribution is included in the summation of time or cost. There may also be associated labor or material costs (in addition to the rig cost), which are included when the 1 is sampled.
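In Python, the 0–1 event variable and its consequence distribution look like this. The 25% chance and 5-to-20-day range come from the discussion above; the triangular modes, line items, and rig rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(17)
n = 10_000

# Planned line items only (triangular parameters are illustrative).
drill = rng.triangular(20, 25, 35, size=n)             # days
complete = rng.triangular(8, 10, 15, size=n)           # days

# Unplanned event: 25% chance of stuck pipe, 5-20 days to fix.
stuck = rng.random(n) < 0.25                           # 0-1 variable
fix_days = rng.triangular(5, 10, 20, size=n)           # consequence
total_days = drill + complete + stuck * fix_days       # added only on a 1

# Associated cost when the event occurs (rig rate and extras assumed).
rig_rate = 0.25                                        # $MM/day
extra_cost = stuck * (fix_days * rig_rate + rng.triangular(0.5, 1, 2, size=n))
print(np.percentile(total_days, [10, 50, 90]))
```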

Correlation between line items can be handled in different ways. For instance, one can simply build a correlation matrix for groups of line items. Another method is to identify risk factors that apply to certain line items. One major company has a model that requires filling out an incidence matrix of risk factors and the line items that they impact. A somewhat similar approach has been advocated by Campbell (1971) and Noor (2000).
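The correlation-matrix route can be sketched with a Gaussian copula: sample correlated normals, map them to uniforms, and push each through its own marginal distribution. The matrix entries and triangular marginals below are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(23)
n = 10_000

# Target correlation matrix for three related line items.
corr = np.array([[1.0, 0.7, 0.5],
                 [0.7, 1.0, 0.6],
                 [0.5, 0.6, 1.0]])

# Gaussian copula: correlated normals -> uniforms -> each marginal.
z = rng.multivariate_normal(np.zeros(3), corr, size=n)
u = stats.norm.cdf(z)
steel = stats.triang(c=0.4, loc=80, scale=50).ppf(u[:, 0])         # $MM
fabrication = stats.triang(c=0.3, loc=60, scale=40).ppf(u[:, 1])   # $MM
installation = stats.triang(c=0.5, loc=40, scale=30).ppf(u[:, 2])  # $MM
total = steel + fabrication + installation
print(np.percentile(total, [10, 50, 90]))
```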

Cost/Benefit Analyses

A standard technique of engineering-economics texts and courses is cost/benefit analysis. While the traditional method is deterministic, one can incorporate uncertainty, and metrics such as present worth or the benefit/cost ratio can then be viewed as distributions. So, instead of asking whether the benefit present worth exceeds the cost present worth, one might ask what percentage of the time the difference exceeds 0.

By way of illustration, an operating company has a large mature oil field, where well performance frequently deteriorates months or years after a well is put on line. Each year, engineers identify 15 or 20 candidates for workovers, which cost roughly between U.S. $1.5 million and $2.5 million each. Some workovers fail outright. Others greatly enhance performance for a few months, and then production starts to deteriorate. Some wells have been worked over two or even three times.

We were asked to help study this project because management had conflicting evidence about the efficacy of the program. They knew how much money was spent each year, and they tried to track the post-workover performance, but had limited success.

First, we built a cost model, using the wealth of data available. We then went further and compared estimated cost to actual cost and estimated time to actual time, both of which revealed some surprises (to management). We did this because each candidate well for the next year would have only a cost and time estimate available, and we wanted to predict the actual cost. Then we tackled the harder job of estimating the incremental production caused by the workover. The difficulty was that neither the preworkover nor the post-workover performance fit traditional decline patterns. So we created an alternative model of benefit by observing that wells generally produced 80–95% of their EUR in the first 3 years and that first-year production was a fairly good predictor of 3-year production, if one took some additional features into account. We therefore found a distribution for first-year production after a workover and another distribution for the ratio of 3-year to 1-year production, and we included the chance of failure.
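A compressed Python sketch of that benefit model: the $1.5 million to $2.5 million cost range comes from the text, while the failure probability, first-year distribution, 3-year/1-year ratio, and per-barrel margin are illustrative stand-ins for the fitted distributions.

```python
import numpy as np

rng = np.random.default_rng(31)
n, wells = 10_000, 20

# Cost per workover: U.S. $1.5-2.5 million (range from the text; the
# triangular mode is an assumption).
cost = rng.triangular(1.5, 1.9, 2.5, size=(n, wells))          # $MM

# Benefit model: chance of outright failure, a distribution for
# first-year incremental production, and a 3-year/1-year ratio.
fails = rng.random((n, wells)) < 0.20                          # assumed
year1 = rng.lognormal(mean=np.log(60_000), sigma=0.6,
                      size=(n, wells))                         # bbl, assumed
ratio3to1 = rng.triangular(1.3, 1.6, 2.0, size=(n, wells))     # assumed
margin = 25.0 / 1e6                                            # $MM/bbl, assumed
benefit = np.where(fails, 0.0, year1 * ratio3to1 * margin)

net_gain = (benefit - cost).sum(axis=1)          # aggregate over 20 wells
bang_for_buck = benefit.sum(axis=1) / cost.sum(axis=1)
print(f"P(program loses money) = {np.mean(net_gain < 0):.1%}")
print(f"median bang-for-buck   = {np.median(bang_for_buck):.2f}")
```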

Our ultimate outputs were two distributions: the net gain from the workover program—the aggregate for, say, 20 wells—and the bang-for-buck ratio, total benefit divided by total cost. These gave us a way of estimating the chance that a workover program would lose money and some measure of the return on investment.

As is often the case, along the way, we discovered some criteria for a good workover candidate. While it may not be possible to establish a pure ranking of prospects, we could group candidates into poor prospects and good prospects. Perhaps more importantly for the long run, we found some other parameters that had not been studied and for which data would be collected and put to good use in the future.

Monte Carlo simulation has come of age. Software is relatively inexpensive. It is commonly taught in undergraduate and graduate programs. Technical Interest Groups and users’ groups are active. Technical papers abound. Now it is time to extend the range of applications to help solve some really interesting problems and to help managers make better-informed decisions.

References

Billiter, T., and Dandona, A. 1999. Breaking a Paradigm: Simultaneous Gas Cap and Oil Column Production, World Oil, 38–44, also presented as paper SPE 49083 at the SPE Annual Technical Conference and Exhibition held in New Orleans, 27–30 September 1998.

Campbell, D.W. 1971. Risk Analysis. AACE Bulletin, 13(4–5).

King, G.R., Lee, S., Alexandre, P., Miguel, M., Blevens, M., Pillow, M., and Christie, G. 2005. Probabilistic Forecasting for Mature Fields With Significant Production History: A Nemba Field Case Study, paper SPE 95869, presented at the SPE Annual Technical Conference and Exhibition held in Dallas, 9–12 October.

Murtha, J.A. 1997. Monte Carlo Simulation: Its Status and Future (Distinguished Author Series), JPT, 361, also presented as paper SPE 37932 at the SPE Annual Technical Conference and Exhibition held in San Antonio, 5–8 October.

Murtha, J.A. 2001. Using Pseudocases To Interpret P10 for Reserves, NPV, and Production Forecasts, paper SPE 71789 presented at the SPE/SPEE Hydrocarbon Economics and Evaluation Symposium, Dallas, 2–3 April.

Narahara, G.M., Spokes, J.J., Brennan, D.D., Maxwell, G., and Bast, M. 2004. Incorporating Uncertainties in Well-Count Optimization With Experimental Design for the Deepwater Agbami Field, paper SPE 91012, presented at the SPE Annual Technical Conference and Exhibition held in Houston, 28–29 September.

Noor, I. 2000. Guidelines for Successful Risk Facilitating and Analysis. Cost Engineering, 42(4).

Peterson, S.K., De Wardt, J., and Murtha, J.A. 2005. Risk and Uncertainty Management—Best Practices and Misapplications for Cost and Schedule Estimates, paper SPE 97269, presented at the SPE Annual Technical Conference and Exhibition held in Dallas, 9–12 October.

Yeten, B., Castellini, A., Guyaguler, B., and Chen, W.H. 2005. A Comparison Study on Experimental Design and Response Surface Methodologies, paper SPE 93347, presented at the SPE Reservoir Simulation Symposium, 31 January–2 February.


Jim Murtha, a registered petroleum engineer, presents seminars and training courses and advises clients in building probabilistic models in risk analysis and decision making. Murtha is an industry-recognized expert on risk and decision analysis. He became an SPE Distinguished Member in 1999, received the 1998 SPE Award in Economics and Evaluation, served as 1996–97 SPE Distinguished Lecturer in Risk and Decision Analysis, and is principal author of the new chapter on Risk and Decision Analysis in the forthcoming SPE Petroleum Engineering Handbook. Since 1992, more than 3,500 professionals have taken his classes. He has published Decisions Involving Uncertainty—An @RISK Tutorial for the Petroleum Industry. In 25 years of academic experience, he chaired a math department, taught petroleum engineering, served as academic dean of a college, and coauthored two texts in mathematics and statistics. Murtha has a BS degree in mathematics from Marietta College, an MS degree in petroleum and natural gas engineering from Pennsylvania State U., and a PhD degree in mathematics from the U. of Wisconsin.