Industry Needs Re-Education in Uncertainty Assessment

It is clear the oil and gas industry recognizes the large uncertainty in which it operates. A search in the OnePetro technical paper database using the keywords “uncertainty” or “risk” returns more than 53,000 conference and journal papers. Yet, it is also clear that the industry does not know how to reliably assess uncertainty and that this inability negatively affects industry performance. Capen (1976) described the difficulty of assessing uncertainty. He pointed to massive capital overruns and low industry returns due to an almost universal tendency to underestimate uncertainty. Brashear et al. (2001) and Rose (2004) later documented the dismal performance of the industry in the last 10 to 20 years of the 20th century due to chronic bias and evaluation methods that do not account for the full uncertainty.

Although industry profitability may have improved in the past decade because of high oil prices, Neeraj Nandurdikar, in an October 2014 JPT article, “Wanted: A New Type of Business Leader to Fix E&P Asset Developments,” showed that the oil and gas industry continues to perform significantly below its estimates and expectations. He cited three ways that assets erode value, all of which relate to unreliable assessments of uncertainty: (1) production and reserves are overestimated, (2) capital costs are underestimated, and (3) development times are underestimated. And these do not include price estimates; the surprising oil price slide at the time of this writing has the potential to move industry performance from below expectations to below profitability.

While project evaluations can be affected by many different types of biases, these can be reduced fundamentally to two primary biases: overconfidence and directional bias.

Overconfidence is underestimation of uncertainty; i.e., our estimated distributions of uncertain quantities, such as reserves, are too narrow. They are too narrow because we do not consider all the possible outcomes. There is considerable evidence in the literature, both inside and outside the petroleum industry, of our general human tendency for overconfidence.

Directional bias results when the subset of possible outcomes considered is shifted in either the optimistic or pessimistic direction. There is also evidence that we are usually optimistic in our overconfidence; i.e., we fail to consider some possible negative outcomes, or we give greater weight to possible positive outcomes than to possible negative ones. As a result of these two primary biases, we make decisions with incorrect estimated distributions rather than the true distributions (Fig. 1).

Fig. 1—An incorrect estimated distribution due to overconfidence and optimism.

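To make the distortion in Fig. 1 concrete, the short Python sketch below narrows a hypothetical “true” reserves distribution (overconfidence) and shifts it upward (optimism). The lognormal shape and the bias parameters are assumptions chosen purely for illustration, not values from any study.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

# Hypothetical "true" distribution of reserves (MMbbl); lognormal chosen for illustration.
true_mu, true_sigma = np.log(50.0), 0.6
true_reserves = rng.lognormal(true_mu, true_sigma, size=n)

# Overconfidence: acknowledge only part of the true spread (sigma too small).
# Optimism: shift the location upward (mu too large). Both factors are assumed.
overconfidence_factor = 0.5
optimism_shift = 0.2
estimated_reserves = rng.lognormal(true_mu + optimism_shift,
                                   true_sigma * overconfidence_factor, size=n)

for label, sample in [("True", true_reserves), ("Estimated", estimated_reserves)]:
    p10, p50, p90 = np.percentile(sample, [10, 50, 90])
    print(f"{label:9s} P10={p10:6.1f}  P50={p50:6.1f}  P90={p90:6.1f} MMbbl")
```

The estimated range is both too narrow and centered too high, which is the combination that produces the portfolio disappointment discussed next.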
Unreliable estimation of uncertainty has serious consequences. Portfolio modeling by McVay and Dossary (2014) indicated that moderate, typical levels of overconfidence and optimism can result in average portfolio disappointment (estimated net present value (NPV) minus actual NPV) of 30% to 35% of estimated NPV. Greater amounts of overconfidence, optimism, and disappointment have been experienced in the industry, e.g., throughout the 1990s. Nandurdikar (2014) indicated that over the past 15 years, the average exploration and production (E&P) development delivered only 60% of the value promised at sanction.
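For readers who want to see how such a disappointment figure is computed, here is a minimal sketch under one plausible reading of the metric: the shortfall of actual NPV relative to estimated NPV, aggregated over the portfolio. The project values below are synthetic and illustrative only; they are not taken from McVay and Dossary (2014).

```python
import numpy as np

def portfolio_disappointment(estimated_npv, actual_npv):
    """Shortfall of delivered value: (estimated NPV - actual NPV) as a fraction of
    estimated NPV, aggregated over the portfolio. Positive values mean underdelivery."""
    estimated_npv = np.asarray(estimated_npv, dtype=float)
    actual_npv = np.asarray(actual_npv, dtype=float)
    return (estimated_npv.sum() - actual_npv.sum()) / estimated_npv.sum()

# Synthetic portfolio (million USD), for illustration only.
estimated = [120.0, 80.0, 45.0, 200.0, 60.0]
actual    = [ 95.0, 50.0, 40.0, 130.0, 35.0]
print(f"Portfolio disappointment: {100 * portfolio_disappointment(estimated, actual):.0f}%")
```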

Lookbacks and Calibration Key to Solution

Persistence of unreliable uncertainty assessment over several decades indicates that we need a step change in the way we assess uncertainty. Improved project evaluation methods can go only so far in correcting the problem. This is because a primary contributor to overconfidence and optimism is unknown unknowns—those unknowns we do not even think to include in our evaluations. The only way to measure overconfidence and optimism and account for unknown unknowns is to perform lookbacks and calibration, i.e., to keep track of probabilistic forecasts and compare actual results with the forecasts over time.

Reliability of probabilistic forecasts can be represented in a calibration chart in which the frequency of outcomes is plotted against the assessed probability of outcomes (Fig. 2). Reliable probabilistic forecasts that quantify the “true” uncertainty will fall on the unit-slope line; e.g., the actual value is less than the P10 estimate 10% of the time and the actual value is less than the P90 estimate 90% of the time (for a cumulative distribution function convention in which the P10 is the low number and the P90 is the high number). Fig. 2 shows the calibration for a set of forecasts that are both overconfident (slope less than 1) and optimistic (shifted upward). Note that this figure applies to value-based forecasts; optimistic cost-based forecasts would be shifted downward.

Fig. 2—A calibration chart for a set of forecasts that are both overconfident and optimistic (red).

Knowledge of overconfidence and optimism from calibration charts can then be used to improve future assessments. Estimated distributions of uncertain quantities can be adjusted as needed to eliminate overconfidence and optimism, which will widen the distributions and shift them in the appropriate direction. Tracking forecasts, comparing them with actual results, and using the comparisons to improve future forecasts should be a continuous process.
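As a rough illustration of the bookkeeping behind a chart like Fig. 2, the sketch below counts, for a set of tracked forecasts, how often the actual value fell at or below each assessed percentile; a well-calibrated set should return fractions close to the nominal probabilities. The forecasts and actuals here are invented placeholders, not data from any project.

```python
import numpy as np

# Tracked forecasts: assessed percentiles (P10 low, P90 high, per the CDF convention
# used above) plus the actual outcome observed later. All values are placeholders.
forecasts = [
    # (P10,   P50,   P90,  actual)
    (40.0,  55.0,  75.0,  32.0),
    (10.0,  14.0,  20.0,  16.0),
    (90.0, 120.0, 160.0,  85.0),
    (25.0,  35.0,  50.0,  22.0),
    (60.0,  80.0, 110.0,  64.0),
]

nominal = np.array([0.10, 0.50, 0.90])            # assessed cumulative probabilities
quantiles = np.array([f[:3] for f in forecasts])  # shape (n_forecasts, 3)
actuals = np.array([f[3] for f in forecasts])

# Fraction of forecasts whose actual fell at or below each assessed percentile.
observed = (actuals[:, None] <= quantiles).mean(axis=0)

for p, obs in zip(nominal, observed):
    print(f"Assessed P{int(p * 100)}: actual fell below it {100 * obs:.0f}% of the time")
```

Plotting the (nominal, observed) pairs gives the calibration chart; if actuals fall below the P10 far more often than 10% of the time, as in this toy set, future value-based ranges should be widened and shifted downward.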

Re-Education Needed

Persistence of unreliable uncertainty assessment also indicates that we need to change the way we educate engineers about uncertainty, and it needs to start no later than their undergraduate education. Most formal education is focused on getting the “right” answer. Undergraduate engineering students traditionally have been taught to solve problems deterministically and have been motivated, through grades, to get the single right answer. I teach senior design courses in which there is commonly no single correct answer, and students are often noticeably uncomfortable when I cannot give them one.

It is the same when students are asked to quantify the uncertainty in their knowledge about a topic. They are uncomfortable because quantifying uncertainty is an alien concept to them. We are all subject to many biases, and quantifying one’s own uncertainty does not come naturally, which is why humans are generally poor at it. Assessing uncertainty, i.e., assessing how much you know about something, is a skill separate from petroleum engineering or other knowledge areas.

Accordingly, the ability to assess uncertainty in a particular area does not necessarily come with increased knowledge in that area. Evidence in the literature suggests that despite their advanced knowledge, experts can be just as poor at assessing uncertainty as nonexperts. Although assessing uncertainty is a different skill, it can be taught and learned.

Teaching Uncertainty Assessment

We consider uncertainty assessment to be a vitally important skill for students in the Petroleum Engineering Department at Texas A&M University, so much so that, in addition to the 11 Accreditation Board for Engineering and Technology student outcomes (a through k) for which we are required to document achievement by our bachelor of science graduates for accreditation, we added a 12th student outcome specific to uncertainty assessment.

We have two required undergraduate courses that have significant content related to uncertainty. The first is Petroleum Project Evaluation, which includes uncertainty quantification in project evaluation and investment decision making under uncertainty. The second is Integrated Reservoir Modeling, which includes geostatistical modeling and quantifying uncertainty in dynamic reservoir simulation.

We do not believe uncertainty assessment should be confined to courses focused on uncertainty quantification and decision analysis. Biases and uncertainty pervade all aspects of petroleum engineering, so they should be addressed throughout the curriculum. Probabilistic assessments can often be incorporated easily into existing assignments, projects, or exams. Assessments can be requested for either continuous distributions (e.g., provide P10, P50, and P90 values on a distribution for x) or discrete distributions (e.g., provide probabilities for the choices in a multiple-choice question rather than selecting one answer). We include uncertainty assessment in several other courses, including the capstone design course, but we still have work to do in the rest of the curriculum.

To be truly effective, however, uncertainty has to be more than just taught; it must be experienced by students. You can teach students about biases and how to do probabilistic assessments, e.g., geostatistics and Monte Carlo simulation. However, you do not address the fundamental problems of unknown unknowns, overconfidence, and optimism until you force students to assess their own uncertainty and demonstrate their overconfidence to them. Because uncertainty assessment is a separate skill, independent of knowledge area, it can be evaluated with general knowledge questions or forecasts in addition to questions related to petroleum engineering.

Furthermore, just as overconfidence and optimism can be measured and corrected only with lookbacks in industry, students can learn to recognize and correct their biases only by obtaining feedback on the calibration of their probabilistic estimates and forecasts. I regularly request a variety of engineering-related and general knowledge probabilistic assessments, including weekly football game score predictions, and provide feedback on students’ calibration throughout the semester. The vast majority of students exhibit the typically high levels of overconfidence early in the semester, and many learn to widen their ranges with the feedback they receive as the semester progresses.

It is my observation that student interest and improvement in uncertainty assessment increase significantly when there is something riding on it. The primary motivator for students is grades, which can affect job offers and future income. It is possible to grade probabilistic multiple-choice questions with scoring rules that measure both knowledge of the subject area and the student’s ability to assess his or her uncertainty in that subject area, as sketched below. Overconfidence exhibited by assigning too-high probabilities to incorrect answers can result in significant grade penalties. When grades are affected, most (but not all) students learn to be less overconfident over time.
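One widely used proper scoring rule that rewards honest probabilities is the logarithmic rule; the sketch below shows how it penalizes a confidently wrong answer on a probabilistic multiple-choice question. This is a generic illustration, not necessarily the specific grading scheme used in my courses, and the probabilities shown are invented.

```python
import math

def log_score(assigned_probs, correct_choice, floor=0.001):
    """Logarithmic scoring rule for a probabilistic multiple-choice answer.
    assigned_probs maps each choice to the probability the student assigns it
    (probabilities should sum to 1). Honest reporting maximizes the expected score;
    assigning a near-zero probability to the correct answer is penalized heavily."""
    p = max(assigned_probs.get(correct_choice, 0.0), floor)  # floor avoids log(0)
    return math.log(p)

# A student who hedges appropriately vs. one who is confidently wrong.
hedged        = {"A": 0.55, "B": 0.25, "C": 0.15, "D": 0.05}
overconfident = {"A": 0.05, "B": 0.90, "C": 0.03, "D": 0.02}

print("Hedged, correct answer A:       ", round(log_score(hedged, "A"), 2))         # about -0.6
print("Overconfident, correct answer A:", round(log_score(overconfident, "A"), 2))  # about -3.0
```

Raw scores can then be mapped onto whatever grade scale is in use; the key property is that a student’s expected score is maximized by reporting his or her true probabilities.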

Provide Continuing Education

Improvements in undergraduate and graduate education in uncertainty quantification are necessary, but insufficient, to correct the industry’s persistent problems with overconfidence and optimism. Education in uncertainty quantification needs to continue beyond formal education for several reasons.

Firstly, many, if not most, currently practicing professionals do not know how to reliably quantify uncertainty. Secondly, reliable quantification of uncertainty is not easy, as evidenced by several decades of industry underperformance; if it were easy, more would have mastered it. Based on my observations of students, treatment in higher education will be a good start, but it will not yield mastery for life. Thirdly, the fields we develop, the technology we use, and the world in which we operate change with time, and thus the uncertainties we face change as well. Learning to quantify uncertainty reliably is a lifelong process.

Continuing education on uncertainty quantification can be accomplished through participation in conferences, workshops, and short courses. Coverage of topics such as debiasing, probabilistic modeling, and decision analysis can be quite helpful. However, the key to reliable uncertainty assessment is lookbacks and calibration, so these should be a primary emphasis in continuing education. The full uncertainty will never be assessed, regardless of the sophistication of the probabilistic methodology, if unknown unknowns are not included in the assessments. Simpler probabilistic methods combined with reliable assessments of uncertainty derived from lookbacks and calibration will be superior to sophisticated probabilistic methods combined with the overconfident and optimistic assessments that are apparently the norm today.

While coverage of these concepts in formal continuing education efforts will be valuable, the most important education may be self-education that comes through simply keeping track of one’s predictions and comparing them with actual results over time. This is not a new concept; it was suggested by Capen (1976).

Call for Change in Incentives

Disappointment and industry underperformance have persisted for decades because lookbacks and calibration, and the learning and subsequent forecast improvements that derive from these calibrations, are rarely practiced. There are a few pockets of practice, but they are apparently the exception. Of the thousands of SPE publications that mention uncertainty or risk, very few mention lookbacks and calibration. Why is this? There is a host of reasons.

Firstly, many companies still use deterministic methods. Although these methods do not strictly preclude the use of lookbacks and calibration, their potential value is reduced significantly if forecasts are not probabilistic. Secondly, until recently, there has been little quantitative evidence in the literature to show that the cost of chronic overconfidence and optimism is high, which has likely contributed to the lack of appreciation for the importance of lookbacks and calibration. Thirdly, many of the forecasts made in the oil and gas industry are long term; it can take years and sometimes decades to obtain the actual values needed to compare with the forecasts in order to check calibration. Finally, there is a lack of tools for tracking and calibrating forecasts over time. These are significant challenges, but they are surmountable.

The biggest reason, however, has to do with incentives. In most companies, employees are not incentivized to generate estimates and forecasts that are well calibrated probabilistically. They are incentivized to do the opposite. In order to win bids, compete for budget, get projects approved, and avoid disappointing superiors with news that a project will take longer and cost more—i.e., in order to do the things that get attention and reward—people are naturally encouraged and incentivized to be overconfident and optimistic.

To compound the problem, there is virtually no accountability for project failures and consistent underperformance, as pointed out by Nandurdikar (2014). Often, the analysts and decision makers have moved on by the time the problems are realized. Even when they are still around, there is often little consequence. Because lookbacks and calibration are seldom performed, the problems are attributed to “unforeseen circumstances” rather than overconfident and optimistic forecasts. Incentives in the wrong direction plus no accountability for unreliable forecasts make a recipe for chronic overconfidence, optimism, disappointment, and underperformance. No wonder these problems have persisted for decades.

If you want to change people’s behavior, you have to change their incentives. If you want employees to generate well-calibrated probabilistic forecasts in order to maximize profitability, then you have to incentivize them to do so. This means you must have a systematic process for tracking forecasts and subsequent actual values and for generating calibration reports. This should be done at all levels: individual, group, division, and corporate. Then there needs to be appropriate accountability. You cannot judge the reliability of a single probabilistic forecast because of the uncertainty in outcomes, which can include both success and failure; you can judge the reliability only of a group of probabilistic forecasts. Thus, accountability has to be applied less frequently than on a per-project basis.

If accountability is provided on a single-project basis, analysts may become overly conservative to avoid failure, which can result in missed opportunities. Tracking should include as many probabilistic forecasts as possible for statistical significance. These should obviously include the more significant forecasts, e.g., reserves, time to first production, initial rate, development costs, and oil prices. They can also include business forecasts, e.g., quarterly earnings, and less significant forecasts, e.g., the time estimated to produce a report for the boss, both to generate sufficient numbers for statistical significance and to instill a corporate culture of reliably quantifying uncertainty in virtually everything.
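Such a tracking system can start very simply. The following is a minimal sketch of one possible record structure and group-level calibration report, assuming each forecast is logged with its P10–P90 range and, eventually, its actual value; the field names, groups, and numbers are invented for illustration and are not a prescribed standard.

```python
from dataclasses import dataclass
from typing import Optional
from collections import defaultdict

@dataclass
class TrackedForecast:
    """One probabilistic forecast and, once known, its actual outcome."""
    group: str                      # e.g., individual, team, division
    quantity: str                   # e.g., "reserves", "development cost"
    p10: float                      # low-value percentile (CDF convention used above)
    p90: float                      # high-value percentile
    actual: Optional[float] = None  # filled in at lookback time

def calibration_report(forecasts):
    """Per-group fraction of resolved forecasts whose actual fell within the assessed
    P10-P90 range; a well-calibrated group should be near 80%."""
    hits, totals = defaultdict(int), defaultdict(int)
    for f in forecasts:
        if f.actual is None:
            continue                # not yet resolved; leave for a later lookback
        totals[f.group] += 1
        if f.p10 <= f.actual <= f.p90:
            hits[f.group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# Invented records for illustration only.
records = [
    TrackedForecast("Team Alpha", "reserves, MMbbl",    30, 70, actual=25),
    TrackedForecast("Team Alpha", "cost, million USD", 400, 600, actual=700),
    TrackedForecast("Team Alpha", "first oil, months",  18, 30, actual=26),
    TrackedForecast("Team Beta",  "reserves, MMbbl",    10, 40, actual=22),
]
print(calibration_report(records))  # {'Team Alpha': 0.333..., 'Team Beta': 1.0}
```

A real system would also record dates, assessed P50s, and the methodology used, and it would accumulate enough resolved forecasts per group to make the percentages statistically meaningful.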

Just as students are more motivated to reliably quantify uncertainty when their grades are affected, professionals will become more motivated to produce reliable probabilistic forecasts if something is riding on the quality of their probabilistic calibration. If you want to really change incentives and behavior, include probabilistic calibration in annual performance reviews at all levels, and make it a factor in deciding compensation, bonuses, promotions, and the like. Nothing short of this is likely to have much of an impact.

In addition to providing accountability, a system for tracking and calibrating forecasts can be used to educate both analysts and decision makers and improve probabilistic forecast quality and business performance over time. Some engineers complain that decision makers prefer to make decisions based on gut or instinct because they are either overconfident or do not understand probabilistic analyses. However, it may also be possible that decision makers do not trust analysts’ probabilistic forecasts because they suspect, from experience, that the forecasts are unreliable.

Calibration feedback over time can help analysts improve their forecast quality by either internally adjusting their methodology or by externally modifying forecasts made using the same methodology. If it can be shown to decision makers that analysts’ forecasts are probabilistically reliable, decision makers can learn to trust and use analysts’ forecasts in decision making. If decision makers have evidence that analysts’ forecasts are not well calibrated, they can use calibration results to externally adjust the forecasts themselves to improve forecast reliability.

Decision making and profitability will be optimal in the long run only when probabilistic forecasts are well calibrated: P10s are true P10s, P90s are true P90s, and so forth. Changing corporate culture to produce well-calibrated probabilistic forecasts will require educating the current workforce and the next generation of engineers on the importance of lookbacks and calibration, as well as changing business processes and incentive structures.

If we do not change corporate culture and incentives regarding uncertainty assessment, overconfidence and optimism and the consequent chronic underperformance will persist for several more decades.

References

Capen, E.C. 1976. The Difficulty of Assessing Uncertainty. J Pet Technol 28 (8): 843–850.

Rose, P.R. 2004. Delivering on Our E&P Promises. Leading Edge 23 (2): 165–168.

Brashear, J.P., Becker, A.B., and Faulder, D.D. 2001. Where Have All the Profits Gone? J Pet Technol 53 (6): 20–23, 70–73.

Nandurdikar, N. 2014. Wanted: A New Type of Business Leader to Fix E&P Asset Developments. J Pet Technol 66 (10): 15–19.

McVay, D.A. and Dossary, M. 2014. The Value of Assessing Uncertainty. SPE Econ & Mgmt 6 (2): 100–110.

Duane A. McVay is the Rob L. Adams ’40 Professor in the Department of Petroleum Engineering at Texas A&M University. His primary research focus is on uncertainty quantification, particularly in production forecasting and reserves estimation in oil and gas reservoirs. He joined Texas A&M in 1999, after spending 16 years with S.A. Holditch & Associates, a petroleum engineering consulting firm. McVay is a Distinguished Member of SPE and will serve as an SPE Distinguished Lecturer in 2015–16. He received BS, MS, and PhD degrees in petroleum engineering from Texas A&M University.