Technical Report on Data Analytics in Reservoir Engineering Open for Comments
The period for Society of Petroleum Engineers members to comment on a draft technical report titled “Data Analytics in Reservoir Engineering” is now open and will close on 4 January 2020. This technical report focuses on the effect of data analytics on reservoir engineering applications, more specifically, the ability to characterize reservoir parameters, analyze and model reservoir behavior, and forecast performance to transform the decision-making process. The report outlines the methodology and guidelines for building robust models and provides examples of successful applications.
“Reservoir engineering is difficult. The most successful practitioner is usually the engineer who, through extensive efforts to understand the reservoir, manages to acquire a few more facts and thus needs fewer assumptions.”
—P.L. Essley Jr. (1965)
Data analytics is a process of collecting, cleansing, transforming, and modeling data to discover useful information that can be used to make recommendations for future decisions.
Data analytics is fundamentally transforming several industries, such as retail marketing, telecom, insurance, and banking. In this digital age, it is becoming more important for companies to leverage technology to change the way they operate, aided by data analytics. In recent years, there also has been a growing application of digital technologies in oil and gas exploration and production. Oil and gas operations are becoming increasingly complicated with modern facility infrastructures, complex reservoirs, increased regulatory requirements, changing workforce demographics, the fast pace of unconventional field development, and a competitive landscape.
In this report, we focus on the impact of data analytics on reservoir engineering applications, more specifically, the ability to characterize reservoir parameters, analyze and model reservoir behavior, and forecast performance to transform the decision-making process.
Why is Data Analytics Relevant Now for the Oil and Gas Industry?
A confluence of factors, including the explosion of sensor data, advances in cloud and hardware technology, and innovations in data science algorithms, together with the recent downturn in the oil and gas industry and the success of data analytics in other industries, has brought us to a crossroads with respect to applying data analytics in reservoir engineering work processes.
Over the past few years, several successful case studies have demonstrated the benefits of applying data analytics to transform traditional reservoir modeling into data-driven decision support. The key questions that remain concern identifying the work processes that lend themselves to data-driven insights, redesigning them effectively in the new paradigm, and adopting an appropriate business model to complement them. Most oil and gas companies have already embarked on this journey and are at varying levels of maturity on this trajectory.
How Does Data Analytics Add Value in Reservoir Engineering Applications?
The rapid progress of big data and analytics offers companies opportunities to automate high-cost, complex, and error-prone tasks. Many oil and gas operators are accelerating efforts progressively to capture these opportunities in order to reduce costs and increase efficiency and safety. Companies that adequately employ automation can improve their bottom line significantly by converting data into information and enabling timely decision-making.
While data analytics applications in reservoir engineering can add value to various types of reservoirs, the rise of unconventionals in particular has been characterized by a data deluge resulting from the scale and pace of field development. Physics-based methods such as numerical simulation and analytical modeling remain in use, but they present major challenges for unconventional assets, in particular:
- Lack of reliable conceptual models to describe the underlying physics properly
- Difficult characterization of the inputs required
- Long run times for complex physics-based models, which conflict with the short decision cycles of most unconventional developments
The computational requirements of physics-based models often lead to a trade-off between accuracy and model footprint. Today, it is still impractical to develop and maintain a basin-wide simulation model that is accurate at the well level. It is essential, however, to understand the key factors driving the economic performance of unconventional field development. This gap is often addressed in practice by data-driven models designed to support field development decisions such as optimal well spacing, targeting, and completions design.
Operators and software companies have extended the utility of data-driven models to support transactional decisions regarding entering or exiting unconventional plays. Emerging plays and appraisal stages are characterized by significant uncertainty regarding economic viability; there, data-driven models, in conjunction with systematic field pilots planned through experimental design, are used to derive early insights and reduce risk. Subsequently, data-driven models are used for field-development decisions and optimized drilling and completions practices.
Data-collection programs to assess rock and fluid properties (e.g., fluid, log, and core acquisition) also have benefited from the application of data analytics. In lieu of collecting extensive fluid or core samples and conducting laboratory experiments, empirical correlations and data-driven methods are used to extract key features and estimate fluid or rock properties.
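The report does not prescribe specific correlations, but a classic example of the empirical approach described above is Standing's (1947) bubble-point correlation, which estimates a fluid property from routinely measured quantities instead of a laboratory PVT study. A minimal sketch in field units (solution gas-oil ratio in scf/STB, temperature in °F, pressure in psia):

```python
def standing_pb(rs, gas_gravity, temp_f, api):
    """Standing (1947) bubble-point pressure correlation, field units.

    rs          solution gas-oil ratio, scf/STB
    gas_gravity gas specific gravity (air = 1)
    temp_f      reservoir temperature, deg F
    api         stock-tank oil gravity, deg API
    returns     bubble-point pressure, psia
    """
    a = 0.00091 * temp_f - 0.0125 * api
    return 18.2 * ((rs / gas_gravity) ** 0.83 * 10.0 ** a - 1.4)

# Example: a 35 deg API oil with 500 scf/STB solution gas at 200 deg F
pb = standing_pb(500.0, 0.75, 200.0, 35.0)
```

Modern data-driven methods generalize this idea, learning such mappings directly from a company's own sample database rather than from a fixed published fit.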
One of the more important tasks of a reservoir engineer is to make production forecasts. When the governing equations describing the underlying subsurface behavior are reasonably well understood, such as in conventional reservoirs, data analytics is used to accelerate production forecasting through proxy models (response-surface models), reduced-physics models (e.g., capacitance-resistance models), or reduced-order models (e.g., proper orthogonal decomposition, trajectory piecewise linearization). More recently, data-driven and physics-constrained predictive uncertainty analysis methods have been developed to accelerate reservoir management decisions through a direct forecasting approach, without ever building an actual model. In cases where the underlying phenomenon is not well understood or is very complex, such as in unconventional applications, data-driven methods (i.e., regression methods in machine learning) are used to map the inputs directly to desired response variables (such as cumulative production or estimated ultimate recovery at the end of a time interval). Often, physics-inspired features are used to improve production forecast accuracy, as in the new variants of decline curve analysis.
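As a concrete illustration of the physics-inspired forecasting mentioned above (not an example from the report itself), the Arps decline-curve family underlies many of these variants. A minimal sketch of the standard rate and cumulative expressions, with the hyperbolic exponent b interpolating between exponential (b = 0) and harmonic (b = 1) decline:

```python
import math

def arps_rate(t, qi, di, b):
    """Arps decline-curve rate at time t.
    qi: initial rate, di: initial nominal decline (1/time), b: exponent."""
    if b == 0:
        return qi * math.exp(-di * t)  # exponential decline
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def arps_cumulative(t, qi, di, b):
    """Cumulative production from 0 to t (analytic integral of arps_rate)."""
    if b == 0:
        return qi / di * (1.0 - math.exp(-di * t))
    if b == 1:  # harmonic
        return qi / di * math.log(1.0 + di * t)
    return qi / (di * (1.0 - b)) * (1.0 - (1.0 + b * di * t) ** (1.0 - 1.0 / b))

# Example: forecast a well with qi = 1,000 STB/D, Di = 0.8/yr, b = 0.9
q_after_1yr = arps_rate(1.0, 1000.0, 0.8, 0.9)
np_after_5yr = arps_cumulative(5.0, 1000.0, 0.8, 0.9)
```

In practice, (qi, di, b) are fit to each well's production history by nonlinear regression; the "new variants" the report alludes to typically replace or constrain these parameters with features learned from completions and geologic data.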
A key responsibility of the reservoir engineer is reservoir management, which includes responsibly developing and producing fields. Different stages of field development require different objectives and specific analyses. During early field life, the focus is on assimilating key reservoir data (e.g., pressure, saturation, fluid distribution, rock and fluid properties, and hydraulic connectivity) and understanding subsurface behavior (e.g., reservoir connectivity, drive mechanisms, tendency for sand production). In this stage, data analytics can be used to augment data collection programs and accelerate continuous learning by monitoring reservoir response through automation. The vast amounts of data available from sensors enable modern artificial intelligence methods to derive insights. Realizing production optimization goals in real time requires fit-for-purpose models that can collect available data, analyze it, and act at the relevant time frequency. This is becoming practical with data-driven methods for problems such as detecting well productivity decline and identifying contributing factors, recommending well workover candidates for production enhancement opportunities, and estimating injection rates and well controls to optimize recovery.
Data-driven analytics are also used in enhanced oil recovery (EOR) and improved oil recovery (IOR) applications as a screening tool to accelerate lengthy evaluations. In waterflood and steamflood applications, several successful case studies have demonstrated the efficacy of using data-driven and hybrid models to maximize production on a daily basis, select shut-off candidates, and subsequently optimize overall field development.
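The capacitance-resistance model (CRM) mentioned earlier is a common reduced-physics tool for exactly these waterflood applications: it treats each producer as a tank whose rate responds to injection through learned connectivity gains and a time constant. The report does not include code; the following is a minimal single-tank sketch of the usual discrete CRM recursion, with illustrative parameter values:

```python
import math

def crm_rates(q0, injection, gains, tau, dt):
    """Single-tank capacitance-resistance model (CRM) sketch.

    q[k] = q[k-1] * e^(-dt/tau) + (1 - e^(-dt/tau)) * sum_i f_i * I_i[k]

    q0        initial producer rate
    injection list of time steps, each a list of injector rates I_i[k]
    gains     connectivity fractions f_i (fit from history in practice)
    tau       time constant reflecting pore volume and compressibility
    dt        time-step length
    """
    decay = math.exp(-dt / tau)
    rates = []
    q = q0
    for step in injection:
        q = q * decay + (1.0 - decay) * sum(f * i for f, i in zip(gains, step))
        rates.append(q)
    return rates

# Example: two injectors held at 1,000 and 500 B/D; gains of 0.4 and 0.3
history = crm_rates(0.0, [[1000.0, 500.0]] * 200, [0.4, 0.3], tau=5.0, dt=1.0)
```

In field use, the gains and time constant are regressed from injection and production history, after which the model supports rate allocation and shut-off screening at a fraction of simulation cost.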
How Can We Overcome the Challenges To Successfully Apply Data Analytics?
Successful data analytic applications in reservoir engineering begin with a fundamental understanding of the business needs and of the key physics at play. Adequate, good-quality data is essential for building a robust model and solution. However, data quality remains a major challenge for many companies seeking sustainable solutions, owing to a variety of data management issues that need to be addressed. Outliers and missing, duplicate, obsolete, and unstructured data are just a few of the challenges that must be overcome. Additionally, multiple sources of disparate data need to be integrated into a single consistent version with contextual information. Several companies have embarked on this effort by constructing data lakes and establishing data standards and good data management practices to enable this transformation. Fundamentally, organizations are realizing the value potential of data and need to pay attention to what data is collected and how it is collected, processed, and stored for both current and future applications.
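To make the QC steps above concrete (this is an illustrative sketch, not a procedure from the report), a typical first pass over a noisy sensor stream drops missing readings and screens outliers with a robust statistic such as the median absolute deviation, which, unlike a plain z-score, is not distorted by the outliers themselves:

```python
from statistics import median

def clean_readings(values, k=3.5):
    """Drop missing readings, then screen outliers by modified z-score.

    A reading is kept when |0.6745 * (v - median) / MAD| <= k, the
    Iglewicz-Hoaglin modified z-score with the customary cutoff 3.5.
    If MAD is zero (nearly constant data), screening is skipped.
    """
    present = [v for v in values if v is not None]
    if len(present) < 3:
        return present
    med = median(present)
    mad = median(abs(v - med) for v in present)
    if mad == 0:
        return present
    return [v for v in present if abs(0.6745 * (v - med) / mad) <= k]

# Example: a stuck-gauge spike of 500 and a missing reading are removed
cleaned = clean_readings([10, 11, 9, 10, 12, 500, None, 10])
```

Deduplication, unit reconciliation, and joining disparate sources into one contextualized record are the harder, more company-specific steps that follow this kind of screening.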
In routine operations, data coverage often is limited to narrow operating ranges. For instance, wells are drilled in reservoirs with favorable rock properties, unconventional wells are completed with relatively less varying completions design, and so on. This limits the ability to use these data to develop robust data models that can be adequately extrapolated. It requires careful attention to plan how data is collected through experimental design methods, keeping in mind the type of analysis that needs to be performed.
In general, there is a perceived lack of know-how for approaching data analytic projects in reservoir engineering applications. This is exacerbated by a shortage of staff who combine data science and engineering skills. A good grounding in both data science and the underlying physical processes helps in assessing the validity of an analytics approach and interpreting results appropriately. Further, these new methods modify existing work processes and will require appropriate change management for user adoption and realizing the benefits. Management support is essential for driving these deep-rooted changes leading to business transformation.

Several of these challenges are not unique to the oil and gas industry. Other industries, such as retail, banking, insurance, and healthcare, have successfully leveraged big data to drive efficiency and profitable growth. The oil and gas industry is looking to learn from others and extend this success to oilfield operations. Beyond reservoir engineering, several other technical areas (e.g., drilling, completions, and geoscience) that are characterized by data processing and interpretation could benefit significantly from data analytic methods.