Analyzing Well Events To Increase Oil Recovery

The human mind, presented with a series of time data, will generally look for common features, label them as events, and seek correlations against other timelines. This approach is fallible because questionable data might be selected in attempting to find supporting evidence for a perceived pattern. A better alternative would be to establish an automated, comprehensive, dispassionate, independent, and statistically valid process for pattern matching. Developing such a process is a major challenge, but, if all the objectives are met, many evaluations can be carried out using an “event-based” analysis.

Instrumented systems and the analysis of their time-series data can provide a range of events and channels through which those events can interact. Some influences will be automatically transmitted by closed-loop process control schemes. Others will be prescribed by written operating practices, and still others will be the result of “custom and practice” on the part of operators, supervisors, and planners.

Despite the presence of these systematic, nonreservoir-derived reasons for patterns of change, the historical daily production data is the most direct measurement of the hydrocarbon fluids being extracted from the reservoir. The business case to pursue increases in efficiency of extraction and total recovery, while simultaneously being able to accurately assess the effectiveness of measures taken to support, sustain, and maximize that production, is clear.

New ‘Top-Down’ Analysis

Understanding the flow from each well, as determined by an agreed allocation process based on the assessment of flow data from the well’s tests, allows history matching. Typically, one or more reservoir simulation models are tuned to the observed flow and pressure data. These processes represent a “bottom-up” approach to understanding reservoir performance, through which a development plan can be monitored and updated as necessary.

However, BP has been working on a separate but entirely complementary “top-down” approach in which operational data is analyzed without preconceptions about the reservoir structure. This approach allows the rapid formulation of a workable “model,” thus avoiding the need for an involved history-matching procedure. The top-down approach uses the capacitance-resistance model (CRM) combined with an analysis of the operational time-series data based on “events” in that data.
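
For reference, a commonly published form of the CRM (this article does not give BP’s exact formulation, so the equation below is a representative sketch) expresses each producer’s rate as a decaying memory of its own history plus gain-weighted contributions from the injectors:

\[
q_j(t_k) = q_j(t_{k-1})\,e^{-\Delta t/\tau_j} + \left(1 - e^{-\Delta t/\tau_j}\right)\sum_i f_{ij}\,I_i(t_k)
\]

where \(q_j\) is the rate of producer \(j\), \(I_i\) is the rate of injector \(i\), \(f_{ij}\) is the connectivity gain between the pair, and \(\tau_j\) is the producer’s time constant; the gains and time constants are the parameters estimated from the operational data.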

Event-Based Approach

First, one must choose the production and injection wells to include for the asset being studied. These wells must be sufficiently instrumented to provide a time-series data stream containing events that are distinguishable from the unavoidable sources of measurement noise. The wells should be material to the oil-recovery plan and will include some key injection wells, in addition to producers whose output might be enhanced by the injectors.

Next, one must choose a period for analysis. Recent data is likely to be the most instructive for assessing probable future behavior, but periods of better data quality may exist. Quality may be measured in terms of working instrumentation, constant or stable numbers of injection and production wells in service, and the absence of changes in operating regime, such as artificial lift or the breakthrough of water or gas into production wells.
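
As a minimal illustration, a data-quality screen of this kind could be automated along the following lines; the column names, window length, and scoring rule here are assumptions made for the sketch, not part of the published workflow.

```python
# Sketch: scoring candidate analysis windows of daily well data by quality.
# Assumed columns: 'value' (the chosen attribute, NaN when the instrument
# was down) and 'wells_on' (number of wells in service that day).
import pandas as pd

def score_window(df: pd.DataFrame) -> float:
    """Higher scores mean working instrumentation and a stable well count."""
    uptime = df["value"].notna().mean()             # fraction of days with data
    stability = 1.0 / (1.0 + df["wells_on"].std())  # penalize well-count churn
    return uptime * stability

def best_window(history: pd.DataFrame, days: int = 365) -> pd.DataFrame:
    """Slide a fixed-length window over the history; keep the best-scoring one."""
    best, best_score = None, -1.0
    for start in range(0, len(history) - days + 1, 30):  # step ~monthly
        window = history.iloc[start:start + days]
        score = score_window(window)
        if score > best_score:
            best, best_score = window, score
    return best
```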

Then, one must choose the well attribute that will be used as the variable to indicate the occurrence of an event. Typically, the choice will be an allocated (or possibly measured) flow; an aspect of production, such as water cut or gas/oil ratio; or a direct intensive measurement, such as a pressure or a temperature. Injection wells will typically have some corresponding attribute, such as injection rate or pressure.

The event-based analysis begins with the marking of events for each production and injection well. Parameters are used to control the relative occurrence of events. Some iteration is typically required until a set of events has been achieved that makes sense and passes visual assessment. Visualizations made using the association software assist with event evaluation.
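
One simple way to mark such events, sketched below under the assumption of daily data, is to flag days where the smoothed day-to-day change is large relative to the series’ typical noise; the smoothing length and threshold play the role of the tunable parameters described above and are illustrative defaults, not BP’s actual settings.

```python
# Sketch: marking candidate events in one well's daily time series.
import pandas as pd

def mark_events(series: pd.Series, smooth_days: int = 7,
                n_sigmas: float = 3.0) -> pd.Series:
    """Return a boolean series that is True on days where the smoothed
    step change exceeds n_sigmas times the typical step size."""
    smoothed = series.rolling(smooth_days, center=True).median()
    step = smoothed.diff()
    return step.abs() > n_sigmas * step.std()
```

In practice, smooth_days and n_sigmas would be iterated until the marked events pass the visual assessment described above.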

Once a complete set of events has been obtained, the process associates these events with each other. This is done by considering an appropriate range of time delays that reflect the physical separation of the wells and the intervening reservoir properties. The result is a ranking of the possible connections between injectors and producers, each with an estimated time delay and an optimal match score.
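
A minimal sketch of such an association step follows; the scoring rule (the fraction of injector events matched by a producer event at a candidate lag) and the lag and tolerance parameters are assumptions for illustration.

```python
# Sketch: scoring one injector-producer connection over candidate time delays.
import numpy as np

def score_connection(injector_days: np.ndarray, producer_days: np.ndarray,
                     max_lag: int = 60, tolerance: int = 2):
    """Event dates are integer day offsets. For each candidate lag, count the
    fraction of injector events matched by a producer event within
    +/- tolerance days; return the best (lag, score) pair."""
    producer_set = set(int(d) for d in producer_days)
    best_lag, best_score = 0, 0.0
    for lag in range(max_lag + 1):
        hits = sum(
            any(int(d) + lag + k in producer_set
                for k in range(-tolerance, tolerance + 1))
            for d in injector_days
        )
        score = hits / max(len(injector_days), 1)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag, best_score
```

Ranking all injector-producer pairs by their best scores yields the summary described above.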

The engineer can use the validated scores and the resulting insights to modify other predictive reservoir models to better address the underlying business drivers around reservoir management, such as waterflood pattern optimization. The ability to quantify production support by injectors is a key enabler for optimizing a waterflood.

Experience With Technique

An initial BP top-down project considered which types of modeling technique might be able to make sense of reservoir-production-surveillance data without recourse to bottom-up, physics-based modeling. Four vendors, each using a different method, evaluated the initial 3 years of operational data from a complex, multifaulted reservoir. Complicating the analysis were changes in the number and type of wells as the reservoir underwent a transition from the initial pure-depletion phase to an early waterflood; gas lift also began to be used on some of the producing wells. The event-based analysis was deemed the best technology.

Event detection and analysis (EDA) was simple to implement and sufficiently flexible, and its focus could be adjusted easily to different wells, well pairings, types of events, and periods (Fig. 1). It was also possible to analyze different regimes and epochs or simply to constrain the period of interest to subsets of wells of interest.

Fig. 1—An event detection and analysis display showing selected matched injection- and production-well events.

The next deployment of EDA was for an offshore reservoir supported by a single gas injection well. However, this well was operated in a “campaign” mode, whereby gas was sold to market during periods of high spot price and injected into a gas cap for the rest of the year. This annual pattern of cyclic behavior had continued for several years. The reservoir management team wanted to review the data to decide whether to continue or alter the operational paradigm.

The event analysis was appropriate, as the operational history contained very large, unambiguous, input disturbance events, and a change in operating strategy was recommended. This change had a net positive impact on the asset’s revenue stream.

The first waterflood deployment of the technique carried out by BP was on a mature, complex reservoir with limited incremental water available and limited ability to distribute it flexibly to the in-field injection pads to optimize production support. By loading the reservoir operating history into the application and updating it periodically, potential variations in the waterflood plan could be appraised. The result was that any trial variation in the waterflood pattern could be either cut short or extended on the basis of ongoing periodic analyses.

An event-detection-and-association workflow can complement other types of analysis; for example, it serves as a key validation step for CRM technology in the delivery of top-down waterflood (TDWF) assessments. Working from the same operational data, but using it in distinctly different ways, the CRM model derives its well-performance parameters using nonlinear estimation.

Then EDA, focusing on the events revealed in the transient perturbations of the wells, independently checks and validates that analysis. In TDWF deployments, the asset surveillance engineers have used the embedded, integrated EDA tool kit to validate their CRM results and support their selection of the most appropriate and statistically valid injector and producer connections.

Method Enhancements

Although there are challenges to the process arising from well selection, the period chosen, and event marking, most of these reflect the relatively simple nature of the analysis. However, by extending the scope of the basic analysis to an attribution process for production events, typical support statistics have increased from 30% to more than 80%. We also have evidence from early trials of this approach that the overall support statistics can be made to approach the theoretical limit of 100%.

Genuinely complex events will involve additional criteria, such as the time spent in a state, what the previous state had been (as in Markov models), and which state transitions are actually valid. Such durations and switching behavior, asserted from physical laws or operational rules, will produce an exponential explosion of possible system states and so provide the backdrop for the development of a genuine “complex event processing” application.
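
As an illustration of how such criteria might be encoded (the states, dwell-time check, and allowed-transition table below are purely hypothetical), consider:

```python
# Sketch: validating a well's state path against an allowed-transition table
# and minimum dwell times, in the spirit of the Markov-style criteria above.
VALID_TRANSITIONS = {
    ("injecting", "shut_in"),
    ("shut_in", "injecting"),
    ("injecting", "selling_gas"),
    ("selling_gas", "injecting"),
}

def check_state_path(path):
    """path: list of (state, days_in_state) tuples in time order.
    Yield a message for each transition not permitted by the table and for
    each implausibly short dwell time, which may indicate a mis-marked event."""
    for (state, days), (next_state, _) in zip(path, path[1:]):
        if (state, next_state) not in VALID_TRANSITIONS:
            yield f"invalid transition: {state} -> {next_state}"
        if days < 1:
            yield f"suspicious dwell time in {state}: {days} days"
```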

A specific class of applications that is highly suited to a top-down, data-driven, event-based analysis is the set of problems associated with product quality control, in which the relationships between inputs and outputs are repeatable but complex. If the sources of disturbances are recorded, then the application of an event-based analysis could support parametric representations. Stochastic systems could also be analyzed; random changes can be labeled as events but will require a slightly different workflow to compensate for the variable time delays.

If causality is genuinely present, then the simple approach of using precedence may be sufficient to adapt the workflow to the stochastic problem. This is commonly done in econometric modeling.
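
A toy version of such a precedence check, under the assumption that a genuine cause shows stronger lagged correlation forward in time than backward, might look as follows; this is a sketch in the spirit of Granger-style testing, not the workflow’s actual method.

```python
# Sketch: does series x tend to lead series y? Compare the best forward-lag
# correlation of their changes against the best reverse-lag correlation.
import numpy as np

def precedence_score(x: np.ndarray, y: np.ndarray, max_lag: int = 30) -> float:
    """Positive result suggests x leads y; negative suggests the reverse."""
    dx, dy = np.diff(x), np.diff(y)

    def best_corr(a, b):
        return max(abs(np.corrcoef(a[:-lag], b[lag:])[0, 1])
                   for lag in range(1, max_lag + 1))

    return best_corr(dx, dy) - best_corr(dy, dx)
```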

Summary and Next Steps

This event-based technique has delivered value to several BP assets by improving the top-down representation of injection and its support of producing wells. The technique has been automated in certain key respects. It is fast to implement, objective, and capable of extension. It is also suitable for generalization to a range of operating data sets.

Any system of inputs and outputs that demonstrates repeatable behavior will be amenable to event-based analysis. Systems such as TDWF, which can carry out data analysis, will benefit from the use of historical data to reveal additional process insights.

The next step for the development of this technology is to align it with the emergent field of complex event processing. This is an active field of computing with “big data” that seems likely to dominate practical oil and gas processing applications in the very near future.