Enhanced recovery

Oil Recovery Increased by Use of Event Detection and Association

An operator has launched a successful, limited-scope deployment of a nonparametric capability known as event detection and association (EDA).

Fig. 1—Frequent operation of a production-well choke (red trace).

Variations from the original analytical approach may be necessary before the EDA techniques can be used to solve problems in the field. EDA comprises part of a suite of tools collectively referred to as top-down waterflood (TDWF) diagnostic and optimization. By studying the time-based correlation of injection (input) and production (output) events, the basic TDWF capacitance/resistance model is either confirmed or improved.

Introduction

It is almost an automatic response for the human mind, presented with time-series data such as controller strip charts, to seek features in those data, mentally label them as “events,” and seek correlations in time against other timelines. However, the human mind, though highly capable, is all too often fallible. It will favor connections with short time lags and events of the same shape, size, and duration, paying little attention to how variable scaling influences shape. It will also tend to associate variables for which it has a preconceived notion that relationships exist, while failing to assess the statistical validity of the labeling process. The mind’s eye can easily be persuaded by superposition and purposeful or even spurious alignment of individual tags’ time traces. Such visual tricks help to find supporting evidence for a pattern. The real challenge is the development of an automated, comprehensive, dispassionate, independent, statistically valid process. If all these objectives can be met, then many types of oilfield evaluations can be carried out using an event-based analysis.

The Event-Based Approach

A Brief Description. For an asset that is being studied, one must first choose the producer and injector wells to include in the study. These will be wells that are sufficiently well instrumented to provide a time-series data stream expected to contain events that are discernible from the unavoidable sources of measurement noise. Next, one will choose a time period to be the focus of the analysis. Recent data are usually the most instructive about what is likely to happen next, but better-quality periods of data may exist. Then, one must choose the well attribute that will serve as the variable indicating the occurrence of an event. Typically, the choice will be an allocated (or possibly measured) flow, an aspect of production such as water cut or gas/oil ratio, or a direct intensive measurement such as a pressure or even a temperature. Similarly, injector wells will have some (typically corresponding) attribute, such as injection rate or some pressure.

The event-based analysis begins with the marking of events for each producer and injector well. The authors have developed an automated event-marking algorithm.

Once a complete set of injection and production events has been obtained, the process associates these events with each other. This is performed by considering an appropriate range of time delays that reflect the physical separation of the wells and the intervening reservoir properties. The result is a ranking of the connections considered possible between injectors and producers, together with their estimated time delays, and is summarized as an optimal score for each connection.

Well Selection. At least one injection well and one production well must be selected for a waterflood analysis. Generally, there will be multiple injector wells, because one will be making comparative inferences concerning the relative use of water injection across a number of different producers. Theoretically, there is no limit to the number of wells selected, but practical issues around the complexity of interpretation should be considered. The method can readily honor any a priori limitations on which injectors might be influencing specific producers. The selected wells should account for a material share of the asset’s fluid production and injection.

EDA is a data-based technology, so there need to be sufficient data for the wells, which should be adequately instrumented. Events need to occur for EDA to be successful, so the wells should experience some variation in injection rate (ideally starts and shut-ins). There should also be changes in production rate or in some alternative surrogate (and monitored) variable. A large-scale reservoir analysis can often be broken down usefully by hydraulic unit.

Time-Period Selection. The data must represent a relatively homogeneous period of operations of sufficient duration. A representative period should be chosen that is likely to reveal the interaction between the selected wells. The authors have found that a minimum duration of 3 months is a requirement when using daily data. If data are aggregated into periods longer than 24 hours, then correspondingly longer windows will be required. It is advisable to avoid time windows that contain turnarounds and other prolonged well-shut-in periods. An exception might arise if the well in question is acting as an observation well and events are being inferred at the wellhead or downhole on the basis of pressure trends.

Consideration should be given to constancy of “epoch” (i.e., a fixed number of active injectors and producers over the period) and also of “regime.” A constant regime would mean no switch in the responding production well between wet and dry operations, no sudden change in gas/oil ratio, and no switching between natural and artificial lift.

Basic Event Marking. The simplest strategy for marking an event is to select a variable, such as rate, and then set an absolute or relative change threshold for that variable. The natural variation in pressure and flow values during reservoir life can make the choice of an event threshold challenging. Even where the selection is seemingly straightforward, there is a highly nonlinear effect on the number of events marked as a function of threshold setting.

Two desirable attributes of an event are its magnitude and duration. The former can be derived by reference to the amount by which a change exceeds a chosen threshold. The latter can be a function of the time during which the previous value is exceeded. Duration needs to be subject to a reasonable upper limit for cases where the change is effectively semipermanent.
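As an illustration of this basic strategy, the following minimal Python sketch marks events by a relative-change threshold and records magnitude and duration. The threshold, duration cap, and function name are illustrative assumptions, not the authors’ implementation.

```python
def mark_events(series, rel_threshold=0.10, max_duration=30):
    """Mark events where the relative change from the pre-event value
    exceeds a threshold. Returns (start_index, magnitude, duration)
    tuples. Threshold and duration cap are illustrative choices only."""
    events = []
    i = 1
    while i < len(series):
        base = series[i - 1]
        if base != 0 and abs(series[i] - base) / abs(base) > rel_threshold:
            # Magnitude: amount by which the change exceeds the threshold.
            magnitude = abs(series[i] - base) / abs(base) - rel_threshold
            # Duration: time for which the pre-event value stays exceeded,
            # capped because some changes are effectively semipermanent.
            j = i
            while (j < len(series) and j - i < max_duration
                   and abs(series[j] - base) / abs(base) > rel_threshold):
                j += 1
            events.append((i, magnitude, j - i))
            i = j
        else:
            i += 1
    return events
```

Sweeping `rel_threshold` across a range of values in such a routine makes the nonlinear effect of the threshold setting on event counts easy to see.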

Advanced Event Marking. This process tackles two issues. First, to be genuinely useful, a method should have the capability of being automated. Second, the techniques should deal both with the time delays between episodes at the injection wells and the producer responses, and with the confounding events in the producers, which arise from the operator’s direct interventions in changing the choke valve and the artificial lift. Fig. 1 above illustrates how frequently a production well’s choke position can be altered, as shown in the lower (red) of the two time traces.

There are also sources of disturbance associated with switching wells between production headers running at different pressures and with periodically routing wells to test. These impacts are designated as “intrawell events,” while the communications between wells through the subsurface are described as “interwell events.” Interwell events typically have much smaller magnitude and significantly delayed response signals at the affected production well. An intrawell episode gives an immediate, strong, well-characterized, and repeatable response.

Reservoir production is fundamentally an integrating process. Integrating processes can be visualized by plotting flows as cumulative variables; slope changes then become more readily apparent to the eye. Subjectivity can be eliminated from the event-marking task by implementing a statistical test. The appropriate test for a significant change in gradient between two lines is one that takes into account the scatter in the data to which each line has been fitted separately. If one defaults to a 95% confidence test, then the only remaining parameters of choice are the number of measurements used when estimating the gradient up to the current time and the corresponding number for the data beyond the current time.
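A sketch of such a gradient-change test follows: separate lines are fitted to windows before and after the current time, and the slopes are compared using each fit’s scatter. The window lengths and the pooled-standard-error form of the comparison are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def gradient_changed(t, y, i, n_before=7, n_after=7, alpha=0.05):
    """Test for a significant gradient change in a cumulative series at
    index i. Fits separate lines before and after the current time and
    compares slopes, accounting for the scatter around each fit. The
    window lengths are the only remaining parameters of choice."""
    before = stats.linregress(t[i - n_before:i], y[i - n_before:i])
    after = stats.linregress(t[i:i + n_after], y[i:i + n_after])
    tstat = (after.slope - before.slope) / np.hypot(before.stderr, after.stderr)
    dof = n_before + n_after - 4  # two parameters estimated per fitted line
    return abs(tstat) > stats.t.ppf(1 - alpha / 2, dof)  # 95% default
```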

Two options exist for managing intrawell events and their impact on the analysis. After attribution to a causal event among the well’s independent variables, periods of data can either be removed or be retained in an adjusted form. It is usually clear that a change in position of a choke valve has taken place or that there has been an alteration in the artificial lift employed. For daily data, a third indicator of an intrawell event is the number of operating hours in a day. When this is fewer than 24 hours, an event must have occurred. The user selects a change threshold for each of the intrawell variables. Inspection of responses will also suggest a time period during which the event variable is responding to the input.
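For daily data, that screening might be sketched as follows. The column names and change thresholds here are hypothetical, chosen only to illustrate the three indicators just described.

```python
import pandas as pd

def flag_intrawell(df: pd.DataFrame) -> pd.Series:
    """Flag days containing an intrawell event: a choke-position move,
    an artificial-lift change, or fewer than 24 operating hours.
    Column names and change thresholds are hypothetical."""
    choke_moved = df["choke_pct"].diff().abs() > 2.0      # user-chosen threshold
    lift_changed = df["lift_setting"].diff().abs() > 5.0  # user-chosen threshold
    short_day = df["op_hours"] < 24                       # partial operating day
    return choke_moved | lift_changed | short_day
```

Flagged days, plus the response window suggested by inspection, would then be removed or adjusted before the interwell analysis.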

The alternative approach of compensation using regression is more complicated to implement but still conceptually simple. A regression of the impact of each of the manipulated variables on the dependent variable is derived from the same data. A new time series of data is generated by the retrospective application of the regression to the actual observations.
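A minimal sketch of this compensation step is shown below, assuming an ordinary-least-squares form; the paper does not specify the regression used.

```python
import numpy as np

def compensate(y, X):
    """Subtract the regressed impact of the manipulated variables X
    (e.g., choke position, lift setting) from the dependent series y.
    An ordinary-least-squares form is assumed here for illustration."""
    A = np.column_stack([np.ones(len(y)), X])      # intercept plus regressors
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # regression on the same data
    return y - A[:, 1:] @ coef[1:]                 # retrospective application
```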

Event Association. Once events have been marked for the injection and production wells, the next task is to associate them. The association process determines the time lag that maximizes the support between injector and producer events. Knowledge of the reservoir physics from a bottom-up simulation model, or from tracer or interference testing, can be used to set limits on the maximum time delay.

When carrying out the validation task, the user, or an automated routine, should recognize that a realistic association is one that exceeds the accidental match that would arise from random overlap between the two sequences of events. There may be fluctuations in event density between different epochs and across regime changes, so the total available operational timeline must be subdivided intelligently beforehand.

A threshold multiplier of two is usually effective at separating valid from random or accidental associations. This means that the support for the match should be at least twice as large as the value that would be expected from random alignment, a concept that is called “lift” in EDA terminology.
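Putting the lag scan and the lift screen together, a simplified sketch might look like the following. The day-coincidence support score and the uniform random-overlap baseline are simplifying assumptions standing in for the paper’s scoring.

```python
import numpy as np

def best_connection(inj_days, prod_days, n_days, max_lag=60, lift_min=2.0):
    """Scan candidate lags, score each by the count of coinciding
    injector/producer event days, and keep the lag whose lift (support
    over the random-overlap expectation) clears the threshold of two."""
    inj = np.zeros(n_days, dtype=bool)
    prod = np.zeros(n_days, dtype=bool)
    inj[list(inj_days)] = True
    prod[list(prod_days)] = True
    best_lag, best_lift = None, 0.0
    for lag in range(min(max_lag, n_days - 1) + 1):
        overlap = n_days - lag                      # usable aligned span
        support = np.sum(inj[:overlap] & prod[lag:])
        expected = inj[:overlap].mean() * prod[lag:].mean() * overlap
        lift = support / expected if expected > 0 else 0.0
        if lift > best_lift:
            best_lag, best_lift = lag, lift
    return (best_lag, best_lift) if best_lift >= lift_min else (None, best_lift)
```

Physical bounds on the time delay, from a bottom-up model or tracer testing, would enter here through the choice of `max_lag`.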

Events that are linked can be shown on the timeline. For a discussion of the operator’s past practical experience with the timeline, please see the complete paper.

Strengths, Weaknesses, and Challenges for the Event-Based Analysis

The existence of operational data is an unavoidable requirement for an event-based analysis. The approach is subject to the normal issues associated with data-driven work flows, including those associated with the data historian. Typically, an investigation requires a long period of operational data to be retrieved from the historian. The quality of these data is subject to the normal issues around data compression. Key among these issues is the threshold used to trigger recording a new raw value for each tag. If the change threshold is set to, or defaults to, a value that is too large, then many subtle changes in the original data will be lost.
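A toy example of deadband compression illustrates the risk. This generic deadband logic is an assumption for illustration, not any particular historian’s algorithm.

```python
def deadband(values, threshold):
    """Generic deadband compression: record a new raw value only when
    it departs from the last recorded value by more than the threshold."""
    recorded = [values[0]]
    for v in values[1:]:
        if abs(v - recorded[-1]) > threshold:
            recorded.append(v)
    return recorded

# A threshold that is too wide erases a subtle ramp entirely:
print(deadband([100.0, 100.4, 100.9, 101.3, 101.8], threshold=2.0))  # [100.0]
print(deadband([100.0, 100.4, 100.9, 101.3, 101.8], threshold=0.3))  # all five values
```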

On a positive note, the approach is nonparametric and robust to some types of gauge error. For instance, bias (offset) and slight drift in the gauge are not important in marking events on the basis of a single measurement tag. Obviously, if either type of error is excessive, it may result in the more-complex aspects of the event-marking logic breaking down.

The use of events to investigate a process is amenable to a design-of-experiments (DOE) approach to the task of process identification. A DOE work flow would also tend to populate the timeline with an appropriate number of events.

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 167836, “From Market-Basket Analysis to Wellhead Monitoring: Use of Events To Increase Oil Recovery,” by R. Bailey, Z. Lu, S. Shirzadi, and E. Ziegel, BP, prepared for the 2014 SPE Intelligent Energy Conference and Exhibition, Utrecht, The Netherlands, 1–3 April. The paper has not been peer reviewed.