
To the Editor:

This is in response to “The Unfinished Revolution: What is Missing From the E&P Industry’s Move to ‘Big Data’” by Robert K. Perrons and Jesse Jensen (May 2014 JPT). A more basic question is, “How can E&P move to a ‘Big Data’ environment?” To embrace Big Data, the following must be part of the equation:

  1. The commitment of upper management to allocate monies to Big Data for initial development and ongoing management
  2. The assurance from those who generate and use data that a manageable plan is in place to maintain the workflows and processes that preserve data accuracy
  3. A general awareness of the analytics that will make use of Big Data
  4. An ongoing endeavor to move beyond self-interest and address the larger corporate good

Big Data is not only the large volume of “siloed” information (single-system data independently stored and accessed), but also the process of continually growing the body of timely, accurate, and integrated information available to end users who create analytics that improve the way business is done. Those analytics, and the discoveries they yield, will promote the value of Big Data. As more analytics are performed, demands on the data will increase, more workflows and standards will be put in place so that more information can be made available, and more process improvements will follow as knowledge expands beyond its normal horizons.

If the end result of Big Data is improved analytics and better business, how can it be achieved?

There must be ownership of the information and processes. There must be buy-in by management, information technology, and end users. The process demands cooperation at all levels.

In most organizations, information abounds in spreadsheets, applications, databases, and handwritten notes, drawn from both internal and external sources. The dynamics need to change so that the data become centralized, promoting shared, common-sourced information. For successful implementation, basic tasks and actions need to be in place:

  1. Standardize methods to capture key identifiers and create naming conventions and commonality between systems and applications.
  2. Define sources of key information commonly utilized between groups.
    • Data are always corrected at the source.
    • Data are published from the source and populate (or are accessible to) all other systems.
  3. Define “ownership” to ensure accuracy. Identify individuals responsible for input or who monitor specific data imports.
    • Empower staff by making them aware of the utilization of the information within the corporation.
    • Establish staff accountability and include such tasks in goal setting.
    • Provide ongoing coaching to staff regarding value to the company.
  4. Create a discipline-specific “Data Steward” role.
    • Responsible for the accuracy of the system/application’s key information, including in other systems that use the same information.
    • The ideal candidate is familiar with the discipline system’s database structure.
  5. Apply secondary quality control to validate input in the source system.
    • Build business rules within applications to limit “bad” entry or highlight data that are not within a valid range.
    • Build queries to target issues that are reviewed regularly.
  6. Develop standardized workflows and methodologies to capture and report information where applicable.
  7. Limit manual re-entry of common information between systems wherever possible.
    • Create quality control methods for intersystem accuracy supported by the Data Steward.
    • Publish reports at regular intervals illustrating potential issues.
  8. Centralize information in a common, accessible location (Data Mart/Data Hub), allowing files to link to information rather than requiring repetitive downloads.
  9. Within Data Mart/Data Hub:
    • Identify application-to-application relationships to establish foundational reliability of views and data extraction.
    • Create and maintain a data dictionary such that information sources and business rules can be easily accessed and searched.
    • Include data from all disciplines (geophysical, petrophysical, geologic, drilling, completions, production, operations, reservoir, and financial) for evaluation purposes.
    • Include access to public, commonly accessed information, e.g., tables, hyperlinks, and downloads.
    • Create discipline-specific and cross-discipline views of integrated data.
    • Allow easy access but establish permissions for restricted information.
  10. Ensure continued follow-up and communication between information technology and end users to identify bugs and enhancements.
  11. Provide Data Mart/Data Hub training to end users:
    • Initial
      • What data are available?
      • How to access the information
      • How to link between tables and views
    • Ongoing
      • How does information flow within the company?
      • What is your role and what are your responsibilities to maintain the Big Data?
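
The range-based business rules and QC queries described in steps 3 and 5 of this list can be sketched as follows; the field names, valid ranges, and `well_id` key are hypothetical:

```python
# Minimal sketch of source-system business rules (step 5): flag entries
# outside a valid range so the Data Steward can review and correct them
# at the source. Field names and ranges are hypothetical.

VALID_RANGES = {
    "tubing_pressure_psi": (0, 15000),
    "oil_rate_bopd": (0, 50000),
    "water_cut_pct": (0, 100),
}

def flag_out_of_range(record):
    """Return a list of (field, value, valid_range) issues for one record."""
    issues = []
    for field, (lo, hi) in VALID_RANGES.items():
        value = record.get(field)
        if value is None or not (lo <= value <= hi):
            issues.append((field, value, (lo, hi)))
    return issues

def qc_report(records):
    """A regularly reviewed QC 'query': problem records keyed by the
    standardized well identifier (step 1)."""
    return {r["well_id"]: flag_out_of_range(r)
            for r in records if flag_out_of_range(r)}
```

A report like this, published at regular intervals (step 7), gives the Data Steward a targeted list of entries to correct in the source system.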

Before broad-based analytics using Big Data can occur, effective data/text mining and data quality control must be put in place; the results can then feed centralized Data Marts/Data Hubs, where relationships between discipline-specific data sets are established and interdiscipline tables/views can be created. Analytics should be easily updatable and reproducible. Successful utilization of Big Data should include multidimensional spatial and statistical analytics for meaningful interpretation.
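
As an illustration of steps 1, 8, and 9 above, a Data Hub view that joins discipline tables on a standardized well identifier can be sketched with an in-memory SQLite database; all table, column, and view names here are hypothetical:

```python
import sqlite3

# Sketch of a cross-discipline view in a centralized Data Hub: discipline
# tables share a standardized well identifier (step 1), and a view joins
# them so analytics read from one source rather than re-entered copies.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE geology (
        well_id TEXT PRIMARY KEY,
        reservoir_quality TEXT,
        net_pay_ft REAL
    );
    CREATE TABLE production (
        well_id TEXT PRIMARY KEY,
        cum_oil_bbl REAL
    );
    CREATE VIEW well_summary AS
        SELECT g.well_id, g.reservoir_quality, g.net_pay_ft, p.cum_oil_bbl
        FROM geology g
        JOIN production p ON p.well_id = g.well_id;
""")
con.execute("INSERT INTO geology VALUES ('W-001', 'good', 45.0)")
con.execute("INSERT INTO production VALUES ('W-001', 125000.0)")

# The view is queried like a table; analytics built on it stay
# reproducible because they always read from the single source.
rows = con.execute("SELECT * FROM well_summary").fetchall()
```

Because the view is defined once in the hub, a documented entry for it in the data dictionary (step 9) lets any discipline reuse it without duplicating the join logic.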

Sometimes a simple question, such as “Why does a specific subset of wells have better production than others?” may have a complex answer that relies on a combination of information. Items to consider might be time frames/“vintaging,” geophysics/geology/petrophysics (faulting/fracturing, trapping, overburden, reservoir/rock quality, pressures, and sampling), drilling and/or completion activities (circulated or pumped fluids/muds/proppants or their volumes), equipment used, perforation spacing/phasing, pressure regimes (induced or natural), production (equipment, reservoir dynamics, depletion, pressure, drainage/communication, secondary/tertiary recovery implementation, and reporting), infrastructure (availability, limitations, and restrictions), or financial implications (influenced by drilling/completions/production processes as well as coding and billing issues).

Making Big Data successful requires a companywide commitment. Sustaining Big Data requires ongoing maintenance and cooperation. If a company is up to the task, the rewards can be incredible, but do not expect good decisions from poor data, or from good data that are poorly organized.

Barbara Keusch, WPX Energy

Authors’ Response

We thank Barbara Keusch for her thoughtful response to our Guest Editorial, “The Unfinished Revolution: What is Missing From the E&P Industry’s Move to ‘Big Data,’” which appeared in the May issue of JPT. She points out in the letter accompanying her reply that her contribution is not a rebuttal per se and, in light of the fact that her suggestions seem eminently sensible to both of us, we are inclined to agree. In her note, Keusch helpfully extends the topic of Big Data in the E&P sector by offering specific, pointed recommendations about detailed aspects that our article did not cover. Our motives for writing the Guest Editorial were to draw attention to the fact that Big Data is unfolding differently in the E&P industry than it is in other sectors, and to act as a catalyst for future discussions about how the industry might potentially reconsider some aspects of how it is moving toward Big Data. We are very pleased that Keusch and others are answering this call.

Robert K. Perrons, Queensland University of Technology
Jesse Jensen, Intel


01 October 2014

Volume: 66 | Issue: 10