
Hurdles to AI Adoption Transcend the Technical

Artificial intelligence tools present many opportunities for the energy industry, and, as technological concepts leave the realm of science fiction, companies have started to grasp what is possible. What roles do culture and ethics play in helping companies understand the digital revolution?


The concept of artificial intelligence has been around for several decades, but it’s only recently that discussion of its capabilities has escaped the realm of science fiction, as AI platforms have begun to permeate nearly every level of our day-to-day lives.

The oil and gas industry is no different. According to PwC’s 2019 Global CEO Survey, nearly 80% of oil and gas executives believe that AI will significantly change the way their companies do business in the next 5 years. More than 50% of those executives plan to implement, or have already implemented, some aspect of AI in their operations.

Speaking at a panel discussion during the SPE Offshore Europe Conference in Aberdeen, Oil & Gas Technology Centre CEO Colette Cohen said AI tools could drive significant improvements in operational efficiency.

“If we got it right, and we really got involved, we could have safe operations, cheaper operations, and have huge cost savings,” she said. “With all of the bounty of data that we have, this would seem like the perfect opportunity to apply AI.”

The potential financial benefit for companies that apply AI tools in their operations is huge. Martin Kelly, vice president and head of corporate analysis at Wood Mackenzie, said the industry could save up to $75 billion each year through the application of digital technologies, with those savings coming throughout the exploration and production lifecycle.

Some companies have already made waves, and Kelly noted a few. Shell used digital monitoring systems on oil fields in Nigeria to save $1 million by reducing site visits. Equinor has pledged between $120 million and $140 million toward digitalization and said it intends to increase value on the Norwegian Continental Shelf by $1.2 billion. Repsol’s digitalization budget for 2019 was $150 million, with an estimated return of $1.1 billion by 2022.

Prominent as these examples are, Kelly said they are among the few successful digital technologies or initiatives to be disclosed and discussed on an industrywide level. This reticence may be a way for companies to protect proprietary developments, but Kelly speculated that it more likely reflects the difficulty of executing a digital transformation at scale.

“In terms of the foundational elements of the digital transformation, it’s fair to say that the industry, by and large, has set the use of digital technologies as a strategic priority,” he said. “There’s an awful lot of work that’s gone into identifying use cases, identifying processes and work flows where digital technologies—including artificial intelligence—will make a difference. There’s still much more work to do on the data side of it, though, and because of that we haven’t seen the successful proof of concepts, the successful trials being deployed at scale.”

Another example Kelly cited was Woodside’s use of wireless sensors in liquefied natural gas production. Based on the data obtained from these sensors, the company saw $7.3 million in annual savings. Woodside is well known for its work with the IBM Watson platform across its operations; according to IBM, more than 80% of the company’s employees have adopted the platform for their day-to-day work, and it has drastically reduced the time employees spend trying to uncover possible solutions or hazards.

Alison Barnes, head of robotics at Woodside, said AI tools such as Watson provide enormous safety and productivity benefits, which can make work more rewarding for people. Despite this, real challenges remain in deploying these tools autonomously, especially when it comes to manipulation on the topsides. She said companies need to determine which tasks they want machines to do and which they want humans to do. By focusing on the areas where the skills of an AI tool surpass those of a human, companies can potentially create overarching solutions that lead to better end-to-end production.

“We’re already seeing what our future platform is doing,” she said. “Those operational tasks, going out into the field to check something, that’s starting to be replaced. We’re doing that task in a new way, accessing the data remotely. Sensors and robots are going to be our new tools, and they’ll give us more options to do those tasks without increasing NPT [nonproductive time].”

Kelly said that cultural buy-in across the industry will be critical to actualizing a digital transformation, echoing sentiments another panelist expressed about the trust economy and ethics in system design. Callum Sinclair, a partner at the commercial law firm Burness Paull, said “trust is the new currency” in this ecosystem; if people do not trust AI to solve their problems, they will not use it.

“Like physical systems, you need careful design and constant monitoring in order to ensure that issues are averted. AI is in its infancy; so, how much of that is actually happening? Are we designing these systems with that in mind? How are we ensuring ethics by design? That’s as important as security. What are the rules, and who decides them? What are the implications? This is part of the wider debate going on at the moment,” Sinclair said.

Sinclair’s presentation at the panel centered on the ethical, moral, and legal challenges of implementing AI tools on a widespread industrial level. He said ethical concerns should be as critical to system design as security concerns, pointing to issues such as vicarious liability, in which one party is held liable for the negligent actions of another. In an environment of greater interconnectivity, where the problems arising from a tool’s application could be traced to any number of sources, establishing protocols is critical.

“How does that come into play? You could have strict liability where you don’t need to prove an intent to harm, you don’t need to prove actual negligence. Is that going to be the model for AI systems? Are the creators of systems liable, or is it the people who train them, those who employ those systems? These are really tricky questions, and the ethical and moral principles could help direct lawmakers on this one,” Sinclair said.

A combination of robust liability frameworks and government regulation could help solve such issues, but Sinclair said that existing laws are problematic: they rely on concepts such as proof of intent and legal personality, and the extent to which those laws vary between jurisdictions creates a puzzle for lawmakers.