Visualization and the Cloud: Enabling Collaborative Environments

Source: Paper SPE 191525.
Immersive visualization environments allow users in different physical locations to observe full-scale models of various types of equipment, such as a gas-processing module from an offshore installation.

New systems are emerging all the time to help industry improve its operations, and the digital world offers a plethora of opportunities for growth. Digital connectivity brings data from different areas of the value chain into a single platform, allowing companies to visualize their data better and optimize their work flows. Increasingly, solutions that have made their mark in different industrial sectors are making their way into energy.

One such solution, virtual reality (VR), may have a growing influence on our recreational lives, as any online gaming enthusiast may attest. While its adoption in oil and gas operations is less apparent, efforts are being made to show how it can improve efficiency in the industry, particularly through enabling more detailed remote collaboration.

Immersive visualization takes the concept of VR further, synchronizing it with augmented reality (AR) and mixed reality in a single environment. It is a computer-generated experience that incorporates sensor feedback, layers virtual information over live camera feeds in a headset, and allows users to view and interact with 3D images in a space where physical and digital objects can coexist and interact in real time.

The Engine of Visualization
Other industries already use this technology. Surgeons use augmented visualizations of magnetic-resonance-imaging scans superimposed over the bodies of their patients when planning surgeries. Online multiplayer games are designed primarily as visual integration frameworks, optimized for ease of use with efficient high-quality image rendering and multiuser networking. Paul Helm, managing director of the VR and AR developer LA12, said during the 2018 SPE Annual Technical Conference and Exhibition (ATCE) that gaming inspired his company’s entry into the space.

“My sons are into games, and they were telling me that you can build these virtual environments—not exactly VR or such at that time, but certainly these environments where people who have never met before can work together toward a common set of objectives. We’re talking about, essentially, this concept of having an environment where people can come together, establish a shared situational lens, a shared common mental model, shared information, discuss things that are going on, and make decisions as quickly as they possibly can,” Helm said.

Helm, who was presenting a paper (SPE 191525) cowritten with Julian Pickering of Geologix Systems Integration, said the headsets in an immersive visualization system provide stereo vision of an environment rendered either within the device itself or on a computer connected to the headset. The positional tracking of the headset and hand controllers in the real world enables user navigation and interaction with the contents of the virtual environment to varying degrees of freedom. A user’s ability to interact with the virtual environment increases as the degrees of freedom enabled by the headset increase (Fig. 1).

Source: Paper SPE 191525.
Fig. 1—The degrees of freedom enabled by a headset
dictate the level to which a user may interact with objects
in an immersive visualization environment.

“When we talk about degrees of freedom, what we’re actually talking about is the ability to move or rotate a headset and, therefore, interact with an environment,” Helm said. “So, three degrees of freedom is where you can rotate your head, but if you try to move toward an object, it moves away from you at the same time. Six degrees of freedom gives you the ability to translate as well as rotate, and that means you can move from a seated environment to a room-scale environment. The key to that is you’ve got to have the open space. You can build a simulation in that environment using those tools.”

Any immersive visualization setup must be able to operate efficiently in any number of space models. Helm said relying on access to room-scale spaces for all virtual meetings would be prohibitive. Although giving users the space to walk physically around a virtual object would add to the effect, Helm and Pickering (2018) suggested that immersive collaboration spaces work to a basic seated-use model. They said this model is most effective because it has the potential for desk- or cubicle-based deployments.
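The distinction Helm draws between three and six degrees of freedom can be sketched numerically. The minimal 2D example below is not tied to any headset SDK; the function name and coordinates are invented for illustration. With three degrees of freedom, only rotation is tracked, so a user's step toward an object does not change its apparent distance; with six, translation is tracked as well.

```python
import math

def view_position(obj_x, obj_y, yaw, user_x, user_y, dof):
    """Where an object appears in the user's view frame.

    With 3 degrees of freedom only head rotation is tracked, so
    walking forward does not change the object's apparent distance.
    With 6 degrees of freedom translation is tracked as well.
    """
    if dof == 6:                      # translation is tracked
        obj_x, obj_y = obj_x - user_x, obj_y - user_y
    c, s = math.cos(-yaw), math.sin(-yaw)
    return (c * obj_x - s * obj_y,    # inverse head rotation
            s * obj_x + c * obj_y)

# Object 2 m ahead; the user takes a 1-m step toward it.
print(view_position(2.0, 0.0, 0.0, 1.0, 0.0, dof=3))  # (2.0, 0.0): step ignored
print(view_position(2.0, 0.0, 0.0, 1.0, 0.0, dof=6))  # (1.0, 0.0): object closer
```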

Collaboration Space Models
In his presentation, Helm outlined two types of collaboration space models: persistent and configurable. Persistent spaces represent a physical collaboration space permanently dedicated to an asset or activity. One example of a persistent space would be an asset-monitoring environment served continuously with information feeds, allowing end users to enter the space either individually or with other people. In this setup, people can review current or historical information; discuss and annotate their thoughts, findings, and conclusions; and leave the space whenever they are ready to do so, just like in a physical environment. Access comes through authorization, and moderators can establish permissions for various levels of interaction.

“Persistent models are server-based,” Helm said. “A client logs into it. It’s always online. It’s always receiving data from one source or another, rendering it in some form, and when the individual enters that environment, they’re able to interact with the contents of that environment.”
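As a rough illustration of that server-based model, the sketch below shows how moderator-set permission levels might gate who can annotate a persistent space. The class and permission names are invented for illustration and are not taken from the paper.

```python
from enum import IntEnum

class Permission(IntEnum):
    VIEW = 1       # enter the space, review current or historical data
    ANNOTATE = 2   # record thoughts, findings, and conclusions
    MODERATE = 3   # grant permission levels to other users

class PersistentSpace:
    """An always-online, server-based space dedicated to one asset."""

    def __init__(self, asset):
        self.asset = asset
        self.permissions = {}   # user -> Permission, set by moderators
        self.annotations = []

    def authorize(self, user, level):
        self.permissions[user] = level

    def annotate(self, user, note):
        if self.permissions.get(user, 0) < Permission.ANNOTATE:
            raise PermissionError(f"{user} may only view this space")
        self.annotations.append((user, note))

space = PersistentSpace("offshore gas-processing module")
space.authorize("engineer", Permission.ANNOTATE)
space.authorize("observer", Permission.VIEW)
space.annotate("engineer", "compressor vibration trending upward")
```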

Configurable spaces are based on a template where an “owner” creates a new space on the basis of previously established requirements. Once created, the template can be configured with outside content (such as displays, 3D objects, or models) provided by the creator or others, and it can be saved and accessed as needed. Configurable spaces allow for the saving and restoring of layouts; the space can be configured to show the current information and status of an asset, or it can be reconfigured to a point in time, allowing the replay and analysis of specific events and circumstances, similar to restoring virtual machine snapshots.

“What we can do [with configurable spaces] is dynamically create an environment and invite people into it. We have templates where you can build out the contents of the space, bring people into it, and discuss what they’re doing at any given point,” Helm said.
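A minimal sketch of that save-and-restore behavior might look like the following; the class and slot names are invented for illustration and do not reflect LA12's implementation.

```python
import copy

class ConfigurableSpace:
    """A template-based space whose layouts can be saved and restored,
    much as virtual-machine snapshots are (illustrative sketch)."""

    def __init__(self, template):
        self.layout = dict(template)   # slots -> displays, 3D objects, models
        self.snapshots = {}

    def place(self, slot, content):
        self.layout[slot] = content

    def save(self, name):
        self.snapshots[name] = copy.deepcopy(self.layout)

    def restore(self, name):
        self.layout = copy.deepcopy(self.snapshots[name])

space = ConfigurableSpace({"wall_1": "drilling dashboard"})
space.save("baseline")
space.place("wall_2", "pressure trend during the event under review")
space.save("event-review")
space.restore("baseline")   # replay the space as it was at an earlier point
```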

Helm and Pickering (2018) identified a number of possible applications of immersive visualization in oil and gas, including seismic volume reviews, well planning, process and equipment monitoring, and support for risk environments.

Helm outlined two separate environments developed by LA12 using the Unreal Engine from the game developer Epic Games, with the HTC Vive and Oculus Rift headsets used interchangeably in each space. The first space was a basic layout for collaborative drilling monitoring consisting of a template with eight paneled walls, each equipped with a floor-to-ceiling display. Users interact with the display through laser pointers and controller triggers corresponding to mouse movements and clicks on a virtual keyboard. Optional teleport and joystick-sweep controls facilitate movement around the space. Users can interact with the displays in the same manner they would with a mouse on a shared desktop.

This setup has its challenges, primarily with display resolution and data entry through the virtual keyboards. Helm said that relying on pointers proved troublesome for users who were not familiar with the controller operation. In addition, displays optimized for standard high-definition resolution and higher—particularly those using small typefaces—can be difficult to read without approaching the display closely, though larger imagery and typefaces presented less of an issue. Helm said readability concerns were not too significant, however.

“The resolution is good enough to read the contents of the screens, and, as newer headsets come out, it will get easier,” he said.

The second example was a persistent-space model built around a gas-processing module from an offshore installation. The model was exported from a commercial computer-aided-design package, edited in modeling tools to remove source-tool and design-process artifacts, and imported into the engine in major units. Users in separate physical locations can connect to the model over the public Internet, with audio communication provided through voice over Internet protocol embedded within the immersive visualization session. They can move freely around the 3D model and the data-display spaces.

Source: Paper SPE 191525.
Fig. 2—The spectator view of an immersive visualization
space shows a user interacting with displays of data
entered via virtual keyboard. This view comes from
a standard PC monitor instead of a headset.

Fig. 2 shows a spectator view of the session as seen through a standard computer screen. Spectators can listen in on a session and navigate around a structure using a keyboard and mouse without directly altering the displays themselves. They can also be restricted to fixed viewpoints within the space, with those viewpoints relocated in real time upon request.

Helm said LA12 was able to scale the gas-processing-unit model to full size, allowing users to view the relevant embedded data displays associated with the equipment and processes under review. Each compressor on display contained 1.7 million vertices and 1.2 million polygons, and the engine performed real-time tracing on the system while delivering 60 frames per second to the headsets. While this is not the level of detail preferred for industry use, Helm said it was a positive sign for future development.
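Some back-of-the-envelope arithmetic puts those figures in context; the calculation below uses only the numbers reported in the presentation.

```python
# Back-of-the-envelope arithmetic on the figures reported for the demo.
fps = 60
frame_budget_ms = 1000 / fps          # time available to render each frame
polygons_per_compressor = 1_200_000
vertices_per_compressor = 1_700_000

# Sustaining 60 frames/second means every frame must be produced in
# roughly 16.7 ms, and each compressor alone implies processing on the
# order of 72 million polygons per second.
polygons_per_second = fps * polygons_per_compressor

print(f"{frame_budget_ms:.1f} ms per frame")
print(f"{polygons_per_second:,} polygons per second per compressor")
```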

“We had real-time data coming in, three participants in different geographical locations talking to each other. You go up to the top of the platform, and you’ve got all your control systems up there. We can make these as interactive as we need them to be. Generally speaking, we’re not trying to teach people how to turn a valve. What we’re trying to do is get people to understand how you can cluster and discuss and get to a conclusion based on an understanding of what’s going on,” he said.

Life in the Cloud
While cloud computing is hardly a new development, it is still somewhat novel in oil and gas. More companies are rapidly moving toward cloud technologies as part of their increased investment in digital. Much like immersive visualization, the transition to the cloud is something that could help companies adopt more efficient work flows, concentrating services within a smaller set of specialized providers.

In a presentation on cybersecurity at ATCE, Peter Black outlined the importance of the cloud to energy operations. Black, managing director of the hydrocarbon allocation and production software company EnergySys, said he believes the cloud is “fundamental” to the future success of the industry and that its adoption dovetails with the conservative nature of oil and gas companies because it allows them to test new processes in a lower-risk environment.

“We don’t like to try new stuff. There’s always a mad rush to be second. In that context, what the cloud offers us is the opportunity to reduce the risk of experimentation, to try things with no upfront costs and low risk of exposure. That’s going to be fundamental moving forward. People talk about new technologies. The cloud enables that adoption,” Black said.

Black said cloud systems can improve project efficiency and productivity, particularly by allowing regular software upgrades without user intervention, though the cost of transitioning could be prohibitive in some cases. However, he said the long-term benefits outweigh the negatives. As more companies join cloud platforms, demand for the hardware that runs the infrastructure increases. That demand raises purchasing power, and a company with greater purchasing power can negotiate lower prices by buying hardware in bulk. The resulting cost savings are passed on to customers as lower prices, further fueling demand. This is called the virtuous cycle of cloud computing.
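The cycle can be illustrated with a toy calculation; all figures below are hypothetical and serve only to show the mechanism of a fixed cost spread over a growing user base.

```python
# A toy model of the virtuous cycle: a fixed infrastructure cost spread
# over a growing user base drives the per-user price down, which in
# turn attracts more users. All numbers are hypothetical.
fixed_cost = 1_000_000        # assumed annual infrastructure cost
users = 1_000                 # assumed starting user base
prices = []
for _ in range(3):
    price_per_user = fixed_cost / users
    prices.append(price_per_user)
    users *= 2                # assumption: lower prices double demand
print(prices)                 # [1000.0, 500.0, 250.0]
```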

“If we can have a common, proper, multitenant cloud infrastructure, that means more users,” Black said. “We can afford to implement more features, costs get lower, and the prices go down. Over the longer term with cloud solutions in oil and gas, prices have actually decreased.”

Black said companies looking to build cloud infrastructures often think of them as purely physical constructs—servers, storage, and networks—but the cloud is not, in and of itself, about technology. He said it is a fundamentally different approach to the consumption of computing services, one in which services are provided on a utility basis.

This approach helps in determining a method for security. Black said companies should aim to purchase services, not software, when building cloud infrastructures. The fact that many companies manage their own network infrastructures is not an invitation to replicate existing practices in a public cloud environment. He said the cloud does not fundamentally alter the nature of cybersecurity risk but that it helps provide access to several tools (such as key management and on-demand disk encryption) that can dramatically simplify the work being done by security professionals.

For many cloud application providers, most of whom host on one of the major infrastructure providers such as Microsoft Azure or AWS, the responsibility for security is shared between the provider and the customer. Black described the advantages of such a setup using the EnergySys security platform, run on AWS, as an example. AWS is responsible for security of the cloud: It handles the security infrastructure, edge locations, and other relevant services. EnergySys is responsible for security in the cloud: It handles, among other things, the applications launched in a cloud system, as well as customer data, client-side data, server-side encryption, and network traffic protection.
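That split can be summarized as a simple lookup; the mapping below restates the responsibilities listed above, and the function and labels are illustrative rather than an AWS or EnergySys artifact.

```python
# Illustrative mapping of the shared-responsibility split described in
# the presentation: the provider secures the cloud itself, while the
# application vendor secures what it runs in the cloud.
RESPONSIBILITY = {
    "security infrastructure":    "provider",   # security OF the cloud
    "edge locations":             "provider",
    "applications":               "vendor",     # security IN the cloud
    "customer data":              "vendor",
    "client-side data":           "vendor",
    "server-side encryption":     "vendor",
    "network traffic protection": "vendor",
}

def owner(control):
    """Return which party is responsible for securing a control."""
    return RESPONSIBILITY[control]

print(owner("edge locations"))   # provider
print(owner("customer data"))    # vendor
```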

“It’s a concentration of technical excellence,” Black said of the security setup. “It is [AWS’] responsibility to be secure, and we make it our business to make sure our applications are secure. That tiered responsibility for application and infrastructure security really does concentrate effort into areas where it is the best place to be.”

Helm, P. and Pickering, J. 2018. Enabling Collaborative Decision Making Through Immersive Visualization. Presented at the SPE Annual Technical Conference and Exhibition, Dallas, 24–26 September. SPE-191525-MS.

Black, P. 2018. Cybersecurity, the Cloud, and Oil and Gas. Presented at the SPE Annual Technical Conference and Exhibition, Dallas, 24–26 September. SPE-191563-MS.

