
The Processing Problem: Can Computers Keep Up With Industry Demand?

High-performance computing is an important piece of the puzzle for operators looking to integrate field models with surface facilities. Next-generation processors and accelerators should help build the systems needed to meet industry's growing demands, but the tools may be reaching their limits.


Big data is one of the biggest buzzwords in oil and gas operations today, and operators cannot get enough of it. Managing a successful venture requires the ability to extract valuable information from massive data sets and to process that information quickly and efficiently.

High-performance computing (HPC) is critical in making these things happen. It can help operators optimize their field-development plans and integrate those field models with surface facilities models, minimizing risk and uncertainty and creating an environment for more assertive decision making on a project. As the definition of HPC expands to include big-data analytics, machine learning, and artificial intelligence, developers are looking for more scalable and powerful approaches to meet the growing demands of energy industry customers.

“The fundamental algorithms that people are so excited about today in machine learning have been known for over 30 years, but we’ve lacked the computational power to do anything with them, and we’ve lacked the ability to ingest and process the data needed to feed into them and train them to do anything of substance. That’s really changed in the last 5 or 6 years,” said Forrest Norrod, senior vice president and general manager at AMD.

Speaking at the 2019 Rice Oil and Gas HPC Conference in Houston, Norrod addressed the challenges involved in unlocking the full potential of next-generation central processing unit (CPU), graphics processing unit (GPU), and accelerator technologies. Doing so could help propel computing to exascale systems capable of making a quintillion calculations per second, but it will require a comprehensive, innovative approach to system architectures.
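For a rough sense of what "a quintillion calculations per second" means, the back-of-envelope Python sketch below compares an exascale system against an ordinary workstation; the workstation figure is an illustrative assumption, not a number from the conference.

```python
# Back-of-envelope sense of exascale: a quintillion (1e18) calculations per second.
# The workstation figure below is an illustrative assumption, not a benchmark.

EXASCALE_FLOPS = 1e18        # operations per second for an exascale system
WORKSTATION_FLOPS = 1e12     # assumed ~1 teraflop/s for an ordinary workstation

one_second_of_exascale_work = EXASCALE_FLOPS * 1.0   # operations
seconds_on_workstation = one_second_of_exascale_work / WORKSTATION_FLOPS

print(f"One second of exascale work would keep the workstation busy for "
      f"about {seconds_on_workstation / 86400:.0f} days.")
```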

Norrod said HPC has led a number of innovations in large system architecture, including the advent of heterogeneous systems that combine the CPU with a GPU or other accelerator. While heterogeneous systems still make up only a fraction of the world’s supercomputers, they have allowed engineers to access large computational resources, something Norrod said has “unlocked machine intelligence.”
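From the application side, a heterogeneous node boils down to control logic running on the CPU and bulk numerical kernels handed off to the accelerator when one is present. The sketch below illustrates that pattern; CuPy appears only as an illustrative GPU library, since the article does not name a specific software stack.

```python
# Minimal sketch of heterogeneous execution: control logic on the CPU, a bulk
# numerical kernel offloaded to a GPU when one is available. CuPy is used only
# as an illustrative GPU library; the article does not name a software stack.

import numpy as np

try:
    import cupy as cp        # GPU-backed arrays if a CUDA device is present
    xp = cp
except ImportError:
    xp = np                  # fall back to CPU-only NumPy

def bulk_kernel(a, b):
    """A compute-heavy kernel (here, a matrix multiply) worth offloading."""
    return xp.matmul(a, b)

# CPU-side setup: build the inputs, then hand the heavy work to the accelerator.
a = xp.asarray(np.random.rand(2048, 2048))
b = xp.asarray(np.random.rand(2048, 2048))
c = bulk_kernel(a, b)

# Copy the result back to host memory if it was computed on the GPU.
result = cp.asnumpy(c) if xp is not np else c
print(result.shape)
```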

“It really is the explosive growth in machine learning that has gotten us excited, gotten NVIDIA excited, gotten Intel excited about really doubling down on generating technology that’s relevant to large-scale, heterogeneous, clustered systems,” he said. “Across the industry, you see a level of interest now in this area that’s higher than it’s ever been, and, I think, that will translate to great systems for your use regardless of what type of application you’re going after.”

A problem with developing faster systems is that the tools needed to drive increasing performance are reaching the limits of what they can realistically do. Most CPUs currently in production are based on monolithic dies: single pieces of silicon that are placed on a package and installed in a processor system. Modern chip design has focused on minimizing die area, making dies more cost-effective and easier to implement, but constraints in reliability, power, and the resistivity of metal mean that, as processors shrink, developers are unable to get more frequency out of them. The gains in density that developers can pack into a single chip are also diminishing.

In order to improve performance, Norrod said, developers have to look at different methods.

“Simply driving frequency, simply driving transistor count in a chip, that isn’t going to cut it anymore,” Norrod said. “The only way to do it, the only way to continually drive the performance that you need, is by using every trick in the book. There is no one lever. There are no two levers. It requires innovation on integration, innovation on system design, and innovation on software to continue to yield high-performance systems.”
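A toy model makes the point concrete: with clock frequency effectively stalled, peak throughput has to come from packing in more parallel resources. The sketch below uses a simple cores-times-frequency-times-instructions-per-cycle estimate; every number in it is an illustrative assumption, not a figure from the article.

```python
# Toy throughput model: peak throughput ~ cores x clock (GHz) x instructions per cycle.
# Every number below is an illustrative assumption, not a figure from the article.

def peak_gops(cores: int, ghz: float, ipc: float) -> float:
    """Peak throughput in billions of operations per second."""
    return cores * ghz * ipc

# Clock frequency has largely stalled, so the later configurations hold it near
# 3-4 GHz and scale core count (for example, through multichip packaging) instead.
configs = [
    ("single fast core",           1, 4.0, 4.0),
    ("8-core monolithic die",      8, 3.5, 4.0),
    ("64-core multichip package", 64, 3.0, 4.0),
]

for name, cores, ghz, ipc in configs:
    print(f"{name:26s} ~{peak_gops(cores, ghz, ipc):7.0f} G ops/s")
```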

Norrod said he expects to see more companies move toward multichip architectures for their processor systems, which aggregate several smaller individual chips to overcome the area constraints of a single chip. In such a system, the shared interconnects between chips can provide enormous bandwidth for large data transfers while consuming little power. Norrod also referenced 3D chip stacking, which holds some promise in advancing silicon performance. AMD released the world’s first commercial GPU to integrate 3D-stacked high-bandwidth memory (HBM), the Fury X, which carried 4 GB of it. NVIDIA’s data center GPUs, such as the Tesla V100 series, also use stacked HBM capable of delivering peak bandwidth of 900 GB/s.
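To put the cited 900 GB/s figure in context, the sketch below estimates how long it would take to stream a block of data across that interface versus a conventional memory channel; the conventional-channel bandwidth and the 100-GB dataset size are assumptions for illustration.

```python
# Rough transfer-time comparison built around the 900 GB/s HBM figure cited above.
# The conventional-memory bandwidth and dataset size are assumptions for illustration.

HBM_BANDWIDTH = 900e9   # bytes per second, peak stacked-HBM bandwidth (per the article)
DDR_BANDWIDTH = 50e9    # bytes per second, assumed conventional memory channel
DATASET_BYTES = 100e9   # assumed 100 GB block of simulation or seismic data

for name, bandwidth in [("stacked HBM", HBM_BANDWIDTH),
                        ("conventional memory (assumed)", DDR_BANDWIDTH)]:
    seconds = DATASET_BYTES / bandwidth
    print(f"{name:30s} {seconds:6.2f} s to stream {DATASET_BYTES / 1e9:.0f} GB")
```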

Intel moved this architecture a step further by launching Foveros, an active interposer technology, in December 2018. The interposer contains the vertical electrical connections (through-silicon vias and traces) needed to bring power and data to the chips on top of the stack, but it also carries a platform controller hub. This effectively makes the interposer an active part of the design, allowing for greater flexibility; select functions can be removed from the chips to save space, and different transistor types can be used for different chips.

The Rice Oil and Gas HPC Conference was hosted by the Ken Kennedy Institute for Information Technology at Rice University.