
The oil & gas industry has been a high-rev engine driving HPC growth for more than 40 years. It’s a sector with insatiable demand for supercomputing processing power and high-performance, high-capacity data storage, demand that has only intensified as fossil fuel reserves have become harder to find. In fact, one reason the dire predictions of the 1970s and 1980s that the world would run out of oil within a few decades proved incorrect is that they did not account for HPC’s perennial leaps forward in handling increasingly complex seismic workloads.

Long in the upper echelon of HPC vendors serving advanced supercomputing sites, HPE has traditionally competed successfully in the energy sector. In fact, one of the most powerful HPC systems installed at a commercial supercomputing site is the HPE Cray DAMMAM-7. Number 11 on the Top500 list of the world’s most powerful supercomputers, the system is installed at Saudi Aramco, the Saudi Arabian oil and natural gas company.

As the HPC industry moves toward the exascale era (systems capable of a billion billion calculations per second), the O&G industry’s needs are evolving – not only for traditional seismic exploration but also to support the industry’s move into alternative energy sources. Increasingly, the industry requires broader, more far-reaching HPC platforms that incorporate advanced artificial intelligence along with powerful and flexible edge and cloud capabilities – and, naturally, elite processing power.

“Across the energy industry, organizations are investing heavily in physical and digital infrastructure to better generate, transform, store and distribute energy,” said Bill Mannel, vice president and general manager, HPC, at HPE. “High performance computing and artificial intelligence are becoming ever more crucial for this digital transformation.”

“HPC and AI are also transforming,” Mannel said. “First, as HPC, AI, and Big Data converge, exascale-class systems enable faster insights to solve some of the critical problems in this new era of energy. Second, energy companies are increasingly augmenting their on-premises digital infrastructure with cloud and edge computing.”

In short, the HPC resources used by oil and gas companies must support strategies that balance stringent climate change goals with rising demand for fossil fuels. Population density and energy consumption are driving unsustainable levels of carbon and greenhouse gas (GHG) emissions, which in turn drive climate change. In fact, the operations of energy companies alone account for 9 percent of all human-made GHG emissions. Energy companies face growing social, legal and environmental pressure from stakeholders to decarbonize. And they are responding: oil and gas companies are transforming themselves into carbon-neutral energy companies.

source: HPE

“As they transition, energy companies will increasingly use a wider variety of HPC and AI workloads from different industry verticals,” Mannel said. “With about 40 years of experience, HPE is a proven leader across HPC verticals and offers customers expertise and solutions to advance their business using these workloads. To stay competitive, HPE customers can cost-effectively process complex data faster, lower risks, and improve decision-making by leveraging cloud and exascale computing.”

Bringing its experience in HPC across a range of software applications critical to a variety of industries, HPE is working with the open-source community and with commercial customers and partners on initiatives to help energy companies implement a diverse set of energy transition workloads.

The diversity of HPC and AI applications for energy transition workloads requires a new approach to traditional HPC.

Next-generation systems will need to handle exascale-class performance demands and massive data throughput requirements. These new systems will be more heterogeneous with multiple processors, accelerators such as GPUs, a variety of interconnects and other elements.

In addition, delivery of HPC and AI is changing. The energy industry is increasingly augmenting on-premises data centers with cloud computing to improve end-user experience, agility and economics. Use of public cloud in geosciences is forecast to grow at a 22.4 percent CAGR through 2024, according to HPC industry analyst firm Hyperion Research.
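As a rough illustration of what a 22.4 percent CAGR implies, compound growth can be projected with a few lines of Python. The starting market size below is a hypothetical placeholder for illustration only, not a figure from Hyperion Research:

```python
# Project a value under a constant compound annual growth rate (CAGR).
# The 22.4% rate is the Hyperion Research forecast cited above; the
# starting size of 1.0 (e.g., $1B) is a hypothetical placeholder.
def project(start_value, cagr, years):
    """Return start_value compounded at rate `cagr` for `years` years."""
    return start_value * (1 + cagr) ** years

start = 1.0  # hypothetical market size, in billions of dollars
for year in range(1, 4):
    print(f"Year {year}: {project(start, 0.224, year):.2f}B")
```

At that rate the market grows by more than 80 percent over three years, which helps explain why cloud delivery models figure so prominently in vendors' energy-sector strategies.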

Integration of HPC, analytics, and AI for energy transition (source: HPE)

Public clouds have delivered a dramatic shift in the flexibility and elasticity of compute cycles. Methodologies such as containerized workloads are now also being deployed on on-prem systems, facilitating software portability between public clouds and on-prem data centers. While this flexibility is valuable, once workloads mature and move from development to production, the cost of running in a public cloud can skyrocket.

Another problem with data-intensive workflows is repatriation of data. Uploading data to a cloud provider is usually easy and inexpensive, which is attractive when the data’s value is low. But as customers move to implement AI and analytics, data repatriation can be hampered by egress charges.
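To see why egress charges matter at seismic-data scale, a back-of-the-envelope estimate helps. The sketch below assumes a hypothetical flat rate of $0.09 per GB; actual rates vary by provider, region, and volume tier:

```python
# Back-of-the-envelope cloud egress cost estimate.
# The $0.09/GB rate is a hypothetical assumption; real providers use
# tiered pricing that varies by region and monthly volume.
def egress_cost_usd(data_tb, rate_per_gb=0.09):
    """Estimate the cost of moving `data_tb` terabytes out of a public cloud."""
    return data_tb * 1024 * rate_per_gb

# Repatriating a 500 TB seismic dataset at the assumed flat rate:
print(f"${egress_cost_usd(500):,.0f}")  # prints $46,080 at the assumed rate
```

Even under these simplified assumptions, moving a single large dataset out of the cloud costs tens of thousands of dollars, which is why where data lives drives where compute runs.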

HPE GreenLake, a pillar of HPE’s drive to become a cloud-first technology company, is designed as a best-of-both-worlds solution: it delivers the economics of the public cloud with the security and performance of on-prem IT, providing cloud-like infrastructure and pay-per-use scaling of workloads while customers retain control of their data and gain the benefits of dedicated systems.

Underpinning GreenLake is HPE’s portfolio of integrated HPC solutions across compute, networking, storage, and software, with a single point of contact for all support requirements through HPE Pointnext Services.

HPC and AI solutions portfolio from HPE for energy transition (source: HPE)

These services are provided by HPE support staff with experience helping O&G companies on their energy transition journey, who tailor solutions to each company’s specific needs. Energy companies can run HPC and AI workloads with HPE solutions at the edge (where increasing volumes of data are generated), in data centers, and in cloud environments (for better flexibility and economics).

HPE solutions range from single, small systems through to exascale-class supercomputers with tailored software, interconnect and storage capabilities.

HPE Cray supercomputing systems and the HPE Apollo family are purpose-built HPC and AI platforms that support a wide range of size, complexity, processor, and accelerator choices. HPC options include top-bin CPUs, fast memory, integrated accelerators (GPUs or coprocessors), and fast cluster fabrics and I/O interconnects.

For harsh edge environments such as oil rigs, smart meters and drills, HPE Edgeline systems provide enterprise-class compute, storage, networking, security, and systems management at the edge.

In addition, as energy workflows become more complex and data-intensive, the HPE HPC storage portfolio addresses the storage demands of AI alongside all-flash enterprise file storage in a scalable, cost-effective way.

HPE HPC and AI compute, storage and software solutions portfolio (source: HPE)

The Cray ClusterStor E1000 is purpose-engineered to meet the demanding input/output requirements of supercomputers, and HPE Parallel File System Storage delivers a high-performance solution for HPC clusters. The portfolio also includes object storage and data management framework software to manage, migrate, protect and archive data.

Read the white paper here:
