After more than a decade of planning, the United States’ first exascale computer, Frontier, is set to arrive at Oak Ridge National Laboratory (ORNL) later this year. Crossing this “1,000x” horizon required overcoming four major challenges: power demand, reliability, extreme parallelism and data movement.

Al Geist kicked off ORNL’s Advanced Technologies Section (ATS) webinar series last month by recapping the history of the march toward exascale. As Geist described how the Frontier supercomputer addresses the four primary exascale challenges, he disclosed key information about the anticipated inaugural U.S. exascale computer.

Most notably, Frontier is poised to hit the 20 MW power goal set by DARPA in 2008 by delivering more than 1.5 peak exaflops of performance inside a 29 MW power envelope, which works out to just under 20 MW per exaflop. Although the once-aspirational target was originally set for 2015, until fairly recently it was not clear that the first crop of exascale supercomputers – set to arrive in the 2021-2023 timeframe – would make the cut. Indeed, it is unclear whether they all will, but it is looking like Frontier, built on HPE and AMD technologies, will.
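As a quick sanity check (the figures are the ones quoted above; the arithmetic is ours, not from the article), a one-line Python snippet shows why 29 MW for 1.5 exaflops clears the per-exaflop target:

    # Back-of-envelope check using the figures quoted above:
    # 1.5 peak exaflops inside a 29 MW power envelope.
    peak_exaflops = 1.5
    power_mw = 29.0
    print(f"{power_mw / peak_exaflops:.1f} MW per exaflop")  # ~19.3 MW, under the 20 MW goal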

Geist is a corporate fellow at ORNL, CTO of the Oak Ridge Leadership Computing Facility (OLCF), and CTO of the Exascale Computing Project. He’s also one of the original developers of PVM (Parallel Virtual Machine), a de facto standard for heterogeneous distributed computing.

Geist began his talk with a review of the four major challenges that were set out in the 2008-2009 timeframe, when exascale planning was ramping up within the Department of Energy and its affiliated organizations.

“The four challenges also existed during the petascale regime, but in 2009, we felt there was a serious problem where we might not even be able to build an exascale system,” said Geist. “It wasn’t just that it would be costly, or that it would be hard to program – it may just be impossible.”

Energy consumption loomed large.

“Research papers that came out in 2008 predicted that an exaflop system would consume anywhere from 150 up to 500 megawatts of energy. And the vendors were given this ambitious goal of trying to get that down to 20, which seems like an awful lot,” said Geist.

Then there was reliability: “The fear with the calculations we were doing at the time is that failures would happen faster than you could checkpoint a job,” said Geist.
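To make that fear concrete, here is a minimal sketch of our own (not from the talk), using Young’s approximation for the optimal checkpoint interval with assumed round numbers: as the system-wide mean time between failures (MTBF) approaches the time it takes to write a checkpoint, the fraction of time left for useful work collapses.

    import math

    checkpoint_minutes = 30.0          # assumed time to write one checkpoint
    for mtbf_hours in (24.0, 4.0, 1.0, 0.5):
        mtbf_minutes = mtbf_hours * 60.0
        # Young's approximation for the optimal checkpoint interval
        interval = math.sqrt(2.0 * checkpoint_minutes * mtbf_minutes)
        # Rough overhead model: one checkpoint write per interval, plus
        # about half an interval of lost work per failure
        overhead = checkpoint_minutes / interval + interval / (2.0 * mtbf_minutes)
        useful = max(0.0, 1.0 - overhead)
        print(f"MTBF {mtbf_hours:4.1f} h -> ~{useful:.0%} useful work")

With a 30-minute checkpoint, a 24-hour MTBF still leaves roughly 80 percent of the machine’s time for computation, but at a one-hour MTBF essentially nothing useful gets done.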

It was further thought that billion-way concurrency would be required.

“The question was, could there be more than even just a handful of applications, if even one, that could utilize that much parallelism?” Geist recalled. “In 2009, large scale parallelism was typically less than 10,000 nodes. And the largest application we had on record only used about 100,000 nodes.”

The last issue was a thorny one: data movement.

“We were seeing the whole problem with the memory wall: basically that the time for moving data from memory into the processors and from the processors back out to storage was actually the main bottleneck for doing the computing; the computing time was insignificant,” said Geist. “The time to move a byte is orders of magnitude longer than a floating point operation.”
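The scale of that imbalance is easy to see with assumed round numbers (ours, not Geist’s):

    # Illustrative, assumed round numbers:
    # time to fetch a byte from main memory versus time for one floating point operation.
    dram_latency_ns = 100.0      # typical DRAM access latency
    flop_time_ns = 1.0 / 10.0    # a core sustaining ~10 gigaflops
    print(f"one memory access costs ~{dram_latency_ns / flop_time_ns:.0f} flops of time")
    # -> roughly 1,000x, i.e. orders of magnitude: the "memory wall"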

Geist recalled the DARPA exascale computing report that came out in 2008 (led by Peter Kogge). It included a deep analysis of what it would take to field a 1 exaflops peak system.

With the technologies of the time, it would have taken 1,000 MW to build an exascale system out of off-the-shelf components, but if you scaled the then-current flops-per-watt trends, you’d cross exascale at roughly 155 MW with a very optimized architecture, Geist relayed. A barebones configuration, stripping the strawman system’s memory down to just 16 gigabytes per node, brought the footprint down to 69-70 MW.

But even the aggressive 70 MW figure was out of range. A machine that power-hungry was unlikely to secure the necessary funding approvals.

“You might wonder, where did that [20 MW number] come from?” Geist posed. “Actually, it came from a totally non-technical evaluation of what was possible. What was possible said: it’s gonna take 150 MW. What we said is: we need it to be 20 [MW]. And why we said that is that [we asked] the DOE, ‘How much are they willing to pay for power over the life of a system?’ and the number that came back from the head of Office of Science at the time was that they weren’t willing to pay over $100 million over the five years, so it’s simple math [based on an average cost of $1 million per megawatt per year]. The 20 megawatts had nothing to do with what might be possible, it was just that stake that we drove in the ground.”
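The arithmetic behind that stake in the ground is literally one line:

    # The "simple math" behind the 20 MW target, as described in the talk:
    # a $100M lifetime power budget at roughly $1M per megawatt-year, over five years.
    budget_dollars = 100e6
    years = 5
    dollars_per_mw_year = 1e6
    print(f"{budget_dollars / (years * dollars_per_mw_year):.0f} MW")  # -> 20 MW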

Jumping ahead in the presentation (which is available to watch and is linked at the end of this article), Geist traced the evolution of machines at Oak Ridge: Titan to Summit to Frontier. The extreme-concurrency challenge is addressed by Frontier’s fat-node approach, in which the GPUs hide the parallelism inside their pipelines.

“The number of nodes did not explode – it didn’t take a million nodes to get to Frontier,” said Geist. “In fact, the number of nodes is really quite small.”

Where Titan used a one-to-one GPU-to-CPU ratio, Summit implemented a three-to-one ratio. Frontier’s design kicks that up a notch with a four-to-one GPU-to-CPU ratio.
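As a rough illustration (the numbers below are round figures assumed by us, not quoted by Geist) of where billion-way concurrency actually lives in a fat-node design:

    # Assumed round numbers for a Frontier-class fat-node machine:
    nodes = 9_000              # thousands of nodes, not a million
    gpus_per_node = 4          # the 4:1 GPU-to-CPU ratio described above
    threads_per_gpu = 30_000   # hardware threads a large GPU keeps in flight
    print(f"{nodes * gpus_per_node * threads_per_gpu:,} concurrent threads")
    # -> ~1 billion, with almost all of it hidden inside the GPUs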

“In the end, what we found out was that exascale didn’t require this exotic technology that came out in the 2008 report,” said Geist. “We didn’t need special architectures, we didn’t even need new programming paradigms. It turned out to be very incremental steps, not a giant leap like we thought it was going to take to get to Frontier.”

Read more here:

https://www.hpcwire.com/2021/07/14/frontier-to-meet-20mw-exascale-power-target-set-by-darpa-in-2008/
