
Nearly 20 months after revealing its plans to develop a new version of its Power series of processors, IBM Corp. today is rolling out its first machines based on the new chip, saying they’re the best you can get for compute-intensive artificial intelligence workloads. The new Linux-based AC922 Power Systems can speed up the training times of deep learning frameworks nearly fourfold, IBM said.

In addition to performance improvements in the processor, the systems feature the latest 4.0 version of the Peripheral Component Interconnect Express, or PCI-e, expansion bus; Nvidia Corp.’s NVLink 2.0 high-speed interconnect; and OpenCapi, an interface architecture for connecting microprocessors to memory, accelerators, input/output devices and other processors. The combination boosts data bandwidth as much as tenfold, IBM claimed.

“Power9 is an absolute beast when it comes to moving data, a critical point for AI-centric processes,” said Charles King, president and principal analyst at Pund-IT Inc. “Since AI depends on the repetition of deep learning exercises thousands of times, Power9 systems can cut substantial time off the process.”

The AC922 server comes with two Power9 processors and up to six Nvidia graphics processing units connected by the NVLink interface. “It’s terrific for any accelerated workload,” said Sumit Gupta, vice president of high-performance computing, artificial intelligence and machine learning at IBM. “Machine learning data sets are huge,” he added. “We can move data to accelerators much faster than on Intel systems.”

IBM said Power9 will also be at the heart of the U.S. Department of Energy’s “Summit” and “Sierra” supercomputers, which are expected to be the world’s most powerful. IBM has talked a lot recently about the end of the Moore’s Law performance curve, which saw processor densities double roughly every two years for more than 50 years.
As central processing unit speed improvements slow, systems makers have been looking at outboard accelerators such as GPUs as a way to boost performance. That’s why the inclusion of PCI-e 4.0 and NVLink 2.0 is such a big deal. NVLink 2.0 can communicate at 25 gigabits per second, which is seven to 10 times the speed of the PCI-e 3.0 interconnect used in Intel x86 systems, IBM said.

“Power9 is like a Swiss Army knife of AI accelerators,” said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy. “You can plug in the highest-performance accelerators on the planet and have coherent memory, meaning the accelerator has direct access to system memory.”

The use of co-processors boosts overall performance by farming out tasks in parallel to the on-board accelerators, but the CPU creates a bottleneck, Gupta said. “The biggest problem is that the data comes through the network to the CPU memory, and every accelerator also has its own memory, so you need to move data to the accelerators,” he said. With each GPU holding 16 gigabytes of its own memory, bandwidth between the CPU and GPUs affects overall performance.

“Essentially, Power9 has three interfaces that accelerate connection to other devices and to storage-class memories,” Gupta said. NVLink 2.0 is the most important of those three, Moorhead said. “Adding PCIe first is big, but I believe adding NVLink 2.0 is a bigger deal,” he said. “It gives the company a performance and coherency advantage using multiple GPUs on the same server.”

IBM formed the OpenPower Consortium four years ago as an alternative to Intel’s hegemony, with a focus on collaborative development and high-performance systems. The organization has attracted more than 300 members, including Google Inc., but hasn’t made a significant dent in Intel’s market share. Google announced plans in the spring of 2016 to build a new server based upon the Power9 chip, but has said little about the project since.
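To see why the interconnect matters so much for the bottleneck Gupta describes, here is a rough back-of-envelope sketch of how long it takes to fill one GPU’s 16 GB of memory over a slower versus a faster CPU-to-GPU link. The bandwidth figures are illustrative assumptions, not vendor specifications; the NVLink figure simply applies the seven-times advantage cited in the article to an assumed PCI-e 3.0 baseline.

```python
# Back-of-envelope transfer-time comparison (illustrative figures only).
GPU_MEMORY_GB = 16.0  # per-GPU memory capacity cited in the article

# Assumed effective bandwidths in GB/s. PCI-e 3.0 x16 peaks near 16 GB/s;
# the faster link uses the article's "seven to 10 times" claim (low end).
PCIE3_BANDWIDTH_GBPS = 16.0
FAST_LINK_BANDWIDTH_GBPS = PCIE3_BANDWIDTH_GBPS * 7

def transfer_seconds(size_gb: float, bandwidth_gbps: float) -> float:
    """Seconds needed to move size_gb of data at bandwidth_gbps GB/s."""
    return size_gb / bandwidth_gbps

pcie_time = transfer_seconds(GPU_MEMORY_GB, PCIE3_BANDWIDTH_GBPS)
fast_time = transfer_seconds(GPU_MEMORY_GB, FAST_LINK_BANDWIDTH_GBPS)

print(f"PCI-e 3.0 fill time:  {pcie_time:.2f} s")
print(f"Faster link fill time: {fast_time:.2f} s")
print(f"Speedup:               {pcie_time / fast_time:.1f}x")
```

Since deep learning training repeatedly streams batches of data to every accelerator in the box, per-transfer savings like this compound across thousands of iterations, which is the basis of the training-time claims above.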
IBM said Power9 was a complete rebuild of the processor family that was four years in the making. A slew of systems is due in 2018, though IBM didn’t provide specifics. Pund-IT’s King said it’s been worth the wait, calling Power9 an “AI powerhouse.” “If Power9 and IBM’s related systems deliver as promised, it will substantially rock many AI projects and efforts,” he said. IBM also didn’t reveal pricing, but Gupta said costs will be competitive with a comparable x86-based system.

https://siliconangle.com/blog/2017/12/05/ibm-targets-deep-learning-first-power9-based-systems/
