
NVIDIA CEO Jensen Huang unveils the Volta-based Quadro GV100 for workstations, new inferencing software, technologies providing a 10x boost for deep learning, a self-driving car simulator, and more.

Millions of servers powering the world's hyperscale data centers are about to get a lot smarter. NVIDIA CEO Jensen Huang on Tuesday announced new technologies and partnerships that promise to slash the cost of delivering deep learning-powered services.

Speaking at the kickoff of the company's ninth annual GPU Technology Conference, Huang described a "Cambrian Explosion" of technologies driven by GPU-powered deep learning, bringing support for new capabilities that go far beyond accelerating images and video.

"In the future, starting with this generation, starting with today, we can now accelerate voice, speech, natural language understanding and recommender systems, as well as images and video," Huang, clad in his trademark leather jacket, told an audience of 8,500 technologists, business leaders, scientists, analysts and press gathered at the San Jose Convention Center.

Over the course of a two-and-a-half-hour keynote, Huang also unveiled a series of advances to NVIDIA's deep learning computing platform that deliver a 10x performance boost on deep learning workloads compared with just six months ago; launched the Quadro GV100, transforming workstations with 118.5 TFLOPS of deep learning performance; and introduced DRIVE Constellation to run self-driving car systems for billions of simulated miles.

Power to the Pros

Huang's keynote got off to a brisk start with the launch of the new Quadro GV100. Based on Volta, the world's most advanced GPU architecture, the Quadro GV100 packs 7.4 TFLOPS of double-precision, 14.8 TFLOPS of single-precision and 118.5 TFLOPS of deep learning performance, and is equipped with 32GB of high-bandwidth memory.

NVIDIA CEO Jensen Huang launches the Quadro GV100.

GV100 sports a new interconnect, NVLink 2, that extends the programming and memory model beyond our GPU to a second one. The two essentially function as one GPU: combined, they offer 10,000 CUDA cores, 236 teraflops of Tensor Core performance and 64GB of memory, all used to revolutionize modern computer graphics.

Deep Learning's Swift Rise

The announcements come as deep learning gathers momentum. In less than a decade, the computing power of GPUs has grown 20x, representing growth of 1.7x per year, far outstripping Moore's law, Huang said.

"We are all in on deep learning, and this is the result," Huang said.

Drawn to that growing power, in just five years the number of GPU developers has risen 10x to 820,000. Downloads of CUDA, our parallel computing platform, have risen 5x to 8 million.

"More data, more computing are compounding together into a double exponential for AI; that's one of the reasons why it's moving so fast," Huang said.

Bringing Deep Learning Inferencing to Millions of Servers

The next step: putting deep learning to work on a massive scale. To meet this challenge, technology will have to address seven requirements: programmability, latency, accuracy, size, throughput, energy efficiency and rate of learning. Together, they form the acronym PLASTER.
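As an aside, the growth figures Huang cited above are self-consistent: compounding at a constant 1.7x per year, a 20x increase takes roughly 5.6 years, comfortably "less than a decade." The 20x and 1.7x figures come from the keynote; the arithmetic below is a quick sanity check, not part of NVIDIA's presentation.

```python
import math

# Keynote figures: GPU compute grew 20x, at roughly 1.7x per year.
total_growth = 20.0
annual_rate = 1.7

# Solve annual_rate ** years == total_growth for years:
# years = log(total_growth) / log(annual_rate)
years = math.log(total_growth) / math.log(annual_rate)

print(f"{total_growth:.0f}x at {annual_rate}x/year takes about {years:.1f} years")
```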
