
Right now, your options when it comes to AI accelerators are…well…rapidly accelerating. We're seeing an explosion of vendors, from established companies to startups, offering a long list of new AI accelerator products that goes beyond the traditional GPUs used for AI applications.

Confusing? Sometimes. Hard to keep track? Yes. Not sure which accelerator is the right choice for your AI workloads? You are not alone.

The good news is that we at HPE can help you navigate this changing landscape of advanced and novel AI accelerators. We have stayed deeply engaged with developments on this front, and today we work closely with a variety of accelerator vendors so we can guide our customers through their evaluations. Here's what we know.


I. Productivity 

Many of the vendors in the AI accelerator space are focused on accelerators to speed up the training of AI models. This allows data scientists and applied mathematicians to design novel neural networks—enabling faster time to solution by reducing the training time needed for desired accuracy.

II. Deep learning requirements 

Applications of deep learning (DL) are driving much of the interest in new AI accelerators. As production DL workloads mature, they are creating new requirements related to increased model sizes and sample sizes. Data scientists can work faster and more easily if they don’t need to break up models across processors to train them. Many new accelerators make this possible, with some purpose-designed to anticipate massive growth in DL model sizes. Some new AI accelerators also support DL training of larger models with larger individual sample sizes, such as high-resolution images.
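To make that concrete, here is a minimal sketch of the kind of manual model splitting that larger-memory accelerators can make unnecessary. It assumes PyTorch and two CUDA devices are available; the SplitModel class and the layer sizes are purely illustrative and not tied to any particular accelerator.

```python
# Illustrative only: when a model no longer fits on one device, layers are
# pinned to different devices and activations are shuttled between them by hand.
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        # First half of the network lives on device 0, second half on device 1.
        self.part1 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Sequential(nn.Linear(4096, 10)).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        # Activations must be copied across the device boundary on every step.
        return self.part2(x.to("cuda:1"))

model = SplitModel()
out = model(torch.randn(8, 4096))  # a batch of 8 high-dimensional samples
print(out.shape)  # torch.Size([8, 10]), computed across two devices
```

An accelerator with enough on-chip memory to hold the whole model avoids this bookkeeping entirely, which is the productivity gain described above.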

III. Operational costs

In large data centers with AI workloads, the cost of GPUs, coupled with the cost of power associated with CPU-GPU utilization, is significant. More specialized AI accelerators and processors—designed specifically for AI workloads and models—can deliver higher performance per dollar.

Beyond the need to mitigate energy costs, many organizations are also looking to reduce their data center energy footprint because of environmental concerns and related regulatory requirements. More choice in AI accelerators offers a way to improve performance per watt and trim overall energy spend.

Constraints on data center floor space and power availability are concerns as well. Existing data centers have their power infrastructure already in place, and the amount of power available per rack is often limited; supplying more power to processors that demand it can mean costly upgrades. Several new AI accelerators deliver higher efficiency and take up less space, reducing or eliminating the need to increase the power supply.
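For a rough sense of how these cost factors interact, here is a back-of-the-envelope sketch in Python. Every figure in it (throughput, price, power draw, rack budget) is hypothetical and serves only to illustrate the performance-per-dollar, performance-per-watt, and rack-power calculations described above.

```python
# Hypothetical comparison of a general-purpose GPU and a specialized accelerator.
candidates = {
    # name                  (perf samples/s, price $, power W)  -- illustrative values
    "general-purpose GPU":  (1_000, 10_000, 400),
    "specialized AI accel": (1_800, 12_000, 300),
}

RACK_POWER_BUDGET_W = 12_000  # illustrative per-rack power limit

for name, (perf, price, power) in candidates.items():
    per_dollar = perf / price                    # performance per dollar
    per_watt = perf / power                      # performance per watt
    fit_in_rack = RACK_POWER_BUDGET_W // power   # devices that fit the power budget
    print(f"{name:22s} perf/$={per_dollar:.3f}  perf/W={per_watt:.2f}  "
          f"devices per rack={fit_in_rack}")
```

Even with made-up numbers, the exercise shows why a device with better efficiency can win on operating cost despite a higher sticker price.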

IV. Market economics

As AI gets more broadly adopted and operationalized, organizations want more processor choices. They are seeking a diversity of architectures that lets them do things existing architectures cannot. One or two options are no longer sufficient. AI practitioners want competitive alternatives to CPUs and GPUs.

What's more, AI workloads are now moving from inside the data center out to the cloud and the intelligent edge. Think about autonomous cars and smart wind turbines: places where there is a growing need for different kinds of AI accelerators that can work in different form factors and environments.

New innovations and developments are happening every day in the dynamic world of AI accelerators. Case in point: Cerebras Systems, the pioneer in high-performance AI compute, and EPCC, the supercomputing centre at the University of Edinburgh, just announced the selection of the Cerebras CS-1 for EPCC's new international data facility. Paired with the HPE Superdome Flex Server, the CS-1 is built around the world's largest processor, the WSE, which is 56 times larger than leading GPUs and has 54 times more cores, 450 times more on-chip memory, 5,788 times more memory bandwidth, and 20,833 times more fabric bandwidth.

HPE is here to be a trusted partner in your organization’s quest to optimize your AI workloads with emerging accelerator technologies. We’ve surveyed over 50 accelerator vendors, and through early access we have run hands-on AI accelerator benchmarks on several. We’ve combined this knowledge and experience into a process to help you choose the emerging architecture that is mature and ready for your AI needs.

Read more here:

https://community.hpe.com/t5/Tech-Insights/Ready-to-navigate-the-AI-accelerator-landscape/ba-p/7118163#.YBtaBqROK_Z
