Deep Learning Places New Demands on Data Center Architectures
Machine and deep learning applications bring new workflows and challenges to enterprise data center architectures. One of the key challenges revolves around data and the storage solutions needed to store, manage, and deliver it at the scale and speed AI demands. Today's intelligent applications require infrastructure that is very different from that of traditional analytics workloads, and an organization's data architecture decisions will have a big impact on the success of its AI projects.
These are among the key takeaways from a new white paper by the research firm Moor Insights & Strategy.
“While discussions of machine learning and deep learning naturally gravitate towards compute, it’s clear that these solutions force new ways of thinking about data,” the firm notes in its “Enterprise Machine & Deep Learning with Intelligent Storage” paper. “Deep learning requires thinking differently about how data is managed, analyzed and stored.”
So, how do you think differently? One way is to picture compute and storage as dance partners that stay in lockstep with each other. When a storage system is paired with a deep learning compute system, it should be able to access and serve up large data sets with extreme concurrency, without forcing the processing elements to stall while they wait for data.
“Deep learning requires large amounts of data to be fed into the processor without making the processors wait for that data,” Moor Insights & Strategy says. “Properly marrying compute with the right storage technology, such as the Dell EMC Isilon series, allows data to be fed into the machine learning pipeline at the speed of the processor.”
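To make the point about keeping processors fed more concrete, here is a minimal sketch of a training input pipeline that overlaps data loading with computation. It uses PyTorch's DataLoader; the synthetic dataset, batch size, and worker counts are illustrative assumptions, not details from the white paper, and a real deployment would read records from the shared storage system instead.

```python
import torch
from torch.utils.data import DataLoader, Dataset

class SyntheticSamples(Dataset):
    """Stand-in for samples that would normally be read from shared storage."""
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        # A real dataset would read and decode a record from the storage system here.
        return torch.randn(3, 224, 224), torch.randint(0, 1000, ()).item()

loader = DataLoader(
    SyntheticSamples(10_000),
    batch_size=256,
    num_workers=8,       # parallel reader processes keep I/O requests in flight
    pin_memory=True,     # page-locked buffers speed host-to-GPU copies
    prefetch_factor=4,   # each worker queues several batches ahead of the GPU
)

for images, labels in loader:
    if torch.cuda.is_available():
        images = images.cuda(non_blocking=True)  # copy overlaps with compute
    # ... the forward and backward passes would run here ...
```

The pattern illustrates the white paper's argument: only when the storage layer can sustain many concurrent reads do the parallel workers and prefetch queues actually keep the processors busy.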