Categories: AMD, Intel, IoT, Machine learning

Today’s data-intensive applications, spanning fast-growing markets such as #IoT, #analytics, and #machinelearning as well as multifaceted research initiatives, require unique approaches to #processing, #networking, and #storage. These applications call on diverse processor strategies to address their different workloads, with @AMD, @Intel Core and #XeonPhi, #GPUs, and Power processors deployed heterogeneously to meet differing requirements. Higher performance requirements are a perpetual trend, but the super-linear increase in I/O pressure, driven by tougher I/O patterns, higher concurrency, and heavy read access, is outstripping the default high-performance I/O infrastructure’s ability to keep up. Massively parallel file systems have dealt well with homogeneous, large sequential I/O, a workload pattern simply not found in these emerging applications.

Instead of the disruptive approach of wholesale replacement of existing file system technologies, DDN chose to leverage the rapid commoditization of flash memory with a software-defined storage layer that sits between the application and the file system. In fact, IME can be deployed to cost-effectively extend the life of existing file system solutions. With a scale-out approach, DDN’s Infinite Memory Engine® (IME®) presents an I/O interface that sits above, yet remains tightly integrated with, the file system to transparently eliminate I/O bottlenecks. IME unlocks previously blocked applications by delivering predictable job performance, faster computation against data sets too large to fit in memory, and acceleration for I/O-intensive applications. Because the approach is completely software-defined, IME is server- and storage-agnostic, and application-transparent: it maintains file system semantics, so no code changes, scheduler changes, or API usage are required.
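Because IME maintains file system semantics, an application keeps issuing ordinary POSIX I/O calls; according to the text, only the deployment changes, not the code. A minimal sketch of that idea, where the directory names (including the IME-style mount point) are purely illustrative, not real paths from the product:

```python
import os


def write_checkpoint(base_dir, step, data):
    """Write a checkpoint file using plain POSIX-style I/O.

    The application code is identical whether `base_dir` points at the
    parallel file system directly or at an accelerated mount: the
    acceleration layer is transparent to the caller.
    """
    path = os.path.join(base_dir, f"checkpoint_{step}.bin")
    with open(path, "wb") as f:
        f.write(data)
    return path


# The only difference between the two deployments would be the path prefix
# (both directory names below are hypothetical examples):
#   write_checkpoint("/pfs/scratch", 42, payload)   # direct to file system
#   write_checkpoint("/ime/scratch", 42, payload)   # through the acceleration layer
```

The point of the sketch is simply that no new API appears anywhere in the application.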
IME not only serves as an accelerating shim between application and file system; it also addresses data center challenges common in high-performance environments. IME’s strategic use of flash can reduce space, power, and cooling requirements by 10x to 300x compared with legacy storage approaches. This allows administrators to keep scaling applications to meet workload demands while maintaining high performance, independent of the amount of storage capacity behind the file system. Developed from scratch specifically for flash media, IME takes a unique approach to performance while also delivering data security and durability. IME eliminates traditional I/O slowdowns under extreme load, dials in resilience through erasure coding on a per-file or per-client basis, and delivers lightning-fast rebuilds thanks to its fully declustered, distributed data architecture. These capabilities combine to free complex applications through lower cost while delivering deeper insight and smarter productivity. To learn how you can leverage the power of IME to improve the efficiency of your compute, storage, and networking, visit the DDN website.
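The resilience claim rests on erasure coding: data is split into fragments with redundancy so a lost fragment can be rebuilt from the survivors. IME’s actual coding scheme is not described here, so the sketch below uses the simplest possible erasure code, a single XOR parity across k data fragments, only to show the general principle (fragment count and padding are illustrative choices):

```python
from functools import reduce


def _xor(a, b):
    """Byte-wise XOR of two equal-length fragments."""
    return bytes(x ^ y for x, y in zip(a, b))


def encode(data, k):
    """Split `data` into k equal fragments plus one XOR parity fragment."""
    if len(data) % k:
        data += b"\x00" * (k - len(data) % k)  # pad to a multiple of k
    size = len(data) // k
    frags = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = reduce(_xor, frags)
    return frags, parity


def rebuild(frags, parity, lost_index):
    """Reconstruct one lost fragment by XOR-ing the survivors with parity.

    Single XOR parity tolerates exactly one lost fragment; production
    erasure codes tolerate more by using additional parity fragments.
    """
    survivors = [f for i, f in enumerate(frags) if i != lost_index]
    return reduce(_xor, survivors + [parity])
```

A real declustered layout would spread these fragments across many devices so that a rebuild reads from all of them in parallel, which is the basis of the fast-rebuild claim.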
