Pure Storage And NVIDIA Announce AIRI Converged Infrastructure Reference Architecture
Last month, I attended @NVIDIA’s GTC 2018 conference, which has turned into one of the premier #AI, #ML, and #DL conferences in the world. You can read my analysis here. Part of the reason for this reputation is NVIDIA’s leadership in machine and deep learning training, which it helped create and popularize with its highly programmable, high-performance GPUs.
With less fanfare and credit, Pure Storage has taken a similar approach to storage for AI, ML, and DL, bringing out AI-optimized FlashBlade storage systems to take advantage of parallelized AI workflows. An AI solution without the right storage is like a fiddler crab with one giant claw: it can only get around as fast as that one giant claw allows. We have previously covered this potential imbalance in a blog and a paper, both of which discuss the importance of parallelized storage for parallelized AI workloads.

@NVIDIA and @PureStorage are now pooling their combined “super-powers” to create what we normally expect from @HPE, @DellEMC, @IBM, and @Lenovo: a converged infrastructure platform, called #AIRI. AIRI (AI Ready Infrastructure) is an AI-optimized converged infrastructure reference design, available from Pure Storage and NVIDIA partners, containing different combinations of Pure Storage FlashBlades, NVIDIA DGX-1s, and Arista switches.

AIRI is targeted at enterprises, not at CSPs, who design their own systems and have ODMs build them. Holistically, enterprises have two choices for getting into AI: they can deploy on-prem systems like AIRI or offerings from vendors like HPE, Dell EMC, IBM, and Lenovo, or they can use AI cloud services from AWS, Azure, GCP, or IBM Cloud and Watson. Enterprises do not want to fall behind as they did with the public cloud, which cost them both time and efficiency. AWS had a seven-year public cloud head-start, and many enterprises were caught flat-footed with the private cloud. Enterprises are therefore hoping to get a much quicker start on AI applications that address security, cost, or latency concerns and thus should run locally rather than in the public cloud.