Overcoming Space and Power Limitations in HPC Data Centers
In companies of all sizes, critical applications are being adopted to accelerate product development, make forecasts based on predictive models, enhance business operations, and improve customer engagement. As a result, many businesses face a growing need for Big Data analytics, more sophisticated and more granular modeling and simulation, widespread adoption of AI (and the need to train neural networks), and new applications such as genomic analysis in clinical settings and personalized medicine. These applications generate workloads that overwhelm the capacity of most installed data center server systems. Simply put, today's compute-intensive workloads require access to significant HPC resources.

Challenges Bring HPC to the Mainstream

Many of today's new and critical business applications are pushing the limits of traditional data centers. As a result, most companies that previously did not need HPC capabilities now find such processing power is required to stay competitive. Unfortunately, several problems stand in the way. When attempting to upgrade infrastructure, most organizations face inherent data center limitations on space and power. Specifically, many data centers lack the physical space to significantly increase compute capacity. All organizations incur high electricity costs to run and cool servers, and some data centers have power constraints that cannot be exceeded. Additionally, there is a lack of internal HPC expertise. IT staff may not have the knowledge base to determine which HPC elements (including processors, memory, storage, power, and interconnects) are best for the organization's workloads, or the expertise to carry out HPC system integration and optimization. These skills have not been required for mainstream business applications until now.
As a result, most organizations need help selecting an HPC solution that matches their compute requirements and budget constraints, and that fits into an existing data center.

Selecting the Right Technology Partner

Traditional clusters consisting of commodity servers and storage will not run the compute-intensive workloads being introduced into many companies today. Fortunately, HPC systems can be assembled using the newest generation of processors, high-performance memory, high-speed interconnect technologies, and high-performance storage devices such as NVMe SSDs. However, to address data center space and power issues, an appropriate solution must deliver not just HPC capabilities, but the most compute power per watt in a densely packed enclosure. To achieve this, it makes sense to find a technology partner with deep HPC experience who can combine optimized system solutions with rack hardware integration and software solution engineering to deliver ultimate customer satisfaction.

This is an area where Super Micro Computer, Inc. can help. Supermicro® offers a wide range of solutions to meet the varying HPC requirements found in today's organizations. At the heart of its HPC offerings are the SuperBlade® and MicroBlade™ product lines, which are advanced high-performance, density-optimized, and energy-efficient solutions for scalable, resource-saving HPC applications. Both lines offer industry-leading performance, density, and energy efficiency. They also support optional BBP® (Battery Backup Power) modules, giving the systems extra protection when a power outage or UPS failure occurs. This feature is ideal for critical workloads, ensuring uptime in the most demanding situations. SuperBlade and MicroBlade solutions are offered in several form factors (8U, 6U, 4U, 3U) to meet the various compute requirements of different business environments.
At the high end of the spectrum is the 8U SuperBlade.

The SBE-820C series enclosure supports 20x 2-socket (Intel® Xeon® Scalable processor) blade servers with 40 hot-plug NVMe SSDs, or 10x 4-socket (Intel® Xeon® Scalable processor) blade servers with 80 hot-plug NVMe SSDs, along with a 100Gbps EDR InfiniBand or 100Gbps Intel® Omni-Path switch and 2x 10GbE switches. This SKU is best for HPC, enterprise-class applications, cloud computing, and compute-intensive applications.

The SBE-820J series enclosure supports 20x 2-socket (Intel® Xeon® Scalable processor) blade servers with 40 hot-plug NVMe SSDs, or 10x 4-socket (Intel® Xeon® Scalable processor) blade servers with 80 hot-plug NVMe SSDs, and 4x Ethernet switches (25GbE/10GbE). This SKU is similar to the one above, except it is built to operate at 25G/10G Ethernet instead of 100G InfiniBand or Omni-Path. It is most suitable for HPC workloads in IT environments that leverage Ethernet switches with 40G or 100G uplinks.

The 8U SuperBlade offering includes the highest-density x86-based servers, supporting Intel® Xeon® Scalable processors up to 205W. One Supermicro customer, a leading semiconductor equipment company, is using 8U SuperBlade systems for HPC applications with 120x 2-socket (Intel® Xeon® Scalable processor) blade servers per rack. This allows the company to save a significant amount of space and investment dollars in its data center.

Supermicro solutions also helped a Fortune 50 company scale its processing capacity to support rapidly growing compute requirements. To address space limitations and power consumption issues, the company deployed over 75,000 Supermicro MicroBlade disaggregated, Intel® Xeon® processor-based servers at its Silicon Valley data center. Both SuperBlade and MicroBlade feature advanced airflow and thermal design and can support free-air cooling. As a result, this data center is one of the world's most energy efficient, with a Power Usage Effectiveness (PUE) of 1.06.
Compared to a traditional data center running at a PUE of 1.49, this new Silicon Valley data center powered by Supermicro blade servers cuts non-IT energy overhead by roughly 88 percent (from 0.49 watts to 0.06 watts of overhead per watt of IT load). When the build-out is complete at a 35 megawatt IT load, the company is targeting $13.18M in savings per year in total energy costs across the entire data center.
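As a sanity check, the PUE arithmetic behind these figures can be sketched in a few lines. Note that the electricity price is not stated in the source; $0.10/kWh is an assumption chosen for illustration because it reproduces the quoted savings.

```python
# Sketch of the PUE arithmetic behind the figures above.
# Assumption (not in the source): electricity at $0.10/kWh,
# year-round operation (8,760 hours).

def pue_overhead_reduction(pue_old: float, pue_new: float) -> float:
    """Fractional reduction in non-IT (overhead) energy per watt of IT load."""
    return ((pue_old - 1) - (pue_new - 1)) / (pue_old - 1)

def annual_energy_cost(it_load_mw: float, pue: float, usd_per_kwh: float) -> float:
    """Total facility energy cost per year, in dollars."""
    it_load_kw = it_load_mw * 1000
    return it_load_kw * pue * 8760 * usd_per_kwh

reduction = pue_overhead_reduction(1.49, 1.06)      # ~0.878, i.e. "88 percent"
savings = (annual_energy_cost(35, 1.49, 0.10)
           - annual_energy_cost(35, 1.06, 0.10))    # ~$13.18M per year

print(f"overhead reduction: {reduction:.0%}")
print(f"annual savings: ${savings / 1e6:.2f}M")
```

At 35 MW of IT load, the 0.43 difference in PUE corresponds to about 131.8 GWh of avoided facility energy per year, which matches the $13.18M figure at the assumed rate.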