Hyperconverged infrastructure (HCI) ticks a lot of boxes for a lot of organizations. Recent figures from IDC showed that growth in the overall converged systems market was flat in the fourth quarter of 2020, but sales growth for hyperconverged systems accelerated, up 7.4 percent and accounting for almost 55 percent of the market.
It’s not hard to see why this might be. The pandemic has forced many organizations to support largely remote workforces, driving a surge of interest in systems that can support virtual desktop infrastructure. Those all-in-one systems seem to offer a straightforward way to scale up compute and storage to meet these challenges in a predictable, albeit slightly lumpy, way.
But what if your workloads are unpredictable? Are you sure your storage capacity needs will always grow in lockstep with your compute needs? Looked at from this point of view, HCI can be a somewhat inflexible way to scale up your infrastructure, leaving you paying for and managing storage and/or compute resources that you don’t actually need. Suddenly that tight level of integration is a source of irritation. Aggravation, even.
This is why HPE has begun to offer customers a path to “disaggregation” with the HPE Nimble Storage dHCI line, which allows compute nodes, in the shape of HPE ProLiant servers, to share HPE Nimble storage arrays, while still offering the benefits of traditional HCI.
HPE’s Chuck Wood, Senior Product Marketing Manager, HPE Storage and Hyperconverged Infrastructure, says that while the classic HCI model delivers when it comes to deployment and management, admins still face complexity when it comes to the actual infrastructure.
“In traditional HCI, when you need to do lifecycle management, like adding nodes or even doing upgrades, all of those operations can be disruptive, because your apps and your data are on the same nodes,” he says.