Software-Defined Storage & Containers: Solution For Shared Service Providers
Shared service providers, including telephone companies, hosting services, and related businesses, stand to benefit from the combination of containerized microservices and software-defined storage.

What you need, where you need it, when you need it. One of the core goals of cloud computing can essentially be summed up that way. Whether you need application access, more processing power or memory, or more data capacity, cloud computing enables rapid scaling up and down. The "down" matters as much as the "up": just as rapidly as a cloud user can request and access a resource, it is equally important to release that resource when it is no longer needed. This supports another core goal of cloud computing: dramatically reducing operating cost, especially through optimized utilization of pooled resources.

Containers Continue Changing The Cloud

The underlying fabric, the way in which we actually use cloud computing to execute applications and manage workloads, is rapidly advancing. Before the cloud, and even in the earliest days of the transition to cloud computing, applications were monolithic assemblies of code running on a processor and obtaining or recording needed data on large network-attached storage (NAS) appliances. Scaling up meant adding more drives to the appliance; these drives were usually highly specialized and therefore more expensive. If a flaw developed in the software, it usually brought the entire application down. This model is still used in many environments today.

More recently, applications have become assemblies of microservices: individual processes that are each containerized along with all of the resources they require to execute, including specifications for where and what storage they require. This is highly consistent with the fundamental architecture of cloud-connected networks.
These containers can be instantiated wherever they will be used most efficiently, and if a flaw develops in one of them, that container is simply discarded and re-instantiated while the rest of the application continues to run. This provides very high resilience and reliability.

In the earliest days of containerized microservices, however, storage was still provided from monolithic NAS appliances, creating a significant performance penalty because every storage request had to transit the network.
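To make the idea of a container declaring its own storage requirements concrete, here is a minimal sketch in the form of a Kubernetes manifest. This is an illustrative assumption, not something specified in the article: the names, the size, and the storage class `sds-fast` are hypothetical placeholders for a class backed by software-defined storage.

```yaml
# Hypothetical example: a microservice declaring the storage it requires.
# "sds-fast" is an illustrative storage class, assumed to be backed by
# software-defined storage rather than a monolithic NAS appliance.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: sds-fast
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: orders-service
spec:
  restartPolicy: Always            # a failed container is re-instantiated
  containers:
    - name: orders
      image: example.com/orders:1.0   # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /var/lib/orders
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: orders-data
```

Because the storage claim travels with the workload definition, the orchestrator can instantiate the container wherever suitable storage can be provisioned, rather than tying it to a single appliance.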