What Microsoft Azure Stack Is and Isn’t
In September 2017 Microsoft released Azure Stack, after teasing its appearance for two years. Labeled as the first "hybrid cloud appliance" (some may disagree with that label), it's nevertheless a very interesting platform, essentially offering the best of Azure in a rack in your datacenter. In this article I'll look at what Azure Stack is and what it isn't, the primary use cases, and how it might fit into your organization's cloud strategy.

Hardware + Software

You don't buy Azure Stack from Microsoft; you take your pick of several "integrated systems" from Cisco, Dell EMC, HPE or Lenovo. These come in four-, eight- and 12-node configurations, 12 nodes being the current scale limit for Azure Stack (expanding an existing stamp with more nodes, as well as joining a new stamp to an existing one, is coming). If you compare the different offerings they cater to slightly different buyers, with memory per node ranging from 256GB to 768GB, from 12 to 24 cores per CPU, and from 6TB to 12TB of SSD cache with 40TB to 100TB of HDD storage. Some SKUs offer NVMe storage for the cache (essentially SSD/flash storage connected directly to the PCI Express bus), and there are all-flash configurations available (using SSD for the capacity tier and NVMe for the cache layer).

Underlying Azure Stack is an Active Directory domain, running on virtual machines (VMs). The nodes all run Windows Server 2016 Hyper-V Core, with the VMs that provide the Azure infrastructure sharing the space with tenant VMs. Storage is provided through Storage Spaces Direct (S2D), combining the local storage (HDD, SSD, NVMe) in each node into high-performance, tiered and resilient VM storage. S2D currently has a limit of 12 nodes, which explains the current 12-node maximum in Azure Stack.
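Because S2D in Azure Stack mirrors each data slab three ways (more on that below), raw disk translates to roughly a third of usable capacity. A minimal Python sketch of that arithmetic, using illustrative figures rather than vendor specs:

```python
def usable_capacity_tb(nodes: int, hdd_per_node_tb: float, copies: int = 3) -> float:
    """Rough usable capacity for a mirrored S2D pool.

    Three-way mirroring stores each slab on three different nodes,
    so usable space is roughly raw capacity divided by the copy count.
    (Illustrative only: ignores the cache tier, reserve capacity and metadata.)
    """
    raw_tb = nodes * hdd_per_node_tb
    return raw_tb / copies

# A hypothetical four-node stamp with 40TB of HDD capacity per node:
print(usable_capacity_tb(4, 40.0))  # 160TB raw works out to roughly 53TB usable
```

The real planning math is more involved (cache sizing, slack space for rebuilds), but the three-to-one ratio is the headline number when sizing a stamp.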
While S2D can be configured with a few different resiliency types, Azure Stack only uses three-way mirroring, storing three copies of each data slab on three different nodes, meaning you can lose up to two nodes in a cluster and still be up and running. The file system is Resilient File System (ReFS), the preferred file system for Hyper-V in Windows Server 2016 and later. Internal networking is RDMA, in most cases 40Gbps or faster between nodes. Networking services come from the SDN stack in Windows Server 2016, relying on the Network Controller (NC): a distributed, highly available (multiple VMs) configuration and automation plane for controlling both virtual and physical networks. The NC manages Virtual Networks (vNets), the Datacenter Firewall (a software firewall), the Software Load Balancer (SLB) and the RAS Gateway.

What Azure Stack Isn't

Based on these specifications you've probably realized that Azure Stack is not a DIY virtualization platform for your server room. It's not a Hyper-V platform (fantastic hypervisor, average management tools) for you to migrate your current workloads to. And it's not something you run on your own hardware. Early in the lead-up to the release, Microsoft made it clear that Azure Stack (for production deployments) would only be offered through select vendors. It makes sense for Microsoft to do it this way. What you get is a turnkey, appliance-like solution that you and the hardware vendor integrate into your existing environment, which is then ready to deploy workloads onto. The geek in me (and, I suspect, in many reading this) wants to take it apart, figure out how it works and build my own, better version. But a company that drops $100,000 to $300,000 (there are no list prices available, but that's the range I've heard) wants something that just works, not something for us geeks to play with.

Azure Stack also isn't version 2 of Azure Pack.
That free add-on, which lived on top of Windows Server 2012 R2 Hyper-V, System Center 2012 R2 Virtual Machine Manager (VMM)/Operations Manager (SCOM) and Service Provider Foundation (SPF), provided a "looks like Azure" self-service cloud model for your datacenter. For Pack you picked your own physical servers, your own storage and networking, and then layered the software on top (a consultant's dream). But the software stack was fragile and only provided a small subset of the services available in Azure; more crucially, it only offered the old Azure Service Management (ASM) API model. So today with Pack, when all of Azure is built on Azure Resource Manager (ARM), your on-premises cloud only offers ASM, which makes it far less useful as a hybrid cloud platform.

There's nothing DIY about Azure Stack. You're not allowed to run agents for anti-malware, backup or monitoring on the hosts, although agentless monitoring is possible, through either SCOM or Nagios. You have very limited access to the built-in AD forest underlying Azure Stack, although there's a privileged PowerShell Just Enough Administration (JEA) endpoint you can access for some tasks; in emergencies, with a token from Microsoft support, you can access more high-level cmdlets.

Azure Stack also isn't set and forget. Microsoft provides monthly updates for the software stack, and you have to update at least every three months to remain supported; the idea being that, apart from bug and security fixes, these updates keep Stack in sync with public Azure. Also, to maintain parity with public Azure, many features of Hyper-V aren't exposed, such as Generation 2 VMs, VHDX support and Live Migration.

Finally, Azure Stack isn't a replacement for your current server virtualization platform. If you have VMware or Hyper-V clusters in your datacenter today, replacing them with Azure Stack just to run VMs isn't the right way forward (nor would it be cost-effective).
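That three-month update requirement effectively gives every stamp a rolling support deadline to track. A toy sketch of the check (the three-month window comes from the support policy; approximating it as 90 days, and the function itself, are my own):

```python
from datetime import date, timedelta

# "At least every three months", approximated here as 90 days.
SUPPORT_WINDOW = timedelta(days=90)

def still_supported(last_update: date, today: date) -> bool:
    """True if the stamp took an update recently enough to remain supported."""
    return today - last_update <= SUPPORT_WINDOW

print(still_supported(date(2018, 1, 1), date(2018, 2, 15)))  # True: 45 days since last update
print(still_supported(date(2018, 1, 1), date(2018, 5, 1)))   # False: 120 days, out of support
```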
Connecting Azure Stack

There are two main modes for deploying Azure Stack: fully disconnected (think ships, submarines and remote branch offices) or Internet-connected. In the latter case you can use the pay-as-you-use model, where the usage of Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services is reported back through your Azure AD tenant in public Azure and charged just like normal Azure usage. If you're disconnected, you're charged on the number of CPU cores in your Stack.

Deploying Azure Stack in your datacenter involves connecting the networking through Border Gateway Protocol (BGP) configuration and connecting it to your identity provider. There are two identity models: Active Directory Federation Services (AD FS) or Azure AD. The former can't be used in a multitenant deployment, but it's your only option in a fully disconnected deployment. There are built-in systems in Azure Stack for service providers to resell capacity to multiple tenants through the Cloud Solution Provider (CSP) framework.

Backup of the infrastructure is simple: provide a file share outside of Azure Stack and a scheduled job backs up the internal side of Azure Stack. For tenant IaaS and PaaS workloads, however, you need to look at other options, such as Azure Site Recovery (ASR) or third-party solutions.

The Point of Azure Stack

Given the laundry list of what Azure Stack isn't, where does it fit? The promise of Azure Stack is that it brings public Azure into your own datacenter. A selection of IaaS VM sizes (A, D and Dv2 series) can be provisioned, and the PaaS services include SQL Server and MySQL, Key Vault (for storing certificates and passwords securely), Azure Functions (serverless compute) and App Service (Web applications). You can download Microsoft and third-party offers from the Azure Marketplace and provide them to your tenants.

[Figure: The Azure Stack Web app.]
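The identity-model rules above (AD FS is the only disconnected option; AD FS can't serve multiple tenants) reduce to a simple decision. Sketched here in Python purely as an illustration; the function is mine, not any Microsoft tooling:

```python
def identity_model(internet_connected: bool, multitenant: bool) -> str:
    """Pick an Azure Stack identity provider per the deployment rules:
    a disconnected stamp must use AD FS, and AD FS cannot back a
    multitenant deployment."""
    if not internet_connected:
        if multitenant:
            raise ValueError("AD FS (the only disconnected option) doesn't support multitenancy")
        return "AD FS"
    return "Azure AD" if multitenant else "Azure AD or AD FS"

print(identity_model(internet_connected=False, multitenant=False))  # AD FS
print(identity_model(internet_connected=True, multitenant=True))    # Azure AD
```

The corollary is worth calling out: a fully disconnected, multitenant deployment isn't a supported combination at all.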
The big drawcard is that Azure Stack, more than any other offering on the market today, is a true hybrid cloud. Almost anything you write to deploy on public Azure should run in Azure Stack without change, in a DevOps model (provided the relevant Resource Provider has been ported to Stack). The ARM QuickStart templates for public Azure have counterparts in a set of QuickStart templates that work in Stack, and more Resource Providers are coming to Azure Stack.
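That "write once, deploy to either cloud" promise comes down to ARM exposing the same deployment API behind different management endpoints. A rough illustration in Python; the Azure Stack endpoint and names below are hypothetical examples, and real stamps use an operator-defined FQDN:

```python
# The same ARM deployment call targets public Azure or an Azure Stack
# stamp simply by switching the management endpoint; the template and
# the request shape stay the same. (Endpoints and IDs are illustrative.)
PUBLIC_AZURE = "https://management.azure.com"
AZURE_STACK = "https://management.local.azurestack.external"  # hypothetical stamp endpoint

def deployment_url(endpoint: str, subscription: str,
                   resource_group: str, deployment: str) -> str:
    """Build the ARM REST URL for a template deployment."""
    return (f"{endpoint}/subscriptions/{subscription}"
            f"/resourcegroups/{resource_group}"
            f"/providers/Microsoft.Resources/deployments/{deployment}"
            "?api-version=2016-02-01")

# Identical call shape against both clouds:
print(deployment_url(PUBLIC_AZURE, "sub-id", "demo-rg", "webapp"))
print(deployment_url(AZURE_STACK, "sub-id", "demo-rg", "webapp"))
```

In practice the tooling (PowerShell, CLI, SDKs) handles this for you once you register the Stack environment, which is exactly why templates and DevOps pipelines carry over.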