PowerVM Software Defined Networking – Introduction
In the fourth-quarter 2016 release of PowerVM, IBM is introducing a technical preview of software-defined networking (SDN). This is exciting for the team, and we are eager to get feedback from users so that we can move the support out of technical preview and into production environments quickly. But software-defined networking is one of those buzz phrases that means different things to different people. What exactly is being introduced, and what will it mean for you?

History

Before virtualization, network controls for systems were typically applied at the 'edge port' – the last port on the switch that plugged into your server. Many advanced switches supported Layer 2 through Layer 7 controls, which let server administrators apply QoS, security filters, and more. This worked well for workloads that were stationary and never changed.

Then virtualization hit the scene, and suddenly many virtual machines were sitting behind that one port, perhaps each on a different VLAN. On top of that, the workloads became mobile, moving from system to system. No longer could an administrator statically define the rules on the edge port of the switch. In fact, getting multiple VLANs through to the appropriate servers takes more time than most of us would like to admit. Administrators became focused on simple connectivity (things like VLANs) for the workloads across their environment, and the controls started to be enforced solely within the virtual machine (VM) – firewall processes in the VM, or a perimeter firewall. But that becomes tedious as well: several thousand VMs means a lot of auditing within the VMs themselves.

Enter Software-Defined Networking

SDN's purpose is to provide administrators with a tool set that enables them to virtualize the network, use policy-based management, and regain the controls they had before. We're focusing on five key aspects:

Quality – Control the network throughput of the VMs.
Security – Control what a VM can communicate with, and how.

Policy-Based Management – Controls are defined once and then applied to the workloads. No matter where a workload is, its rules remain active. A single policy can apply to many VMs (simplifying your sprawl).

Agility – Rules should be attached to VMs in seconds instead of hours or days.

Capacity – The server administrator should be able to add network capacity (IP addresses and routing throughput) without involving the network team.

Together, these capabilities enable administrators to scale. Instead of applying rules on a VM-by-VM basis (slow), they can apply rules in the hypervisor and be sure that they are enforced.

Components

To realize these benefits, the hypervisor (and the system itself) needs to change. Three components are required:

Programmable Virtual Switch – A switch that sits in the hypervisor and controls the flows between the VMs and the physical port. It is programmed with very simple 'flow table' rules. For PowerVM, we will be using the industry-standard Open vSwitch to deliver this. It will be supported on NovaLink-managed systems.

Controller (PowerVC or OpenStack) – Takes high-level policies and compiles them into the low-level rules for the programmable virtual switch.

Gateway – A device that takes a virtual network's Ethernet packets and puts them onto the wide area network (WAN).
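To make the 'flow table' idea concrete, here is a minimal Python sketch of how a controller-compiled policy reduces to simple match/action rules of the kind a programmable virtual switch evaluates. This is illustrative only – the class, field names, and rules are hypothetical, not PowerVM or Open vSwitch code; real OpenFlow rules add masks, richer actions, and multiple tables, but the priority-ordered match-to-action shape is the same.

```python
# Hypothetical sketch of a flow table: each rule matches packet fields and
# names an action. A controller would compile high-level policy ("block SSH
# on this network") down to rules like these for every switch it manages.

class FlowTable:
    def __init__(self):
        self.rules = []  # list of (priority, match, action)

    def add_rule(self, priority, match, action):
        """match is a dict of field -> required value, e.g. {'vlan': 200}."""
        self.rules.append((priority, match, action))
        # Keep highest-priority rules first, like an OpenFlow table lookup.
        self.rules.sort(key=lambda rule: -rule[0])

    def lookup(self, packet):
        """Return the action of the first (highest-priority) matching rule."""
        for priority, match, action in self.rules:
            if all(packet.get(field) == value for field, value in match.items()):
                return action
        return "drop"  # default-deny when no rule matches

# A 'policy' expressed once, then enforced for every VM behind the switch:
table = FlowTable()
table.add_rule(100, {"vlan": 200, "dst_port": 22}, "drop")     # block SSH on VLAN 200
table.add_rule(10,  {"vlan": 200},                 "forward")  # otherwise allow VLAN 200

print(table.lookup({"vlan": 200, "dst_port": 80}))  # forward
print(table.lookup({"vlan": 200, "dst_port": 22}))  # drop
print(table.lookup({"vlan": 300, "dst_port": 80}))  # drop (no rule matches)
```

Because the rules travel with the policy rather than with a physical switch port, the same table can be programmed onto whichever hypervisor a VM migrates to – which is what makes the controls mobile in a way edge-port configuration never was.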