Converged Networking: Underlay Networks

Mike Lyons
Aug 19, 2020

Mike Lyons, IBM Executive Architect, and Clea Zolotow, IBM Distinguished Engineer

Introduction

Server migration and movement have a large impact on the ability of an Internet Protocol (IP) data centre network to scale. Operating large fleets of discrete data centre servers (often called bare-metal servers) meant that pools of IP addresses were aggregated along application affinity lines. If a workload (i.e. the physical server) moves from one pool to another, the IP address of the server has to change. Networking overlays can abstract these pools away from the physical network, logically flatten the environment, and aid workload mobility.

LANs and IP Stacks: A Perspective

One of the most profound changes in the delivery of IT services came with the large-scale uptake of the Internet Protocol stack built on Local Area Network technologies. Prior to the growth of IP, almost every computer system vendor needed to create a dedicated network technology for each platform, from the physical cabling to the electrical signalling and the traffic protocols. Examples of such network technologies were IBM’s SNA and Digital Equipment’s DECnet. This added significant cost to building large-scale compute systems and gave rise to an industry-wide effort to standardise networking protocols in the late 1970s.

A typical network built on an SNA framework

The ISO 7-layer model was a simple idea that clearly outlined a rigorous architecture for building large, scalable networks, and many enterprising companies applied this idea to TCP/IP, an existing technology. With several key vendors selling general-purpose networking components and the IEEE defining standards for interoperability, a whole industry suddenly grew up around the concept of a Local Area Network (LAN).

Standardisation dramatically lowered the cost of building large pools of clustered compute platforms, enabling the rise of the Client-Server application environment. It further supported the software coding practices that took hold then and continue to dominate today.

The simple and robust architecture of IP allows it to scale to global size to support the internet. The flaw in this approach is that IP was architected as a way of aggregating large numbers of small local area networks, so it is implicitly geographic in nature. For example, IP clients on the same physical network are all identified with the same prefix (e.g. 10.10.100.0/24). Any machine in the subnet needs to be configured with a default route that allows it to connect with endpoints in other networks. This allows every server to simply pass any data packet destined for a server it can’t reach locally to its default gateway, which in turn implies that every router in a large network knows where to forward any packet it receives. We will leave the discussion of how this functions as well as it does for another time…
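
To make that default-route behaviour concrete, here is a minimal Python sketch of the forwarding decision a host on the 10.10.100.0/24 subnet makes. The gateway address and destination addresses are purely illustrative.

```python
# Sketch of the local-vs-remote forwarding decision described above,
# using Python's standard ipaddress module. Addresses are illustrative.
import ipaddress

local_subnet = ipaddress.ip_network("10.10.100.0/24")
default_gateway = ipaddress.ip_address("10.10.100.1")  # assumed gateway address

def next_hop(destination: str) -> str:
    """Decide where a host on 10.10.100.0/24 sends a packet."""
    dest = ipaddress.ip_address(destination)
    if dest in local_subnet:
        return f"deliver directly on the LAN to {dest}"
    return f"forward to the default gateway at {default_gateway}"

print(next_hop("10.10.100.42"))  # same prefix -> local delivery
print(next_hop("192.0.2.7"))     # different prefix -> hand to default route
```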

Impact of Migration and IP Addresses

The effect of this simple default route is that any server that moves to a different physical location must have its IP address and default route changed. This turns out to be a significant problem when a machine is virtual and machine mobility is key to load levelling and continuous availability.

Underlay routing in a data-centre network

So while the physical data centre network described here is built using well-established IP architecture principles, the use of hypervisor-based abstracted networks and layer 2 tunnelling protocols such as VXLAN allows “flat” virtual environments in which machines can move from host to host across the network without being reconfigured with a new IP address or default route.

L2 forwarding with an overlay

To achieve this abstraction, each rack in a converged data centre needs a Virtual Tunnel End Point (VTEP) that forms the gateway between the physical Underlay network and the virtual Overlay network.
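
As a rough illustration of what a VTEP does, the sketch below builds the 8-byte VXLAN header defined in RFC 7348 and prepends it to a tenant Ethernet frame. The frame bytes and VNI are placeholders; in a real deployment the result is carried as the payload of a UDP datagram to port 4789 between the VTEPs across the underlay.

```python
# Minimal sketch of VXLAN encapsulation at a VTEP (RFC 7348).
# The inner frame and VNI below are placeholders for illustration only.
import struct

VXLAN_UDP_PORT = 4789          # IANA-assigned VXLAN port
VXLAN_FLAG_VNI_VALID = 0x08    # "I" flag: the VNI field is valid

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Header layout: flags (1 byte), 3 reserved bytes, VNI (3 bytes), 1 reserved byte.
    header = struct.pack("!B3s3sB",
                         VXLAN_FLAG_VNI_VALID,
                         b"\x00\x00\x00",
                         vni.to_bytes(3, "big"),
                         0)
    return header + inner_frame

# Placeholder inner frame; a real VTEP would carry the VM's full Ethernet frame.
inner = b"\xde\xad\xbe\xef" * 4
payload = vxlan_encapsulate(inner, vni=5001)
# 'payload' would become the body of a UDP datagram to port 4789, addressed
# from this rack's VTEP to the VTEP in front of the destination host.
print(len(payload), "bytes of VXLAN-encapsulated data")
```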

Network Abstraction

The practice of network abstraction has been around for a long time. The best example of this is a Multi-Protocol Label Switching (MPLS) network. The physical network is built using point-to-point links between towns and cities that connect to provider edge routers, which create a logical overlay network for each client.

This abstraction is done by encapsulating every logical network packet in a physical one, while the software in the routers keeps the overlay routing separate from the underlay. Every layer of abstraction can be seen in the nesting of addressing data in each physical packet, like a set of nesting Russian matryoshka dolls. This works the same way in the data centre, where virtualisation platforms implement the logical separation in the same way an MPLS network does.
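
For a feel of how this nesting looks on the wire, here is a small Python sketch that pushes MPLS label-stack entries (RFC 3032) in front of a customer packet, much as a provider edge router wraps each overlay packet before handing it to the underlay. The label values and payload are placeholders only.

```python
# Sketch of the "nesting doll" idea: push 4-byte MPLS shim headers (RFC 3032)
# in front of an inner packet; the core forwards on the outermost label only.
import struct

def push_mpls_label(payload: bytes, label: int, ttl: int = 64,
                    traffic_class: int = 0, bottom_of_stack: bool = True) -> bytes:
    """Prepend one label-stack entry: 20-bit label, 3-bit TC, 1-bit S, 8-bit TTL."""
    if not 0 <= label < 2**20:
        raise ValueError("label must fit in 20 bits")
    entry = (label << 12) | (traffic_class << 9) | (int(bottom_of_stack) << 8) | ttl
    return struct.pack("!I", entry) + payload

customer_packet = b"customer IP packet bytes"      # the innermost doll
labelled = push_mpls_label(customer_packet, label=30012)
# A second label can be pushed on top for the outer transport path:
transport = push_mpls_label(labelled, label=1001, bottom_of_stack=False)
print(len(transport) - len(customer_packet), "bytes of label overhead")  # 8
```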

Conclusion

The value of network abstraction is that virtual server instances can be moved around large physical data centre networks, for redundancy and load distribution, without disruptive network changes on the VM. While this makes managing the server easier, the trade-off is that the machine has no awareness of where it really is in the physical network, which brings other challenges with it. More on that another time!


Mike Lyons

Mike is a Distinguished Engineer with Kyndryl and has a lifelong interest in the transport of information.