Converged Networking — Why?
Mike Lyons, IBM Executive Architect
Clea Zolotow, IBM Distinguished Engineer
Since the rise of client-server application architectures, data centre networks have hosted the large fleets of discrete servers that support those architectures. While this style of application delivered end-user sessions over a Wide Area Network (WAN) link, a significant amount of traffic also flowed between adjacent servers. The main concern of the data centre (DC) network was therefore to host these large fleets of servers on a single Local Area Network (LAN).
The creation of a 3-tier network of Core, Distribution and Access switch layers allowed the DC LAN to host many physical ports in a structured manner. While each server tended to be dedicated to a small number of tasks, it still needed a complete stack of operating system and application components and typically remained in that function throughout its working life.
In many cases a very large hierarchical data centre network would be constructed with distribution blocks of servers grouped by related function, or “affinity”. Since the capital cost of the servers and software could be quite high, it was not uncommon for the function of a given server to expand over time, with additional software installed on it rather than a new physical server being deployed.
With the rise of multi-core servers with large physical memory came compute virtualisation platforms such as VMware. VMware brought to the Intel space capabilities that had existed in the larger midrange (e.g., AIX) and mainframe worlds for some time. Virtual machines running the data centre workload ceased to be tied to any particular physical machine, making it possible to move whole machine images between physical servers. This architecture has many advantages for load distribution and service resiliency; however, the traditional 3-tier network topology didn’t always work well where server affinities were split across distribution blocks.
This move toward a more abstracted compute landscape renewed interest in an old idea originally called a “half crossbar” network topology; today it is commonly referred to as a Leaf-Spine network fabric.
The requirement to allow large East-West data flows across the network drove the need for a flatter “any to any” topology. Because it reduces the number of network hops between adjacent servers and provides a larger number of possible traffic paths, the Leaf-Spine topology has gained significant uptake in highly virtualised environments.
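As an aside, the traffic-engineering appeal of Leaf-Spine is easy to see with a little arithmetic: every pair of leaves is joined by one equal-cost path per spine, and any two servers are at most three switch hops apart. The short Python sketch below illustrates this; the fabric sizes and the assumption that every leaf uplinks to every spine are illustrative only, not a description of any specific design.

# Illustrative sketch of path diversity and hop count in a two-tier Leaf-Spine
# fabric. Assumes a full mesh of leaf-to-spine uplinks; all figures are examples.

def equal_cost_paths(num_spines: int) -> int:
    # Traffic between two different leaves can traverse any spine,
    # so the number of equal-cost paths equals the number of spines.
    return num_spines

def switch_hops(same_leaf: bool) -> int:
    # Servers on the same leaf cross one switch; otherwise the worst
    # case is leaf -> spine -> leaf, i.e. three switches.
    return 1 if same_leaf else 3

if __name__ == "__main__":
    spines, leaves = 4, 16
    print(f"Fabric: {spines} spines, {leaves} leaves")
    print("Equal-cost paths between any two leaves:", equal_cost_paths(spines))
    print("Worst-case switch hops, server to server:", switch_hops(False))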
While the servers were connected to a tiered DC network for data traffic, they generally used a dedicated storage fabric to provide mission-critical storage to the server fleet. This fabric was usually built from Fibre Channel (FC) switches. Since these networks needed to be “non-blocking” (i.e., able to carry full-rate traffic between any pair of ports at the same time), they were typically arranged in a Leaf-Spine topology, often with two completely separate fabrics for redundancy.
While the performance of these fabrics was excellent, their cost could be prohibitive. As the price-performance of Ethernet networking outpaced that of Fibre Channel switches, many IT service organisations developed offerings based on iSCSI, such as the IBM Global Technology Services (GTS) IPHyperStor solution.
IPHyperStor took the traditional Storage Area Network (SAN) and replaced the physical Fibre Channel fabric with an Ethernet fabric carrying iSCSI. The offering demonstrated how cost-effective Ethernet networks could completely displace the more expensive Fibre Channel SAN with no loss of performance or reliability.
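Part of what makes the substitution attractive is that an iSCSI target is just another TCP/IP endpoint on the Ethernet fabric, addressed by an IP portal on the IANA-registered port 3260 and an iSCSI Qualified Name (IQN), rather than by worldwide port names on a separate Fibre Channel fabric. The minimal Python sketch below checks plain TCP reachability of a hypothetical storage portal; the address and the IQN are invented for illustration and do not describe the IPHyperStor implementation itself.

# Minimal sketch: an iSCSI target is reached over ordinary TCP/IP on the
# Ethernet fabric. The portal address and IQN below are hypothetical examples.
import socket

ISCSI_TCP_PORT = 3260  # IANA-registered port for iSCSI

def portal_reachable(portal_ip: str, timeout: float = 2.0) -> bool:
    # The first health check for an Ethernet-based SAN is simply whether the
    # storage portal answers on TCP 3260 across the ordinary IP network.
    try:
        with socket.create_connection((portal_ip, ISCSI_TCP_PORT), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    portal = "192.0.2.10"                                   # example portal address
    target_iqn = "iqn.2001-05.com.example:storage.array01"  # example target name
    print(f"Portal {portal}:{ISCSI_TCP_PORT} reachable:", portal_reachable(portal))
    print("Target an initiator would then log in to:", target_iqn)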
The IPHyperStor experience showed that Ethernet-based fabrics could deliver enterprise-grade storage solutions at much reduced cost, but it still meant running two separate (and largely identical) Ethernet fabrics to every server: one for data and one for storage. The next logical step was to build a single Ethernet-based fabric delivering all the data and storage capacity needed for GTS hybrid-cloud solutions.
Stay tuned for more on the converged networking space!