How Moore’s law drives hybrid cloud

Mike Lyons, Executive IT Architect, IBM
Nov 25, 2020 · 4 min read

Moore’s Law in a single chart

There is a line early in the Pixar movie “The Incredibles” where a character being interviewed laments that a superhero’s work is never done. “Can’t you just stay saved?” he asks. Something about this question, I’m sure, resonates with anyone in the IT services industry.

The IT industry is not static for a number of reasons, but the main source of change is demand for newer technology. Our clients are always looking to improve the efficiency of their operations: they can’t always rely on revenue, but they can always rely on expenses. Clients are always looking for better (or, more to the point, cheaper) ways to manage their environment, and as new capabilities come to market we design enterprise architectures to apply them.

In summary, our clients have an endless supply of problems to solve but a limited amount of money.

Moore’s Law

The march of Moore’s Law means we need to plan to regularly migrate workloads from legacy platforms to newer ones, taking advantage of lower energy consumption and higher speeds to remain competitive.

Prior to the 21st century we got used to CPU clock speeds pretty much doubling every couple of years. The physics of transistors as logic switches means the smaller you make them, the faster they switch; lowering the voltage applied to the transistor yields a further speed improvement. Moore’s Law is driven by this push to shrink the feature size of the circuits on the chip surface, and it shows up in the chart above as the number of transistors that can be packed onto a chip.
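
As a back-of-the-envelope illustration of how quickly that doubling compounds (the starting figure is the 1971 Intel 4004 and the two-year doubling period is the usual rule of thumb, not data taken from the chart):

```python
# Back-of-the-envelope sketch of Moore's Law: transistor counts doubling
# roughly every two years. The starting point (~2,300 transistors, the
# 1971 Intel 4004) is used purely for illustration.
start_year, start_transistors = 1971, 2_300
doubling_period_years = 2

for year in range(start_year, 2021, 10):
    doublings = (year - start_year) / doubling_period_years
    count = start_transistors * 2 ** doublings
    print(f"{year}: ~{count:,.0f} transistors")
```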

Then in the early 21st century physics caught up with the old technique of lowering the operating voltage of the transistors, and clock speeds stalled at a few gigahertz. It was, however, still possible to make the transistors smaller, so chip vendors began putting multiple CPU cores on each processor. While the idea of virtual machine emulation had been around for a long time, this made it practical to support VMs in hardware, and the era of virtualisation really took off.
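
As an aside, on a Linux x86 host you can check whether the CPU exposes those hardware virtualisation extensions by looking for the vmx (Intel VT-x) or svm (AMD-V) flags; a minimal sketch, assuming /proc/cpuinfo is available:

```python
# Minimal sketch: detect hardware virtualisation support on a Linux x86 host
# by looking for the vmx (Intel VT-x) or svm (AMD-V) CPU flags.
def has_hw_virtualisation(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return bool({"vmx", "svm"} & flags)
    return False

print("Hardware virtualisation supported:", has_hw_virtualisation())
```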

Once it becomes possible to run multiple VMs on a single host with many CPU cores, we can suddenly make much more efficient use of compute resources. This is where the cost of compute technology dramatically changed the way applications are written, and consequently the underlying network structure we build to support them.
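
To make that efficiency gain concrete, here is a toy consolidation calculation; every figure in it is an illustrative assumption rather than measured data:

```python
# Toy consolidation sketch: how many big multi-core hosts does a lightly
# used physical estate collapse into? All figures are illustrative.
import math

physical_servers = 200            # legacy one-app-per-box servers
cores_per_legacy_server = 4
avg_utilisation = 0.08            # idle-heavy legacy boxes: ~8% busy

cores_per_virtualisation_host = 64
target_host_utilisation = 0.60    # leave headroom for peaks

busy_cores = physical_servers * cores_per_legacy_server * avg_utilisation
hosts_needed = math.ceil(busy_cores / (cores_per_virtualisation_host * target_host_utilisation))

print(f"Busy cores across the estate: {busy_cores:.0f}")
print(f"Virtualisation hosts needed:  {hosts_needed} (vs {physical_servers} physical servers)")
```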

The Rise of Virtualisation

While machine virtualisation had been around for decades in the mainframe arena, it really came of age after the era of “client-server.” Programmers discovered that they could network a number of physical machines, each with a different function, rather than write monolithic applications on large mainframes. This revelation caused the hardware cost of hosting large software stacks to drop suddenly and started the fad of server farm sprawl.

This really got out of hand in the early 2000s, as corporate server farms could require thousands of physical machines, some hidden under desks, run from home, and not managed by corporate IT. The time for large-scale virtualisation had come.

Why do we build huge farms of virtual machines? Because we used to build large farms of physical servers. We are still thinking in a physical world.

While virtualisation was a boon for efficiency, we basically went on a spree of migrating physical machines to virtual machines. The challenge, of course, is that once you share the resources of a physical machine you need to consider the performance of the applications hosted on the virtual machines. A physical machine moved into a virtualised system without adjusting its capacity requirements isn’t optimal.
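
One simple discipline is to size the VM from observed peak usage plus headroom rather than copying the physical box’s specification across; a minimal sketch, with hypothetical numbers:

```python
# Minimal right-sizing sketch: derive a VM size from observed peak usage
# plus headroom instead of copying the physical server's specification.
# The observed figures below are hypothetical.
import math

physical_cores, physical_ram_gb = 16, 128   # what the old box had
observed_peak_cpu = 0.22                    # 22% of the 16 cores busy at peak
observed_peak_ram_gb = 24
headroom = 1.3                              # 30% buffer for growth and spikes

vcpus = max(2, math.ceil(physical_cores * observed_peak_cpu * headroom))
ram_gb = math.ceil(observed_peak_ram_gb * headroom)

print(f"Suggested VM size: {vcpus} vCPUs, {ram_gb} GB RAM "
      f"(instead of {physical_cores} cores, {physical_ram_gb} GB)")
```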

Virtual machine mobility

While piling lots of VMs into a physical host is a great idea, it does cause issues with the availability of the hosted apps. To be enterprise-ready we needed to build clusters of physical hosts to remove any single points of failure (SPOF). These clusters need to be able to move hosted virtual machines between physical members of the cluster.
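
A common way to size such a cluster is so it can lose one host and still carry the full load (often called N+1); a rough sketch of that arithmetic, with made-up numbers:

```python
# Rough N+1 sizing sketch: the cluster must still carry every hosted VM
# with one physical host out of service (maintenance or failure).
# All numbers are made up for illustration.
import math

total_vm_vcpus = 640          # sum of vCPUs across all hosted VMs
vcpu_per_core_ratio = 4       # acceptable vCPU:physical-core overcommit
cores_per_host = 48

cores_needed = math.ceil(total_vm_vcpus / vcpu_per_core_ratio)
hosts_for_load = math.ceil(cores_needed / cores_per_host)
cluster_size = hosts_for_load + 1   # one spare host of failover headroom

print(f"Hosts needed for the load: {hosts_for_load}; cluster size with N+1: {cluster_size}")
```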

This virtual machine mobility allowed for maintenance of the physical hosting machines without disrupting the application services running on the VMs. As the technology improved we could do live migrations of VMs and even build in automatic failover in the event of a physical machine fault.
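
The post doesn’t name a particular hypervisor, but as one common example a libvirt/KVM stack can live-migrate a guest with a single command; a sketch only, with a hypothetical VM name and destination host:

```python
# Sketch only: live-migrate a running VM to another cluster member using
# libvirt's virsh CLI (one common toolchain; other hypervisors have
# equivalent tooling). VM name and destination host are hypothetical.
import subprocess

def live_migrate(vm_name: str, dest_host: str) -> None:
    """Move a running VM to another host without stopping it."""
    subprocess.run(
        ["virsh", "migrate", "--live", vm_name,
         f"qemu+ssh://{dest_host}/system"],   # destination hypervisor URI
        check=True,
    )

# e.g. drain a host before patching it:
# live_migrate("app-vm-01", "host2.example.com")
```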

While this fluid mobility was a great opportunity to resolve some long-standing operational problems in running data centres, it threw up a number of others. The networks we built at the time became increasingly unsuited to this paradigm shift: we used to optimise DC networks to present huge numbers of physical ports to attach lots of servers, whereas now we need to optimise for virtual workloads that move.

While Moore’s Law has obviously driven a large increase in the performance of server platforms, it has driven similar growth in the performance of network and storage technologies, and these in turn push the demand for newer platforms in a somewhat circular pattern.

So things keep changing, constantly providing new problems to solve. What we do about it is the subject of the next blog.

References

42 Years of Microprocessor Trend Data. (n.d.). Retrieved November 2020, from https://www.karlrupp.net/2018/02/42-years-of-microprocessor-trend-data/

Iqbal, M. IT Virtualization Best Practices: A Lean, Green Virtualized Data Center Approach. ISBN-13: 978-1583473542.


Mike Lyons

Mike is a Distinguished Engineer with Kyndryl and has a lifelong interest in the transport of information.