Modelling out data centre energy use over the next two decades

Written by Steven Carlini, Vice President of Innovation and Data Centres at Schneider Electric

Data centres are the heart of the world’s digital transformation. By now, most people know this, and it reminds me of a quote from Nicholas Negroponte: “Computing is not about computers anymore. It’s about living.”

What is not clear is the toll our computer-aided lifestyles will take in the form of energy use. At the turn of the century, as the data centre physical footprint expanded exponentially to support dot-com era speculation, many experts predicted that data centre energy demand could surge to represent over 20 per cent of all global electricity demand, up from less than one per cent. Moore’s law (the observation, greatly in effect since the 1970s, that processor speeds, or overall processing power, will double every two years at the same power use) helped keep power use under control.

In the mid-90s, a new technology called virtualisation became hugely popular. With virtualisation, you could put three to ten virtual servers onto one physical server, and downsizing the physical data centre footprint and its associated power use became commonplace. The original virtualisation was heavyweight, relying on hypervisors that took quite a bit of computing resource to operate. Newer, lightweight virtualisation using containers and microservices, which share the host’s kernel, runs far more efficiently and uses less energy.
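To make the consolidation maths concrete, here is a minimal Python sketch of the idea. The server counts and per-server power draw are purely hypothetical, and the consolidation_savings helper is mine, not part of any tool; it simply shows how a three-to-one or ten-to-one consolidation ratio shrinks the physical footprint and the IT power that goes with it.

```python
import math

def consolidation_savings(workloads: int, vms_per_host: int,
                          watts_per_server: float = 300.0):
    """Estimate the physical-server and IT-power reduction from virtualisation.

    Assumes one workload per physical server before consolidation and an
    average draw of `watts_per_server` per machine (both hypothetical).
    """
    before_servers = workloads
    after_servers = math.ceil(workloads / vms_per_host)
    before_kw = before_servers * watts_per_server / 1000
    after_kw = after_servers * watts_per_server / 1000
    return after_servers, before_kw, after_kw

# 1,000 workloads consolidated at the 3:1 and 10:1 ratios mentioned above
for ratio in (3, 10):
    servers, kw_before, kw_after = consolidation_savings(1000, ratio)
    print(f"{ratio}:1 -> {servers} physical servers, "
          f"{kw_before:.0f} kW -> {kw_after:.0f} kW of IT load")
```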

Technical advancements on multiple fronts

Looking forward, technical advancements are in play as well. Quantum computing, for example, uses quantum bits (or qubits) made of subatomic particles, namely individual photons or electrons, which can exist in multiple states simultaneously, at least until that state is measured and collapses into a single one. There is also a move away from rotating hard disk drive (HDD) technology to static solid-state drives (SSDs). An SSD uses a fraction of the energy an HDD uses, but it costs a lot more. Future storage technology that emulates human DNA is in the works and could reduce the energy used for storage dramatically. Many people are bullish about the process of encoding and decoding binary data to and from synthesised strands of DNA.

Building data centres with general-purpose processors was the norm for decades, until about three years ago. That is when machine learning and AI started to ramp up, needed faster speeds, and leaned on GPUs. But graphics processing units used too much power, and the industry started moving to application-specific integrated circuits (ASICs): integrated circuits customised for a particular use rather than for general-purpose needs. For AI, ASICs made for deep neural networks, known as Tensor Processing Units (TPUs), are now ten times more powerful and effective than standard GPUs. But processing high volumes of data in hyperscale data centres usually requires transmitting data over long physical distances, which causes network congestion and costs a lot of money. This is one reason I believe large central data centres will be used mainly for data storage, while processing will transition to the edge of the network in the form of edge computing in edge data centres.

Driving the need for more data centres

So, there you have it: the need for more processing and storage is driving the need for more and more data centres. Many will be very large hyperscale facilities designed and run by people who operate data centres for a living. At the other end is the need to build out smaller, micro-sized data centres at the edge, deployed at scale, but mainly for telecom applications where energy efficiency is not a major focus. Then we have all the emerging technologies I mentioned, which may or may not become mainstream and may or may not influence energy usage.

We believe a tool would be helpful here, one that gives the user flexibility in making assumptions about data centres and the edge. Specifically:

  • The current mix of IT at the edge (fewer than 4 IT racks) vs. in data centres (4 or more IT racks)
  • PUE now and in the future
  • Growth rates of data centre and edge IT

Each of these factors can be adjusted individually using simple sliders in the tool, which then shows possible scenarios for the global data centre and edge energy forecast through 2040. The Data Center and Edge Energy Forecast Tool demonstrates the importance of implementing best practices to minimise global energy consumption. Users can model out global energy consumption in less than a minute and see what percentage of total data centre energy is consumed by edge versus larger data centres.

Long-term impact of IT on electricity consumption

We developed the forecast model after an internal Schneider Electric study highlighted the key risks associated with the rise of IT in terms of electricity consumption. That study, “The long-term impact of IT on electricity consumption”, was conducted in October 2018 by Ujjval Shah and Vincent Petit. It predicted the demand for information going forward and forecast the IT energy consumption required to support it: it looked at the growth of data and then analysed the IT equipment needed to support that data. In the tool, we included the study’s data on forecasted compute, storage, fixed networks, and network equipment to calculate the total data centre and edge IT energy.

In this model, we also leveraged a data centre capacity analysis that IDC performed for Schneider Electric in 2019. From that study, we derived the ratio of centralised data centre vs. edge IT load for 2021 (65 per cent data centre, 35 per cent edge) and 2040 (48 per cent data centre, 52 per cent edge), assuming centralised data centres are 40 per cent loaded and edge data centres are 30 per cent loaded.
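The IDC capacity figures themselves are not public, so the short Python sketch below only illustrates the arithmetic: installed capacity multiplied by an assumed utilisation gives the IT load, and the loads give the split. The 40 per cent and 30 per cent utilisation assumptions come from the text above; the capacity values (and the load_split helper itself) are placeholders, chosen so the result lands near the 2021 default of 65/35.

```python
def load_split(dc_capacity_mw: float, edge_capacity_mw: float,
               dc_utilisation: float = 0.40, edge_utilisation: float = 0.30):
    """Turn installed capacity into a data centre vs. edge IT-load split.

    Only the utilisation defaults (40% centralised, 30% edge) come from the
    article; the capacity inputs are illustrative, not IDC's figures.
    """
    dc_load = dc_capacity_mw * dc_utilisation
    edge_load = edge_capacity_mw * edge_utilisation
    total = dc_load + edge_load
    return dc_load / total, edge_load / total

dc_share, edge_share = load_split(dc_capacity_mw=10_000, edge_capacity_mw=7_200)
print(f"data centre {dc_share:.0%} / edge {edge_share:.0%}")   # roughly 65% / 35%
```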

With the data centre and edge ratios above for 2021 and 2040, and the bottoms-up study conducted internally, we computed the default data centre and edge growth rates (six per cent and 11 per cent respectively). These values are adjustable by the user.
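For readers who want to see the style of calculation, a growth rate like this is just the compound annual growth rate implied by a start value, an end value, and the number of years between them. The Python sketch below shows that arithmetic; the total-IT-energy figures in it are placeholders (the internal study’s numbers are not reproduced here), so the printed rates will only approximate the six per cent and 11 per cent defaults.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start and an end value."""
    return (end / start) ** (1 / years) - 1

# Placeholder index values for total IT energy in 2021 and 2040; the real
# trajectory comes from the internal bottoms-up study.
total_2021, total_2040 = 100.0, 400.0
dc_2021, edge_2021 = 0.65 * total_2021, 0.35 * total_2021   # 2021 split
dc_2040, edge_2040 = 0.48 * total_2040, 0.52 * total_2040   # 2040 split

print(f"data centre growth ~ {cagr(dc_2021, dc_2040, 19):.1%}")
print(f"edge growth        ~ {cagr(edge_2021, edge_2040, 19):.1%}")
```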

To derive the non-IT energy consumed (infrastructure losses from power, cooling, lighting, etc.) for the data centre and the edge, we assumed PUE values for each. Specifically, in the default scenarios, we assumed the average PUE of centralised data centres improves from 1.35 in 2021 to 1.25 in 2040, and the average PUE of edge data centres improves from 2.0 in 2021 to 1.5 in 2040. These PUE assumptions are adjustable by the user. The non-IT infrastructure energy is added to the IT energy to calculate the total data centre and edge energy consumption.
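Putting the pieces together, total energy is simply IT energy multiplied by PUE, year by year. The Python sketch below is my own simplified rendering of that logic, not the tool’s actual code: the growth rates and PUE endpoints are the defaults quoted above, the 2021 base IT energies are placeholders, and the linear PUE improvement between 2021 and 2040 is an assumption of the sketch.

```python
def forecast(base_it_twh: float, growth: float,
             pue_start: float, pue_end: float,
             start_year: int = 2021, end_year: int = 2040) -> dict[int, float]:
    """Project total (IT + infrastructure) energy per year.

    Assumes compound IT-energy growth and a PUE that improves linearly
    between the start and end years (the linear path is my assumption).
    """
    span = end_year - start_year
    results = {}
    for year in range(start_year, end_year + 1):
        it = base_it_twh * (1 + growth) ** (year - start_year)
        pue = pue_start + (pue_end - pue_start) * (year - start_year) / span
        results[year] = it * pue        # total energy = IT energy x PUE
    return results

# Defaults from the text: 6%/11% growth, PUE 1.35 -> 1.25 and 2.0 -> 1.5.
# The 2021 base IT energies are placeholders, not the study's figures.
dc = forecast(base_it_twh=200.0, growth=0.06, pue_start=1.35, pue_end=1.25)
edge = forecast(base_it_twh=110.0, growth=0.11, pue_start=2.0, pue_end=1.5)

for year in (2021, 2030, 2040):
    total = dc[year] + edge[year]
    print(f"{year}: {total:.0f} TWh total, edge share {edge[year] / total:.0%}")
```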

Use the new tool to model out global energy consumption in less than a minute

I encourage you to check out the Data Center and Edge Energy Forecast Tool to model out global energy consumption in less than a minute. As I said earlier, the tool demonstrates the importance of implementing best practices to minimise global energy consumption.
