The critical digital infrastructure sector continues to thrive in a period of economic uncertainty. Further opportunities and growth are expected in 2023 and 2024, but owners and operators are also likely to face some difficult and expensive challenges, according to a new report from the Uptime Institute, which identifies five data centre predictions for the year ahead.
In the report, Uptime Institute Intelligence looked beyond some of the more obvious trends of 2023 — that the sector continues to expand and innovate while facing stricter regulatory requirements — and identified some challenging issues.
For 2023 and beyond, many of the threats to digital infrastructure development and stability do not come from design or operational failures, or from managing complexity, but from external forces. These include supply chain disruption, power and cooling requirements for powerful next-generation IT hardware, growing wariness of public cloud, pressure on IT functions to address their energy footprint, and strong inflationary pressures.
Each of the five predictions highlights new or growing challenges facing data centre operators. The Uptime Institute went on to say that the report is not intended to suggest an industry in crisis — data centres are widely acknowledged to be in high demand and largely efficient and resilient — but rather to highlight areas where continuing vigilance and action may be required.
“The latest generation server processors from both Intel and AMD represent another step-change in power and cooling requirements – mainstream dual-processor servers can now draw more than 600 watts, some around a kilowatt. This is without GPUs or other accelerators. As recently as ten years ago, most servers topped out under 300 watts,” said Daniel Bizo, research director at Uptime Institute Intelligence.
“That alone is not a problem for existing data centres to handle, but there are growing pain points. One is the power density of new servers once they become mainstream. Only a handful of them can use up all the power available to a typical IT rack without upgrading power distribution. Even with power delivery available, much of the data hall will end up sitting empty, because there is no power left.”
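To make the arithmetic behind that point concrete, the sketch below works through a hypothetical rack. The rack power budget and server form factor are illustrative assumptions, not figures from the Uptime report; only the roughly 600-watt server draw is taken from Bizo's comments.

```python
# Illustrative sketch: how a rack power budget limits the number of high-power
# servers that can be installed, stranding the remaining rack space.
# All figures below are assumptions for illustration, not from the report.

RACK_POWER_BUDGET_W = 7_000   # assumed power available to a typical IT rack
RACK_CAPACITY_U = 42          # standard rack height in rack units
SERVER_POWER_W = 600          # new-generation dual-processor server (per the report)
SERVER_HEIGHT_U = 1           # assumed 1U form factor

servers_by_power = RACK_POWER_BUDGET_W // SERVER_POWER_W
servers_by_space = RACK_CAPACITY_U // SERVER_HEIGHT_U
servers_installed = min(servers_by_power, servers_by_space)

stranded_u = RACK_CAPACITY_U - servers_installed * SERVER_HEIGHT_U

print(f"Servers the power budget allows: {servers_by_power}")
print(f"Servers the rack could physically hold: {servers_by_space}")
print(f"Stranded rack space: {stranded_u}U of {RACK_CAPACITY_U}U")
# With these assumed numbers, only 11 servers fit the power budget,
# leaving 31U of the rack unused.
```

Under these assumptions, the power budget is exhausted long before the rack is physically full, which is the stranded-capacity effect Bizo describes.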
The other, potentially more serious, problem, according to Bizo, will be cooling. The airflow intake of high-power servers is problematic for ‘legacy’ data centres with uneven airflow distribution, generating hot spots.
Then there are the temperature requirements. The concentration of heat in the server chassis may force restrictions on supply air temperature (for example, 22°C or less for continuous operation without throttling) to protect components from overheating, which would otherwise result in throttled performance or shutdowns. Many data centres built over the past decade operate at higher supply temperatures (between 24°C and 27°C) for energy efficiency gains, which may cause problems for IT teams should they want to configure their servers with the highest-performance components.
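The tension between efficiency-driven setpoints and the intake limits of dense servers can be expressed as a simple check. The sketch below is illustrative only, using the 22°C and 24–27°C figures quoted above as assumed limits rather than any formal specification.

```python
# Illustrative sketch: checking a data hall's supply air setpoint against the
# inlet temperature ceiling of a high-power server configuration.
# The threshold values are assumptions taken from the examples in the text.

def check_supply_temperature(hall_setpoint_c: float, server_inlet_limit_c: float) -> str:
    """Return a note on whether the hall setpoint risks throttling or shutdowns."""
    if hall_setpoint_c <= server_inlet_limit_c:
        return "OK: supply air is within the server's continuous-operation limit."
    return (f"Conflict: hall runs at {hall_setpoint_c}C but the server needs "
            f"{server_inlet_limit_c}C or less; expect throttling or a lower setpoint.")

# A hall tuned for efficiency (24-27C) vs. a server limited to 22C intake air.
print(check_supply_temperature(hall_setpoint_c=26.0, server_inlet_limit_c=22.0))
print(check_supply_temperature(hall_setpoint_c=22.0, server_inlet_limit_c=22.0))
```

The first case captures the conflict described in the report: a hall optimised for efficiency may have to lower its setpoint, and give back some of those efficiency gains, to host the densest configurations.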
“For now, these issues are relatively minor as these new chips and servers are still few in number,” concluded Bizo. “But we expect these power levels to escalate further with successive generations due to a combination of semiconductor physics (scale of integration outpaces transistor energy gains) and infrastructure economics (more performance density is attractive). This clearly points to increased use of direct liquid cooling, which will bring its own host of design and operational issues due to the lack of mechanical and material (chemical) standards.”