Edge data centres are on the rise, but what are the implications for sustainability?
In the first of a series of articles, Mark Venables looks at the growth of edge computing and edge data centres.
Edge computing comprises two main components. True edge computing is conducted on the device itself, whether that be a smartphone, a laptop, or a piece of equipment. Other services, however, such as video streaming, also benefit from local processing, and these require a data centre: in this instance, an edge data centre.
Edge data centres are smaller facilities located close to the populations they serve that deliver cloud computing resources and cached content to end-users. The premise is that by processing data and services as close to the end-user as possible, edge computing allows organisations to reduce latency and improve the customer experience.
Latency has always been a problem for data centre managers, but in recent years it has become a critical concern due to big data, the Internet of Things (IoT), cloud and streaming services. End users and devices demand anywhere, anytime access to applications, services, and data housed in today’s data centres, and high latency is no longer tolerated. As a result, organisations across many industries are establishing edge data centres as a high-performance and cost-effective way to provide customers with content and functionality.
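The physics behind the latency argument can be sketched with simple arithmetic. The figures below are illustrative assumptions, not measurements: signals in optical fibre travel at roughly two-thirds the speed of light, and real-world latency adds routing, queuing, and processing overheads on top of this lower bound.

```python
# Illustrative lower bound on round-trip latency as a function of distance.
# Assumption: ~200,000 km/s signal speed in fibre (about 2/3 of c).
SPEED_IN_FIBRE_KM_PER_MS = 200.0  # kilometres travelled per millisecond

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time to a data centre distance_km away."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

# Hypothetical comparison: a regional cloud site 1,500 km away
# versus an edge data centre 50 km away.
print(f"cloud: {min_round_trip_ms(1500):.1f} ms")  # 15.0 ms
print(f"edge:  {min_round_trip_ms(50):.2f} ms")    # 0.50 ms
```

Even before congestion and processing are counted, distance alone puts a floor under response times, which is why proximity is the core of the edge pitch.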
According to Gartner, around ten per cent of enterprise-generated data is created and processed outside a traditional centralised data centre or cloud. By 2025, Gartner predicts this figure will reach 75 per cent.
“Organisations that have embarked on a digital business journey have realised that a more decentralised approach is required to address digital business infrastructure requirements,” Santhosh Rao, senior research director at Gartner, says. “As the volume and velocity of data increases, so too does the inefficiency of streaming all this information to a cloud or data centre for processing.”
As with all rapidly evolving technologies, evaluating, deploying, and operating edge computing solutions has its risks. These risks come in many forms, but a key one relates to security. “Extending your footprint using edge computing exponentially increases the surface area for attacks,” says Rao.
“A nascent vendor landscape compounds this risk. Unsecure endpoints are already used in distributed denial-of-service attacks or as entry points to core networks.”
Another concern is that the cost of deploying and managing an edge computing environment can easily exceed the project’s financial benefits. Moreover, projects can become victims of their own success — scalability can become a serious issue as new IoT endpoints proliferate. “Edge computing has enormous potential to enable digital initiatives supported by IoT, but leaders need to tread carefully,” Rao says.
Welcome to the age of edge computing
The energy consumption and carbon emissions of data centres are gaining increasing attention as the focus sharpens on combating climate change. Although there is a credible drive to improve the sector’s carbon footprint by utilising green energy sources, an even more sustainable approach is to reduce unnecessary cloud traffic, central computation, and storage as much as possible by shifting computation to the edge. Edge computing stores and uses data locally, reducing the amount of traffic sent to the cloud, and at scale this has a huge impact on energy use and carbon emissions.
“Huge, centralised data centres (cloud computing) have become a critical part of the infrastructure for a digitalised society,” Alyssa Coke, COO, ObjectBox, says. Conservative estimates attribute more than half of data centre emissions to the cloud facilities of the big hyperscalers (Google, Amazon, Microsoft, Alibaba Cloud).
“Until recently, 90 per cent of enterprise data was sent to the cloud, but this is changing rapidly,” Coke continues. “This number is dropping to only 25 per cent within the next three years, according to Gartner. By then, most data will be stored and used locally, on the device it was created on, such as smartphones, cars, trains, machines, and watches. This is edge computing. Accordingly, edge devices need the same technology stack as a cloud server, just in a much smaller format. This means an operating system, a data storage or persistence layer (a database), a networking layer, and security functionalities that all run efficiently on restricted hardware.
“As you can only use a device’s resources, which can be limited, inefficient applications can push it to its limits, leading to slow response rates, crashes, and battery drain. Edge computing is much more than simple data pre-processing, which takes advantage of only a small portion of the computing that is possible at the edge. An edge database is a prerequisite for meaningful edge computing. With an edge database, data can be stored and processed on the devices directly (the so-called edge).
“Only useful data is sent to the server and saved there, reducing the networking traffic and computing power used in data centres tremendously, while also making use of the computing resources of devices that are already in use. This greatly reduces the bandwidth and energy required by data centres. On top of that, edge computing provides the flexibility to operate independently of an internet connection, enables fast real-time response rates, and cuts cloud costs.”
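The pattern Coke describes can be sketched in a few lines: an edge device keeps raw readings locally and forwards only a compact summary to the server. The sensor values, the window size, and the summary fields below are hypothetical, chosen purely to illustrate the idea of sending useful data rather than everything.

```python
# Minimal sketch of edge-side aggregation: raw data stays on the device,
# only a small summary travels over the network to the cloud.
from statistics import mean

def summarise_window(readings: list[float]) -> dict:
    """Reduce a window of raw sensor readings to the fields the server needs."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
    }

# Hypothetical minute of temperature readings captured on an edge device:
raw = [21.1, 21.3, 21.2, 27.9, 21.2, 21.4]
payload = summarise_window(raw)  # three fields sent instead of six readings
print(payload)
```

At the scale of millions of devices, shrinking each upload this way is where the network-traffic and data-centre energy savings accumulate.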
The growth of video streaming
However, not all edge computing can be carried out on the device itself; there is still a need for some local storage or processing, and this is where edge data centres come into play. Network bandwidth demand has increased dramatically in recent years, driven by new applications such as video on demand. Video streaming services, such as YouTube and Netflix, are expected to consume around 80 per cent of total bandwidth. This is partly because each stream is a large file, but also because of how video-on-demand content is distributed. Video data is streamed from the cloud to millions of people in a one-to-one manner, as opposed to traditional broadcasting, where the video stream is distributed to a mass population at a scheduled time (one-to-many).
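The one-to-one versus one-to-many point is starkest as arithmetic. The viewer count and bitrate below are hypothetical, but they show why unicast streaming multiplies network load in a way broadcast never did.

```python
# Illustrative comparison: unicast streaming sends a separate copy of the
# stream per viewer, whereas broadcast sends a single shared stream.

def unicast_gbps(viewers: int, bitrate_mbps: float) -> float:
    """Aggregate network load when every viewer receives their own stream."""
    return viewers * bitrate_mbps / 1000  # convert Mbps to Gbps

# Hypothetical: one million viewers of a 5 Mbps HD stream.
print(f"unicast:   {unicast_gbps(1_000_000, 5):,.0f} Gbps")  # 5,000 Gbps
print(f"broadcast: {5 / 1000} Gbps")  # one shared 5 Mbps stream
```

Caching or processing those streams at edge sites does not change the last-mile load, but it removes the need to haul every copy across the core network.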
This high bandwidth consumption results in high energy usage, and ultimately higher carbon emissions, because the network is used more heavily and requires significant amounts of power to deliver this increasing amount of data. Daniel Schien at the University of Bristol reported that total emissions from people watching YouTube globally in 2016 were the equivalent of 10 million tonnes of CO2, much of it from the network.
“We have identified that the increased use of cloud and networking results in high bandwidth consumption, leading to higher energy usage,” Dalia Adib, edge computing practice lead, STL Partners says. “This issue is prevalent, not just in video streaming services but across a variety of applications that actively use networks and data centres. Edge computing could combat this by reducing network loads, optimising energy used for compute and storage, as well as enabling solutions that would help enterprises better monitor and manage their energy consumption.
“Although content delivery networks already exist to mitigate the need to carry the same traffic over internet backbones, these mostly work outside the actual mobile networks,” Adib says. “Traffic backhauled over the mobile network could be further mitigated by hosting content nearer to the customer, within the network edge.
“The growth of video streaming and use of other bandwidth-heavy applications such as gaming will further exacerbate traffic and energy growth, particularly as content becomes more customised, higher definition and more interactive. Edge computing also reduces energy consumption in networks, by reducing the total amount of data traversing the network. By running applications at the edge, data can be processed and stored nearer to the devices, rather than relying on data centres that are hundreds of miles away. This could lead to a significant reduction in energy consumption related to network transport, while also benefiting from low latency that edge provides.”
More efficient than cloud data centres
Although large data centres can aggregate compute and storage needs across many thousands of users, they may not always be optimised in the way they use energy; cloud data centres often run 24/7 even when they are not being fully used. “Edge data centres might need to deal with more variation in utilisation and therefore be designed to manage this more efficiently (for example, by making resources dormant when not required),” Adib explains. “The orchestration and management of a distributed set of (smaller) data centres will need to be built into the design to ensure edge compute (and thus energy) resources are used efficiently.
“Energy is required for the power and cooling of data centres. Arguably, an edge data centre may require less energy for cooling, relative to its output and size: a few racks of servers have a higher surface area per server than the same racks housed in a hyperscale data centre, so ambient air can do more of the work. This is known as free cooling and is particularly relevant in cooler climates. Cooling currently accounts for around 40 per cent of data centres’ total energy consumption; therefore, depending on their location, the overall energy spend to run and cool mini data centres could be reduced.”