When it comes to the sustainability of compute, Verne Global are not new kids on the block. Since 2012 its mission has been to provide industrial-scale high performance computing (HPC) to organisations across a range of advanced industries. Its approach is focused on ensuring optimal accessibility, flexibility, and efficiency to deliver genuine HPC processing power and speed, and eight years after its launch its core message resonates more than ever.
Its data centre, located on a former NATO base in Iceland, hosts HPC applications powering research across a range of sectors, including financial services, engineering, scientific research, and AI. Setting the company's technical direction is Tate Cantrell, its chief technology officer, who oversees the design and construction of all aspects of its facilities.
Cantrell has been involved in data centres and other high-tech facilities for more than 15 years, starting as a research programmer for computational modelling in biomedical applications. He was recently named in the ‘Climate 50 – The World’s Most Influential Climate Leaders In Data Centres And Cloud,’ and was also awarded a Bronze Stevie in the Executive of the Year category for his ability to demonstrate how emerging markets and industries can efficiently utilise data centre space.
An early sustainability pioneer
“We got started before then, in early 2008 as a concept,” Cantrell says. “We saw the impending energy challenges that were coming over the next decade, and that has certainly materialised. What we wanted to do was to pick the best place on the globe for driving the next generation of efficiency in data centre compute.
“For us, that meant a couple of things. First, we wanted a dependable source of sustainable power. One of the challenges that data centres present to power grids is that they want compute power all the time. That is especially true for our HPC customers: they take a server, turn it on, and their goal is to run that CPU at 100 per cent until it either falls over or is replaced by the next generation.
“To operate your infrastructure in that manner, it is important to ensure that your power costs are low and that the power is generated sustainably. That is one of the main reasons we chose Iceland: its geothermal and hydro power resources. Both of those are continuous power sources, unlike wind and solar energy, which ebb and flow and so require grid-scale storage to back them up. Geothermal, very much like a data centre, you turn it on, and it runs: a great marriage.”
It is not just the power source itself but the reliable and resilient distribution network that made Iceland an ideal location. The power infrastructure on the island was designed and built in the Seventies to cater for the power-hungry demands of the aluminium smelting industry. While data centres typically use generators as backup in case the utility power goes down, aluminium companies do not have that option for smelters that draw hundreds of megawatts. “If they go cold, and they can go cold over a six-hour period without the electrolysis process continuing, they are done for,” Cantrell adds. “What that resulted in, when we came in as the first mover in Iceland to create this data centre industry, was a power grid with dependability that was enviable around the world.”
If sustainable and reliable power was the number one lure, not far behind was the temperate climate. “A typical summer day there is going to be somewhere around 10 to 15 degrees as an extreme high. The other real benefit of Iceland, because it is at the end of the Gulf Stream, is that it is warm in winter as well, around zero degrees. From an engineering standpoint, that is just paradise. With a tight band of temperature, you can get aggressive with your power infrastructure design.”
Location, location, location
Although locating data centres in a temperate climate close to renewable energy resources may sound like a plan for future sustainability, it can only ever be part of the solution. “You will continue to see data centres located around the world,” Cantrell continues. “For applications like Netflix there is going to be the need to have data centres that are located close to users. We will continue to see an expansion of data centres that are geographically distributed over the globe to deliver edge applications.
“But applications such as autonomous driving, research into potential vaccines against the virus, scientific and industrial computer simulations, or the expansion of artificial intelligence are all HPC applications, and those very much do not need to be near the edge. In fact, because of the proliferation of global networks, we would say an extremely high percentage of these HPC applications do not need to be near the edge. If they do not, then the laws of economics are going to drive them to the best possible location. That is where locations such as Iceland and Scandinavia play in very well.”
Overcoming the challenge of remote locations
The technology is proven, so what are the challenges when it comes to data centres in regions such as Iceland and Scandinavia? According to Cantrell, the biggest challenge is convincing people that it is possible. “One of our biggest challenges is client acquisition, mainly because historically speaking, and there are decades of experience that come with this, people are very accustomed to going down the street to the local colocation facility, putting their servers in place and making sure that they can go and polish the bits and bobs over time. HPC and centralised computing is all about changing the mindset.
“We find that the companies that are most receptive to the story of centralised computing are companies that are financially driven by the total cost of ownership (TCO). Once somebody understands the TCO, it makes our sales process a lot easier. Client acquisition is about education and ensuring that our customers understand that they can operate in a location like Iceland.”
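The TCO argument Cantrell describes can be sketched with a toy calculation. Every figure below is a hypothetical assumption — the power prices, PUE values, and capex are illustrative placeholders, not Verne Global's numbers — but it shows why an always-on HPC load is so sensitive to power price and cooling overhead:

```python
# Illustrative TCO comparison for a fixed HPC deployment in two locations.
# All figures are hypothetical assumptions, not Verne Global pricing.

def total_cost_of_ownership(it_load_kw, price_per_kwh, pue, years, capex):
    """Capex plus energy cost for an IT load running flat-out at a given PUE."""
    hours = years * 8760                     # hours in the ownership period
    facility_kw = it_load_kw * pue           # PUE scales IT draw to total facility draw
    energy_cost = facility_kw * hours * price_per_kwh
    return capex + energy_cost

# 500 kW of servers at 100 per cent utilisation over five years.
metro_colo = total_cost_of_ownership(500, price_per_kwh=0.15, pue=1.6,
                                     years=5, capex=2_000_000)
nordic_site = total_cost_of_ownership(500, price_per_kwh=0.05, pue=1.2,
                                      years=5, capex=2_000_000)

print(f"Metro colo:  ${metro_colo:,.0f}")
print(f"Nordic site: ${nordic_site:,.0f}")
```

With identical hardware and capex, the cheaper, cooler site wins on the energy term alone, which is why always-on HPC loads migrate to the best location once buyers look past distance.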
Cantrell points to some of its financial services customers who never set foot in the data centre. “We work with them to acquire the servers, they are pre-configured, they arrive on our loading dock with a set of drawings, our specialist team builds out the deployment, plugs in the networks, and voila, all of those servers are being managed remotely by someone, several time zones away,” he says. “It is true that the sun never sets on the data centre.”
Keeping your cool
Verne Global have two approaches to cooling, both of which are air-based at the rack level. The reason for this choice is simply cost. “We have a few customers that have requested to go into high-density liquid cooling, but currently our customer base, which is driven by TCO, is focused on high-density air-cooled applications.
“We have two designs that we employ. The one associated with our Tier III high-resiliency infrastructure is a fully contained data hall, what we would refer to as indirect cooling. Here the air circulates within the space, and then the heat is extracted by liquid through an air handler, which converts the heat in the air into heat in the liquid stream. The other design that we have is simply 100 per cent air cooled.”
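As a rough sanity check on rack-level air cooling, the steady-state heat balance Q = ṁ·cp·ΔT gives the airflow a rack needs. The rack power and temperature rise below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope airflow for an air-cooled rack, using Q = mass_flow * cp * delta_T.
# The rack power and delta-T are illustrative assumptions.

CP_AIR = 1005.0    # specific heat of air, J/(kg*K)
RHO_AIR = 1.2      # air density, kg/m^3 (near sea level, ~20 C)

def required_airflow_m3s(heat_kw, delta_t_k):
    """Volumetric airflow needed to carry heat_kw away at a given supply/return delta-T."""
    mass_flow = heat_kw * 1000.0 / (CP_AIR * delta_t_k)   # kg/s
    return mass_flow / RHO_AIR                            # m^3/s

# A hypothetical 30 kW high-density rack with a 12 K temperature rise across it.
flow = required_airflow_m3s(30, 12)
print(f"{flow:.2f} m^3/s ({flow * 2118.88:.0f} CFM)")
```

The cool, stable outside air Cantrell describes widens the usable ΔT, which is what lets an operator stay with cheaper air cooling even at high rack densities.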
Growing demand for data
One of Verne Global’s customers is DeepL. The Germany-based company was created to break down the language barrier by using its deep neural network translation service to move machine translation from stilted to natural. “In terms of AI, they do not talk about their AI servers and machine learning infrastructure; they talk about it as a supercomputer,” Cantrell explains. “They are constantly tuning their algorithms, and when you think about these neural networks and the way that they work, we are on an exponential curve of growth. The more data we have access to, the more it spreads through the neural networks, and the more data we get. It is almost a positive feedback loop.
“If we think about that, and start to think about applications such as autonomous driving, you really have this area where we are going to be effectively building this feedback loop, where our demand for high-security compute is going to increase the requirements, and it is going to continue to grow.”
The advent of HPC as a service
The traditional way the industry has approached HPC is to tell a big-box vendor how many petaflops are needed and wait for them to ship a supercomputer. “That is a historical perspective of HPC,” Cantrell concludes. “The way we think about it as a service today is more a box-by-box perspective: building your supercomputer as you go. We work with the vendors to have a design pre-configured and allow the customer to put in the order, and the next time they hear from us is to say, ‘your infrastructure is ready, you can log in and access your boxes’. That is why it is a service from our perspective.”