World’s densest Hyperconverged server now certified for sustainable immersion cooling

World's Densest Sustainable Server

Operators of edge data centres face a trio of challenges: meeting their sustainability aspirations, achieving time to value, and hitting rapid time-to-market requirements. An integral part of the sustainability picture is the cooling system, and immersion cooling is increasingly viewed as the answer. The challenge for small and medium hosting providers has been the lack of an OEM service offering tier-one original hardware certified for immersion cooling systems.

That challenge is being tackled head-on by a pioneering partnership between InfraBurst, Hyperscalers and Submer. The collaboration was conceived to develop the first truly open and sustainable edge solution, at the sweet spot of 7 kW per module. To meet this goal, the team first fully certified the S5S, one of the densest 2U 4-node servers on the market, to reach peak performance within Submer's biodegradable immersion cooling pods.

The main building blocks of the three edge solutions announced this week are Submer's self-contained MicroPod and QCT's unique hyperconverged server, the S5S. InfraBurst combined the two to deliver pre-configured artificial intelligence and machine learning in a box, offering up to 91 TFLOPS from just 6RU. Alternatively, the fully self-contained, weatherproof (IP67) unit offers various plug-and-play configurations of up to 12 individual nodes, 672 Intel cores, 24TB of memory, up to 72 drives, up to 16 NVIDIA T4 GPUs and more.

“We work with the majority of ODMs and OEMs, but we were never able to certify a brand-new OEM partner within a couple of weeks,” says Daniel Pope, co-founder and CEO of Submer. “The way we were able to securely access and make the significant changes on the BMC, IPMI, and BIOS, among other crucial components on the QCT motherboard, combined with the support and guidance we received from InfraBurst and Hyperscalers, was a big part of the success within such a short timeframe.”

The results, according to experts, are not just impressive in terms of the number of cores, memory and storage; they also show how the use of open original equipment manufacturer (OEM) hardware enables the completion of a product or solution development cycle within a few months rather than years.

“InfraBurst stands for modular, open, and sustainable hardware solutions for data centres – available worldwide and freely scalable,” says Stephan Hack, InfraBurst director. “To achieve this, we have joined one of the largest OEMs as well as the world’s most innovative cooling provider as partners, and together we have developed the first sustainable, modular, and open micro data centre in a box. The fully autonomous and mobile unit takes up about the space of a conventional freezer, is weatherproof (IP67), can be exposed to direct sunlight – and is ready for use within a few hours.”

The importance of open OEM

Importantly, the solution is built on open OEM hardware. The components that go on any server motherboard, whether CPU, RAM, SSD, NVMe or GPU, all come from third-party manufacturers such as Intel, Samsung, Seagate, and NVIDIA. Locked OEM giants rebrand these devices and the servers they sit on, but they also lock them down. “You cannot use the original or generic components,” George Cvetanovski, the founder and CEO of Hyperscalers, explains. “What that means for end customers is that they are being locked into a yellow brick road that is designed by these OEMs.”

“They get a lot less for their budget. Whatever workload they are running, cloud, machine learning, or artificial intelligence, they get a lot less capability. We offer an open supply chain alternative to HP, Dell and Cisco; we do not lock anything down, whereas they do. The thrills of being locked out of your service: you cannot upgrade the memory, the SSD, or the NVMe without buying that branded offering from HP, Dell, or Cisco.”

“What that meant for customers of these giants was that they had to pay two or three times as much. We are extending the same open hyperscale efficiencies that the largest service providers enjoy to everybody else: telcos, managed service providers, cloud providers, systems integrators, and so on.”
