Meeting the dual challenges of economics and sustainability in high performance computing


The demand for high-performance computing (HPC) within data centres is increasing rapidly, driven by the growth of big data analytics, machine learning, and AI over the last few years. These workloads depend on large numbers of GPU and CPU cores working in parallel, allowing complex applications such as rendering for animated films, medical diagnostics, and large computational fluid dynamics (CFD) models to execute far more instructions than they could on traditional server infrastructure.

Both high-core-count CPUs and GPUs are needed to build an HPC cluster, and demand for GPU-based processing is increasing rapidly because GPUs excel at floating-point calculations. These applications require racks full of high-end AMD or NVIDIA GPUs such as the Titan-X or Tesla cards. As data centres pack higher numbers of CPU and GPU cores into an individual rack footprint, power demands increase significantly. Traditional data centre deployments of medium-density CPU and storage typically draw 3 – 7 kW per rack, which can be cooled efficiently using traditional air-cooling methods.
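As a rough back-of-envelope illustration of why GPU-dense racks outgrow air cooling, the sketch below estimates rack power draw. All figures (server counts, GPU TDP, host overhead) are hypothetical assumptions for illustration, not vendor specifications or numbers from this article:

```python
# Back-of-envelope rack power estimate. All inputs are illustrative
# assumptions, not measured or vendor-published figures.

def rack_power_kw(servers_per_rack, gpus_per_server, gpu_tdp_w, base_server_w):
    """Estimate total rack power in kW from per-server component draw."""
    per_server_w = gpus_per_server * gpu_tdp_w + base_server_w
    return servers_per_rack * per_server_w / 1000.0

# A medium-density CPU/storage rack, in line with the 3 - 7 kW range above:
cpu_rack = rack_power_kw(servers_per_rack=14, gpus_per_server=0,
                         gpu_tdp_w=0, base_server_w=400)

# A GPU-dense HPC rack: 8 servers, 4 GPUs each at ~300 W plus host overhead:
gpu_rack = rack_power_kw(servers_per_rack=8, gpus_per_server=4,
                         gpu_tdp_w=300, base_server_w=600)

print(f"CPU rack: {cpu_rack:.1f} kW")   # 5.6 kW
print(f"GPU rack: {gpu_rack:.1f} kW")   # 14.4 kW
```

Even with these modest assumptions, the GPU rack lands well outside the 3 – 7 kW comfort zone of conventional air cooling, which is what drives interest in alternative approaches such as Qarnot's.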

Sustainable high performance computing

One company that is meeting this challenge of economical and sustainable HPC is Qarnot Computing. The company, founded a decade ago by Paul Benoit and Miroslav Sviezeny, has developed Q.Ware, a platform as a service (PaaS) dedicated to cloud computing that gives access to an ecological and economical HPC infrastructure, whatever the computation needs or market. Qarnot rethinks data centres by breaking up collections of servers and spreading them across buildings in the form of computing heaters and boilers, embedding high-performance processors and free-cooling computing clusters. Computation is distributed via Q.Ware, which takes advantage of container technology. The waste heat is reused instantly: the platform automatically dispatches computation to wherever heat (for air or water) is needed, avoiding the data centre costs of infrastructure, maintenance, and cooling.
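The core dispatch idea described above can be sketched as a toy scheduler: jobs are routed to the sites whose current heat demand is highest, so the waste heat lands where it is wanted. This is a minimal illustrative sketch only; Q.Ware's actual scheduling logic and API are not described in the article, and all names below are hypothetical:

```python
# Toy heat-aware scheduler: dispatch compute jobs to the sites (buildings
# with computing heaters or boilers) that currently need the most heat.
# Purely illustrative; not Qarnot's real Q.Ware implementation.

from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Site:
    neg_heat_demand: float              # negated so the highest-demand site pops first
    name: str = field(compare=False)

def dispatch(jobs, sites):
    """Assign each job to the site with the highest remaining heat demand."""
    heap = list(sites)
    heapq.heapify(heap)
    assignment = {}
    for job, heat_output_kw in jobs:
        site = heapq.heappop(heap)      # site most in need of heat
        assignment[job] = site.name
        site.neg_heat_demand += heat_output_kw  # job satisfies part of the demand
        heapq.heappush(heap, site)
    return assignment

sites = [Site(-12.0, "office-block-A"), Site(-30.0, "warehouse-B"), Site(-5.0, "pool-C")]
jobs = [("render-frame-001", 10.0), ("cfd-run-7", 10.0), ("ml-train-3", 10.0)]
print(dispatch(jobs, sites))
# {'render-frame-001': 'warehouse-B', 'cfd-run-7': 'warehouse-B', 'ml-train-3': 'office-block-A'}
```

The greedy heap-based assignment keeps the hottest demand served first while demand levels update as jobs land, which is the essence of routing computation to where its heat is useful.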

A year of growth

With a total of 20,000 new cores deployed in 2020 (including 7,000 in Groupe Casino warehouses), Qarnot strongly boosted its infrastructure. The ambition is to be the first European ecological cloud computing provider, which implies both scalability and exacting quality and technical standards. A strategic partnership has been concluded with AMD, and the infrastructure features AMD Ryzen and Ryzen Threadripper processors. “We are thrilled to team with Qarnot to enable projects with our latest processor technology, designed to meet the most demanding needs with more core throughput, larger caches, and powerful multi-threading capabilities, pushing the boundaries to deliver on the promise of next-generation high-performance computing,” Roger Benson, senior director, commercial EMEA at AMD, says.

Rendering more Minions

Illumination, a subsidiary of Universal Pictures International, chose Qarnot to render the highly successful film Minions 2: The Rise of Gru. To meet this significant technical challenge, the Qarnot teams installed more than 15,000 cores in three weeks. It took four months of intensive computing to complete this much-awaited sequel’s rendered images; the initial instalment was watched by 6.6 million theatregoers in France alone.

Qarnot has been a significant player in high performance computing for the past ten years and has built its reputation in the banking and 3D animation sectors, among others. “We were won over by the responsiveness and expertise of Qarnot, which enabled us to use our renderer on more than 15,000 cores during the four months,” Bruno Mahé, head of technology at Illumination, explains.

Scalability of the distribution platform

R&D has been embedded in Qarnot’s DNA since its creation ten years ago and remains at the heart of its development. One of the major projects is the scalability of the Q.Ware platform that distributes the computation. “The goal of this development is to be able to serve a growing number of clients on a larger grid, with the same high-level security and efficiency standards that the platform currently offers,” Paul Benoit, co-founder and president of Qarnot, says. “An additional team of engineers has been created with a mission to broaden our activities and make the added-value benefits available to new sectors. After developing our expertise in CFD, we are now investigating the stakes and HPC needs of artificial intelligence, machine learning, medical research, molecular docking, weather forecasting and fluid dynamics.

“We have already managed to both strengthen our commercial relationships with existing clients such as BNP Paribas, with whom we have worked over the past five years, and build new ones with Natixis and Société Générale, which have entrusted their risk analysis computation to us.”

2021, a year of promise

Benoit explains that Qarnot has spent several years proving that technical and environmental requirements can go hand in hand. “We prove that it works and reap the benefits,” he explains. “The demand is rising significantly, with several prestigious clients recently starting to work with us. We always need to balance supply and demand, which explains why we keep increasing the number of available cores in the IT infrastructure. For example, we just bought several A6000 GPUs to meet the needs of our AI and 3D rendering clients.

“We ask a lot of our teams, and they take up the challenges with the utmost professionalism. We are growing, but we remain focused on the quality of the services provided to the clients, with security, reliability, suitability for their needs, and the teams’ availability all integral to our offering. We know how to make a difference in this very competitive market. This year looks outstanding; we are anticipating a very favourable year.”


