Simply put, high-performance computing (HPC) is the aggregation of computing power, delivering far higher levels of performance than could be expected from a standard desktop or server.
Typically, HPC deployments are used to run complex algorithms, models or deep learning workloads to solve large problems in engineering, science or business. Standard compute deployments often cannot deliver the level of compute power HPC provides, whether because of limited CPU capacity, time constraints (processing is too slow for the task) or an inability to run complex models concurrently.
The demand for HPC has grown dramatically over the last five years, notably in the academic arena. Many universities use HPC deployments for their research, such as finding sources of renewable energy, developing projects for space exploration, and creating new materials. HPC is also very prevalent in meteorology for weather forecasting, including predicting and tracking storms, as well as in medical research, smart energy grids and manufacturing simulation analysis. Any application that uses ‘big data’ can be suitable for HPC.
According to David Watkins, solutions director for VIRTUS Data Centres, the high power and cooling requirements of HPC mean that modern data centres are often the only facilities that can provide a suitable environment; trying to accommodate HPC within a university or commercial building can be challenging. The increased demand has led to data centres (predominantly hyperscale) being designed to cope with these types of applications, and HPC is certainly a growth area in the data centre industry. In 2020, for example, the HPC market was valued at $4.5 billion and is expected to reach $11.54 billion by 2026. So, how has HPC changed the way that data centres are designed?
New data centre design considerations
“HPC deployments typically consume more energy than standard compute deployments. However, this is mitigated by several things,” says Watkins.
“Firstly, the compute power is significantly greater and delivered in a smaller footprint. Secondly, the higher power levels needed can require different cooling techniques, such as delivering water cooling to the actual hardware from the building cooling systems, rather than cooling via air as is the norm for standard servers. And thirdly, HPC deployments can ramp up and down very quickly depending on the nature of the workload being supported. This directly affects the data centre design; a water-cooled data centre requires buffer vessels so that sudden demands for cooling can be accommodated.”
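To see why sudden swings in cooling demand matter, here is a minimal back-of-envelope sketch of chilled-water sizing for a single dense rack. All figures (a 40 kW rack, a 6 °C water temperature rise) are illustrative assumptions, not VIRTUS design values.

```python
# Back-of-envelope chilled-water sizing for one high-density rack.
# All figures are illustrative assumptions, not design values.

rack_heat_load_w = 40_000     # assumed 40 kW rack, typical of dense HPC
water_specific_heat = 4186    # J/(kg*K), specific heat of water
delta_t_k = 6.0               # assumed supply/return temperature rise (K)

# Heat removed = mass flow x specific heat x temperature rise,
# so the required mass flow is:
flow_kg_per_s = rack_heat_load_w / (water_specific_heat * delta_t_k)

print(f"Required water flow: {flow_kg_per_s:.2f} kg/s "
      f"(~{flow_kg_per_s:.2f} L/s) per rack")
# ~1.6 L/s for a single rack; a sudden ramp across dozens of racks
# is why buffer vessels are designed into the chilled-water loop.
```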
These design implications are clearly worth considering, as HPC brings many benefits over traditional computing. Interxion, a Digital Realty company, states that a standard computer processor can carry out two to four billion cycles per second. This is enough for normal, day-to-day users, but it is not enough throughput for massive applications, algorithms and datasets.
A cluster or supercomputer in a high-performance computing facility can achieve speeds in the quadrillions of calculations per second, especially if designed with advanced central processing units (CPUs), graphics processing units (GPUs), high-speed memory and low-latency networking. These speeds can make even the largest tasks manageable.
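To put those figures in context, the following sketch works through the arithmetic. Only the desktop figure comes from the article above; the per-GPU and per-node numbers are illustrative assumptions rather than any specific vendor's specification.

```python
# Illustrative arithmetic only: the GPU and node figures below are
# assumptions, not a specific product's specification.

desktop_ops_per_s = 4e9   # ~4 billion cycles/s, per the figure above
gpu_flops = 10e12         # assume ~10 TFLOPS per GPU
gpus_per_node = 4         # assume 4 GPUs per server node

node_flops = gpu_flops * gpus_per_node           # 40 TFLOPS per node
target = 1e15                                    # 1 PFLOPS: a quadrillion ops/s

nodes_needed = target / node_flops               # 25 nodes
speedup_vs_desktop = target / desktop_ops_per_s  # 250,000x

print(f"Nodes for 1 PFLOPS: {nodes_needed:.0f}")
print(f"Speed-up over the desktop figure: {speedup_vs_desktop:,.0f}x")
```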
These faster speeds mean that users can solve problems more quickly. While high-performance computing carries a significant upfront cost, it can pay for itself many times over through rapid insights, discoveries and innovations.
Additionally, HPC infrastructure can be changed and optimised for unique workloads. Tuned to specific tasks, HPC systems transform how organisations manage projects, whether by streamlining repetitive tasks, using automation, or testing new processes more quickly than before.
As the world’s dependency on data grows, organisations using HPC will get ahead of the competition. In business, companies using high-performance computing might generate insights or deliver services faster than rivals. In research, HPC can help teams innovate more rapidly.
Watkins confirms that most hyperscale data centres are capable of supporting HPC deployments. However, whilst demand is increasing, HPC deployments are expensive and most workloads can be delivered using standard server methodology.
“As such, unless there is a ‘build to suit’ opportunity, the key is to design and build data centres that can accommodate both types of deployment. This provides maximum flexibility and value.”
Limitations in implementation
Modern high-performance computing systems and practices can help organisations innovate and thrive but, in some circumstances, there can also be challenges. Interxion says that data transfer speeds and bandwidth can be challenging for companies first employing HPC applications. On-premises HPC infrastructure is often an obstacle, as networks might not be designed for the ultra-fast data transfer speeds that HPC needs, while uploading data to HPC systems in the first place can also be time-consuming.
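The bandwidth problem is easy to quantify. The sketch below assumes a hypothetical 100 TB dataset and three common link speeds; all figures are illustrative.

```python
# How long does it take just to move a dataset into an HPC system?
# Dataset size and link speeds are illustrative assumptions.

dataset_bits = 100e12 * 8   # a hypothetical 100 TB dataset, in bits

for name, bits_per_s in [("1 Gbps office uplink", 1e9),
                         ("10 Gbps link", 10e9),
                         ("100 Gbps data centre interconnect", 100e9)]:
    seconds = dataset_bits / bits_per_s
    print(f"{name}: {seconds / 3600:.1f} hours ({seconds / 86400:.1f} days)")

# 1 Gbps   -> ~222 hours (~9.3 days)
# 100 Gbps -> ~2.2 hours: why network design is central to HPC facilities.
```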
Similarly, the cost of purchasing equipment to deploy high-performance computing solutions can cause issues. Depending on the HPC workload in question, an organisation may need to purchase several computing resources at once, a barrier to entry for many who cannot budget for the upfront payment to own their HPC infrastructure.
Additionally, data privacy is essential for all companies, especially those in highly regulated fields like finance and healthcare. In these fields, personal data must be held securely and comply with many requirements. High-performance computing storage can be spread across multiple solutions, each of which must guarantee data privacy.
Watkins also notes that, when HPC is discussed, there are often concerns around power and sustainability. However, he says it should be remembered that, whilst more power is required for operations, HPC delivers greater sustainability benefits in the long term: the high level of utilisation increases operating efficiency.
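A minimal numerical sketch of that utilisation argument, with every figure assumed purely for illustration: a dense HPC node draws far more power than a standard server, but if it finishes the same job much sooner, the energy per completed job can fall.

```python
# Illustrative only: job runtimes and power draws are assumptions.

# The same job on two platforms:
standard_power_kw, standard_hours = 0.4, 100   # 400 W server, 100 h runtime
hpc_power_kw, hpc_hours = 10.0, 2              # 10 kW HPC node, 2 h runtime

standard_kwh = standard_power_kw * standard_hours   # 40 kWh per job
hpc_kwh = hpc_power_kw * hpc_hours                  # 20 kWh per job

print(f"Standard server: {standard_kwh:.0f} kWh per job")
print(f"HPC node:        {hpc_kwh:.0f} kWh per job")
# Higher instantaneous draw, but lower energy per completed job:
# the utilisation/efficiency argument in numerical form.
```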
While HPC is a powerful solution for meeting growing compute and storage requirements, there are clear barriers to entry, which data centres are beginning to address with focused design specifications that tackle the challenges of density, heat and bandwidth. In addition to advanced cooling, resilient data centre environments offering affordable power, network options, scalability, redundancy and security will become more readily available as demand for HPC-ready facilities increases.