A Beginner’s Guide to High Performance Computing

Quantonation, Quantum Investors
7 min read · Sep 10, 2020

High-Performance Computing (HPC, often simply called supercomputing) is omnipresent in today’s society. For example, every time you watch Netflix, the recommendation algorithm leverages remote HPC resources to offer you personalized suggestions. Thanks to the growth of cloud computing and sheer computational power, the number of applications is skyrocketing in industry and academia, but also in fields as varied as cosmetics and finance.

What is HPC?

When people talk about HPC, it is not always clear what that term means to them. HPC has many different definitions which vary from one expert to another.

Let’s start with the literal meaning. HPC stands for High-Performance Computing: the ability to carry out large-scale computations to solve complex problems that either need to process a lot of data or require a lot of computing power. As a rule of thumb, any computing system that doesn’t fit on a desk can be described as HPC.

Apart from this conceptual definition, HPC can also be described in terms of hardware. HPC systems are networks of processors. The key principle of HPC lies in running massively parallel code to benefit from a large acceleration in runtime. HPC systems sometimes reach impressive sizes, since applications are accelerated by adding parallelism, which is to say by adding computing cores; a common HPC system has around 100,000 cores. Most HPC applications are complex tasks that require the processors to exchange their results. HPC systems therefore need very fast memories and low-latency, high-bandwidth interconnects (> 100 Gb/s) between the processors, as well as between the processors and their associated memories.
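To make this concrete, here is a minimal sketch of the parallel pattern described above, assuming Python with the mpi4py library and an MPI runtime installed (tools chosen for illustration, not prescribed by this article): each process works on its own slice of a problem, then the partial results are exchanged over the interconnect and combined.

```python
# Minimal message-passing sketch (assumes mpi4py + an MPI runtime).
# Hypothetical file name: sum_squares.py
# Run with, for example:  mpirun -n 4 python sum_squares.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # index of this process (0 .. size-1)
size = comm.Get_size()   # total number of processes

# Each process computes a partial sum over its own slice of the range.
N = 1_000_000
local_sum = sum(i * i for i in range(rank, N, size))

# The processes then exchange their results: reduce() sends the partial
# sums over the interconnect and combines them on the root process.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of squares below {N}: {total}")
```

Adding more processes shrinks each slice, which is the acceleration described above; but every extra process also adds communication, which is why the interconnect matters so much.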

We can differentiate two types of HPC systems: homogeneous machines and hybrid ones. Homogeneous machines only have CPUs, while hybrid machines have both CPUs and GPUs; the heavy tasks mostly run on the GPUs while the CPUs oversee the computation. As of June 2020, about two thirds of supercomputers are hybrid machines. They have more computing power, since GPUs can handle millions of threads simultaneously, and they are also more energy efficient: GPUs have faster memories, require less data movement (the most energy-intensive operation in the machine) and can exchange data directly with other GPUs.
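As an illustration of this division of labor, here is a minimal sketch assuming Python with NumPy and CuPy installed (CuPy needs a CUDA-capable GPU; it is our choice for the example, not something named in the article): the CPU prepares the data and the GPU runs the heavy, massively parallel part.

```python
# Minimal CPU/GPU offload sketch (assumes NumPy, CuPy and a CUDA GPU).
import numpy as np
import cupy as cp

# The CPU side prepares the data and oversees the computation...
a_cpu = np.random.rand(4096, 4096).astype(np.float32)
b_cpu = np.random.rand(4096, 4096).astype(np.float32)

# ...while the heavy, highly parallel task (a large matrix product)
# is shipped to the GPU, where thousands of threads execute it at once.
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
c_gpu = a_gpu @ b_gpu

# Copying the result back is a host<->device transfer: exactly the kind
# of data movement that costs energy and that hybrid designs try to limit.
c_cpu = cp.asnumpy(c_gpu)
print(c_cpu.shape)
```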

High-Performance Computing used to be strictly defined by its high-speed network, which tightly interconnects the cores. The rise of AI applications has led to architectures built from more independent clusters that nevertheless remain massively parallel.

An HPC system also includes a software stack, which can be divided into three layers. First comes the user environment, which encompasses the applications and their workflows. Then comes the middleware, the runtimes and frameworks that link applications to their implementation on the hardware. Last comes the system level: the operating system and the job scheduler, along with management software for load balancing and data availability, whose role is to assign tasks to the processors and organize the exchange of data between processors and memories to ensure the best performance.

HPC applications

HPC provides many benefits and much value when used for commercial and industrial applications, which can be classified into five categories:

- Fundamental research aims to improve scientific theories to better understand natural or other phenomena. HPC enables more advanced simulations leading to breakthrough discoveries.

- Design simulation allows industries to digitally improve the design of their products and test their properties. It enables companies to limit prototyping and testing, making the design process quicker and less expensive.

- Behavior prediction enables companies to predict the behavior of quantities they cannot influence but depend on, such as the weather or stock market trends. HPC simulations are more accurate and can look further into the future thanks to their superior computing power. This is especially important for predictive maintenance and weather forecasting.

- Optimization is a major HPC use case. It can be found in most professional fields, from portfolio optimization to process optimization and the many manufacturing challenges faced by industry.

- Data analysis is an increasingly common use of HPC. Business models, industrial processes and entire companies are being built on the ability to connect, analyze and leverage data, making supercomputers a necessity for analyzing massive amounts of data.

The 5 fields of HPC Applications.

HPC: a major player for society’s evolution

HPC needs are skyrocketing. Many sectors are beginning to understand the economic advantage that HPC represents and are therefore developing HPC applications. Beyond the historical fields of simulation for industry and academia, HPC is penetrating finance, customer services and medicine.

The price of High-Performance Computing via the cloud is becoming more and more competitive. (AWS = Amazon Web Services)

Industrial companies in the fields of aerospace, automotive, energy and defence are developing digital twins of machines or prototypes in order to test their properties. This requires a lot of data and computing power to accurately represent the behavior of the real machine, and it will, moving forward, make physical prototypes and testing less and less standard.

The HPC dynamics and industrial landscape

According to the research firm Hyperion, the HPC market was worth $27.7 billion in 2018. The firm anticipates it will grow to $39.2 billion by 2023, a compound annual growth rate of more than 7%, driven by these numerous new applications.

The limits of a model

Unfortunately, supercomputers are showing their limits. First of all, some problems are simply not solvable by current supercomputers. The race to exascale (a supercomputer able to perform 10^18 floating-point operations per second) will not necessarily solve this issue: some problems or simulations may remain unsolvable, or at least unsolvable in an acceptable amount of time. For example, in the case of digital twins or molecular simulation, calculations have to be greatly simplified for current computers to complete them quickly enough to be useful for product or drug design.
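To give a sense of scale, here is a back-of-the-envelope sketch of what 10^18 operations per second means. The comparison point, a single laptop core sustaining around 10^10 operations per second, is our assumption, not a figure from this article.

```python
# Rough sense of scale for "exascale" (assumed laptop figure: ~1e10 FLOP/s).
exascale_flops = 1e18    # operations per second of an exascale machine
laptop_flops = 1e10      # assumed sustained rate of one laptop core

seconds_on_laptop = exascale_flops / laptop_flops        # 1e8 seconds
years_on_laptop = seconds_on_laptop / (3600 * 24 * 365)  # about 3.2 years
print(f"One second of exascale work ~= {years_on_laptop:.1f} years on one core")
```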

A second major challenge is power consumption. Computing and data centers already account for about 1% of the world’s power consumption, and this share is bound to increase significantly. The model is therefore unsustainable in the long term, especially since exascale supercomputers will almost surely consume more than current ones. It is not only technically unsustainable but also financially: a single supercomputer can cost as much as $10 million per year in electricity.

The new chips revolution

CPUs and GPUs are not the only ways to tackle the two issues described above.

Although most efforts are focused on developing higher-performance CPU- and GPU-powered supercomputers in order to reach exascale, new technologies, in particular “beyond silicon” ones, are emerging. Innovative chip technologies could act as accelerators, as GPUs did in the 2010s, and significantly increase computing power. Moreover, some technologies, such as quantum processors, would be able to solve new categories of problems that are currently beyond our reach.

In addition, processors account for about 70% of the energy consumption of an HPC system. Creating new chips that are both more powerful and more energy efficient would therefore solve both problems at once. GPUs were a first step towards this goal: for some applications, a single GPU can replace 200 to 300 CPUs. Although one GPU consumes a bit more than one CPU (roughly 400 W against 300 W), replacing 200 CPUs (around 60 kW) with a single 400 W GPU means that, overall, a hybrid supercomputer consumes less than a homogeneous supercomputer of equal performance.

The model needs to be reinvented to include disruptive technologies. Homogeneous supercomputers should disappear, and this shift is already underway: in 2016, only 10 of the 500 supercomputers in the Top500 were hybrid. By 2020, only four years later, that number had risen to 333 out of 500, including 6 of the top 10.

At Quantonation, we are convinced that innovative chips integrated into heterogeneous supercomputing architectures, together with optimized software and workflows, will be key enablers in facing societal challenges by significantly increasing sustainability and computing power. We trust that these teams are ready to face the challenge and be part of the future of compute:

  • Pasqal’s neutral-atom quantum computer, highly scalable and energy efficient;
  • LightOn’s Optical Processing Unit, a special-purpose light-based AI chip suited to tasks such as Natural Language Processing;
  • ORCA Computing’s fiber-based photonic systems for simulation and fault-tolerant quantum computing;
  • Quandela’s photonic qubit sources that will fuel the next generation of photonic devices;
  • QuBit Pharmaceuticals’ software suites leveraging HPC and quantum computing resources to accelerate drug discovery;
  • Multiverse Computing’s solutions using disruptive mathematics to solve finance’s most complex problems on a range of classical and quantum technologies.

Many thanks to the experts who helped us conduct this research, including Christelle Piechursky, Cyril Baudry, Jean-Charles Cabelguenne, Jean-Paul Marinier, Julien Nauroy, Eric Rodriguez, and Laurent Seror.

Written by Jean-Gabriel Boinot-Tramoni & Marie Gruet

Please reach out to jg@quantonation.com if you have any comments.
