High Performance Computing

The essentials on: high performance computing

Changed on 06/05/2024
By making it possible to solve vast and complex problems in far less time and at lower cost than conventional computing, HPC (high performance computing) has been an essential element of university research and industrial innovation for over thirty years. Definition, applications and issues: Inria takes stock of this indispensable field in the face of the many challenges confronting society today.
Visualisation of the execution traces of a program
© Inria / Photo H. Raguet

What is high performance computing?

High performance computing, also known as HPC, is the ability to perform complex calculations and massive data processing at very high speed by combining the power of several thousand processors.

More concretely, by way of comparison: a computer equipped with a 3 GHz processor can perform about 3 billion calculations per second, while an HPC solution can perform several quadrillion calculations per second (a quadrillion is a million billion, or 10¹⁵).
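To make these orders of magnitude concrete, here is a back-of-the-envelope comparison in Python. The two rates are the ones quoted above; the size of the workload is an arbitrary, illustrative assumption.

```python
# Rough comparison of a 3 GHz desktop processor and a petascale HPC system,
# using the orders of magnitude quoted above (approximate figures only).

desktop_rate = 3e9        # ~3 billion operations per second
petascale_rate = 1e15     # ~one quadrillion (10^15) operations per second

workload = 1e18           # a hypothetical job requiring 10^18 operations

desktop_years = workload / desktop_rate / (3600 * 24 * 365)
hpc_minutes = workload / petascale_rate / 60

print(f"Desktop: about {desktop_years:.0f} years")            # ~11 years
print(f"Petascale system: about {hpc_minutes:.0f} minutes")   # ~17 minutes
```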

High performance computing emerged in the 1960s to support government and academic research, and is now used to explore and find answers to some of the world's greatest scientific, engineering and business challenges, as well as to efficiently process ever larger and more extensive data sets. Major industries have also been looking to HPC since the 1970s to accelerate the development of complex products in sectors such as automotive, aerospace, oil and gas exploration, financial services and pharmaceuticals.

How does HPC work?

HPC is based on three main components: the machine, the software and the applications.  

To build an HPC architecture, computing servers (called nodes) are networked together to form what is called a cluster. Software programs and algorithms run simultaneously on the servers of the cluster, allowing them to operate in parallel. The cluster is also connected to data storage in order to save the results. Together, these components work seamlessly to accomplish a wide variety of tasks.
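As a minimal sketch of this "many nodes in parallel" idea, the script below uses mpi4py, the Python bindings for the MPI standard commonly used to program clusters (one common approach among several). Each process computes a slice of a toy workload and the partial results are combined on one process; the problem size is purely illustrative.

```python
# Minimal illustration of parallel work across the processes of a cluster,
# using MPI via mpi4py.  Launch with, for example:
#   mpirun -n 4 python sum_of_squares.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # identifier of this process (0 .. size-1)
size = comm.Get_size()   # total number of processes in the job

N = 10_000_000           # toy workload: sum of the squares of 1..N

# Each process handles its own slice of the range...
local_sum = sum(i * i for i in range(rank + 1, N + 1, size))

# ...and the partial sums are combined on process 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of squares up to {N}: {total}")
```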

In other words, the supercomputers that enable high-performance computing contain the same elements as conventional desktop computers (processors, memory, disk, operating system), but each element is more powerful, and there are many more of them.

To operate at peak performance, each component must keep pace with the others. For example, the storage component must be able to feed data to the computing servers and retrieve the results as soon as they are produced. Similarly, the network component must be able to support the rapid transfer of data between the compute servers and the data storage. Otherwise, the performance of the whole HPC infrastructure suffers.
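One way to see whether the components are "in rhythm" is to compare the time the nodes need to compute on a data set with the time the storage and network need to deliver it. The sketch below performs that comparison; every figure in it (data volume, bandwidth, compute rate, arithmetic intensity) is a made-up, illustrative assumption.

```python
# Is the storage/network fast enough to keep the compute servers busy?
# All figures are illustrative assumptions, not measurements of a real system.

data_bytes = 50e12            # data set streamed through the job: 50 TB
io_bandwidth = 200e9          # aggregate storage/network bandwidth: 200 GB/s
compute_rate = 1e15           # sustained compute rate: 1 PFLOP/s
flops_per_byte = 100          # arithmetic intensity of the application

transfer_time = data_bytes / io_bandwidth
compute_time = data_bytes * flops_per_byte / compute_rate

print(f"Data transfer: {transfer_time:.1f} s, computation: {compute_time:.1f} s")
if transfer_time > compute_time:
    print("I/O-bound: the compute servers would sit idle waiting for data.")
else:
    print("Compute-bound: the storage and network can keep up.")
```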

Exascale, the future of HPC?

Today, supercomputers are capable of performing more than 10¹⁵ (at least one million billion) operations per second. This level of performance is called petascale. Some high-end systems exceed 10¹⁷ operations per second, that is, at least one hundred million billion (so-called 'pre-exascale' performance).

The next generation, called Exascale, should be able to perform more than one billion billion (10¹⁸) calculations per second. This level of performance should be reached within a few years.
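The headline figure of a supercomputer is usually estimated as the product of a few factors: number of processors, cores per processor, clock frequency, and floating-point operations per core per cycle. The sketch below runs that arithmetic for an entirely hypothetical machine; none of the factors describe a real system.

```python
# Back-of-the-envelope theoretical peak of a hypothetical supercomputer.
# All factors below are illustrative assumptions.

processors = 150_000       # processors assembled in the machine
cores_per_processor = 48   # cores per processor
clock_hz = 2.0e9           # 2 GHz clock frequency
flops_per_cycle = 32       # floating-point operations per core per cycle (vector units)

peak = processors * cores_per_processor * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak:.2e} FLOP/s")   # ~4.6e17 FLOP/s, i.e. pre-exascale
```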

"Currently, the most powerful machine in the world, which is in Japan, is one-half Exascale," says Emmanuel Jeannot, scientific delegate at Inria Bordeaux - Sud-Ouest, before adding, "but to achieve this, you need to design very large machines... You have to imagine, this represents tens of thousands of assembled processors (the Japanese Fugaku supercomputer has exactly 158,976 processors), in the same room".

What is HPC for?

Diverse interests

The main advantages of HPC lie in its ability to process, at very high speed, data that is growing exponentially in size and quantity, but also in the possibility it offers of creating extremely detailed simulations (as in the case of car crash tests), thus eliminating the need for physical tests and reducing the associated costs.

"HPC computing is a major technology, useful to scientists (astrophysicists, climatologists, epidemiologists, biologists, geologists, etc.) as well as to engineers in industry," explains Bérenger Bramas, a researcher at the Inria Nancy - Grand Est centre and leader of the Texas research programme, in a recent article, before adding: "Certain simulations, for example in meteorology, require significant computing power, which is even unusual. They can only be carried out on supercomputers, computers capable of performing several million billion computing operations per second".

Inria involved in eight European HPC projects

The challenges surrounding HPC are such that the European Union has set up a joint initiative called EuroHPC with Member States and private partners. It co-finances cutting-edge research projects on the development, deployment and maintenance of world-class European supercomputers.

Inria participated in its last two calls for projects, in January and July 2020. The Institute is now a partner in eight of the selected projects.

Many areas of application 

From monitoring a developing storm to assessing seismic risk to analysing the testing of new medical treatments, it is indeed through the way data is processed that groundbreaking scientific discoveries are made, game-changing innovations are fuelled and the quality of life is improved for billions of people around the world.

Many sectors now rely on HPC to innovate: aerospace (simulation of airflow over aircraft wings, for example), automotive (machine learning for self-driving cars, the manufacturing and testing of new products, or safety testing), finance (complex risk analysis, financial modelling or fraud detection), and energy (locating resources).

But the two fields in which HPC is delivering particularly convincing results are medicine and meteorology. In the former, HPC now makes it possible to test new drugs more quickly, create vaccines, develop innovative treatments and understand the origins and evolution of epidemics and diseases. In the latter, HPC is used, beyond the fifteen-day forecasts with which we are all familiar, to predict and simulate the short-, medium- and long-term evolution of the climate, using other mathematical models that require much greater computing capacity.

What are the challenges of HPC?

Companies and institutions are so interested in high-performance computing that its global market is expected to exceed $60 billion by 2025, according to Intersect360.

However, high-performance computing is currently facing several challenges, starting with energy consumption. "A supercomputer consumes a lot of power, tens of megawatts for the largest ones. By way of comparison, a nuclear unit, on average, represents a power of 1000 megawatts of electricity, or 1 gigawatt of electricity," explains Emmanuel Jeannot.

Several solutions have already been put forward to address this problem, such as studying the scalability of computations in advance. "In many situations, the need for an 'immediate' result is not justified; the calculation can therefore be carried out on a limited number of computing cores. Thus, a few hours or days of patience can result in substantial savings," researchers said in a publication.
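The trade-off the researchers describe, accepting a longer elapsed time on fewer cores in exchange for lower overall consumption, can be illustrated with Amdahl's law (a standard scaling model, not one cited in the article). The serial fraction and single-core runtime below are arbitrary assumptions.

```python
# Elapsed-time vs. resource trade-off when running one job on more or fewer cores,
# modelled with Amdahl's law.  serial_fraction and t_one_core are illustrative.

serial_fraction = 0.05     # 5% of the job cannot be parallelised
t_one_core = 1000.0        # hours the job would take on a single core

for cores in (64, 1024, 16384):
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)
    elapsed = t_one_core / speedup
    core_hours = elapsed * cores    # rough proxy for the resources (and energy) consumed
    print(f"{cores:6d} cores: {elapsed:6.1f} h elapsed, {core_hours:10.0f} core-hours")
```

Past a certain point the job barely finishes sooner while the core-hours, and hence the energy, keep climbing: that is the saving the quotation points to.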

"The community has set itself the goal of using no more than twenty megawatts to run a machine. This is what delayed the arrival of the Exascale," adds Emmanuel Jeannot. 

Another issue is the high cost of operating these machines, both to power and to cool them. The scale of the resources and financial investment required to achieve a sustainable Exascale HPC ecosystem is now so great that governments, particularly in Europe, are joining forces to finance it.

Finally, the HPC ecosystem is facing a problem of scaling up: today, only a few dozen applications are capable of exploiting such machines at full scale. "We are building large infrastructures, which are very costly and energy-consuming, for a small number of extremely strategic applications," points out Emmanuel Jeannot, who adds, however, that "HPC infrastructures are also designed to run several applications, which leads to very significant gains in efficiency."

Two questions to... Emmanuel Jeannot, scientific delegate at Inria Bordeaux - Sud-Ouest

- What is the state of French research in the field of HPC?

To do high-performance computing, you have to be good at both applied mathematics and computer science, two fields in which France is very well positioned at world level.

We have, for example, a great deal of expertise in applied mathematics for physical modelling, such as fluid mechanics and waves (electromagnetic or in complex materials), and in the way these phenomena can be simulated on large machines.

As far as computing (software) is concerned, France is well represented in two strategic fields, namely how applications are programmed and how the resources that make up a supercomputer are put to best use. At Inria in particular, we have several teams that are very well placed in solving linear systems, such as Hiepacs in Bordeaux, ROMA in Lyon, and Alpines in Paris. We are also well placed, in France, in the field of task-based programming, as well as in the management of hardware resources (operating the network, using memory, etc.) and in questions surrounding the data life cycle.

Finally, France is very involved in research into energy-related aspects, and more particularly the optimisation of the use of supercomputers from an energy point of view. 

Overall, in France, research and industry are working closely together on HPC-related issues. For example, Inria recently signed a framework agreement with Atos, which includes an HPC component. Inria teams are currently working on a roadmap that includes strategic research topics around HPC, both for Inria and for Atos.

- What are the prospects for HPC on a European scale?

Today, Europe is not in the top three in the race for the largest supercomputers. But I like to say that just because a Formula 1 car goes very fast in a straight line does not mean that it will be efficient on complicated circuits. Instead, Europe has chosen to build machines that are not necessarily the most powerful, but adapted to more realistic applications, which we really need.

What you have to understand is that there is of course global competition, but the ultimate objective of research is above all to make progress. As a researcher, you collaborate at both European and global level. It is a world in which a lot of development is done in open source. Inria researchers contribute to this and work with their foreign and industrial counterparts. The JLESC, supported by Inria, among others, is a good example of global collaboration on HPC, since it brings together the major players in the field to exchange ideas on the subject.

A major challenge for Europe today is to master the software stack. The EuroHPC initiative aims to strengthen European research in this area: we already master many software building blocks, which we now want to consolidate.