Let’s start with the basics: What is exascale computing?

Exascale computing begins at a billion billion calculations per second, or 10¹⁸ operations per second. That's roughly a thousand times faster than today's petascale supercomputers, the fastest machines we've built so far. And it's a single, standalone machine that can bring all of that power to bear on one problem at a scale cloud-based systems can't match. Exascale computers are the largest, most powerful computers feasible with current technology.
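To make the scale concrete, here is a rough back-of-the-envelope sketch in Python. The world-population figure and the one-calculation-per-second analogy are illustrative assumptions, not numbers from the interview.

```python
# What 10^18 operations per second means, in rough terms.
EXA = 10**18    # exascale: a billion billion operations per second
PETA = 10**15   # petascale: the scale of the previous generation of supercomputers

print(f"Exascale / petascale speedup: {EXA // PETA}x")  # 1000x

# If every person on Earth (~8 billion, an assumed round number) did one
# calculation per second, matching one second of exascale work would take:
people = 8 * 10**9
seconds_per_year = 60 * 60 * 24 * 365
years = EXA / (people * seconds_per_year)
print(f"Roughly {years:.1f} years of everyone-on-Earth arithmetic")  # ~4 years
```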

What are some of the tech innovations that make exascale possible?

The evolution of high-bandwidth memory and the shift from central processing units (CPUs) to graphics processing units (GPUs) have both driven the rapid progress toward exascale. With only conventional memory, GPUs could never have delivered this level of computation; they would have been starved for data. In exascale machines, the ratio of computational capability to memory capacity and bandwidth is significantly higher than in earlier systems.

Because of this, exascale computers will also be cutting-edge AI tools. GPUs shine at AI training because the compute-to-memory ratio that training demands is exactly what these machines provide.
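As a sketch of what that compute-to-memory balance means in practice, the snippet below estimates the arithmetic intensity (operations per byte moved) of a dense matrix multiply, the core of AI training, against a hypothetical GPU's machine balance. The throughput and bandwidth figures are assumed round numbers for illustration, not Aurora specifications.

```python
# Why dense AI workloads suit a machine with a high compute-to-bandwidth ratio.

def matmul_intensity(n, bytes_per_element=2):
    """Arithmetic intensity (FLOPs per byte) of an n x n matrix multiply:
    roughly 2*n^3 operations over roughly 3*n^2 elements read or written."""
    flops = 2 * n**3
    bytes_moved = 3 * n**2 * bytes_per_element
    return flops / bytes_moved

# Hypothetical machine balance: assumed ~300 TFLOPS of low-precision AI math
# fed by ~2 TB/s of high-bandwidth memory.
gpu_balance = 300e12 / 2e12   # ~150 FLOPs per byte

for n in (256, 1024, 4096):
    print(f"{n}x{n} matmul: ~{matmul_intensity(n):.0f} FLOPs/byte "
          f"(machine balance ~{gpu_balance:.0f})")
# Larger matrices reuse each byte more often, so they can keep a GPU with
# high-bandwidth memory busy instead of leaving it waiting on memory.
```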

What will these machines enable scientists to do that they couldn’t do before?

The list is quite lengthy. Materials science, climate research, disease prevention, and cosmology are just a few examples of how Argonne and the world could benefit from large-scale simulations paired with AI.

That information can be used to develop next-generation batteries that are safer, more cost-effective, and longer lasting. We can use it to design better nuclear fuels for clean energy, or to create new polymers that break down in the presence of light so that less plastic ends up in the ocean. And the machine's artificial intelligence can be linked by fibre optic cable to Argonne's Advanced Photon Source (APS), the world's leading high-energy light source, located just 2 kilometres away.

The APS works like a massive, extremely powerful X-ray machine, enabling atomic-level analysis of materials. It is being used, for instance, to work out how COVID-19's proteins are structured and why some face mask fabrics are more effective than others at preventing infection. Having two of the most advanced scientific instruments in the world together on the same site will be truly incredible.

The world’s most critical and difficult problems will be easier to tackle with the help of exascale supercomputing.

We know this is a bit like asking parents to name their favorite child, but which of the many exascale research projects excites you the most?

These machines will help us make tremendous progress in pharmaceuticals. Applied to physics-based models, AI and ML will revolutionise how new drugs are discovered, developed, and tested. The goal is to get from concept to clinical testing within a year. COVID-19 research is obviously a major driver of that timeline, but there is also substantial room to take on other diseases, such as cancer and heart disease, and to discover new antibiotics.

There is also promising potential in industrial science. Today it takes around 20 years for a new battery, an optical coating to protect solar panels, or a photovoltaic material to go from conception to widespread use. Can we cut that time with machine learning and AI? Can a novel material be developed, tested, and put into mass production in two years? Exascale computing gets us a lot closer to that.

The Internet and the Global Positioning System are just two examples of how large-scale, federally funded technology projects have shaped everyday life as well as business operations. Will exascale have a similar growth curve?

When GPS satellites were first launched, the military was the primary customer, and devices that could receive the signals cost more than your average car. What happened over time, of course, is that those costs came down. With exascale, we pushed the computer vendors to really up their game. And so the advanced CPUs, GPUs, and memory technology they’re enabling will be in everyday machines within a few years. These won’t be exascale machines, but the technology will be there. That will be one of the lasting dividends.

The other benefit, of course, is that solving these big problems—whether it’s climate modeling or precision medicine or applying machine learning to scientific data—will affect a lot of other things, including policy decisions.

If exascale computers use conventional silicon technology, only much more of it, why haven’t we built one already?

Exascale computing faces three formidable obstacles. First, there is the issue of power. Ten years ago, building an exascale computer would have required a gigawatt of power, or an annual electricity bill of around $1 billion. To solve this problem, we had to develop processors that used far less power.
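The $1 billion figure checks out with simple arithmetic; the electricity price below is an assumed ballpark, not a number from the interview.

```python
# Rough cost of running a 1-gigawatt machine for a year.
power_kw = 1e6              # 1 GW expressed in kilowatts
hours_per_year = 24 * 365
price_per_kwh = 0.10        # USD per kWh, assumed ballpark

annual_cost = power_kw * hours_per_year * price_per_kwh
print(f"~${annual_cost / 1e9:.2f} billion per year")  # roughly $0.9 billion
```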

Second, there is the problem of scale. The machine brings together nearly a hundred thousand graphics processing units (GPUs), each with tens of billions of transistors switching a billion times per second. Integrating that many computational elements simply wasn’t feasible before.
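A quick sketch of where a GPU count of that order comes from; the per-GPU sustained rate is an assumed round number, not an Aurora specification.

```python
# How many GPUs it takes to sustain an exaflop at an assumed per-GPU rate.
target_ops_per_s = 1e18      # the exascale threshold
per_gpu_ops_per_s = 1e13     # ~10 teraflops sustained per GPU (assumption)

gpus_needed = target_ops_per_s / per_gpu_ops_per_s
print(f"~{gpus_needed:,.0f} GPUs")  # ~100,000
```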

Reliability is the third factor. When hundreds of thousands of electronic parts are put together, failures are inevitable. Enough redundancy and fault tolerance must be built into the design so that the machine keeps running when components fail.
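A toy reliability calculation shows why. The component lifetime below is an assumption for illustration; the point is that failure rates add up across a machine of this size.

```python
# With N independent parts, the whole system fails roughly N times as often
# as any single part.
component_mtbf_hours = 1_000_000   # assume each part fails about once per ~114 years
n_components = 100_000             # order-of-magnitude count of major parts

system_mtbf_hours = component_mtbf_hours / n_components
print(f"Whole-system mean time between failures: ~{system_mtbf_hours:.0f} hours")
# ~10 hours between failures somewhere in the machine, which is why redundancy,
# checkpointing, and fault tolerance have to be designed in from the start.
```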

In the world of PCs, software development typically lags well behind hardware. There is no Moore’s Law for applications. Will that be a problem with exascale?

When we first started planning for exascale, we anticipated a sizable lag between the hardware arriving and software being ready to run on it. So in 2016 the Department of Energy (DOE) launched the Exascale Computing Project to develop exascale-ready software.

More than 30 academic institutions and over a thousand DOE staff collaborated to ensure that the hardware and software would arrive together. It’s like building an aeroplane while you’re flying it, and building the carrier it has to land on at the same time.

When we first open the box, Aurora will have more than eighty software applications across over two dozen categories, and that’s only the beginning. Thanks to the DOE’s significant investment in software libraries, new exascale applications will be considerably simpler to develop.

Science fiction has taught us to be afraid of machines that are as smart or smarter than humans. Are there good reasons to be afraid?

There are already plenty of areas in which machines outperform humans. After all, when was the last time you calculated a quantum wave function?

So it is essential that machines outperform us at some jobs. That’s why we build them: to carry out tasks that are too laborious or too risky for humans.

Technologies in and of themselves are neutral. But sophisticated technology can be used destructively by people with ill intent. As cutting-edge technology grows ever more important, it’s crucial that we think clearly about ethics, security, and safety. You can’t simply put a machine a thousand times faster and more capable than anything before it into the world and take it for granted that everyone will do the right thing with it. That’s why it’s important for everyone to weigh in, not only technologists.

Building a sustainable economy is a top priority. We need to stabilise the climate, and we need new antibiotics, better energy technology, and safer cars. That’s why we’re trying to build more intelligent machines. If we go into it with our eyes open, we should be fine.