When you think of a supercomputer, what may come to mind is what is depicted in television and film. You may think of “2001: A Space Odyssey”, where a supercomputer takes it upon itself to terminate humans. You may think of “The Hitchhiker’s Guide to the Galaxy”, a comedic take in which supercomputers develop personalities of their own and differing views of the humans around them. You may think of the video game “Portal”, where a supercomputer gains an attitude and becomes an artificially intelligent nightmare. You may also think of “Colossus: The Forbin Project”, where a supercomputer built inside a bunker controls all of the United States’ nuclear weapons. You may think of “Prometheus”, where an artificial intelligence looks more like a human than a computer. There are many science-fiction stories involving supercomputers, but these state-of-the-art machines have far more commonplace real-life uses that continue to help our everyday lives.
What Is a Supercomputer?
A supercomputer is essentially what it sounds like: a super computer. It performs at or near the highest operational level currently possible for a computer. Conventionally, supercomputers are used for scientific and engineering applications that require processing immense amounts of data. These super-charged machines can handle the massive numbers of computations that demand the most computational power a computer can offer.
Recent technological advancements like multicore processors and general-purpose GPUs (graphics processing units) have turned many home desktop computers into “supercomputers” in their own right. But a supercomputer isn’t just a fast computer.
The critical difference between supercomputers and everyday general-purpose computers is processing power. A supercomputer is built from numerous compute nodes, each containing its own processors (CPUs) and memory. Unlike everyday general-purpose computers, supercomputers can include thousands of these nodes, which communicate with each other and use parallel processing to resolve problems.
What Is Parallel Processing?
Most modern supercomputers are effectively many computers in one, implementing parallel processing: a method of computing that runs two or more CPUs or processors to handle independent parts of an overall task. Because parallel processing breaks up different parts of a task among various processors, it can complete the job in less time.
Parallel processing allows computers to execute code more efficiently, enabling them to solve problems and sort through data faster than a traditional serial processing system, and to take on more complex problems than a single processor could handle alone.
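As a minimal sketch of the idea (illustrative only, not drawn from any particular supercomputer’s software stack), the following Python snippet breaks one job into independent chunks, hands each chunk to a separate worker process, and combines the partial results at the end:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker independently sums the squares of its slice of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4

    # Break the task into independent parts, one per processor.
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

    with Pool(n_workers) as pool:
        # Run the parts in parallel, then combine the partial results.
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```

Because each chunk is processed independently, the workers never wait on one another; combining the results is the only serial step.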
The two main parallel processing methods are SMP, or symmetric multiprocessing, and MPP, or massively parallel processing. SMP uses multiple processors that share a common operating system and memory. In this method, the processors share the same input and output bus or data path, and one copy of the operating system controls all of the processors.
Massively parallel processing coordinates a program across multiple processors, each working on a different part of the program. In this method of parallel processing, each processor uses its own operating system and memory.
Who Is Winning the Battle of Supercomputing?
Various countries around the globe are all attempting to build and run the fastest supercomputer in the world. The battle of having the fastest supercomputer with the most computing power drives many organizations to innovate.
Sunway TaihuLight, located at the National Supercomputing Center in Wuxi, China, is the 5th fastest supercomputer in the world. The tasks this particular supercomputer is tackling include climate science, advanced manufacturing, and marine forecasting.
The supercomputer at Lawrence Livermore National Laboratory called Sierra is the 4th fastest in the world. Using both IBM CPUs and NVIDIA GPUs, this supercomputer is designed for modeling and simulations used by the US National Nuclear Security Administration.
Oak Ridge National Laboratory’s supercomputer, Summit, is the 3rd fastest supercomputer in the world. This supercomputer aims to better understand genetic data, how people develop chronic pain, and how they respond to opioids. It is now assisting in developing treatments and vaccines against Covid-19.
Fujitsu and RIKEN’s Fugaku in Kobe, Japan is the 2nd fastest supercomputer in the world. This supercomputer supports many different endeavors, including drug discovery and medicine, weather forecasting, the development of clean energy, and more.
The United States has regained the top spot with the world’s fastest supercomputer: Frontier, built by Hewlett Packard Enterprise and the first machine to cross the exascale performance threshold. Also located at Oak Ridge National Laboratory, it has reached compute speeds of 1.1 exaflops, equivalent to 1.1 quintillion calculations per second. Sharing a home with Oak Ridge’s other supercomputer, Summit, Frontier could potentially further the research of genetic data and help develop additional treatments and vaccines against Covid-19.
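To put those numbers in perspective, here is a rough back-of-envelope comparison; the laptop figure is an assumed, illustrative number rather than a measured benchmark:

```python
# Scale of "exascale": 1.1 exaflops = 1.1e18 floating-point
# operations per second.
frontier_flops = 1.1e18

# Assume a typical laptop sustains roughly 100 gigaflops (1e11 flops);
# this is an illustrative assumption, not a benchmark result.
laptop_flops = 1e11

# Work that Frontier finishes in one second would take that laptop:
seconds = frontier_flops / laptop_flops        # 1.1e7 seconds
days = seconds / 86_400                        # about 127 days
print(f"{seconds:.3g} s, roughly {days:.0f} days")
```

In other words, under this assumption a single second of Frontier’s work would occupy an ordinary laptop for months.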
Where Can We See the Benefits of Supercomputers?
As mentioned earlier, supercomputers are typically used for scientific and engineering applications. Supercomputing is used in weather forecasting, helping forecasters and operational meteorologists predict the effects of extreme weather, including storms and floods. It supports oil and gas exploration by processing massive volumes of geophysical seismic data to aid the discovery of oil reserves. Supercomputers also assist in aerodynamics, nuclear fusion, and medical analysis, and supercomputing has even been used to battle the global pandemic.
During the height of the global pandemic, various nations and industries put aside the idea of competition and formed the Covid-19 High-Performance Computing Consortium, which included some of the most powerful companies and prestigious universities. The HPC Consortium included IBM, Amazon Web Services, AMD, Dell Technologies, Google Cloud, Hewlett Packard Enterprise, Microsoft, NVIDIA, Intel, Massachusetts Institute of Technology, and more.
These various entities together commanded a range of computing capabilities that included some of the largest supercomputers in the world. Pooling those capabilities, the collective runs multiple projects that aim to help solve the issues around Covid-19 and other related matters.
The drive to build the world’s fastest and most powerful computers will continue. We could have a new list of the top 5 supercomputers in no time, or even a list of the fastest quantum computers. But as competitors join forces to solve world problems, the battle of supercomputing will continue to help the world advance, and the world will be better for it.