
supercomputer

What is a supercomputer?

A supercomputer is a computer that performs at or near the highest operational rate for computers.

Traditionally, supercomputers have been used for scientific and engineering applications that must handle massive databases, do a great amount of computation or both. Advances like multicore processors and general-purpose graphics processing units have enabled powerful machines that could be called desktop supercomputers or GPU supercomputers.

By definition, a supercomputer is exceptional in terms of performance. At any time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term supercomputer is sometimes applied to far slower -- but still impressively fast -- computers.

How do supercomputers work?

A supercomputer's architecture is made up of many central processing units (CPUs) organized into compute nodes, each with its own memory. A supercomputer can contain thousands of these nodes, which communicate with one another and work in parallel to solve problems.

The largest, most powerful supercomputers are in effect many computers working in parallel. There are two main parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP). In some cases, supercomputers are distributed, meaning they draw processing power from many individual PCs in different locations instead of housing all the CPUs in one place.
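As a rough illustration of the shared-memory side of this picture, the sketch below uses OpenMP -- a widely used interface for programming multiprocessor machines, chosen here purely for illustration -- to split a simple summation across the processors of a single SMP-style node. The file name and workload are assumptions, not anything tied to a particular supercomputer.

    /* A minimal sketch of shared-memory (SMP-style) parallelism using OpenMP.
       Each thread sums part of the range; because all threads share the same
       memory, OpenMP can combine the partial sums automatically.
       Build (hypothetical file name): gcc -fopenmp smp_sum.c -o smp_sum */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const long n = 100000000;   /* 100 million terms, illustrative only */
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++) {
            sum += (double)i;
        }

        printf("sum = %.0f using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }

In a shared-memory system, the threads only need to combine their partial results at the end; massively parallel and distributed systems must instead pass messages between nodes over a network, as in the cluster example later in this article.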

Supercomputer processing speed is measured in floating point operations per second (FLOPS); the fastest systems are rated in quadrillions of FLOPS, known as petaFLOPS or PFLOPS.
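To make that unit concrete, the rough sketch below estimates the floating point rate of a single processor core by timing a loop with a known number of operations. The loop and constants are arbitrary, and real rankings such as the TOP500 list rely on the far more rigorous LINPACK benchmark.

    /* A rough, illustrative way to estimate floating point operations per
       second on a single core: time a loop that performs a known number of
       floating point operations. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const long n = 200000000;          /* 200 million iterations */
        double x = 1.000000001, acc = 0.0;

        clock_t start = clock();
        for (long i = 0; i < n; i++) {
            acc = acc * x + 1.0;           /* 1 multiply + 1 add = 2 FLOPs */
        }
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

        /* 2 floating point operations per iteration */
        printf("~%.2f GFLOPS on one core (checksum %.3g)\n",
               (2.0 * n / seconds) / 1e9, acc);
        return 0;
    }

On a typical desktop core, a naive loop like this lands on the order of one GFLOPS -- roughly a hundred million times slower than a 100 PFLOPS supercomputer.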

Differences between general-purpose computers and supercomputers

Supercomputers are general-purpose computers that operate at the highest possible rate of performance. Processing power is the main difference between supercomputers and ordinary computer systems: the fastest supercomputers exceed 100 PFLOPS, while a typical general-purpose computer manages only hundreds of gigaFLOPS to tens of teraFLOPS -- a gap of several orders of magnitude.

Supercomputers consume enormous amounts of power and, as a result, generate so much heat that they must be housed in facilities with dedicated cooling systems.

Both supercomputers and general-purpose computers differ from quantum computers, which operate based on the principles of quantum physics.

What are supercomputers used for?

Supercomputers perform resource-intensive calculations that general-purpose computers can't handle. They often run engineering and computational sciences applications, such as the following:

  • weather forecasting to predict the impact of extreme storms and floods;
  • oil and gas exploration to collect huge quantities of geophysical seismic data to aid in finding and developing oil reserves;
  • molecular modeling for calculating and analyzing the structures and properties of chemical compounds and crystals;
  • physical simulations like modeling supernovas and the birth of the universe;
  • aerodynamics such as designing a car with the lowest air drag coefficient;
  • nuclear fusion research to build a nuclear fusion reactor that derives energy from plasma reactions;
  • medical research to develop new cancer drugs, understand the genetic factors that contribute to opioid addiction and find treatments for COVID-19;
  • next-gen materials identification to find new materials for manufacturing; and
  • cryptanalysis to analyze ciphertext, ciphers and cryptosystems to understand how they work and identify ways of defeating them.

Like other computers, supercomputers are used to simulate and model reality, just at a much larger scale. Some supercomputer workloads can also be carried out with cloud computing, which, like a supercomputer, combines the power of multiple processors to achieve performance that a single PC cannot match.

Scientists and engineers use supercomputers to simulate reality and make projections.

Notable supercomputers throughout history

Seymour Cray designed the first commercially successful supercomputer, the Control Data Corporation (CDC) 6600, released in 1964. It had a single CPU, cost $8 million -- the equivalent of $60 million today -- and could perform about 3 million FLOPS.

Cray went on to found a supercomputer company, Cray Research, in 1972. Although the company has changed owners several times -- most recently when Hewlett Packard Enterprise acquired it in 2019 -- the Cray name lives on in HPE's supercomputing line. In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at the aerospace, automotive, academic, financial services and life sciences markets.

IBM has been a keen competitor. IBM Roadrunner was the top-ranked supercomputer when it launched in 2008; it was twice as fast as IBM's Blue Gene and six times as fast as any other supercomputer at the time. IBM Watson famously used cognitive computing to beat champion Ken Jennings on the quiz show Jeopardy! in 2011.

Top supercomputers of recent years

Sunway's Oceanlite supercomputer is reported to have been completed in 2021. It is thought to be an exascale supercomputer -- one that can calculate at least 10^18 FLOPS, or 1,000 PFLOPS.

In the United States, some supercomputer centers are interconnected on an internet backbone known as the very high-speed Backbone Network Service, or vBNS, which is part of the National Science Foundation Network (NSFNET). NSFNET is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2, a university-led project, is part of this initiative.

At the lower end of supercomputing, data center administrators can take a build-it-yourself clustering approach. The Beowulf Project offers guidance on assembling clusters from off-the-shelf PC processors running Linux and interconnecting them with Fast Ethernet. Applications must be written explicitly to manage the parallel processing.
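The sketch below shows what such code can look like, using MPI -- the de facto standard message passing library for Beowulf-style clusters -- to spread a simple summation across the machines in a cluster. The file name and workload are illustrative assumptions only.

    /* A minimal sketch of Beowulf-style message passing with MPI: each
       process sums its own slice of a range, then the partial sums are
       combined on rank 0.
       Build and run (assuming an MPI installation):
           mpicc mpi_sum.c -o mpi_sum
           mpirun -np 4 ./mpi_sum */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID         */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

        /* Each process -- typically one per node or core -- takes every
           size-th element of the range, starting at its own rank. */
        const long n = 100000000;
        double local = 0.0;
        for (long i = rank; i < n; i += size) {
            local += (double)i;
        }

        /* Combine the partial sums from every process onto rank 0. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("total = %.0f computed by %d processes\n", total, size);
        }

        MPI_Finalize();
        return 0;
    }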

Berzelius is a Swedish supercomputer designed for AI research.

Countries around the world use supercomputers for research. One example is Sweden's Berzelius, which began operation in the summer of 2021 and is used primarily for AI research.

Some top supercomputers of the last two decades

Year | Supercomputer | Peak speed (Rmax) | Location
2021 | Sunway Oceanlite | 1.05 exaFLOPS (unofficial) | Qingdao, China
2021 | Fujitsu Fugaku | 442 PFLOPS | Kobe, Japan
2018 | IBM Summit | 148.6 PFLOPS | Oak Ridge, Tenn.
2018 | IBM Sierra | 94.6 PFLOPS | Livermore, Calif.
2016 | Sunway TaihuLight | 93.01 PFLOPS | Wuxi, China
2013 | NUDT Tianhe-2 | 33.86 PFLOPS | Guangzhou, China
2012 | Cray Titan | 17.59 PFLOPS | Oak Ridge, Tenn.
2012 | IBM Sequoia | 17.17 PFLOPS | Livermore, Calif.
2011 | Fujitsu K computer | 10.51 PFLOPS | Kobe, Japan
2010 | NUDT Tianhe-1A | 2.566 PFLOPS | Tianjin, China
2009 | Cray Jaguar | 1.759 PFLOPS | Oak Ridge, Tenn.
2008 | IBM Roadrunner | 1.105 PFLOPS | Los Alamos, N.M.

Supercomputers and artificial intelligence

Supercomputers often run artificial intelligence (AI) programs because those workloads typically require supercomputing-caliber performance and processing power. Supercomputers can also handle the large amounts of data that AI and machine learning application development requires.

Some supercomputers are engineered specifically with AI in mind. For example, Microsoft custom built a supercomputer to train large AI models that work with its Azure cloud platform. The goal is to provide developers, data scientists and business users with supercomputing resources through Azure's AI services. One such tool is Microsoft's Turing Natural Language Generation, which is a natural language processing model.

Another example of a supercomputer engineered specifically for AI workloads is Perlmutter, an Nvidia GPU-accelerated system at the National Energy Research Scientific Computing Center (NERSC). It is No. 5 in the most recent TOP500 list of the world's fastest supercomputers. It contains 6,144 GPUs and will be tasked with assembling the largest-ever 3D map of the visible universe. To do this, it will process data from the Dark Energy Spectroscopic Instrument, a camera that captures dozens of photos per night, each containing thousands of galaxies.

The Perlmutter supercomputer, powered by Nvidia GPUs, was launched in 2021 and is being used to find solutions to problems in astrophysics and climate science.

The future of supercomputers

The supercomputer and high-performance computing (HPC) market is growing as more vendors, such as Amazon Web Services, Microsoft and Nvidia, develop their own supercomputers. HPC is becoming more important as AI capabilities gain traction in industries from predictive medicine to manufacturing. Hyperion Research predicted in 2020 that the supercomputer market would be worth $46 billion by 2024.

The current focus in the supercomputer market is the race toward exascale processing capabilities. Exascale computing could bring about new possibilities that transcend those of even the most modern supercomputers. Exascale supercomputers are expected to be able to generate an accurate model of the human brain, including neurons and synapses. This would have a huge impact on the field of neuromorphic computing.

As computing power continues to grow exponentially, supercomputers with hundreds of exaflops could become a reality.

Supercomputers are becoming more prevalent as AI plays a bigger role in enterprise computing.

This was last updated in March 2022
