
What is Nvidia?

Nvidia Corporation is a technology company known for designing and manufacturing graphics processing units (GPUs). Since establishing itself as the premier graphics chip provider for gaming, Nvidia has expanded into high-performance computing (HPC) and AI.

Nvidia is a fabless chipmaker, meaning it does not directly manufacture the devices it sells. Instead, it designs the underlying chips and boards and then works with partners to manufacture them. Taiwan Semiconductor Manufacturing Company (TSMC) is the semiconductor foundry for Nvidia's latest Blackwell-generation chips.

For Nvidia's fiscal year 2025, which ran from February 2024 to January 2025, the company reported revenue of $130.5 billion for a 114% year-over-year gain.

Nvidia and computer graphics

Nvidia initially set out to produce GPUs for PCs. The company quickly became known for producing the most powerful GPUs available, competing against rival GPUs from ATI, which AMD acquired for $5.4 billion in 2006. These GPUs run games and other 3D workloads at high frame rates. They also contain NVENC video encoders and NVDEC video decoders to speed up video rendering and playback.

A GPU is the critical component of a graphics card.

Recent Nvidia GPUs contain circuitry to speed up newer computer graphics functions, including ray-tracing cores, Tensor Cores and the Deep Learning Super Sampling (DLSS) engine.

Nvidia and artificial intelligence

Nvidia hardware has powered much of the recent machine learning (ML) and generative AI (GenAI) boom. Its GPUs were well positioned to handle such workloads because they are inherently highly parallel and can perform many trillions of operations per second. Nvidia also has a proprietary programming interface, Compute Unified Device Architecture (CUDA), that lets developers use a GPU's capabilities efficiently. This made Nvidia GPUs the de facto standard for the high-throughput, low-complexity processing that AI and ML require.

Nvidia has an evolving lineup of server chips optimized for AI workloads. These combine high floating-point performance, dedicated high-bandwidth memory (HBM), transformer neural network optimizations, and high-speed InfiniBand and Ethernet network interconnects.

The A100 GPU debuted in 2020 and was pivotal in training the foundation models that launched the modern GenAI era. The Hopper-class chips, including the H100 in 2022 and the H200 in 2023, trained and provided inference for many of the current generation of large language models. As part of the Hopper generation, Nvidia also debuted its Grace Hopper Superchip, which combines an Arm-based Grace CPU with a Hopper GPU.

The successor to the Hopper series is Blackwell. The Blackwell B200 chips, announced in March 2024, were the first of the series and a substantial step up from the Hopper generation. There is also an Nvidia GB200 Grace Blackwell Superchip that connects Grace CPUs with Blackwell GPUs. The final iteration of the Blackwell series, the Blackwell Ultra GB300, is set to ship in 2025.

In 2026, Nvidia plans to launch the Vera Rubin architecture, which pairs a Vera CPU with a Rubin GPU.

Nvidia Omniverse and digital twins

Nvidia has several initiatives to bridge the gap between the physical and digital worlds. Nvidia Omniverse Cloud is its unified system to support these initiatives, but other product-specific systems may use Nvidia hardware.

Nvidia Omniverse, shown here in a cloud implementation, supports several application development tools.

In January 2025, Nvidia announced the Cosmos AI model, which is designed to improve the abilities of AI agents operating in the physical world. Nvidia refers to Cosmos as a world foundation model (WFM) because it connects the physical world with the virtual one.

Digital twins enable an organization to create an exact one-to-one digital representation of a physical space. This can be used to track and plan the physical environment and unlock new efficiencies. For example, an assembly line can be "built" as a working 3D model before it takes shape on a factory floor. Once the physical version is built, the digital model can use sensor data to stay synchronized with current conditions and help diagnose problems.

Autonomous robots can use machine vision and other Nvidia-powered technologies to make decisions and interact with the real world. For example, Nvidia platforms can power software-defined vehicles and self-driving cars.

History of Nvidia

Jen-Hsun "Jensen" Huang, Curtis Priem and Chris Malachowsky founded the company in Santa Clara, Calif., in 1993.

Nvidia's founders believed a dedicated GPU was needed for computer graphics to advance. Previously, computer games were entirely CPU-based. However, gaming technology was advancing, slowly moving from the Microsoft Disk Operating System (MS-DOS) to Windows. Graphics, especially 3D graphics, relied on considerable floating-point math, and the CPU's math coprocessor was not up to the task.

When Nvidia entered the GPU market in the early 1990s, it faced stiff competition from ATI Technologies, Matrox, Chips and Technologies, S3 Graphics and 3dfx. However, Nvidia rose above its competition in 1999 with its GeForce card, which featured more advanced 3D graphics and lighting techniques than its rivals offered.

As the GPU market consolidated around Nvidia and ATI -- which was acquired by AMD in 2006 -- Nvidia sought to expand the use of its GPU technology. In 2006, the company introduced CUDA, a parallel computing platform used to program GPUs.

Rather than going through the 3D graphics libraries that game developers used, CUDA enabled programmers to write code for the GPU directly. This let them write massively parallel programs to execute high-performance floating-point workloads, such as simulations, visualizations and other applications with large amounts of data that must be processed in parallel.
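
For illustration, the following is a minimal, generic CUDA sketch of that programming model; it is not taken from Nvidia's documentation, and the array size and launch configuration are arbitrary choices. A SAXPY kernel (y = a*x + y) applies the same floating-point operation to a million array elements, and the GPU runs one lightweight thread per element:

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    // SAXPY: each GPU thread computes one element of y = a * x + y.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            y[i] = a * x[i] + y[i];
        }
    }

    int main() {
        const int n = 1 << 20;                  // 1 million elements
        const size_t bytes = n * sizeof(float);

        // Allocate and initialize host (CPU) arrays.
        float *hx = (float *)malloc(bytes);
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Allocate device (GPU) memory and copy the inputs over.
        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements;
        // the GPU schedules these threads across its cores in parallel.
        const int threadsPerBlock = 256;
        const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        saxpy<<<blocks, threadsPerBlock>>>(n, 2.0f, dx, dy);

        // Copy the result back and spot-check it: 2.0 * 1.0 + 2.0 = 4.0.
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);

        cudaFree(dx);
        cudaFree(dy);
        free(hx);
        free(hy);
        return 0;
    }

The design point is the launch itself: instead of looping over elements one at a time on the CPU, the program asks the GPU to schedule roughly a million lightweight threads, which is the same pattern HPC and AI frameworks apply at much larger scale.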

After introducing CUDA in 2006, Nvidia worked to have the platform taught in universities, and CUDA courses are now taught at more than 200 universities worldwide. This yielded a large pool of programmers trained on Nvidia's platform.

In 2008, Nvidia introduced the Tegra line of systems-on-a-chip (SoCs) that combined an Arm CPU with a scaled-down Nvidia GPU. Tegra was primarily sold to carmakers for in-dash systems. However, in 2017, Nintendo adopted Tegra for its handheld Switch console.

In 2016, both Nvidia and AMD faced a challenge as crypto miners realized that GPUs were exceptionally efficient at mining cryptocurrency. Large mining rigs were built, consuming much of the supply of GPU cards from both Nvidia and AMD. The resulting shortage of GPU cards only worsened when the COVID-19 pandemic caused further supply constraints.

For most of its history, Nvidia strategically acquired small companies. But in 2019, it acquired networking vendor Mellanox Technologies -- then rumored to be an acquisition target for Intel -- for $6.9 billion. Mellanox was the industry's leading provider of InfiniBand network interconnects. The company also developed data processing units (DPUs) -- chips used in smart network interface cards (smartNICs).

SmartNICs route data more intelligently than a standard networking chip or CPU can. They are intended to take over the processing of network traffic so the CPU can focus on its primary compute workloads. Because HPC and AI require moving massive data sets, Nvidia's GPUs benefit from intelligent data processing and routing.

In 2020, Nvidia attempted to acquire CPU designer Arm Holdings for $40 billion. The deal ran into immediate regulatory opposition in Arm's native United Kingdom, with allegations that Nvidia would play favorites with Arm licensing. Despite vehement denials, Nvidia failed to overcome the opposition and dropped the deal in February 2022 after 18 months of effort.

In June 2024, Nvidia's market value surpassed $3 trillion, and it briefly became the world's most valuable publicly traded company.

Popular Nvidia products

While Nvidia sells many of its GPUs to consumers under the GeForce brand name, the company names each new generation of its enterprise architecture products after a famous scientist, such as Maxwell, Turing, Tesla, Ampere, Hopper and Blackwell. Vera Rubin is the latest chip architecture, announced in 2025 with the first devices set to begin shipping in 2026. Other popular Nvidia products include the following:

  • GeForce. This is Nvidia's line of consumer-oriented graphics processors for desktops and laptops.
  • Nvidia Quadro/RTX. These are professional visual computing products, derived from the GeForce line, for workloads such as computer-aided design. Quadro has been retired and replaced with the RTX line. As of March 2025, the top-end product is the GeForce RTX 5090, which is based on the Blackwell architecture.
  • Tegra. This is the company's SoC series for mobile devices. It is used in devices such as the Nvidia Shield and Nintendo Switch.
  • DGX servers. This is Nvidia's own line of server hardware, combining its GPUs with memory and solid-state drive storage, and is targeted at HPC and AI uses. In March 2025, Nvidia announced the DGX Station, powered by the GB300 Grace Blackwell Ultra Desktop Superchip.
  • BlueField. The company's suite of DPUs is designed to intelligently manage network traffic and relieve the CPU. This technology was inherited from the company's acquisition of Mellanox Technologies.
  • Spectrum-X. Nvidia's next-generation Ethernet platform provides high-performance networking and security for the data center. It consists of the ConnectX SmartNIC, the BlueField-3 DPU, Quantum InfiniBand and the DOCA data center infrastructure software.
  • Quantum-X. Based on technologies acquired from Mellanox, the Quantum-X InfiniBand product line includes InfiniBand adapters, DPUs and switches.
  • Jetson. This is Nvidia's ultrasmall form factor line for embedded systems, combining an Nvidia GPU with an Arm processor.
  • DGX Spark. Originally developed under the codename Digits, DGX Spark is a small AI supercomputer powered by a GB10 Grace Blackwell Superchip.
  • Nvidia Dynamo. Announced in March 2025, Dynamo is Nvidia's next-generation inference server, replacing the older Triton inference server technology.
  • Blackwell GPU. The Blackwell GPU, including the Grace Blackwell Superchip, is Nvidia's current flagship GPU platform.
  • Nvidia AI Enterprise. While hardware is critical for Nvidia, the company also has an expansive software portfolio. Nvidia AI Enterprise collects a series of open source and proprietary models and tools in a subscription model for enterprise consumption.
  • NIM (Nvidia Inference Microservices). NIMs are Nvidia's packaging format for GenAI models. NIMs enable developers and vendors to package models for simplified deployment with Nvidia's hardware on-premises and in the cloud.

