DARPA quantum computing benchmark test seeks 'utility scale'
DARPA aims to evaluate up to 20 vendors -- from startups to large tech companies -- using a benchmarking framework that spans multiple types of quantum computing.
DARPA, the Defense Department's R&D organization, aims to gain insight into the viability of quantum computing by evaluating upward of 20 vendors and their vastly different technical approaches.
The agency last week selected 15 companies for Stage A of its Quantum Benchmarking Initiative (QBI), a projected 5-year program to put quantum machines through their paces. Technology providers range from IT industry veterans such as HPE and IBM to quantum tech startups. As a group, they represent key types of quantum computing, such as systems that use superconducting, trapped-ion, neutral-atom, photonic or spin qubits. Qubits, or quantum bits, are the core unit of information in quantum computing.
The 15-vendor field is set to expand: DARPA said it is negotiating with three other companies. In addition, the agency is already working with two vendors in its Underexplored Systems for Utility-Scale Quantum Computing (US2QC) program, which includes PsiQuantum's photonic approach and Microsoft's superconducting topological qubit modality. US2QC, which launched in late 2023 and entered its third and final phase in February, is a pilot program for QBI.
'Ambitious' quantum benchmarking
QBI is notable for its scope. Most of the quantum computing ecosystems and technology hubs that have launched worldwide deal with one or two modalities of the technology. Joe Altepeter, DARPA QBI program manager, said assessing a wide range of modalities is a complex task, given the unsettled nature of quantum computing.
"Compared to classical computing, quantum computing is really the Wild West," he said.
Technologists have become accustomed to classical computing chips, which have different speed, size and connectivity characteristics but are essentially the same from a technical standpoint, Altepeter said. This similarity allows for apples-to-apples comparisons of systems -- the proper metrics to track are fairly obvious, he noted. Quantum computing, however, involves entirely different metrics and methods for tracking success across the various approaches.
"That's definitely a challenge, and I think that's why a lot of centers [and] a lot of programs tend to focus on one or two [modalities]," he said. "You need so much expertise to evaluate all of them."
Brian Hopkins, a vice president and principal analyst who covers emerging technology at Forrester Research, noted that the DARPA program is unusual for the number of modalities under study.
"QBI is the most ambitious and detailed benchmarking effort we have seen to date," he said in an email interview.
Hopkins also cited QBI for creating a shared benchmarking framework for quantum computing at a time when technical approaches are hard to compare. That framework, he said, "gets to the heart of the question in front of us: Which technology or technologies are most promising to create practical value?"
Transformative machines in a decade?
DARPA's quantum computing initiative also differs from other testing programs in its time horizon. Altepeter said most programs evaluate what quantum computers can do today. Instead, DARPA asks vendors to provide technical details on the quantum concepts they believe will become "industrially transformative" machines within a decade, he said.
As the full name of the US2QC pilot program indicates, DARPA's target is "utility-scale" quantum computing, which the agency defines as the point at which the computational value of a machine is greater than its cost. Altepeter said QBI's working assumption is that "you are going to have to make a very large fault-tolerant machine" to achieve the industrially transformative and utility-scale objectives.
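For a rough sense of that utility-scale threshold, the Python sketch below treats a machine as utility-scale once the value of its computations exceeds its total cost, per the agency's definition above. The function name and dollar figures are hypothetical illustrations, not DARPA's actual evaluation model.

def is_utility_scale(computational_value_usd: float, total_cost_usd: float) -> bool:
    # "Utility scale" as defined in the article: the computational value
    # of the machine is greater than its cost. Inputs are hypothetical.
    return computational_value_usd > total_cost_usd

# Hypothetical example: a machine that delivers $750M in computational value
# against $500M in total cost would clear the bar.
print(is_utility_scale(750e6, 500e6))  # True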
"If somebody has a completely surprising approach that does an end run around fault tolerance, we'll be thrilled to dig into it," he said. "But I think it is going to be near-universal that people are going to pursue fault-tolerant approaches.
Fault tolerance addresses one of the top technical challenges in quantum computing: the instability of qubits and the resulting introduction of computational errors. Vendors of quantum computing technology currently address errors through techniques such as error suppression and error mitigation, but full fault tolerance is generally considered to be a few years away.
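As a loose illustration of why error handling dominates the conversation, the toy Python simulation below runs a classical three-bit repetition code with majority-vote decoding, which corrects any single flipped copy but fails when two or more flip. It is a stand-in for the intuition behind quantum error correction, not any vendor's scheme, and the error rate is an arbitrary assumption.

import random

PHYSICAL_ERROR_RATE = 0.05  # hypothetical per-copy flip probability

def encode(bit: int) -> list[int]:
    # Encode one logical bit as three physical copies.
    return [bit, bit, bit]

def apply_noise(bits: list[int]) -> list[int]:
    # Flip each copy independently with the assumed error rate.
    return [b ^ (random.random() < PHYSICAL_ERROR_RATE) for b in bits]

def decode(bits: list[int]) -> int:
    # Majority vote recovers the logical bit if at most one copy flipped.
    return int(sum(bits) >= 2)

trials = 100_000
failures = sum(decode(apply_noise(encode(0))) != 0 for _ in range(trials))
print(f"logical error rate ~ {failures / trials:.4f} vs physical {PHYSICAL_ERROR_RATE}")

With a 5% flip rate per copy, the decoded error rate lands well under 1% in this toy model -- the same basic scaling argument, applied with far more sophisticated codes, is what drives vendors toward very large fault-tolerant machines.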
The benchmarking program will reveal how vendors aim to address fault tolerance and other issues.
"We're going into this with an open mind," Altepeter said of QBI. "We'll see what floats and what doesn't."
Vendors in DARPA's quantum evaluation
DARPA is considering a mix of modalities, and variations within modalities, across the vendors participating in its evaluation. The list below includes vendors in the agency's QBI and US2QC programs.
Vendor | Modality
Alice & Bob | Superconducting cat qubits
Atlantic Quantum | Fluxonium qubits with co-located cryogenic controls
Atom Computing | Scalable arrays of neutral atoms
Diraq | Silicon CMOS spin qubits
Hewlett Packard Enterprise | Superconducting qubits with advanced fabrication
IBM | Modular superconducting processors
IonQ | Trapped-ion qubits
Microsoft | Superconducting topological qubits
Nord Quantique | Superconducting qubits with bosonic error correction
Oxford Ionics | Trapped-ion qubits
Photonic Inc. | Optically linked silicon spin qubits
PsiQuantum | Silicon-based photonics
Quantinuum | Trapped-ion qubits; quantum charge-coupled device architecture
Quantum Motion | MOS-based silicon spin qubits
Rigetti Computing | Superconducting tunable transmon qubits
Silicon Quantum Computing | Phosphorus atom qubits in silicon
Xanadu | Photonic qubits
Quantum test results may vary
Which quantum computing modalities will win out over time remains an open question, Altepeter said. The results of the benchmarking program could go multiple ways, he noted.
"I could imagine we go to Stage A and there are zero companies that can float, and we say this is really something that is a 20- or 30-year proposition," he noted. "I can imagine there's six modalities that survive and they each specialize in different problems. I can imagine there really is a favorite and we coalesce around one approach."
Altepeter described Stage A as a six-month sprint in which vendors outline the technical details of their quantum concepts. Stage B, a year-long phase, will review vendors' R&D plans. In Stage C, the final phase, an independent verification and validation team will test vendors' hardware.
Benchmarking hardware across quantum computing modalities is a harder but more valuable path than focusing on a single modality or relying heavily on simulators, according to Hopkins.
"This matters because the key questions for both government and industry aren't about who has the most qubits," he said. "They're about which platform can scale, correct errors, maintain coherence and actually run useful computations without falling apart."