If we had millions of qubits today, what could we do with quantum computing? The answer: nothing without the rest of the system. There’s a lot of great progress happening in quantum research across the industry. However, as an industry, we must overcome four key challenges to scaling up quantum systems before the finish line of this marathon comes into view.
The power of quantum
A simple way to understand the power of quantum computing is to think of a computer bit as a coin. It can be either heads or tails. It’s in either one state or the other. Now imagine that the coin is spinning. While it’s spinning, it represents — in a sense — both heads and tails at the same time. It is in a superposition of the two states.
The spinning coin is similar to a quantum bit, or qubit. In a quantum system, each qubit in superposition represents multiple states at the same time. As more superpositioned qubits are linked together (a phenomenon called entanglement), ideally a quantum computer’s power grows exponentially with every qubit added to the system.
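The exponential growth described above can be made concrete: simulating an n-qubit register classically means tracking 2^n complex amplitudes, which is why each added qubit (ideally) doubles the machine’s state space. A minimal sketch in plain Python, purely illustrative and not tied to any Intel tooling:

```python
import random

def state_space_size(n_qubits: int) -> int:
    """An n-qubit register in superposition is described by 2**n complex
    amplitudes -- the state space doubles with every qubit added."""
    return 2 ** n_qubits

def measure_spinning_coin(trials: int = 10_000) -> float:
    """Simulate repeatedly measuring a single qubit prepared in an equal
    superposition (the 'spinning coin'): each measurement collapses to
    heads (0) or tails (1) with probability |amplitude|**2 = 0.5."""
    heads = sum(1 for _ in range(trials) if random.random() < 0.5)
    return heads / trials

# Doubling with every qubit: 10 qubits -> 1,024 states; 20 -> ~1 million;
# 50 -> ~10**15, already beyond what classical memory can hold exactly.
for n in (1, 10, 20, 50):
    print(n, state_space_size(n))
```

The measured heads frequency converges to 0.5, while the state-space table shows why a quantum computer’s power can, in principle, grow exponentially with qubit count.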
Today, quantum systems are running on tens of entangled qubits, but to run practical applications, we’ll need tens of thousands, or more likely millions, of qubits operating together reliably. So, what barriers do we need to cross to meet that threshold?
Qubit quality

Scaling up the quantum system isn’t all about the number of qubits that can be created. The first area requiring major innovation and attention is the industry’s ability to create high-quality qubits that can be manufactured at volume.
The qubits that are available in the small, early quantum computing systems we see today simply are not good enough for commercial-scale systems. We need qubits with longer lifetimes and greater connectivity between qubits before we will be able to build a large-scale system that can execute quantum programs for useful application areas.
To achieve this level of quality, we believe spin qubits in silicon offer the best path forward.
Spin qubits look remarkably similar to the single electron transistors Intel has been manufacturing at scale for decades. And we have already developed a high-volume manufacturing flow for spin qubits using 300 mm process technology, mirroring the processes used to manufacture transistors today.
In our efforts to improve qubit quality for commercially viable quantum systems, we again looked to our legacy in transistor manufacturing for inspiration. We worked with our partners Bluefors and Afore to develop the cryoprober — a cryogenic wafer prober that can test wafers at scale, similar to the way we test transistor wafers. This one-of-a-kind piece of equipment helps us get test data and learnings from our research devices 1000x faster than previously possible.
With the cryoprober, time-to-information drops from days to hours. This testing capability will enable us to leverage statistical data analysis to create a rapid feedback loop and further improve qubit quality.
Qubit control

Today’s qubits are controlled by racks of control electronics that operate outside of the cryogenic refrigerator — where the qubits themselves sit. Qubits are tremendously fragile. Most qubits need to operate at incredibly low temperatures — just a fraction of a degree above absolute zero — to minimize the thermal and electrical noise that could introduce error into the system. But that means even near-term machines require hundreds of electrical wires running into the cryogenic refrigerator to perform simple operations on a small number of qubits. For a commercial-scale quantum computing system, we would need millions of wires going into the qubit chamber. This is neither practical nor scalable.
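The wiring problem scales roughly linearly with qubit count when every qubit is driven individually from room-temperature electronics. A back-of-the-envelope sketch makes the point; the wires-per-qubit figure here is an illustrative assumption, not a measured value, since actual counts vary by qubit technology:

```python
def control_wires(n_qubits: int, wires_per_qubit: int = 4) -> int:
    """Rough count of wires running into the cryogenic refrigerator when
    each qubit needs its own dedicated control lines from room-temperature
    electronics. The wires-per-qubit value is an illustrative assumption."""
    return n_qubits * wires_per_qubit

print(control_wires(50))         # tens of qubits -> hundreds of wires
print(control_wires(1_000_000))  # commercial scale -> millions of wires
```

The linear blow-up is exactly why moving control into an integrated chip inside the refrigerator, so one device drives many qubits over scalable interconnects, is such an attractive alternative.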
Intel has already introduced a promising alternative to the status quo, demonstrating a device we call Horse Ridge, named for the coldest location in Oregon. Horse Ridge is a cryogenic qubit control chip technology with scalable interconnects that operates within the cryogenic refrigerator at 4 Kelvin, as close as possible to the qubits themselves. This elegant design replaces the bulky instruments typically used for qubit control with a highly integrated system-on-a-chip (SoC), enabling the control of multiple qubits with a single device and setting a clear path toward scaling future systems to larger qubit counts. It’s a major milestone on the journey toward quantum practicality.
Error correction

As I mentioned previously, qubits are very fragile, which also makes them prone to error. A key hurdle to developing a practical quantum system will be the ability to correct errors within the quantum system as they occur. However, full-scale error correction will require tens of physical qubits to make just one logical qubit, which again points to our belief that a commercial-scale system will require millions of qubits. As innovation in quantum error correction progresses, we are developing noise-resilient quantum algorithms and error mitigation techniques to help us run algorithms on today’s small qubit systems.
Scalable full-stack system
Since quantum computing is an entirely new type of computing, with a fundamentally different way of running programs, we need hardware, software, and applications developed specifically for quantum. This means that quantum computing requires new components at all levels of the stack — the application, compiler, qubit control processor, control electronics, and qubit chip device. Getting these quantum components to work together is like choreographing a new quantum dance.
This is why collaboration between the quantum hardware and software innovation teams is so critical. At Intel, we are doing research at every layer of the stack, using simulation and emulation to understand how all the layers will work together effectively before we actually build them in hardware.
The path forward
Quantum computing promises an exponential speed-up in compute performance. However, the development of a large-scale quantum system presents many hurdles to overcome. But these challenges do not deter us. They energize the field. As researchers, we are excited about that potential and about the progress being made. While we recognize that we are just passing mile one of this marathon, we look forward to crossing the finish line.
Dr. Anne Matsuura is the director of quantum applications and architecture at Intel Labs. She has previously been chief scientist of the Optical Society (OSA), chief executive of the European Theoretical Spectroscopy Facility (ETSF), senior scientist in the Bio/Nano/Chem Group at In-Q-Tel, and program manager for atomic and molecular physics at the U.S. Air Force Office of Scientific Research. She has also been a researcher at Lund University in Sweden, Stanford University, and the University of Tokyo; a Fulbright Scholar to Nagoya University; and an adjunct professor in the physics department at Boston University. Dr. Matsuura received her Ph.D. in physics from Stanford University.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to [email protected].
Copyright © 2020 IDG Communications, Inc.