by Insa Mohr
figures by Hannah Zucker
The past year has been momentous for quantum computing, a technology that harnesses the almost-mystical phenomena of quantum mechanics to build a new kind of supercomputer: a high-performance machine for solving large-scale computational tasks. Google demonstrated that one of its quantum computers could solve a problem that no classical computer could solve in any practical amount of time. For the third year in a row, IBM managed to double its quantum computing power. Additionally, several web service providers, including Amazon, announced plans for cloud-based quantum computing services. Quantum computers could make drug development, power storage, manufacturing, and agriculture better, faster, and more sustainable. They may also unravel cybersecurity infrastructure around the world, making them a potential threat to national security.
Despite these newsworthy headlines, the field of quantum computing is still in its infancy. The quantum computers we see today are relatively small-scale systems with up to 50 qubits, the building blocks of a quantum computer. For real-world applications, however, we would need quantum computers with thousands, if not millions, of qubits. Whoever cracks the difficult question of how to scale quantum computers may shape the future of technology and geopolitics. Cracking this question, therefore, is the holy grail of quantum computing.
The paradox in building scalable quantum computers
Quantum computers can best be understood when compared to traditional computers. The building blocks of a traditional computer are binary bits, each of which stores one of two values: 0 or 1. These bits interact with each other through simple logic gates, such as NOT, AND, and OR. A traditional computer holds billions of bits on tiny transistors.
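For readers who like code, the logic gates above can be sketched in a few lines of ordinary Python. This is purely illustrative (real hardware implements gates in transistors, not software), and the function names are my own:

```python
# Classical bits take one of two values: 0 or 1.
# The basic logic gates combine them in simple, deterministic ways.
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

# More complex operations are built by composing these gates;
# for example, XOR ("exclusive or") from NOT, AND, and OR:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))
```

Billions of such gates, composed in the right pattern, are all a traditional computer needs.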
In contrast, a quantum computer is constructed from qubits. Unlike bits, qubits can hold the states 0 and 1 at the same time, like an atomic “Schrödinger’s cat.” This can be difficult to understand in detail–even most hardboiled computer scientists don’t fully grasp it–but the most important takeaway is that quantum computers are both tremendously complex and, for certain problems, far more powerful than the traditional computer you are probably staring at right now.
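What “0 and 1 at the same time” means can be mimicked, very loosely, in ordinary Python: a qubit’s state is a pair of complex amplitudes, and measuring it yields 0 or 1 with probabilities given by those amplitudes. This is a toy model of the mathematics only, not how a real quantum computer operates:

```python
import math

# A qubit's state is two complex amplitudes (a, b) for the outcomes
# 0 and 1, with |a|^2 + |b|^2 = 1. A classical bit is the special
# case where one amplitude is 1 and the other is 0.
zero = (1 + 0j, 0 + 0j)  # the definite state "0"

def hadamard(state):
    """Apply a Hadamard gate: turns a definite 0 into an equal
    superposition of 0 and 1."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Born rule: the probability of measuring 0 or 1."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

superposed = hadamard(zero)
p0, p1 = probabilities(superposed)  # both 0.5: "0 and 1 at once"
```

Simulating many qubits this way quickly becomes intractable on a classical machine, since the number of amplitudes doubles with every qubit added, which is one intuition for why quantum hardware can outpace classical hardware.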
Unsurprisingly, creating and controlling qubits is difficult. Unlike the bits in regular computers, qubits are highly sensitive to any kind of outside interference, known as noise. This can be anything from stray magnetic fields and temperature fluctuations to material impurities; sometimes even a nearby elevator is enough to disturb operations. Noise accelerates the loss of information in a qubit, and the time before that information is lost is referred to as the coherence time. If there is too much noise, the coherence time becomes too short to complete an operation.
Scaling these systems further complicates the task: as more qubits are added to a system, the system becomes more prone to noise. Put another way, as the number of qubits in a system increases, the average coherence time per qubit decreases. Paradoxically, to offset this lower computational power per qubit, even more qubits have to be added to the system, which, again, exacerbates the coherence problem. This paradox lies at the heart of the issues facing quantum computing today.
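The scaling paradox can be illustrated with a deliberately simplified model. Assume, purely for illustration, that each qubit independently keeps its information for a characteristic coherence time T2, and that a computation needs every qubit to stay coherent. The names and numbers below are hypothetical:

```python
import math

def system_coherence(n_qubits, t, T2=100e-6):
    """Toy model: each qubit independently keeps its state with
    probability exp(-t / T2) after time t, and the whole system
    is coherent only if every qubit is. (An illustrative
    assumption, not a model of any real device.)"""
    return math.exp(-n_qubits * t / T2)

def half_life(n_qubits, T2=100e-6):
    """Time window in which the whole system stays coherent with
    50% probability. It shrinks as 1/n: every added qubit
    shortens the usable computation time."""
    return T2 * math.log(2) / n_qubits
```

In this toy picture, a 50-qubit system has one-tenth the usable time window of a 5-qubit system, so adding qubits to compensate only tightens the squeeze.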
Can we reduce or counteract this noise? After all, qubits can’t be completely isolated from every potential source of noise: they still need to communicate with electronics outside the computer. Some approaches focus on reducing the noise itself by controlling temperature or stabilizing electric fields. Others focus on improving error correction through complex mathematical models.
As we will likely never be able to reduce or correct noise completely, making qubits themselves more robust to noise is the holy grail of quantum computing.
The technologies and players in the game
An eclectic group of large corporations, universities, startups, and national institutions competes in the search for this holy grail, experimenting with a dozen different potential methodologies. There are approaches based on photons, optical lattices, and even diamonds, but four stand out as the most promising: Superconducting Circuits, Trapped Ions, Ultracold Atoms, and Topological Qubits.
By far the most common method among large technology corporations, such as Google, IBM, and Intel, is Superconducting Circuits. This technique involves creating “artificial atoms” out of superconducting electrical circuits and deploying them in structures akin to traditional electric circuits. Google and IBM have been locked in a race, continuously doubling their system sizes over the past few years and reaching processors of about 50 qubits.
Until recently, this method’s main rival has been the Nobel Prize-winning Trapped Ions method, in which ions are trapped together in an electromagnetic field and interact with each other through shared vibrations. While its operations are slower and the largest available system comprises only 11 qubits, Trapped Ions systems have much higher coherence times. The method is exclusively marketed by the Google Ventures-backed startup IonQ, which enjoys a bit of an underdog status but has been hotly tipped as a promising candidate.
The rising star is the Ultracold Atoms method, which relies on capturing and arranging super-cooled atoms with lasers. Although this method is much newer, a team of Harvard and MIT researchers recently used it to devise a highly noise-resistant quantum system of 50 coherent qubits–one of the largest quantum systems ever created. This architecture, however, will likely not be as adaptable as other methods to use cases like cybersecurity.
Finally, Microsoft’s Topological Qubits method is perhaps the most promising approach, but it is also by far the most complex and least developed. It works by, in effect, splitting an electron into quasiparticles that shield quantum information from outside interference. Barring other breakthroughs, Microsoft may one day lead a second generation of quantum computers, but that day may be more than 50 years away.
Which method will win? We don’t know! For a long time, the Superconducting Circuit and Trapped Ions methods were treated as the gold standard. Now the Ultracold Atoms approach has advanced with impressive momentum, with the first serious implementation already matching the qubit size of other far more developed methods, and with outstanding coherence times. It would not be surprising if future implementations are many times larger.
Why it matters who wins the race
Although the race to find the holy grail is far from over, its ‘winner-takes-all’ nature makes it exciting to watch. The outcome of the contest between large technology giants, small startups, and academia will determine the future dynamics of technology markets.
Largely unseen by us, there is also a battle of unprecedented magnitude raging between political superpowers. Quantum computers have the theoretical ability to crack the cryptography underlying most of our technology today. As a result, quantum computing is sometimes referred to as the “digital nuclear bomb,” and a true arms race is underway between the Western and Eastern hemispheres. The U.S. still appears to be far ahead today, but China’s aggressive technology strategy, backed by billions of dollars in quantum research investment, may one day put it at the forefront of the battle.
Having said that, the future of quantum computing is still largely undetermined, no matter what the media or corporations’ PR departments may proclaim. We simply don’t know when, how, and by whom we will see the first large-scale quantum computers. The only certainty is that quantum computing will significantly change the world. Considering the immense implications for our national security, whether the world will change for better or for worse is yet to be seen.
Insa Mohr is a special student at the Harvard John A. Paulson School of Engineering and Applied Sciences focusing on Engineering and Computer Science. She previously worked as a Management Consultant at the Boston Consulting Group and is very enthusiastic about getting fellow business people as excited about science and technology as she is.
Hannah Zucker is a Ph.D. student in the Program in Neuroscience at Harvard University.
Cover image: National Institute of Standards and Technology