China’s quantum supremacy demonstration may grab headlines, but not yet leadership in the quantum computing race. Leading hardware groups have firmed up their development roadmaps for the marathon ahead, and error correction has become a key part of the story. The challenge of scaling up remains pre-eminent.
A growing number of quantum majors, startups and institutes are working to build an increasing range of quantum hardware, from early NISQ devices to full-scale FTQC machines. China’s Jian-Wei Pan again attracted headlines in 2020 with the announcement of a quantum supremacy demonstration claimed to surpass that achieved by Google Sycamore in 2019. As in 2019, mathematical debate has followed about how hard the calculation really was.
For an introduction to qubit technologies read Quantum hardware – into the quantum jungle.
The Jiuzhang experiment may claim first prize for the most complex calculation ever completed, but behind the headline how important is this new milestone? IBM can point to much the largest cloud programme and an inflection point in premium usage. Trapped ion players are jostling to take a lead on quantum volume, but still lag on qubit count. To place it all in context we need to look at what has been happening across the quantum hardware sector.
Superconducting qubits get ready for the big push
Google – a year of transition
The first big hardware news of the year was John Martinis’ departure from Google, citing leadership tensions with Hartmut Neven. Speaking at Google’s Quantum Summer Symposium, Neven reemphasised the continuity of Google’s plan and outlined the milestones they intend to target on the way to building a ‘small’ FTQC with 1 million physical superconducting qubits by 2029.
Even in his departure, Martinis has been at pains to underline the strength of the programme and hardware leadership that remains at Google. However there will be challenges. The tunable qubits and fast gates preferred by Google offer great flexibility and performance, but calibration of the Sycamore 53Q device has clearly been a challenge. With additional control comes the need to route additional control lines on and off chip. Scaling automatically compounds the wire routing challenge and component count versus failure rate overall. It’s notable that most of the work reported by Google in 2020 has used 23Q configurations of Sycamore, with automated calibration not initially able to deliver acceptable 2Q gate performance in larger setups. Google is looking to materials research as one way to improve qubit coherence times. While promising, this would require scientific, not just engineering, progress.
Google roadmap – 10²Q (logical qubit prototype), 10³Q (one logical qubit), 10⁴Q (tileable logical modules), 10⁵Q (engineering scale-up), 10⁶Q (error-corrected quantum computer) by 2029. Error correction via surface code protocol.
Neven likes to echo Kennedy’s words on the Apollo programme “We think we can do it before the decade ends”. Google’s immediate goal is to demonstrate that physical qubit errors can be systematically reduced using prototype logical qubits of increasing size (code distance) – effectively a demonstration that error correction using the surface code protocol works in practice not just theory.
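Google’s target can be pictured with a simple, purely illustrative error-suppression model: below the surface code threshold, each increase in code distance multiplies the logical error rate down further. The threshold and prefactor below are assumed round numbers for illustration, not Google’s measured figures.

```python
# Toy surface-code error suppression: p_logical ~ A * (p_phys / p_th)^((d+1)/2).
# p_th (error threshold) and the prefactor are illustrative assumptions only.

def logical_error_rate(p_phys, distance, p_th=1e-2, prefactor=0.1):
    """Toy model of the logical error rate of a distance-d surface code."""
    return prefactor * (p_phys / p_th) ** ((distance + 1) // 2)

# Below threshold, increasing the code distance suppresses errors exponentially -
# the effect Google wants to demonstrate with prototype logical qubits.
for d in (3, 5, 7):
    print(f"distance {d}: logical error rate {logical_error_rate(1e-3, d):.0e}")
```

The demonstration Google is targeting is exactly this trend measured in hardware: larger code distance, systematically lower logical error rate.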
For more on quantum error correction read Quantum Error Correction – from geek to superhero.
Neven puts particular emphasis on the creation of tileable modules of about 10,000 physical qubits. He sees these as the true equivalent of the logical gate seen at the heart of conventional computer designs. Importantly they represent a point at which ‘the physics is de-risked’ and investors only have to worry about conventional engineering challenges. Google’s aspiration is to hit this mark by about 2025/26.
IBM – the Big Blue machine
IBM started laying the foundations of their roadmap early on. IBM were early movers in seeking to educate the wider community that it’s not just qubit count that matters, but also qubit connectivity, gate set and achievable circuit depth (a measure closely tied to gate fidelity). These attributes are captured in a metric introduced by IBM – Quantum Volume.
Since 2017, IBM has delivered a series of 28 devices with steadily improving performance. Against a stated target of doubling QV each year, they have managed to do so twice in the last 11 months. They are now at QV 128 with their 27Q processor, which we can expect their recently launched 65Q processor to surpass in due course. For 2023, IBM is aiming at the very significant milestone of producing a 1121Q processor, codenamed Condor. This will be hosted in a new dilution ‘super-fridge’. The Goldeneye fridge is currently in prototype and is designed to be capable of hosting multiple chips.
IBM roadmap – 127Q (Eagle) 2021, 433Q (Osprey) 2022, 1121Q (Condor) 2023; leading to a 1 million Q large scale system. Error correction via colour code protocol.
IBM are clearly focussed on large scale FTQC. The initial designs for Condor feature the same heavy-hex layout they have also used in other recent chips. This low-connectivity design is tailored to make chips with their fixed frequency qubit design easier to fabricate, and is intended to implement error correction using low-connectivity colour codes rather than the surface code. Very broadly, this could be seen as a target to push their roadmap clearly ahead of others by 2023. Beyond that IBM hints that its super-fridges will each ultimately be able to stack multiple chips providing ‘millions’ of internally networked qubits.
When evaluating whether IBM can achieve its goals, it’s hard not to be impressed with its track record. However IBM’s vision also requires significantly reduced 2Q gate errors. While their recent generations of processor have each shown steady improvement in this key parameter, their plans now seem to acknowledge that a more significant modification of their 2Q gate design is required.
While retaining fixed frequency qubits to take advantage of the long coherence times they allow, IBM have been experimenting with the use of an additional tunable resonant coupler and bypass capacitive coupler on each gate pair. This promises much faster (and so lower error) 2Q gates, however it’s a significant variation of their previous technology design and so far has only been realised in simple 2Q experimental devices. To manage the growing wiring challenge, IBM has developed a next generation chip layout based on three layers of superconducting wiring. Seeing how smoothly these technologies can be brought together will be a key test for IBM’s roadmap.
Speaking at IQT Europe, Lieven Vandersypen (QuTech Scientific Director) pointed out that though many would love to go faster, consistently doubling quantum volume year-on-year remains a challenging schedule. We need to recall that just adding qubits doesn’t do it. 2Q gate fidelity in simultaneous operation is currently the limiting factor. Commenting at Quantum 2020, Jay Gambetta (IBM) struck a positive tone ‘I see challenges ahead, but no road blocks’.
Rigetti can also point to a track record of devices with steadily improving performance. Their concept for their next generation of devices is based on bonding multiple 32Q Aspen chips, connected by long range interconnects, onto a single silicon carrier chip. This they see as a pragmatic trade-off within the limitation of current fabrication technology. This modular approach may also offer additional flexibility for early NISQ applications.
It’s also worth noting that Rigetti uses a distinctive superconducting qubit approach based on the use of ‘parametric gates’. These seek a middle way between the benefits of long-lived fixed frequency qubits and the fast gates offered by tunable qubits. Commenting at European Quantum Week, Frank Wilhelm-Mauch (co-ordinator of OpenSuperQ) pointed out the potential future appeal of the parametric gate approach. However, Fact Based Insight expects the initial computer being built by OpenSuperQ to use a design closer to Google’s.
Emerging from what looked like a strained financing round, Rigetti can point to an $8.6m grant from DARPA’s ONISQ programme and its leading role in a £10m consortium to build the UK’s first commercial quantum computer.
D-Wave continues to pursue its own very different path focussed on quantum annealing. This has allowed D-Wave to scale up quickly to target early client opportunities. Its recently launched Advantage system is a significant upgrade to 5000 annealing qubits with 15-way connectivity. However a drawback of this technology is that it doesn’t have a theoretically well understood pathway to implement error correction and so ultimately to scale to provide adiabatic FTQC.
D-Wave is reported to have had a difficult financing round, but has successfully emerged with an interesting new strategic partner. NEC brings clout in hybrid product development as well as sales & marketing reach.
Finnish startup IQM is an interesting example of a differentiated offering. IQM builds on-site quantum computers for research institutes and HPC centres, and offers a co-design approach for industrial customers. This chimes with European concerns about technological sovereignty and aligns well with leading Finnish know-how in cryogenic systems (Bluefors) and partner expertise in control systems (Zurich Instruments). IQM has won a public co-innovation tender run by VTT and the Finnish government. This €20.7m project will deliver a 50Q quantum computer in Finland by 2024. Others, such as Quantum Machines (recently partnered with Q-CTRL), are also targeting this R&D market and already boast a specialist customer base across 10 countries.
It’s an advantage of superconducting qubit technology overall that alternative engineering options are available. However Fact Based Insight believes that the specifics of each company’s approach will start to matter more commercially. At a time when everyone wants to scale-up, the scaling characteristics and challenges of these approaches differ. Importantly innovations in this area will often be patentable. If parametric gates win out, Rigetti’s patent portfolio could shine; if wiring geometries become a bottleneck expect IBM’s multi-layer superconducting wiring or OQC’s co-axmon to leap to the fore. If Seeqc’s ‘digital’ SFQ control technology works as advertised expect others to be disrupted as it builds its own unique hybrid system-on-a-chip modules and application specific quantum platforms.
Trapped ions make their move
Trapped ions had a great 2020. While QV was pioneered as a measure by IBM, Honeywell has recently leapt forward, being the first to QV 64 and then QV 128 with their 6Q H0 and 10Q H1 processors. Some might wonder how a 10Q processor can claim to be as powerful as IBM’s 27Q processor, however this just highlights two of the advantages trapped ion enthusiasts have long expounded: superior connectivity and higher gate fidelity than superconducting qubit approaches. Both of these advantages are correctly (if crudely) rewarded by the QV metric. The Honeywell processors are also the first of any type to implement mid-circuit measurement, unlocking further flexibility.
Honeywell roadmap – H1 (linear trap), H2 (racetrack layout), H3 (grid layout), H4 (integrated optics), H5 (large scale via tiling); by 2030.
IonQ have announced a 32Q device they expect to achieve a QV significantly higher than any previously measured, though they now prefer to talk in terms of a new measure – algorithmic qubits.
IonQ roadmap – 22AQ 2021, 29AQ 2023, 64AQ 2025, 256AQ 2026, 384AQ 2027, 1024AQ 2028. Error correction – 16:1 2025, 32:1 2027.
Algorithmic qubits (AQ) – IonQ has defined this measure to indicate the number of ‘effectively perfect’ qubits available for calculation (note that available logical gate depth will still be limited). In the absence of error correction encoding, AQ = log₂(QV).
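Pre-error-correction, the two measures are therefore directly convertible. A one-line helper makes the relationship concrete, using figures from the text (Honeywell H1’s QV 128; IonQ’s 22AQ target):

```python
import math

def algorithmic_qubits(qv):
    """IonQ's AQ measure before error correction encoding: AQ = log2(QV)."""
    return int(math.log2(qv))

print(algorithmic_qubits(128))  # Honeywell H1's QV 128 corresponds to AQ 7
print(2 ** 22)                  # IonQ's 22AQ target implies QV ~4.2 million
```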
One disadvantage of trapped ion systems is that they offer significantly slower gate speeds (typically ×100–1000 slower) than superconducting qubits. They look to offset this with longer qubit lifetimes and higher fidelities, leading to less error correction overhead.
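This trade-off can be sketched with toy numbers: fix a logical circuit depth and compare wall-clock time, letting the slower platform claim a leaner error-correction overhead. Every figure below is an illustrative assumption, not a vendor specification.

```python
# Toy wall-clock comparison for a deep logical circuit.
# Gate times and overhead factors are assumed round numbers only.

def runtime_seconds(depth, gate_time_s, ec_overhead):
    """Time ~ logical depth x per-gate time x error-correction cycle overhead."""
    return depth * gate_time_s * ec_overhead

D = 1_000_000  # logical circuit depth (illustrative)
supercond = runtime_seconds(D, 100e-9, ec_overhead=1000)  # fast gates, heavy codes
trapped_ion = runtime_seconds(D, 100e-6, ec_overhead=20)  # slow gates, lean codes

print(f"superconducting: {supercond:.0f}s, trapped ion: {trapped_ion:.0f}s")
```

Even with a much leaner assumed overhead, the ×1000 gap in raw gate time is hard to fully offset in this toy model, which is why gate speed remains part of the trapped ion scaling debate.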
A recent lab demonstration (with a setup very similar to IonQ’s devices) has seen a logical qubit successfully encoded from 13 physical qubits. The Bacon-Shor-13 code employed doesn’t offer the same long term scalability as topological codes (such as surface codes or colour codes), but it does point the way to significant medium term improvements in effective error rate. IonQ clearly believe that this, combined with high fidelity physical qubits, will be enough to achieve quantum advantage sooner than other approaches. Dave Bacon’s recent move from Google to IonQ comes into perspective!
The real long term challenge for trapped ion systems is again scaling up, particularly where they depend on finely tuned laser systems to drive their high fidelity qubit gates. Just as superconducting qubit approaches differ, so do trapped ions.
AQT offers commercial access to a device based on just such a different approach. Rather than use qubits defined on hyperfine transitions, as used by Honeywell and IonQ, they use qubits defined on optical transitions. While shorter lived (and so slightly lower fidelity), such qubits operate at wavelengths where integrated photonic components are significantly easier to fabricate, promising an easier path to scaling. 2020 saw such integrated devices demonstrated in the lab at these optical wavelengths. AQT has worked with AQTION, a QT Flagship project, to build a complete ‘system in a rack’ for the first time.
Other trapped ion startups are looking beyond laser driven gates. Universal Quantum, NextGenQ and QT Flagship project MicroQC are seeking to bring far-field microwave gates out of the lab and into commercial devices. Investors may take particular note that Chris Ballance and Thomas Harty, names closely associated with many key performance records for laser driven gates, have chosen to base their own startup, Oxford Ionics, on near-field microwave gates.
Trapped ion architectures often assume scaling using photonic interconnects between modules. Faster interconnects have recently been demonstrated, but still seem a likely performance bottleneck. On the other hand Universal Quantum have demonstrated that their ion shuttling approach can in principle deliver QV similar to all-to-all connectivity.
Neutral atoms on the cloud
Neutral atom qubits used 2020 to continue their sharp rise to prominence. Sharing many characteristics with trapped ions, they offer the advantage that neutral atoms can be packed more tightly. This promises the potential to scale to 1000Q modules more quickly.
Cold atom is an alternative name for this technology based on its use of laser cooling and hard vacuum to reach micro-Kelvin temperatures, well below the reach of cryogenic cooling.
ColdQuanta are a notable champion of this approach and have launched QuantumCore as a basic unit cell to target a number of quantum sector opportunities. It is already the basis of Albert, a quantum matter system on the cloud. ColdQuanta has been selected by DARPA to work on a 1000Q processor as part of the ONISQ programme, an award worth up to $7.4m.
ColdQuanta roadmap – 100Q by 2021, 300Q by 2022, 1000Q by 2024.
QuEra, Pasqal and Atom Computing are others following the neutral atom path.
Silicon for the long game
In 2020, silicon qubits based on quantum dots demonstrated significant progress towards realising one of their long promised advantages. Groups at QuTech and UNSW demonstrated qubit operations with MOS quantum dots at 1K (not 1 degree warmer than the usual ~10mK operating point, but ×100 warmer). This promises to be a significantly easier regime in which to operate and scale up devices, though it remains to be seen if coherence times and fidelities at these higher temperatures will be competitive.
QLSI, a new €14m QT Flagship project, is a major push to take this technology forward in Europe. Notable participants include CEA-Leti, CNRS and QuTech, as well as commercial players and startups already active with this technology, including Hitachi Europe and Quantum Motion. The aim is to demonstrate a 16Q device by 2024, as well as assessing scaling challenges to 1000Q+. QuTech already boasts a 2Q silicon qubit processor available on the Quantum Inspire cloud.
Australian startup Silicon Quantum Computing has been an early mover in silicon qubits. In 2020 it announced the focussing of its roadmap, dropping MOS quantum dots and doubling down on phosphorus atom qubits in silicon. These devices use a cutting-edge fabrication technique offering atomic precision well beyond conventional CMOS techniques.
SQC roadmap – 10Q prototype by 2023, 100Q before 2030, useful FTQC by the mid-2030s.
When describing SQC’s fabrication technology, Michelle Simmons (founder of SQC) likes to point out not just its ability to engineer qubits with atomic precision, but also that this same technology can create stable, simple and pristine control wiring within the same device substrate. This year they have reported the lowest noise in silicon qubits to date. Following his departure from Google, John Martinis has now joined SQC. The chance to develop devices with ultra-fast gates and scalable wiring options has no doubt been a big draw.
In 2020 Canadian startup Photonic Inc published early research promising to add an important new tool to the silicon qubit armoury: an improved interface with optical photons based on T-centre defects in silicon.
China’s Jiuzhang experiment was able to demonstrate a calculation more complex than that achieved on any other platform so far. It did this by implementing an algorithm known as Gaussian boson sampling with 100 output modes and with up to 76 output photon clicks detected. It produced output samples in 200s that it claims it would take Fugaku (the world’s most powerful supercomputer) 0.6 billion years to recreate. This is a level of complexity significantly in excess of that achieved by Google Sycamore in its original quantum supremacy demonstration.
Scott Aaronson and Alex Arkhipov were the original proposers of boson sampling as a route to demonstrating quantum supremacy. Aaronson was a reviewer of the recent Jiuzhang scientific paper and makes an amusing observation “I asked why the results had only been verified by classical calculations up to 30 photons, I pushed that verification should be possible up to 40 or 50. A couple of weeks later they came back saying they’d now verified the results up to 40, but it had burned $400,000 worth of supercomputer time so they decided to stop there”. Is this the most expensive scientific referee comment in history?
Hardness dispute – At the time of writing the full significance of the Jiuzhang result is under dispute. In a reversal of the original Google vs IBM quantum supremacy debate, Google is now the one arguing it may have found a more efficient way to reproduce the sample output by conventional means. However, whether or not Jiuzhang’s formal claim stands, that doesn’t really change what it tells us about the state of USTC’s photonics tech roadmap.
Jiuzhang does not come out-of-the-blue. China has been increasing its investment in quantum technology since at least 2006. The expertise of Pan’s group is well known, and in 2019 they successfully completed a precursor experiment with a 60-mode device. This latest experiment is a notable scientific achievement and again demonstrates tour-de-force science and engineering from this team.
Anthony Laing (Univ. Bristol) focussing on the photonics rather than the maths observes “The jump in the number of photons mutually interfered is astounding. Going from about a dozen to over 70 is a huge leap, really surprising and super-impressive. I’d love to see the team develop configurability into their setup for a better comparison to classical methods and hardware.”
The implications of Jiuzhang should not be stretched too far. The device is not programmable in its current form, and implements one static algorithm rather than a universal approach to quantum computation. Perhaps even more importantly it has been achieved using a ‘conventional’ optical table setup. All of the active components remain discrete. Many manual adjustments will have been required to achieve a stable configuration. This approach is scientifically exciting, but presents severe challenges for scaling up.
Western groups have for some years been pursuing integrated photonic technology as a promising route to truly scalable photonic quantum computing. This promises to allow (almost) all the required components to be directly incorporated in a fabricated device. Notable demonstrations of actively programmable devices date back to 2015.
Xanadu employ an approach conceptually similar to Pan’s Gaussian boson sampling and already offers cloud access to programmable 8, 12 and 24 mode processors.
Xanadu roadmap – Three processor series in parallel 2021–23: X-Series (current) X40, X80; XD-Series (100% connectivity) XD4, XD8, XD12, XD40, XD80; TD-Series (time domain multiplexing) TD2, TD3; scalable to 1 million Q in 5–10 years. Error correction via GKP qubits.
QuiX have recently launched a 12 mode processor.
QuiX roadmap – 12 mode 2021, 50+ mode 2022
PsiQ have raised $250m+ funding for their programme to build a universal one million qubit device in ‘a handful of years’ and have been producing test chips with GlobalFoundries using standard fabrication processes.
PsiQ roadmap – 1 million Q, able to be operated as 100-300 logical qubits, reportedly within 5-8 years. Error correction via FTCS beyond foliation [inferred].
Duality Quantum Photonics emphasise the potential for special purpose quantum computing devices to target industrially relevant problems in 3-5 years and plans to introduce its first component level prototypes in 2021.
Integrated photonics isn’t the only idea in the West. ORCA Computing are now in the second year of pursuing a system based on discrete modular components connected with optical fibre. ORCA emphasise that this approach is less ‘all or nothing’ and promises to be quicker and cheaper to develop and reconfigure.
Scaling it up
Speaking at IQT Europe, Bob Sutor (IBM) summed-up what will be the view of many potential adopters “From a scientific perspective it’s always very interesting to hear about all the different qubit technologies. But can they break through to 100 and then 1000 qubits? The single question is scalability, enough really good qubits to solve the commercial problems you really want to solve”.
Michael Cuthbert (UK NQCC Interim Director) comments “It’s much too early to pick winners. There are multiple technologies and multiple approaches to scaling that need to be investigated”.
Most of the leading hardware players have set out their roadmaps. In the end this hardware will spend most of its time running error correction. Don’t be surprised that the debate over how best to do this has become intertwined with that over qubit technology.
To watch in 2021
- Quantum Supremacy – Expect Jiuzhang to make waves whatever the final outcome of arguments on calculation difficulty. More importantly watch for signs that this technology is being made programmable and scalable. Only this can make it truly disruptive.
- QV – IonQ have set high expectations for what its new 32Q configuration can achieve. Will it really hit 4 MegaQV in measured performance (22AQ in IonQ’s new terminology)?
- QV/s – Quantum Volume tests many important device characteristics, but it fails to capture the raw gate speed differences between qubit platforms. Expect the superconducting qubit community to fight back with adapted measures that allow them to boast about their fast gate speeds.
- Qubit count – Will IBM be first to put a 100Q+ processor on the cloud with their 127Q Eagle? Or will Rigetti seize the qubit count lead with a 4x32Q multichip Aspen module? What will we see from Google’s ‘100Q’ device? Watch out for trends on simultaneous 2Q gate fidelities.
- Logical qubits – Watch out for error correction demonstrations from leading players as a sign that they are homing in on a new major milestone – operating logical qubits.
- China – Will OriginQ manage to add a 60Q Wu Yuan 2.0 device to their cloud offering?
- Europe – QT Flagship project OpenSuperQ is expected to deliver its first device. How close will it get to 100Q and with what QV? AQTION will deliver a 50Q+ device; watch out for its flexible rack-based configuration.
- UK – Rigetti is building a superconducting qubit based machine in the UK, hosted in one of Oxford Instruments’ new Proteox family of dilution fridges. Watch out as details of the target specs emerge.
- Superconducting tech – Watch developments at startups with new flavours of this technology such as Seeqc and OQC. Watch out in particular for details of what QCI are planning. It’s a strong team but still operating beneath the radar. What they choose to back will be a guide to how they see the scaling challenges ahead.
- Quantum Annealing – Watch out for details of future D-Wave hardware plans. Watch out for details from Qilimanjaro on its proposed ‘coherent’ quantum annealers.
- Trapped Ions tech – Existing leaders such as Honeywell and IonQ will be pushed by AQT. Watch out for news from startups with microwave gate technology such as Oxford Ionics, Universal Quantum and NextGenQ.
- Neutral Atoms – Will we see a ColdQuanta 100Q device stealing headlines in 2021? Watch out for fidelity performance. Watch out for more on plans from startups QuEra, Pasqal and Atom Computing.
- Quantum dots – Spin qubit prototypes have often been based on MOS or SiGe quantum dots on a silicon substrate. Over the last two years germanium qubits on a silicon substrate have made striking progress, including the demonstration of a 4Q processor by QuTech. Watch for signs of which variant will emerge as the leading quantum dot qubit platform.
- Silicon fidelities – Demonstrating truly high fidelity 2Q gate performance remains a key goal. Watch as details emerge from the QLSI consortium. Will we see SQC or Photonic Inc starting to catch up on qubit fidelities?
- Photonic platforms – Multiple variations of this technology have been developed, each with different strengths and weaknesses. Silicon-on-insulator (SOI) is the most established technology and has been championed by PsiQ. Silicon nitride (Si3N4) offers a strong existing component ecosystem and has been preferred by Xanadu and QuiX. Other variations of this technology are emerging. Watch out for signs of which of these technologies will come out on top for quantum applications. Which will Duality pick for its initial developments?
- Special purpose devices – An emphasis on quantum simulators is a feature of the EU’s QT Flagship. Watch out for results from projects such as PASQuanS and Qombs. End-user engagement is particularly advanced here through companies such as ATOS, EDF and Airbus. Watch out for prototype plans from startups such as Pasqal, Duality or Bleximo.
- Topological qubits – 2020 saw a significant setback with doubts surfacing over previous TU Delft results regarding Majorana quasiparticles (one platform upon which such qubits could be realised). Watch out for the champions of this conceptually appealing approach to fight back.
- Control hardware – The R&D market is set to be an important stepping stone for control specialists such as Zurich Instruments and Quantum Machines. Majors such as IBM, Google, Intel and Microsoft are eyeing cryo-CMOS control hardware located closer to the quantum hardware. Seeqc is developing its own potentially disruptive ‘digital’ control technology. Watch out for signs of how this market is going to shake out.
- UK NQCC – Watch out for the award of funding packages in multiple technology areas early in 2021. The NQCC doesn’t have a technological axe to grind, and so its choices will offer a clue as to where it sees challenges that need to be addressed.
- New qubits technologies – AWS has just announced its interest in hybrid electro-acoustic qubits. Quantum Brilliance are pursuing a NV Diamond based processor. EeroQ are pursuing electrons on helium. Watch out for details of these and other novel tech platforms.
- New codes – AWS’s new hybrid electro-acoustic qubits are notable not just for the novel technology platform, but also for the ‘concatenated cat code’ error correction scheme proposed. This exploits an engineered bias in noise characteristics to reduce overheads, and avoids the magic state factory bottleneck that plagues many approaches. Watch for other schemes inspired by this approach.
- Scaling, scaling, scaling – The key question for most devices is not what they can do, but what they show about the platform’s ability to keep scaling up towards quantum advantage.
Quantum Outlook 2021 – Hardware / Algorithms / Software / Internet / Sensing / Landscape