Despite strong progress, demonstrating actual quantum advantage with early devices remains an elusive challenge. Expert opinion differs markedly on where and when initial value will come. Future large scale machines promise much, but the overheads look daunting. With current error correction methods, even a machine with one million physical qubits may count as only a modest-sized quantum computer.
Quantum computers only gain an advantage when they run a quantum algorithm. Early theoretical work often focussed on what can be achieved by large scale FTQC. In some cases this offers remarkable (exponential) speedups, for example with Shor’s algorithm (cryptanalysis), the Harrow, Hassidim & Lloyd algorithm (linear algebra) and phase estimation (quantum chemistry). In other cases it offers more modest (quadratic) speedups, but with wide applicability, such as Grover’s algorithm (search).
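The gulf between these two classes of speedup is easy to see in raw query counts. A minimal sketch (simple arithmetic only, not a quantum simulation) comparing a classical scan of N items against the roughly (π/4)√N oracle calls Grover’s algorithm needs:

```python
import math

# Unstructured search over N items: a classical scan needs ~N queries,
# while Grover's algorithm needs ~(pi/4) * sqrt(N) oracle calls.
def classical_queries(n_items):
    return n_items

def grover_queries(n_items):
    return math.ceil((math.pi / 4) * math.sqrt(n_items))

for n in (10**6, 10**12):
    print(n, classical_queries(n), grover_queries(n))
```

The quadratic gap widens with problem size, which is exactly why Grover-style advantage only appears on large instances; an exponential speedup, by contrast, pulls ahead almost immediately.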
But such ideal machines are still many, many years away. In recent years much work has focussed on what might be achieved with early NISQ devices using heuristic approaches such as VQE, QAOA, QNN and quantum annealing. Here the theoretical underpinnings of what speedup can be achieved are much less well established.
For an overview of quantum algorithms read Quantum Software – beneath the quantum hype.
Matt Johnson (QC Ware) likes to compare current work in the quantum sector to the challenge of completing a trans-continental railway. The progress on hardware from one coast needs to be met by progress on algorithms coming from the other.
The push for quantum advantage on early devices
The headline grabbing success of Google’s Sycamore chip at the end of 2019 put it in pole position at the beginning of 2020 for tests of what might be possible with early NISQ devices. Google used its Quantum Summer Symposium as something of a ‘coming out party’ for its quantum computing service. The results on display were impressive but mixed in their implications.
The initial phase of Google work with Sycamore focussed on selected problems being investigated by the core team. This included experiments on Sycamore with a series of commonly mooted NISQ approaches.
VQE was used to replicate Hartree-Fock quantum chemistry calculations for hydrogen chains. Deliberately targeting something that is conventionally calculable may seem odd, but this provides a good way to investigate and advance what the device is capable of. Good performance was achieved out to an H12 chain. This is state of the art, but still far from offering quantum advantage.
Product formulas (Trotterisation) were used to simulate an 8-site 1D Fermi-Hubbard model (popular in materials science). It was striking that Google were able to successfully operate their algorithm out to a circuit depth of almost 500. This is far deeper than many had expected with today’s generation of devices (their own quantum supremacy experiment used a circuit depth of only 20!).
QAOA has been demonstrated for sample problems including 3-regular MaxCut and the fully connected Sherrington-Kirkpatrick model. Though a state of the art demonstration, performance suffers markedly when the problem does not naturally map to the native connectivity of the quantum device.
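The objective QAOA is asked to optimise is itself purely classical. As an illustration (the K4 graph below is our own toy example, not Google’s benchmark), here is the MaxCut cost function and the brute-force baseline a NISQ demonstration is measured against:

```python
from itertools import product

# MaxCut cost that QAOA optimises: count edges whose endpoints fall on
# opposite sides of a 0/1 partition. The graph below is an illustrative
# 3-regular graph on 4 vertices (the complete graph K4).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def cut_value(assignment):
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Exhaustive classical solve over all 2^4 partitions -- the baseline a
# NISQ QAOA run must approximate or beat.
best = max(product((0, 1), repeat=4), key=cut_value)
print(best, cut_value(best))
```

On hardware, QAOA prepares a parameterised quantum state whose measured bitstrings are scored with exactly this cost; the connectivity problem noted above arises when edges of the graph have no matching physical coupling between qubits.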
In each case Google has systematically applied a layered series of error mitigation techniques (post-selection, purification and variational feedback) to improve the accuracy of the results achievable on the NISQ device. In addition, they emphasise the key role played by a new technique they call Floquet calibration. This allows 2Q gates to be rapidly re-calibrated to compensate for drifts and fluctuations.
Google is notably cautious about the prospects for NISQ quantum advantage with these techniques. Speaking at Quantum 2020 Ryan Babbush (Google) summed up “breakthroughs are still likely required to scale to the classically intractable regime”. While this is less than the popular hype may have wished for, Fact Based Insight see this as very strong progress against what might reasonably have been expected.
Random numbers and sampling
A number of players are now investigating random numbers as an early quantum computing service offering.
Certifiable random numbers – Google has reported progress on offering random numbers via a challenge and response protocol. The prospect that the quantum provenance of such numbers can be checked remotely and publicly offers a unique differentiation compared to other QRNG products. The main limitation on the current protocol is that the remote verification step is computationally expensive. Nevertheless this may prove to be the first niche commercial application of Google’s current generation of devices.
Verifiable random numbers – In separate work, CQC have demonstrated how they can use existing quantum devices to implement a simpler cloud based QRNG service. By implementing a Bell test, the user can verify that the random numbers produced are truly from a quantum source. This beta QRNG service is already available on the IBM Quantum network. However in this protocol the user must still trust the cloud service provider, and so this application currently competes with low cost private QRNG solutions.
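The statistic behind such a Bell test can be sketched without any quantum hardware. Using the idealised, noise-free singlet-state correlation E(x, y) = −cos(x − y) predicted by quantum mechanics, the CHSH combination exceeds the bound of 2 that any classical source must obey:

```python
import math

# Correlation predicted by quantum mechanics for measurements at angles
# x and y on a shared singlet pair: E(x, y) = -cos(x - y).
def E(x, y):
    return -math.cos(x - y)

# CHSH combination at the standard optimal angles. Any local hidden
# variable (classical) source obeys |S| <= 2; an ideal quantum source
# reaches 2*sqrt(2) ~ 2.83.
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))
```

A user of the cloud QRNG service would estimate these correlations from the returned measurement records; any |S| above 2 is incompatible with a purely classical source.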
GBS – China’s headline grabbing quantum supremacy demonstration with Jiuzhang has again drawn attention to Gaussian boson sampling as a candidate algorithm for early devices. Again random numbers are cited as a potential application. Whether this approach can offer advantage in solving graph isomorphism related problems (an important class in computer science) is a matter of debate in the expert community.
Umesh Vazirani (Berkeley) is working to apply techniques from PQC to define ‘proofs of quantumness’ using trapdoors that make it easier for conventional computers to check the work of a quantum computer. In the end this could make quantum supremacy experiments less controversial and random numbers easier to certify.
An important potential use case, with applications from financial services to logistics and manufacturing, is optimisation. Early pioneers have increasingly been able to compare the performance of approaches such as QUBO (quantum annealing) and QAOA (gate model), together with quantum-inspired approaches running on conventional hardware.
BBVA have completed a series of discovery phase projects looking at financial sector applications, including with startups Multiverse Computing and Zapata. BBVA’s collaboration with Multiverse is a notable example of the large real portfolios which are now being used in such evaluations across a range of early hardware, including NISQ, quantum annealing and quantum-inspired approaches. BBVA’s results show just how well quantum annealing and quantum-inspired approaches can do on these problems.
In Europe, the automotive sector has been notably active with players such as Volkswagen, BMW and component supplier Bosch discussing their early experiences this year. Again optimisation is seen as a key opportunity both in logistics and manufacturing operations.
Volkswagen has conducted work looking at the very real world problem of minimising paint shop colour changes on its production lines. Again quantum annealing proves how effective it can be in such applications.
Proving that there is a genuine ‘quantum’ advantage remains elusive in this work. But client interest is certainly being sustained.
Trapped ions make their mark
With new devices launched by IonQ and Honeywell, work has also been underway to explore the possibilities they offer. Early results have highlighted the advantages of high fidelity gates and high qubit connectivity.
A notable number of early proof of principle demonstrations have been conducted in the area of quantum machine learning.
- QC Ware has demonstrated their nearest centroid algorithm and Forge Data Loader on IonQ’s 11Q device;
- Zapata has demonstrated a quantum associative adversarial network technique to generate ‘handwritten’ digits on IonQ 11Q and Honeywell HS0 systems;
- Rahko has demonstrated an interesting combination of VQE and QML techniques to find the first excited state of 2Q and 4Q molecules on Honeywell HS0.
Importantly, the ability to proceed by trial and error, as the conventional machine learning sector has done, is what pioneers in this area have been seeking.
Tugging the cat’s tail
A specific feature of Honeywell’s early devices is their support for mid-circuit measurement. In the medium term all devices will need this as a key step in implementing error correction. However its availability for the first time in a programmable device has allowed a number of particularly interesting demonstrations.
Search – Grover’s search algorithm is a prominent feature in introductions to quantum computing, but it is often neglected in work on early quantum processors because of the apparent need for very large scale FTQC and QRAM technology to realise it on problems of interesting size. However quantum startup BEIT has successfully demonstrated their optimised version of Grover’s algorithm using mid-circuit measurements for a record 6Q search on Honeywell’s H0 trapped ion device. The significance of this work may not be in its direct application to conventional search problems, but that it could prove a very flexible primitive for use in combination with other quantum algorithms.
Quantum Assertions – CQC has demonstrated how mid-circuit measurements can be successfully used to perform error mitigation by enforcing physical symmetries when using VQE to find the ground state energy of H3. This illustrates a trade-off in nominal circuit depth that we can expect to see exploited in NISQ optimising compilers.
Measurement Based Quantum Computing – In an eye-catching demonstration, the ability of Honeywell’s devices to perform mid-circuit measurements has been exploited by CQC to implement MBQC. This is an alternative, but equivalent, method to the circuit model used in most current devices. In a demonstration the Honeywell device was able to successfully operate out to 172 CNOT gates and 105 measurements. As Ross Duncan (CQC) explains, the ZX-calculus can be used to move between circuit based and measurement based methods. This opens up the possibility of mixed model compilation that deliberately moves between modes during a computation to optimise the use of resources or to suppress errors.
Mid-circuit measurement may sound like a technical feature, but it’s worth recalling exactly what is going on. The user of the Honeywell device is doing a controlled partial collapse of the entangled qubit wavefunction. Anyone who tells you they really understand quantum measurement hasn’t read any of the most famous quotes about quantum mechanics. What is being done is at the cutting edge of the complexity frontier.
Where might we see the first big wins?
A growing series of events saw quantum experts and industry early adopters come together in 2020 to network and debate.
Leading events such as IQT New York, IQT Europe, Quantum.Tech Europe, European Quantum Week, Q2B and others all went online in 2020 due to the COVID-19 crisis. All still offered a great round-up from across the quantum sector. For those interested specifically in quantum computing, Q2B again retained its special place as a highlight in the end of year calendar.
Simulation for quantum chemistry and material science?
Bob Sutor (IBM) sees multiple cycles of adoption coming over the decade, each spread across key impact areas: the simulation of physical systems (especially quantum chemistry), optimisation (led by the financial services sector) and then linear algebra (for machine learning and AI). IBM like to emphasise some of the big ticket applications where we can expect quantum computers to help, though we still don’t know what scale of machine will be required to achieve quantum advantage:
- Improved nitrogen-fixation process for creating ammonia-based fertilizer;
- New catalysts to make CO2 conversion into hydrocarbons more efficient and selective;
- New electrolytes for lithium-air batteries able to sustain thousands of recharging cycles for electric aircraft;
- New classes of antibiotics to counter the emergence of multidrug-resistant bacterial strains.
Joseph Emerson (Quantum Benchmark) singles out material science (and the industries that work with advanced materials) as the first area likely to see benefits. Emerson is a leading expert on characterising quantum errors and points out that these materials are inherently quantum systems, so there is often an opportunity to render the noise in our quantum devices into a benign part of the simulation.
As a rule of thumb, you need one qubit for each electron orbital you want to simulate. Interesting applications might therefore seem within reach with 100Q+ devices. However, implementing calculations with the required circuit depth needs better qubits, better error mitigation or the use of error correction and logical qubits. Heuristic variational techniques are a promising approach, but there is no proof they will offer quantum advantage.
Optimisation for Financial Services, Logistics and Manufacturing?
Around the world, 20+ banks already have some initiative in quantum computing. Goldman Sachs and JP Morgan Chase are established partners in the IBM Quantum Network. The financial services industry perspective is distinct. Almost uniquely, it finds it easy to go from a 1% algorithmic advantage to significant commercial advantage. However, its computational problems aren’t typically ‘natively quantum’ and practical advantages could easily be eaten-up by implementation trade-offs.
In 2020 players such as Goldman Sachs, JP Morgan Chase, Barclays, BBVA and others have been talking about the early work they have been undertaking. Many candidates for early value have been considered; two commonly identified are financial portfolio optimisation and Monte Carlo techniques for pricing financial derivatives.
Often the focus is on understanding the architecture-specific benchmarks that each qubit technology needs to hit to deliver likely advantage in the bank’s particular use cases.
Algorithms matter too. QC Ware point to the efficient shallow Monte Carlo algorithm they have developed as potentially opening up financial sector applications on the type of device we may see in 5-10 years rather than 10-20.
Europe is notable for work underway with special purpose quantum devices (quantum simulators). Here early engagement with end-users is doubly important, a goal that the QT Flagship helps to support.
EDF is working with Atos as part of the PASQuanS project to look at optimising smart charging of electric vehicles: minimising charging time and the number of charging stations, with implications also for energy efficiency.
There is still no proof of a specifically quantum advantage in these optimisation trials. Approaches such as quantum annealing and QAOA must still contend with competition from quantum-inspired algorithms running on conventional hardware. However in this early phase of the quantum journey, this work is certainly helping clients target value they hadn’t previously had the tools to address.
Machine learning?
AI and machine learning are thought by many to be key opportunity areas for quantum computing. However techniques have often required the assumed availability and efficient loading of QRAM. Such technology remains many years away. For heuristic techniques offering shorter term promise, there has been no proof that quantum advantage is obtainable.
Recent theoretical work involving IBM appears to offer for the first time a proof that, even with access only to classical data, we can have exponential speedups in certain supervised machine learning applications. Though the proof assumes FTQC, the approach uses a potentially NISQ-friendly variational algorithm and does not require QRAM.
QC Ware point to the Forge Data Loader as another alternative strategy for efficiently loading classical data without the need for full spec QRAM.
Matthias Troyer (Microsoft) makes the general point that to avoid the ‘input bottleneck’ we should avoid ‘big data’, but instead seek ‘small data, big compute’ problems. In an interesting example of this approach, CQC have set up a team to investigate quantum natural language processing.
QNLP brings together grammar algebra and vector-space natural language processing, using a quantum model to provide the exponential computational space required. Consider that conventional technology allows us to have dictionaries for words but not for sentences.
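A rough way to see the ‘exponential computational space’ claim: in compositional models, an n-word sentence meaning lives (loosely) in a tensor product of word spaces. The embedding dimension below is an illustrative assumption, not a figure from CQC’s work:

```python
import math

# Compositional sentence meanings live (roughly) in the tensor product
# of the word spaces, so the dimension grows as d**n. Even short
# sentences outgrow any classical "sentence dictionary", while a
# quantum register needs only ~n*log2(d) qubits to index that space.
d = 300  # assumed word-embedding dimension (illustrative)
for n in (2, 5, 10):
    classical_dim = d ** n
    qubits_needed = math.ceil(n * math.log2(d))
    print(n, classical_dim, qubits_needed)
```

This is why the hope is that quantum models could natively represent the compositional structure that conventional vector-space NLP can only approximate.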
Hartmut Neven (Google) points to an additional unique, but subtle, resource for quantum machine learning. Early devices can prepare uniquely quantum probability distributions. This may not sound immediately relevant to mainstream machine learning; however, when we consider science applications and the challenge of optimal readout across emerging quantum devices and sensors, its relevance could be much more significant.
Taking it into your own hands
Airbus is a striking example of a large quantum end-user seeking early mover advantage. The Airbus Quantum Computing Challenge defined five key ‘flight physics’ problems. From over 1,000 individuals, 36 full proposals were narrowed down by a star-studded expert jury to a shortlist of five finalists with a notable organisational and geographic spread.
Expert Jury – Harry Buhrman (QuSoft), Wim Van Dam (QC Ware, Univ. California), Joe Fitzsimons (Horizon Quantum Computing), Elham Kashefi (Sorbonne & Univ. Edinburgh), Iordanis Kerenidis (QC Ware, CNRS), Michele Mosca (IQC), Troy Lee (Univ. Technology Sydney), Jingbo Wang (Univ. Western Australia)
Finalists – Capgemini (Netherlands/France), Machine Learning Reply (Italy), Niels Backfisch (Germany), Origin Quantum (China) and Univ. of Montevideo (Uruguay).
Winner – Machine Learning Reply are a leading systems integration and digital services consultancy. Their project seeks to optimise aircraft loading configurations under increasing degrees of flight constraints such as payload weight, centre of gravity and fuselage shear limits. The approach is an interesting hybrid. By implementing QUBO on both a D-Wave quantum annealer and on a conventional solver the results can be validated by mathematical modelling (an important consideration in safety conscious aerospace applications).
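The QUBO form itself is simple enough to sketch. The weights and target below are hypothetical stand-ins for the real loading constraints; the point is that exactly the same binary objective can be handed to an annealer or solved exhaustively on conventional hardware for validation:

```python
from itertools import product

# QUBO form used on both the annealer and a classical solver: minimise
# a quadratic function of binary variables x_i. Toy stand-in for the
# loading problem: choose items (illustrative weights) so the total is
# as close as possible to a target, i.e. minimise (sum(w_i*x_i) - T)^2,
# which expands into linear and quadratic binary terms.
weights = [4, 7, 5, 3]   # hypothetical payload weights
target = 12

def cost(x):
    return (sum(w * xi for w, xi in zip(weights, x)) - target) ** 2

# Exhaustive classical solve over 2^4 bitstrings -- the same objective
# an annealer samples, and the cross-check used to validate its output.
best = min(product((0, 1), repeat=len(weights)), key=cost)
print(best, cost(best))
```

At toy sizes the classical cross-check is trivial; the validation question in real aerospace applications is whether the annealer keeps returning optimal or near-optimal configurations as the number of binary variables grows beyond exhaustive reach.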
The real benefits of quantum computing may be many years off. But enlightened end-users need to build their understanding of how the opportunities it creates will evolve for them over time. Airbus has now established a high profile and strong network of connections across the quantum sector. Perhaps equally importantly it has built a strong understanding of the long term quantum opportunity across the senior Airbus team (as was clearly on display at Q2B).
What would you do with a million qubits?
In the past the field has often discussed in general terms the wide possibilities offered by large scale FTQC. However such projected future machines would have many millions if not billions of physical qubits, and access to resources such as QRAM and rapid interconnects. As an increasing number of hardware groups have now described roadmaps towards a ‘1 million qubit’ error corrected quantum computer, the question naturally arises: what can we do specifically with such ‘modest’ FTQC machines?
Google, Microsoft and other expert groups have been active in investigating in unprecedented detail what a device with specs in this range might achieve.
Specs on the box – Typically this type of work assumes physical 2Q gate fidelities can be improved to 99.9% (or sometimes 99.99%); error correction is performed using the surface code with a cycle time of about 1 microsecond; Toffoli gates are added to complete the gate set, requiring a significant part of the machine to act as an optimised ‘magic state factory’ (often this last requirement proves a bottleneck).
- Prime factoring – 20 million Q and a runtime of 8 hours;
- Simulating FeMoCo – 4 million Q and a runtime of 4 days;
- Simulating catalysts for CO2 – c. 4 million Q (4000 logical Q) and a runtime of weeks;
- Simulating superconductors – 1 million Q and a runtime of hours;
- Simulating materials in the Fermi-Hubbard model – 200,000-700,000 Q and a runtime of days.
The above examples all assume specs we might imagine realistic in a future superconducting qubit based device. Trapped ion systems would likely suffer from significantly slower gate times, but would aim to make up for this with higher fidelity qubits and better connectivity to reduce the error correction overhead.
Quadratic is not enough?
As we start to understand what such future specs might imply a serious challenge has emerged. Matthias Troyer spelt out the problem in clear terms at Q2B.
Ten orders of magnitude – with current architectures, quantum computers will have much slower gate speeds than conventional computers. Worse, the need to apply error correction creates a severe additional overhead. The overall disadvantage in terms of raw logical operations may be a factor of 10¹⁰. Quantum computers aim to catch up and beat their conventional counterparts by using the unique speedups offered by quantum algorithms. But to do this they need to target sufficiently large problems where conventional machines start to struggle.
Where an exponential speedup is available there is no real difficulty. However in cases where the quantum algorithm offers only a more modest quadratic speedup there is a catch. If we target problems that are big enough for the quantum computer to gain an advantage, then Troyer points out that the quantum computer would require an unusably long runtime. The quantum computer could deliver an advantage, but only on problems that take two weeks plus to run.
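Troyer’s arithmetic can be reproduced in a few lines. The operation rates below are illustrative assumptions chosen to give the ten-orders-of-magnitude gap cited above; with them, break-even for a quadratic speedup already implies a runtime of roughly two weeks:

```python
# Troyer's crossover argument in numbers. Assume (illustratively) a
# classical machine at 1e14 ops/s and an error-corrected quantum
# machine at 1e4 logical ops/s -- a 1e10 gap. For a quadratic speedup
# the quantum machine does sqrt(N) steps where the classical one does
# N, so break-even requires sqrt(N) to equal the rate ratio.
classical_rate = 1e14   # assumed classical ops/sec
quantum_rate = 1e4      # assumed logical quantum ops/sec

# Break-even: N / classical_rate == sqrt(N) / quantum_rate
sqrt_n = classical_rate / quantum_rate        # sqrt(N) = 1e10
runtime_seconds = sqrt_n / quantum_rate       # quantum steps / rate
print(runtime_seconds / 86400, "days at break-even")
```

Any problem small enough to finish sooner is solved faster classically; any problem where the quantum machine wins takes at least this long, which is the heart of the ‘quadratic is not enough’ concern.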
Google and others have made this same point in technical papers. Quadratic speedups don’t look sufficient to deliver usable quantum advantage on FTQC machines being built on current front running architectures.
Better specs – hardware players will seek to improve the odds, with better qubits, faster gates, better error mitigation, better error correction codes or preferably all at once.
This is a big deal. Algorithms such as Shor’s (cryptanalysis) and phase estimation (quantum chemistry) offer exponential speedups. But Grover’s algorithm only offers a quadratic speedup. Many of the most general quantum computing use cases implicitly depend on Grover’s algorithm and the flexible applications it allows.
Seeking better speedups
FTQC can perform remarkable feats but its power is still limited. Most experts accept the Aaronson-Ambainis conjecture that strong (super-polynomial and exponential) quantum speedups are only possible for problems with ‘structure’. Problems with too much symmetry are restricted to at best modest (quadratic or higher polynomial) speedups.
Shor’s algorithm uses a quantum Fourier transform to exploit hidden structure in the integer factorisation problem, n = p × q. Very loosely – if you think of p as the length of a race track, and q as the number of laps, you’ll have an intuition about how integer factorisation has a hidden periodicity and why quantum interference can help unravel it.
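The periodicity can be made concrete with the classical half of Shor’s algorithm. In the sketch below the period search is brute-forced for a toy n; in the real algorithm that step (and only that step) is done with the quantum Fourier transform:

```python
from math import gcd

# The "hidden periodicity" in n = p*q, made concrete. Shor's algorithm
# finds the period r of f(x) = a^x mod n; the classical post-processing
# below then extracts the factors. Only the period search is done
# quantumly (via the QFT) -- here we brute-force it for a toy n.
def find_period(a, n):
    # assumes gcd(a, n) == 1
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

n, a = 15, 7          # toy example; a chosen coprime to n
r = find_period(a, n) # 7^x mod 15 cycles 7, 4, 13, 1 -> r = 4
p = gcd(a ** (r // 2) - 1, n)
q = gcd(a ** (r // 2) + 1, n)
print(r, p, q)  # -> 4 3 5
```

Classically the period search takes time exponential in the bit-length of n; the QFT finds it efficiently, which is exactly the ‘structure’ that an unstructured problem like Grover search lacks.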
Theoretical work in 2020 appears to further tighten-up the characterisation of the structure required to allow strong speedups. It turns out that problems with a wide variety of graph symmetries are limited to at best polynomial speedups, but conversely this seems to be the only limiting criterion in this model.
Other theoretical work has resolved a long outstanding question. It’s long been known that adiabatic quantum computation has in principle the same computing power as the circuit based model. However it wasn’t known if ‘stochastic Hamiltonians’ (ones with no sign problem) could support exponential speedups. Work in 2020 has shown that in certain cases they can. This is a step to removing one of the obstacles often cited as a roadblock on the future technology path for quantum annealers.
Google and Troyer’s work points to the tantalising prospect that algorithms offering polynomial speedups just a little better than quadratic (for example cubic or quartic) could make it significantly easier to achieve useful quantum advantage. Unfortunately even Scott Aaronson (Univ. Texas at Austin) can’t currently name any really useful examples of algorithms in this class.
Speaking at the Google Quantum Summer Symposium, Hartmut Neven summed up: “There is still a scarcity of algorithms. What would we do with [such] a one million qubit machine?” This is a call-to-arms for those working at both ends of the quantum railroad.
To watch in 2021
- Proof of Quantumness – Will we see easier to check quantum supremacy experiments based on PQC trapdoors?
- Certifiable Random Numbers – Will Google use one of its new quantum devices to launch a remotely certifiable public randomness service?
- Google Early Access Program – Google are now extending access to their platform to external groups. The first wave is dominated by US academic institutions, with the notable addition of physical simulation specialist Phasecraft. Watch out for results, and for others selected to take part.
- IBM Quantum Partners – Watch out for results from the blue-chip partners including Daimler, ExxonMobil, JP Morgan Chase, Samsung, Goldman Sachs, Accenture, JSR and Boeing on the expanding IBM Quantum Cloud.
- D-Wave – In addition to new hardware with 5000 annealing qubits and 15-way connectivity, D-Wave have now also launched expanded hybrid solver support. Watch for impacts in the size of problem that can be addressed. Will one of its blue-chip clients put a quantum application into ongoing routine business use?
- New networks – Trapped ion processors with great specs are opening up new opportunities to explore NISQ algorithms tweaked for high-fidelity, high-connectivity, and mid-circuit measurement. Watch for results from new partnering networks around AWS Braket and Azure Quantum.
- NEASQC – This new €4.7m QT Flagship project will target NISQ use cases with close involvement from blue-chip clients such as AstraZeneca, EDF, HSBC, Tilde, Total, and quantum software players Atos and HQS. Watch out as details of the work emerge.
- Drug discovery challenge? – Could a big pharma player copy the Airbus approach to accelerate the build-out of their own quantum roadmap?
- Early applications in science – Watch out for applications in basic scientific research becoming an increasingly prominent focus area for early applications, especially for quantum machine learning.
- NISQ error mitigation – innovation here may be the most important driver of whether early commercial applications can be realised with NISQ devices. Watch out for a toolbox of standard layered techniques developing.
- Different FTQC specs – Different qubit technologies offer alternative trade-offs for FTQC. Watch out for work providing detailed estimates of the resources they require for high profile applications. Watch out for the interaction of slower gate speeds with reduced error correction overheads.
- Computational complexity frontier – Always keep in mind that the formal basis of this field rests on some big conjectures. Anything that falls out here could have big implications for the sector: Aaronson-Ambainis conjecture; BQP vs NP-complete; BQP vs NP∩co-NP; or even P vs NP itself.
- Experimental complexity frontier – As mathematicians and computer scientists lead the charge on quantum algorithms, each should be assigned a physicist to continually whisper in their ear ‘every theory we’ve ever invented has ultimately turned out to be wrong’. Quantum mechanics will be no different. Investors will hope each new processor works. True physicists will hope it doesn’t.