In contrast to the hype, the early applications of quantum computers look modest. But they do exist. The talented teams developing quantum algorithms still have a fight on their hands to bring forward the date of true broad quantum advantage. What might be achieved and when?
Quantum computers seek advantage from the unique quantum algorithms that they are able to run. In some cases these promise remarkable (exponential) speedups over conventional approaches; in other cases, more limited (quadratic) speedups. Sometimes, unique properties such as quantum randomness offer a different kind of advantage.
Quantum information processing has been studied actively for over 25 years. The progress made, even in the absence of real devices, is remarkable. (For a tutorial style introduction see ).
However, the longest studied algorithms typically require full scale FTQC machines which are still beyond the horizon. In 2021 much work again focussed on what might be accomplished with algorithms for near term NISQ devices, or on what might be achieved with early FTQC devices.
Rapidly increasing interest surrounding quantum computing has seen businesses ask key questions:
- 1-2 years – Where will early quantum devices first enter routine business operational use?
Popular hype can make it sound as though this has already happened. Fact Based Insight still isn’t aware of an example of routine production use in a commercial application.
- 3-5 years – What significant business applications might be addressed in the NISQ era?
While it’s reasonable to be optimistic about specific use cases, the goal of achieving broad quantum advantage across a range of business problems still has many formidable obstacles to overcome.
- 7-10 years – What applications might be addressed by early FTQC devices?
With the current generation of leading architectures, the answer is more modest than we might hope, but important applications are coming clearly into view.
- Long term – Is the hype around quantum computing justified?
Fact Based Insight’s view is that the long term prognosis remains exceptionally bright.
Are we there yet?
Quantum on the cloud is now nothing new. Access via IBM Quantum has been available since 2016. Demonstrations of ‘quantum supremacy’ on artificial problems have followed from Google in 2019 and USTC in 2020 and 2021.
A growing series of events have seen quantum pundits and industry early adopters come together to network and debate. The tone of discussion often seems to suggest that quantum applications are now rushing into deployment. The truth is much more modest.
D-Wave has implemented a notable number of proof of concept applications, including individual real-world demonstrations such as optimising bus routing in Lisbon in 2019. Many customers would certainly say they have already benefitted commercially by developing their teams and by taking quantum-inspired learnings into their business. However routine production use of a quantum application by a blue-chip end-user has remained elusive.
Speaking at Qubits 2021, Catherine McGeoch (D-Wave Senior Scientist) commented “D-Wave will not claim quantum advantage until the quantum processor is routinely fast enough to overcome overhead costs. We’re not there yet, but we know how to get there”.
Quantum annealing could see an application get into production use soon; however, it also faces continuing competition from conventional devices running efficient implementations of quantum-inspired algorithms.
Google think they may be close to an application with ‘quantum scrambling’. The dynamics of how local disturbances propagate in a quantum system are not well understood. A better understanding here could have applications in theoretical physics (and perhaps, later, knock-on benefits more widely). These effects can be investigated using an experimental set-up similar to the random circuits framework pioneered by Google.
Phasecraft have published a proposal for how to simulate a kagome magnet, showing that a system with 50 qubits might be solved by quantum circuits with under 200 2Q gates. This seems on the verge of realisation with current generation NISQ devices. Understanding such spin-liquid ground states doesn’t have an immediate commercial application, but it is a well-known and significant area of interest in theoretical physics.
The Google and Phasecraft proposals are notable because they employ flexible gate-model quantum computing. However, if we allow applications in theoretical physics, then analogue quantum simulators based on neutral atoms have arguably been doing this for some years.
Cambridge Quantum may be on the verge of achieving a true commercial deployment with their Quantum Origin cybersecurity SaaS offering (formerly known as IronBridge). This provides random numbers for cryptographic keys (and for other applications requiring true unbiased randomness). By implementing strong real-time entropy verification (via a Bell-test) it delivers a feature not currently matched by any other QRNG, and one not possible on a classical device.
In the medium-term this type of service will likely employ low-cost specialised devices. However, Quantum Origin is available commercially now (Fact Based Insight understands the pricing is surprisingly modest). Whoever buys and deploys it first will be able to say they took the first quantum application into regular production!
Cambridge Quantum has recently merged with Honeywell to form Quantinuum. Quantum Origin will initially be served from H-series Honeywell quantum computers.
Random numbers remain a strong early application opportunity. The important point is that algorithms can add value to raw randomness. In cryptography we generally want to minimise who we have to trust. Quantum Origin offers strong entropy verification in software. This means I no longer need to trust the manufacturer of the device, though I do still need to trust the operator. We should not be surprised to hear more and more about quantum random numbers.
Publicly certifiable random numbers – a further goal for services of this kind is entropy verification that can be done publicly and remotely. We know how to do this with FTQC combined with techniques from PQC. However, implementation on NISQ devices currently faces the obstacle that the best protocol we have requires a verification check that is itself a mathematically hard problem, making it impractical. Progress here would open up an interesting new range of applications, including in proof-of-stake blockchain protocols.
Looking for broad quantum advantage
Quantum computing majors, academic groups and a growing series of startups have been working hard to bring forward the date when we will see wider advantage from quantum computing. Variational quantum algorithms, quantum annealing, applications in Monte Carlo techniques and quantum machine learning have all been active areas of focus (for an overview of NISQ algorithms see ).
Variational quantum algorithms
VQAs, including VQE, VQS, QAOA, QCBM, QNN and their increasingly sophisticated refinements, have matured to emerge as the body of related techniques best placed to achieve quantum advantage using gate-model NISQ computers (for an overview see ).
Hybrid quantum-classical loop – practical implementations are seeking to balance the competing demands of the iterative processing loop: selecting a good model for the initial trial input (ansatz); selecting a cost-function that is quantum-hardware efficient but also suitably problem specific; mitigating noise and errors; efficiently measuring outputs; taking as much load as possible into classical post-processing; running an efficient classical optimisation procedure to drive each iteration.
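The hybrid loop described above can be sketched in a few lines. This is a deliberately minimal illustration: the ‘quantum’ evaluation is a one-parameter analytic stand-in (for the state Ry(θ)|0⟩ the observable ⟨Z⟩ is exactly cos θ) rather than a real device call, and the classical optimiser is plain gradient descent driven by the parameter-shift rule:

```python
import math

def expectation(theta):
    """Stand-in for the quantum evaluation: for the state Ry(theta)|0>,
    the observable <Z> is exactly cos(theta). On hardware this value
    would be estimated from repeated shot measurements."""
    return math.cos(theta)

def gradient(theta):
    # Parameter-shift rule: an exact gradient from two extra circuit runs.
    return 0.5 * (expectation(theta + math.pi / 2) - expectation(theta - math.pi / 2))

def hybrid_loop(theta=0.3, lr=0.4, steps=100):
    # The classical optimiser (plain gradient descent here) drives repeated
    # quantum evaluations, iterating towards the minimum-energy parameter.
    for _ in range(steps):
        theta -= lr * gradient(theta)
    return theta, expectation(theta)

theta, energy = hybrid_loop()   # converges towards theta = pi, energy = -1
```

In a real VQA the single parameter becomes a vector, the evaluation becomes a noisy shot-based estimate, and the choice of ansatz and cost function dominates performance, which is exactly where the engineering trade-offs above arise.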
Variational headwinds
Work has increasingly focussed on identifying the practical resources required to run applications at sufficient scale and accuracy for real-world problems of interest. Two challenges have emerged as significant obstacles: how to efficiently measure quantum outputs during each cycle, and how to drive the variational optimisation process.
VQE is a family of techniques that offer promise for performing calculations in, among other things, computational chemistry. Recent work has identified a key bottleneck in this process: to achieve the levels of accuracy typically required, repeated output measurements have to be made on each trial circuit. The high number of repetitions threatens to push run-times beyond usability. While this issue has surfaced first for VQE, it has the potential to impact other variational algorithms as we better understand their need for accuracy.
Zapata completed work estimating runtimes for a series of short chain hydrocarbons (from methane to propyne). With a conventional VQE approach they estimate runtimes of 2 to 71 days for each required iteration of the optimisation. Multiple iterations would be required, and the times would likely be significantly longer for other molecules of interest. This makes VQE look infeasible without some way to reduce the burden of measurement.
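The scale of the measurement burden follows from standard sampling statistics: resolving an observable to precision ε needs roughly (σ/ε)² shots per measured term. A rough sketch with illustrative numbers (assumptions for illustration, not taken from a specific molecule):

```python
# Shot-count rule of thumb: estimating an observable whose single-shot
# spread is sigma to precision epsilon needs ~ (sigma / epsilon)^2 shots.
def shots_per_term(sigma, epsilon):
    return (sigma / epsilon) ** 2

# Illustrative values: chemical accuracy ~ 1.6e-3 Hartree, and a per-term
# spread of order 1 Hartree.
shots = shots_per_term(1.0, 1.6e-3)   # ~390,000 shots for a single term
```

Multiply that per-term figure by the hundreds or thousands of Pauli terms in a molecular Hamiltonian, and by every iteration of the optimisation loop, and the multi-day runtimes above follow.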
Zapata have introduced the technique of robust amplitude estimation as a first step towards closing this gap. Amplitude estimation is a key technique for estimating observables on a quantum state, a critical component of VQE. Robust amplitude estimation helps us use this technique on a noisy quantum processor.
Qu & Co, a Dutch startup, has partnered with conventional computational chemistry software specialist Schrödinger. Their work benefits from strong insight into both the potential of quantum computing and the strengths and weaknesses of current classical approaches. It questions whether a NISQ-era implementation of VQE can hope to achieve chemical accuracy (typically viewed as 1 kcal/mol or 1.6 mHa). A general observation is that many quantum demonstrations to date have confused accuracy with precision: they have calculated precise results but have simulated too few orbitals for true accuracy.
Zapata have developed a technique for ‘classically boosting’ VQE by allowing some orbitals to be calculated with conventional approximations, while only those with strong quantum correlations are treated on the quantum processor.
Another general challenge for variational algorithms is to ensure that the required classical optimisation can be completed efficiently. In many cases this is analogous to finding our way down a sloping landscape to find the lowest point (the optimal solution), and we risk finding our way into a hollow (a local minimum) that isn’t the best overall solution. If we find ourselves in a very flat part of the landscape (a barren plateau) then we may not easily know which way to move next. Device noise can also threaten to turn a subtle slope into barren terrain.
The threat is that the classical costs of optimization could outweigh the quantum speedup. That the worst case is hard shouldn’t concern us unduly. Particularly where we are simulating a physical quantum system, we can use physical or chemical insight to pick a trial solution and approximate Hamiltonian. We can have an intuition that our quantum system will help us find nature’s solution.
However, our intuition may be less strong when we are dealing with a general optimisation problem, as might be tackled by approaches such as QAOA. Here we may need more specific reasons to think that our choice of trial solution and cost function give us a streamlined optimisation path.
D-Wave have been quick to point out the support that this line of reasoning gives to QUBO and quantum annealing as a preferred strategy for optimisation problems in the NISQ era.
Entanglement induced barren plateaus – visible and hidden layers are the basic building blocks of conventional deep learning. Recent work shows how excess entanglement between visible and hidden units can lead to barren plateaus and make training fail. While this is discouraging, it may also point to ways to avoid this problem.
Zapata have developed an open-source tool, orqviz, to help address the optimisation challenge by aiding the visualisation of the optimisation landscapes.
Several areas are being targeted for NISQ-era quantum advantage. These include, in particular, applications in chemistry and materials science. The underlying quantum nature of these systems helps support our belief that quantum computing can help us probe their properties.
Molecular Structure – VQE is used to calculate the ground state (or excited state) energy of molecules of interest. Rapid innovation in techniques is underway in this area (for an overview see ).
Zapata, Cambridge Quantum, Phasecraft, QC Ware, Rahko, QunaSys and others are all examples of startups that are using spun-out academic talent to drive innovation forward, each offering variations and refinements of VQE.
Cambridge Quantum and Roche have combined the conventional computational chemistry technique of density matrix embedding theory with VQE to examine protein-ligand binding energies. In a first of its kind demonstration, calculations were performed on IBM and Honeywell processors.
Materials Science – Conventional methods can usually only address materials with weak quantum correlations. Regular lattice arrays are an important class of problems, often characterised by what is known as the Fermi-Hubbard model. This includes systems of high practical importance, such as understanding high temperature superconductivity or battery performance.
Phasecraft have proposed an optimised variational algorithm to calculate ground state energies in a 2D Fermi-Hubbard model. A 5×5 instance (larger than can be solved exactly classically) could be tackled on a 50Q device with approximately 325 2Q gates (on a fully connected architecture).
Rahko have pursued the approach of focussing first on how far state-of-the-art approaches, such as dynamical mean field theory, can be pushed on HPC platforms. They later plan to augment these with techniques based on VQE.
Molecular Dynamics – Conventional methods in computational chemistry struggle to deal with the time dynamics of electron excitations. Such dynamics could be relevant to applications in photovoltaics and light emitting diodes. VQS techniques seek to address this area.
Phasecraft’s work on simulating time-dynamics in the Fermi-Hubbard model on a 5×5 grid proposes just 3,209 2Q gates and 259 time-steps. This goes below the conventional circuit model to use native qubit operations and builds error mitigation techniques into the algorithm.
Patents – In an interesting example of what it can mean to patent innovation in quantum algorithms, Phasecraft has patented their efficient approach to mapping (encoding) fermions to qubits for such quantum calculations.
Optimisation – Grover’s search algorithm can in principle be adapted to provide a quadratic speedup for many combinatorial optimisation problems. However, the high resource requirements put it well out of reach for NISQ devices (and as we will see, perhaps also out of reach for current FTQC architectures). QAOA is a VQA that has been investigated by many groups for gate-model based NISQ applications. There is no general proof that it offers an advantage over classical methods, so it’s important to have some intuition as to why a quantum implementation might help the specific problem at hand.
Hard vs easy – recent work has allowed us to more clearly understand the performance of QAOA for hard (e.g. 3-SAT) and easy (e.g. 2-SAT) problems. This could point the way to a better intuition on where it can offer benefits.
IBM have presented work exploring how combinatorial optimisation problems can be mapped onto local quantum Hamiltonians. This enables us to treat their solution as analogous to finding the ground state of the Hamiltonian, using techniques such as VQE or quantum phase estimation.
In the absence of full error correction, research groups typically apply other error mitigation techniques to eke the most out of NISQ processor capabilities. These techniques are often based on control hardware or control software. Algorithm design also has its part to play. Device-specific optimisations will likely be necessary.
Quantum annealing for combinatorial optimisation
A wide variety of business problems can be defined as combinatorial optimisation problems. These range from portfolio optimisation in financial services, to the travelling salesman problem (which generalises many common problems in transport, logistics and manufacturing).
QUBO is the combinatorial optimisation model native to quantum annealing based devices. Here we have an intuition that quantum tunnelling can help avoid the output being trapped in a sub-optimal solution. Recently it has been shown in principle that the architecture used by D-Wave (‘stoquastic’ annealing) can indeed offer a speedup (though the proof does assume a noiseless device and a favourable choice of problem). The academic debate has continued about what expectation we can have for the noisy, short coherence time qubits available in current devices.
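As a concrete illustration of the QUBO model itself, here is a toy instance solved by classical brute force (an annealer targets exactly this objective, but over thousands of variables where enumeration is impossible; the matrix values are invented for illustration):

```python
from itertools import product

def solve_qubo(Q):
    """Brute-force the QUBO minimum of x^T Q x over binary vectors x.
    Only feasible for tiny instances; annealers address the same objective
    at scales where enumeration is hopeless."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in product([0, 1], repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy 3-asset selection: diagonal terms reward picking an asset,
# off-diagonal terms penalise picking correlated pairs together.
Q = [[-2, 3, 0],
     [0, -2, 3],
     [0, 0, -2]]
x, energy = solve_qubo(Q)   # picks assets 0 and 2, avoiding the penalties
```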
Paypal already uses deep learning techniques for fraud detection. However, feature selection for these algorithms is a significant combinatorial problem. Paypal is collaborating with D-Wave to compare QUBO based models and classical models for feature selection.
Multiverse Computing have previously explored financial services portfolio optimisation tasks with BBVA using both QAOA and QA, with the latter performing better. Recently they have extended this work, showing promising results optimising financial portfolios using realistic assets from the S&P 500. This uses D-Wave Hybrid and its Advantage quantum processors.
Digital annealers such as those from Fujitsu, Toshiba, Hitachi and NEC continue to provide quantum-inspired competition for these approaches.
Toshiba has been pursuing quantum-inspired advantage using the simulated bifurcation algorithm. Its SBM is a software package suitable for running on conventional high performance hardware. Its use has been demonstrated for problems of up to 1 billion binary variables.
Work from Microsoft points to a scaling disadvantage that quantum annealing faces versus digital annealers due to the overheads caused by connectivity constraints when embedding QUBO problems in current architectures.
Zapata has developed a quantum enhanced optimization technique that employs generative modelling to seek better solutions to combinatorial optimization problems. The approach has the advantage of not being limited to QUBO problems and can make use of classical and quantum machine learning models. They have first demonstrated it as a quantum-inspired model for S&P 500 portfolio optimization.
Quantum speedups for Monte Carlo techniques
Conventional Monte Carlo techniques find wide application in business and industry. In particular, these techniques find prominent use in the financial services industry for derivatives pricing, credit valuations and risk management. Quantum Monte Carlo techniques promise to speed up these very useful calculations. These depend on the amplitude estimation primitive (for an overview see ).
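The promised speedup is easy to state in terms of sample counts: classical Monte Carlo error falls as 1/√N, while quantum amplitude estimation achieves 1/N. A sketch with an illustrative target precision:

```python
# Error scaling: classical Monte Carlo needs ~ 1/eps^2 samples to reach
# precision eps, while quantum amplitude estimation needs ~ 1/eps queries.
eps = 1e-4                       # illustrative target pricing precision

n_classical = 1 / eps ** 2       # ~100,000,000 classical samples
n_quantum = 1 / eps              # ~10,000 quantum oracle queries
```

The catch, explored below, is that each quantum query runs on slow, error-prone hardware, so the raw query count is far from the whole story.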
QC Ware have reduced the resource requirements necessary to start benefitting from QMC. This reduces circuit depth but increases the number of iterations, a trade-off that decreases the quantum speedup but brings its realisation nearer.
QC Ware have patented their low-circuit-depth amplitude estimation technique, and their efficient data loader technique.
QC Ware have completed a proof of concept demonstration on IonQ’s latest hardware. This used unary coding and 4Q with circuits of up to 92 2Q gates and a circuit depth of 62.
Algorithmic innovation is bringing quantum advantage nearer to realisation. However, substantial obstacles remain. QC Ware have estimated the resource requirements, in terms of 2Q gate fidelity and effective clock speeds, needed for benefits in realistic derivative pricing use cases. These suggest we still need several orders of magnitude further improvement in gate fidelities; in addition, clock speed may be an insurmountable challenge for current trapped ion architectures.
Goldman Sachs and IBM have calculated a threshold for quantum advantage in derivative pricing. This starts from the business insight that it is derivatives with path-dependent sensitivity that will benefit first from quantum methods. They foresee business benefits when devices can offer 8,000 logical qubits and sufficient logical fidelity to support 10¹⁰ logical operations. This will require the use of error correction, and in this initial work the required logical clock rate of 10MHz looked very aggressive compared to what current FTQC architectures are expected to deliver.
In an interesting illustration of how this work is progressing, Goldman Sachs and IBM have presented new techniques to reduce the logical clock rate requirement to perhaps 30kHz (much closer to the 10kHz envisaged by early FTQC architectures).
Quantum machine learning
Conventional machine learning is already finding wide traction across business. Quantum machine learning, perhaps more than any other area of quantum computing, has enjoyed an accelerated hype cycle. To get beneath the hype, we have to focus on why we are expecting quantum to bring an advantage. (For an overview of QML in computational molecular biology see ; for an overview of QML in finance see ).
De-quantisation – progress has not always been smooth. The Recommendation System of Kerenidis and Prakash was initially thought to offer an exponential speedup, before improved classical algorithms reduced this to a (still substantial) polynomial one. This same approach has also affected quantum algorithms for other popular machine learning techniques, including principal component analysis and nearest-centroid clustering.
Speeding up the maths
An obvious way quantum could help would simply be to speed up classical techniques. HHL allows a very general speedup in linear algebra, but only for natively quantum data and outputs. Grover’s algorithm allows a very general quadratic speedup in many unstructured search applications. But can these techniques really be used in practice?
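Before the practicalities, the headline quadratic saving itself is easy to quantify; a sketch of the query counts for finding one marked item among N in unstructured search:

```python
import math

N = 10**6                  # illustrative unstructured search space size

# Expected classical queries to find one marked item among N: ~ N/2.
classical_queries = N / 2

# Grover iterations needed for the same task: ~ (pi/4) * sqrt(N).
grover_queries = (math.pi / 4) * math.sqrt(N)   # ~785 vs 500,000
```

The gap grows with N, but as discussed later, the per-query cost on quantum hardware can swallow this advantage for realistic problem sizes.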
The most persistent difficulty for most attempts to directly speed up conventional machine learning techniques is the requirement for technology to allow data to be loaded efficiently and then queried in superposition by the quantum device. We don’t yet know how to build such QRAM technology.
Google and MIT have recently published work showing explicitly how exponential speedups in training can be achieved for ‘wide and deep’ neural networks. Specifically, this leverages the neural tangent kernel model in deep learning. This work shows how to address three out of the four ‘read the fine print’ objections normally faced by QML. However, we still don’t know how to build QRAM hardware.
QC Ware’s patented data loader protocol is a notable tool that seeks to reduce, if not overcome, the QRAM bottleneck.
QC Ware have performed a proof of concept demonstration of the data loader technique to implement the popular machine learning primitive, nearest centroid classifier, on an 11Q IonQ device.
More space to compute
Rather than ‘longing for a QRAM miracle’, Matthias Troyer (Microsoft) encourages the field to set aside ‘big data’ problems and focus instead on ‘small data, big compute’ problems. Such problems seek to benefit from a large computational working space. This is something that a quantum computer can uniquely provide (due to the large Hilbert space of the qubit system).
Xanadu have pointed out that many of the most promising techniques for near term QML approaches are really best understood as types of what are called kernel methods in conventional machine learning.
IBM claimed a notable milestone in 2021 by providing an example of a learning task that can offer an exponential speedup but only requires classical access to data (though it does require FTQC). This takes as its starting point a family of datasets constructed so they are difficult to classify with conventional algorithms (they are based on the discrete log problem). A quantum calculation is used as the kernel function for a conventional support vector machine.
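The quantum-kernel idea can be illustrated entirely classically: plug a kernel the learner could not compute for itself into an otherwise conventional algorithm. In this sketch the quantum circuit is replaced by an ordinary RBF kernel (purely an illustrative stand-in, not IBM’s construction) inside a kernel perceptron trained on the XOR problem, which no linear classifier can solve:

```python
import math

def rbf_kernel(a, b, gamma=2.0):
    """Classical RBF kernel standing in for the quantum kernel: in the
    quantum version, k(a, b) would be estimated on the processor."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def train_kernel_perceptron(X, y, kernel, epochs=20):
    # Mistake-driven updates on the dual coefficients alpha.
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            f = sum(a * yj * kernel(xj, xi) for a, xj, yj in zip(alpha, X, y))
            if yi * f <= 0:
                alpha[i] += 1.0
    return alpha

def predict(x, X, y, alpha, kernel):
    f = sum(a * yj * kernel(xj, x) for a, xj, yj in zip(alpha, X, y))
    return 1 if f > 0 else -1

# XOR: not linearly separable, but separable in the kernel's feature space.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, -1]
alpha = train_kernel_perceptron(X, y, rbf_kernel)
preds = [predict(x, X, y, alpha, rbf_kernel) for x in X]   # matches y
```

The quantum claim is precisely that for certain datasets (such as IBM’s discrete-log construction) the useful kernel is efficiently computable on a quantum processor but not on any classical one.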
IBM have also published work quantifying the higher dimensionality QNNs can achieve compared to comparable classical neural networks and the role of a hard-to-simulate-classically feature map in influencing trainability.
Pasqal have published a framework specifically tailored to exploit the reconfigurability of neutral atom devices for graph kernel representations.
A prominent class of problems that can be compactly expressed, but require large computation resources to solve numerically, are systems of nonlinear differential equations. Such equations occur in a wide variety of scientific and commercial applications where we need to model complex processes: from structural engineering to aerospace, from chemistry to biology, from finance to epidemiology.
Qu & Co have developed a new technique to tackle nonlinear differential equations on near-term quantum computers: differentiable quantum circuits. This approach trains a QNN to deal with the derivatives using the large available Hilbert space. Qu & Co have also extended this approach to stochastic differential equations. Qu & Co have filed a patent application covering their techniques.
Natural language processing has been an area of interest since the very conception of modern computing, and one where progress has been marked in recent years. It’s a great example of a problem that requires an exponentially growing working space (think of why we have dictionaries for words but not for sentences).
Cambridge Quantum have identified QNLP as a particular area of promise for QML. In 2021 they reported their first experimental results. Data sets of 130 sentences and 105 noun phrases were encoded using a 5Q IBM device.
QNLP makes use of the expanded computational space offered by quantum computers. Our intuition that this will be a productive approach is further strengthened by the striking parallels between the formalism Cambridge Quantum propose and the ZX-calculus representation of quantum mechanics. Perhaps, as Bob Coecke (Cambridge Quantum’s Chief Scientist) claims “language is quantum native”.
Uniquely quantum data
An increasingly important focus has been the intuition that QML should be able to out-perform classical machine learning when there are quantum correlations or quantum interference effects in the data set that need to be addressed. Work in 2021 has started to formalise and structure this intuition, both for learning tasks and specifically for generative models.
Caltech published work in 2021 that puts bounds around the power of different models of machine learning: one where the learning is classically driven but uses the measurement output of quantum systems (e.g. physics experiments, an analogue quantum simulator, or the iterations of a VQA), and one where quantum coherence is maintained during the learning process (a future technique that might one day be enabled by FTQC). A key result is that the classically driven ML can do very well, equalling the power of fully-quantum learning for ‘average case’ prediction accuracy. Fully quantum learning provides a further exponential advantage for ‘worst case’ prediction accuracy.
Google argues that the most general form of support quantum can offer for advances in fields such as chemistry is not just the theoretical simulation of a system, but could also include generating quantum data sets for classically driven machine learning.
QCBM – quantum circuit Born machines are a type of VQA that implements a generative model. Work published in 2021 demonstrated for the first time that a specific type of QCBM, an Ising Born machine, could perform a sampling task that is hard for any classical computer.
One place where we expect to find data that contains quantum correlations is in the output of quantum sensors. Elsewhere in the quantum technology sector many new quantum sensors are under active development. However, we sometimes forget that NMR and SQUID based devices have been in use for years (in medical applications NMR is referred to as MRI so as not to scare the horses).
Quantum advantage in learning from experiments – Work from Caltech and Google has demonstrated that there is an exponential advantage in learning the properties of quantum states. This work asks us to consider the Sycamore processor in two parts. In the first, its qubits represent the output of some hypothetical physics experiment (or the output of a series of quantum sensors). Our task is to learn some property of this state (to investigate the physics of the experiment, or to read out our network of sensors). The rest of the quantum processor performs entangling operations on these inputs before passing the output to a classical machine learning routine. This setup is able to learn properties of the state in exponentially fewer trials than any purely classical approach could manage.
Google has developed an algorithm for quantum-assisted learning and characterisation of the system detected in NMR experiments. They present resource estimates for NISQ and FTQC applications.
Qu & Co have also proposed the extension of their DQC technique to model discovery, where we want to find the differential equation underpinning input data.
In the most adventurous case, some speculate about a connection between consciousness and the collapse of the quantum wavefunction. Indeed ‘objective collapse’ theories have long been seen as a possible way to extend quantum mechanics. Hartmut Neven (Head of Google Quantum AI) has speculated that wavefunction collapse may be a useful way to understand the ‘agency’ we will increasingly need to assign to advanced AI systems. Perhaps more people will have to start taking note of this thinking.
Early fault-tolerant machines
Quadratic is not enough?
A classical NAND gate costs perhaps 10⁻⁹ transistor-seconds; a quantum NAND gate (loosely speaking) costs 10 qubit-seconds. This huge 10¹⁰ factor is a major obstacle for algorithms that provide only modest, quadratic, speedups. The problem is that to get to problem sizes where the quantum computer gains an advantage, we are already dealing with cases where the runtime, even with full FTQC, would be impractically long.
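The arithmetic behind this ‘constant factor slowdown’ is worth making explicit. Using the illustrative gate costs above and a quadratic speedup, the break-even problem size and its runtime follow directly:

```python
import math

# Illustrative order-of-magnitude gate costs from the text above:
t_classical = 1e-9   # seconds per classical gate
t_quantum = 10.0     # qubit-seconds per logical quantum gate

# With a quadratic speedup, classical cost N * t_c matches quantum cost
# sqrt(N) * t_q when sqrt(N) = t_q / t_c, i.e. N = (t_q / t_c)^2.
N_crossover = (t_quantum / t_classical) ** 2       # 1e20 operations
runtime_s = math.sqrt(N_crossover) * t_quantum     # ~1e11 s: thousands of years
```

So even at the break-even point the computation takes geological time, which is why quadratic speedups alone look so unpromising on current FTQC architectures.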
FTQC – The standard spec often discussed for early devices is 1M physical qubits with 99.9% 2Q fidelity running the surface code with nearest neighbour connectivity and a 1 MHz code cycle rate. At this level of underlying error, this might be a 1,000 logical qubit machine. However, key gates central to quantum advantage (for example T-gates or Toffoli gates) are very slow, requiring hundreds of logical qubits each. This gives rise to a huge ‘constant factor slowdown’ versus conventional computers.
What about other qubit architectures? These offer contrasting gate speeds, coherence times, fidelities and qubit connectivity. Interesting work in 2021 examined the impact of these different trade-offs for trapped ion architectures. With current approaches, code cycle times will typically be longer, though proponents of these architectures argue their greater ability to scale offsets this disadvantage. This certainly doesn’t counter the basic problem of the constant factor slowdown.
PsiQuantum have proposed a novel architecture called FBQC. Photonic clock times could be very fast. However, these devices will still be limited by the classical processing time needed for error correction decoding. We don’t know what their logical clock time will be. Fact Based Insight have asked PsiQuantum for guidance but they have declined to comment.
Amplitude amplification (for example as used in Grover Search) is a quantum primitive with wide potential applicability across business optimisation problems, but it only offers a quadratic speedup. Quantum walks yield some examples where an exponential speedup is possible, but the cases with the most obviously wide business benefits (for example in speeding up Monte Carlo simulation) offer only a quadratic benefit. This appears to be moving a wide range of business applications outside of the reach of FTQC, based on the existing known architectures under development.
D-Wave has doubled down on quantum annealing as its preferred approach for optimisation in both the NISQ and early FTQC eras. At the same time, it has announced it will build gate-model devices for Hamiltonian simulation applications.
However, we are also seeing innovation as the community explores ways to overcome this challenge.
Peter Johnson (Lead Research Scientist and co-founder at Zapata) comments “In the era of early fault-tolerant quantum computing, we will need to maximize the performance of these machines through the design of robust quantum algorithms. Such algorithms will still account for non-negligible amounts of error but will mitigate its effect on the computational output”. Such approaches could allow us to tackle problems that aren’t tractable with NISQ error rates, but with overheads short of full scale FTQC. Johnson points to recent work from across the community that shows the way forward for this approach.
Iordanis Kerenidis (Head of Quantum Algorithms International at QC Ware) points out that though some algorithms we previously thought had an exponential advantage have been ‘de-quantised’ by subsequent advances in classical methods, in practice they still enjoy quartic or better advantage in many linear algebra applications. Kerenidis’ famous recommendation system still enjoys an 8th power advantage.
Protein Folding and AlphaFold
Proteins are perhaps the most important biomolecules. Their physical shape and how this affects their interactions have profound consequences for biological function and drug targeting so this has been a much-researched problem. Solving the protein folding problem has been an often-touted goal for quantum computing.
Levinthal’s paradox – A short protein might easily have 3^198 possible bond angle combinations, and just trying each combination in turn would take longer than the lifetime of the universe. Yet once it forms as a long chain, it adopts one unique preferred folded shape within milliseconds!
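The numbers are easy to check. The sampling rate below is an assumption, chosen generously at one conformation per 0.1 picoseconds:

```python
conformations = 3 ** 198       # possible bond angle combinations
trials_per_second = 1e13       # assumed: one conformation tried per 0.1 ps
age_of_universe_s = 4.35e17    # ~13.8 billion years in seconds

search_time_s = conformations / trials_per_second
print(f"{search_time_s / age_of_universe_s:.1e} lifetimes of the universe")
```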
This is the Protein Folding problem, or more correctly a collection of related problems including: how to predict the structure of a simple protein from its amino acid sequence; understanding protein conformations; understanding the dynamic pathways for folding.
From the perspective of quantum mechanics it’s easy to see that there is no paradox, just a problem that is intractable for classical computation but accessible to quantum computers.
For sceptical observers, 2021 has seen some negative pointers for much-hyped quantum expectations. AlphaFold used a conventional AI approach to ‘solve’ protein folding. The pharma industry also showed that drug development timelines can be much accelerated when the industry is suitably focussed.
AlphaFold 2 succeeded in moving the state of the art in predicting protein folding structures to 92.4-95% GDT. For single protein chains it predicts with reasonable accuracy (<3-4 Å) over 90% of the time. This is a dramatic result and DeepMind’s most striking success to date. Some argue it’s the most important result yet in the field of AI.
Dr Bhushan Bonde, speaking at Quantum.Tech 2021, unpicked hype and reality. Conventional quantum chemistry approximation techniques were a key tool in screening compounds for Covid-19 drugs. However, they consumed an enormous and impractical amount of compute resources. His simulations were frequently using all of the GPU capacity on Microsoft’s Azure cloud, running for weeks, and that was still not fast enough. Repeating such experiments has consequences for energy consumption, heat management (cooling computer hardware) and, more importantly, time efficiency. He also sees AlphaFold’s performance as still short of being truly biologically relevant for pharma, where the geometry of drug binding pockets must be captured exactly.
Bonde sees future AI advances and quantum computing techniques as complementary tools. Quantum computing offers a completely new tool for analytic solutions in quantum chemistry. It also promises to be one of the enablers that allows future machine learning systems to further enhance their performance, particularly when quantum correlations are present in the data. Bonde believes that we shouldn’t be surprised if we find these in biological systems – our leading theory of birds’ navigational sense is based on the ability of light-sensing cryptochrome proteins in the bird’s eye to exploit quantum entanglement effects.
IBM have been active in progressing quantum approaches to protein folding, proposing a quantum-resource efficient variant of VQE that has already been used to ‘fold’ toy-case problems on their early devices.
Hamiltonian Simulation and the story of FeMoco
Hamiltonian simulation, simulating general quantum systems in physics, materials science or chemistry, has been a key use case for quantum computing right from its inception. Each year, the advanced materials and chemicals sector spends over $40B testing new materials for potential industrial use, and big pharma over $210B on R&D. Quantum proponents point to big savings from increasing productivity in the R&D pipeline and reducing money wasted on failed clinical trials.
For an introduction to the terminology read Introduction to Hamiltonian Simulation.
Popular headlines have often captured advances in quantum computing hardware. It’s less widely understood that advances in algorithms have also been dramatic. Simulating FeMoco has been a totemic goal for the early quantum simulation community.
FeMoco (Fe7MoS9C) is the active site within the nitrogenase enzyme responsible in nature for the catalytic conversion of nitrogen gas into ammonia (fertiliser). Because we don’t understand how cyanobacteria pull off this trick, we use the energy intensive Haber process to make the fertiliser upon which world agriculture depends. However, the Haber process has the worst environmental impact of any chemical industry process. It is estimated to be responsible for a full 1.2% of global CO2 emissions, more than the UK or France!
Detailed work on the quantum resources necessary to simulate FeMoco provides a window into algorithmic progress: 100M qubits (2014), to 20M qubits (2017), to 4M qubits (2020), to 2M qubits today. A lower resolution simulation has also been proposed requiring 300k-1M qubits. This progress makes simulation look within reach on the early FTQC machines being developed by several players. Having worked on this problem for many years, Ryan Babbush (Head of Quantum Algorithms at Google) thinks that these results now look close to optimal, at least given current simulation approaches and proposed error correcting architectures.
Schrodinger and Qu & Co have questioned whether simulating FeMoco to the level of accuracy currently envisaged is in fact enough. However, they also point to indications that other systems of interest may be addressable with fewer resources than thought. They point to Cr2, not itself a system of practical interest, as a potential cross-over point into the territory where quantum advantage will be possible.
Babbush is keen to open up a new front in the push for more efficient quantum simulation, commenting “the future of chemistry is 1st quantized!”.
In contrast to the 2nd quantised schemes more typically used in early work on Hamiltonian simulation (and in conventional computational chemistry), Babbush has proposed a new scheme for 1st quantised implementations and completed detailed resource estimates for error corrected calculations. These schemes offer similar accuracy to 2nd quantisation but can have significantly reduced resource requirements.
- 1st quantisation – qubits required ∝ number of active particles & log ( number of orbitals )
- 2nd quantisation – qubits required ∝ number of orbitals
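The scaling difference is easy to illustrate. The proportionality constants below are set to 1 purely for illustration, and the particle and basis-set sizes are assumed figures; real resource counts depend heavily on the target accuracy:

```python
import math

def qubits_first_quantised(n_particles, n_orbitals):
    # Each active particle gets a register addressing the orbital basis,
    # hence particles * log2(orbitals).
    return n_particles * math.ceil(math.log2(n_orbitals))

def qubits_second_quantised(n_orbitals):
    # One qubit records the occupation of each orbital.
    return n_orbitals

# e.g. 100 active electrons in a large basis of ~1M plane waves
print(qubits_first_quantised(100, 10 ** 6))   # 2,000
print(qubits_second_quantised(10 ** 6))       # 1,000,000
```

For small basis sets 2nd quantisation wins, but as the basis grows the logarithmic dependence of the 1st quantised form becomes decisive.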
Another advantage of this formalism is that it promises to allow us to go beyond the Born-Oppenheimer approximation (where we treat the nuclei as classical point charges).
Smell and taste – isotopologues are molecules that differ only in their isotope composition. Such differences would not be expected to affect their chemical properties, however there are numerous examples of such changes affecting properties of taste and smell. Is this a quantum tunnelling assisted effect?
Overall, what FTQC use cases does Google see? In terms of physical qubits these might be:
- 25k-50k qubits – Simulating fast scrambling (via FTQC). Potential applications in theoretical physics, more speculatively in other areas.
- 50k-250k qubits – Simulating lattice spin models. Potential applications in analysing NMR spectra.
- 250k-1M qubits – Simulating materials and molecules at low resolution.
- 1M-5M qubits – Simulating materials and molecules at high resolution.
The long view
The grand unified theory of quantum algorithms
Potential business adopters often ask: how many quantum algorithms are there? In one sense, new algorithms are published weekly by academic groups and a growing number of startups. However, new techniques typically leverage a much smaller number of underlying primitives.
Hamiltonian simulation is used directly for materials science and chemistry; it’s also at the heart of HHL and continuous-time quantum walks. The quantum Fourier transform (QFT) is used in phase estimation, in Shor’s algorithm, in solving discrete log problems and others. Amplitude amplification is used in Grover’s search algorithm, discrete-time quantum walks and others.
New work by QuSoft, Microsoft and MIT has looked at quantum algorithms in a new way. This ‘grand unified theory of quantum algorithms’ points to the quantum singular value transformation as a unifying underlying primitive for almost all known quantum algorithms. This new formalism promises to bring insight into the nature of quantum algorithms and where they can offer quantum advantage.
QSVT – Quantum gates evolve the quantum state of the qubits. This process is represented by a unitary matrix (a square normal matrix). Physicists are familiar with the useful role eigenvalues and eigenvectors play in analysing such systems. Singular value decomposition is the mathematical generalisation of eigen-decomposition to non-square matrices. Such matrices can be embedded within a larger unitary matrix. The QSVT technique allows us to apply a function to the singular values of the embedded matrix.
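The embedding step can be illustrated numerically. The sketch below, using only NumPy, applies a standard unitary dilation of a contraction: an arbitrary non-square matrix A, rescaled so its largest singular value is below 1, is placed in the corner of a larger unitary. QSVT then works by applying polynomial functions to the singular values of such embedded blocks:

```python
import numpy as np

def psd_sqrt(M):
    # Matrix square root of a symmetric positive semidefinite matrix.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
A /= 1.1 * np.linalg.norm(A, 2)   # rescale: largest singular value < 1

m, n = A.shape
# Unitary dilation: A sits in the top-left block of a (m+n)x(m+n) unitary
U = np.block([
    [A,                             psd_sqrt(np.eye(m) - A @ A.T)],
    [psd_sqrt(np.eye(n) - A.T @ A), -A.T],
])

print(np.allclose(U @ U.T, np.eye(m + n)))   # True: U is unitary
print(np.allclose(U[:m, :n], A))             # True: A is the embedded block
```

This is only the embedding half of the story; the transformation half, alternating the embedded unitary with single-qubit rotations to enact a chosen polynomial, is where the algorithmic power of QSVT lies.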
Physicists might say that the QSVT technique allows us to embed a more general class of problems within the quantum-native unitary structure of quantum computers. Mathematicians might say that QSVT provides clarity about how a wider class of mathematical functions can be mapped to mere physical devices.
BQP unbowed and unbound
How much optimism should we have about the long-term value of quantum computing? It’s worth recalling that we don’t have proof that quantum computers can solve problems which conventional computers can’t. We know how to use them to factor large numbers, but we don’t know that we won’t find a better way for conventional computers to tackle this problem. All we have is an intuition.
For some problems it’s easy to see where this intuition comes from. Whenever we want to simulate a real system that ultimately depends on quantum mechanics, it’s very difficult to even encode the Hamiltonian problem for a conventional computer. It’s ultimately an intuition that physicists have that there is no easy way round this. Because of its implications in materials science and chemistry this remains of exceptional importance and Fact Based Insight believes the long term consequences of this are still not widely enough appreciated.
But what should be our intuition about wider classes of problems? Here we need to look to computational complexity theory. At first glance the news is not good. We don’t yet have any absolute proof that there are problems that quantum computers can efficiently solve (BQP) that lie beyond the capabilities of classical computers (BPP). More optimistically, we also don’t know where the upper bound on the power of quantum computers lies (all we know is P ⊆ BPP ⊆ BQP ⊆ PP ⊆ P^#P ⊆ PSPACE ⊆ EXP; we don’t even know where important classes of problems such as NP, or its generalisation PH, intersect with this list).
Oracles – Computational complexity theorists have their own ways of generating intuition. When they can’t prove a result absolutely, they have often made illuminating progress using ‘relativised proofs’ that explore the consequences of assuming that a black-box (oracle) is available that we can ask (query) to provide a necessary input or solution. We can then make progress by studying how many times we have to call the oracle (the query complexity of the problem).
Such techniques don’t always point in the right direction (we now know IP = PSPACE, MIP = NEXP, MIP* = RE). However, relativised proofs have again and again proved useful stepping stones for progress. In particular, quantum information theorists used such techniques along the road to discovering Shor’s algorithm and Grover’s algorithm. We should take this type of intuition seriously.
In the oracle model we can show that quantum computers can exceed what is efficiently possible classically (BPP^O ≠ BQP^O); the Bernstein-Vazirani and Simon’s algorithms do exactly that. This separation has recently been pushed much further to show that there is an oracle relative to which quantum machines have capabilities beyond any classical machine (BQP^O ⊄ PH^O). We also know that this problem, Forrelation, illustrates a near maximal query complexity speedup.
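Bernstein-Vazirani is simple enough to verify with a toy statevector simulation (NumPy only, small n; an illustrative sketch, not how such algorithms run in practice). The hidden string s is recovered with a single oracle query, where any classical algorithm needs n queries:

```python
import numpy as np

def bernstein_vazirani(s, n):
    dim = 2 ** n
    # H on every qubit of |0...0>: uniform superposition
    state = np.full(dim, 1 / np.sqrt(dim))
    # A single phase-oracle query: |x> -> (-1)^{s.x} |x>
    for x in range(dim):
        state[x] *= (-1) ** bin(s & x).count("1")
    # H on every qubit again (Walsh-Hadamard transform, built explicitly)
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    W = np.array([[1.0]])
    for _ in range(n):
        W = np.kron(W, H)
    out = W @ state
    return int(np.argmax(np.abs(out)))   # measurement returns s with certainty

print(bin(bernstein_vazirani(0b101101, n=6)))  # recovers the hidden string
```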
In recent work, Scott Aaronson (UT Austin) and his students have extended this to prove a series of new results. This includes an answer to one long-standing question (we already knew NP^BPP ⊆ BPP^NP; we now know in contrast that relative to some oracles NP^BQP ⊄ BQP^NP). Interestingly, these relativising techniques seem to indicate that potential collapses and separations of classical complexity classes place surprisingly few constraints on the power of quantum computers (for example the quantum and classical versions of P vs NP are uncoupled). This could encourage an intuition that there are indeed classes of problems, yet to be fully defined, that only quantum computers can tackle. As Aaronson puts it, there is still plenty of room for BQP to perform ‘acrobatics’.
Complexity theory looks like it still has plenty to offer. Aaronson has proposed a list of key open questions related to quantum query complexity to stimulate progress.
Algorithms offering only a quadratic speedup face an uphill struggle to deliver benefits on the range of early fault tolerant machines currently envisaged. This has naturally put an emphasis on finding algorithms offering a greater speedup. The Aaronson-Ambainis conjecture (not proven but widely believed) tells us that strong (exponential) quantum speedups only occur for problems with structure (in a mathematical sense). Perhaps business intuition can help here too. Business problems often have lots of structure (in a real-world sense). However, perhaps we focus too much on the traditional categories of problems we have already defined, and not enough on where the capabilities of these new devices naturally take us? Does business fail to even recognise some of its biggest challenges and opportunities, as things that might be addressed by a different kind of maths?
There is every reason for us to believe in the remarkable ultimate potential of quantum computation. Our intuition supporting this should be strengthened, not diminished, by progress in 2021.
To watch in 2022
- Quantum randomness – who will buy Quantinuum’s freshly minted random numbers?
- Production annealing – will a D-Wave client take an application into regular production use?
- VQE – will we see a ‘beyond classical’ demonstration on a NISQ device?
- QAOA – can we develop a clear view of when this might offer an advantage?
- Quantum Monte Carlo – can further innovation bring this over the line?
- Quantum machine learning – watch out for applications in basic scientific research.
- DQC – will we see concrete resource estimates for practical use cases?
- QNLP – what roadmap can we expect for this exciting approach?
- Quantum chemistry – what new possibilities will 1st quantised form offer?
- European momentum – watch out for more from startups such as Beit, JoS Quantum, Fermioniq and others.
- Robust quantum algorithms – watch for progress exploring the space beyond NISQ but short of full-scale FTQC.
- Other architectures for NISQ – competing qubit technologies offer different performance trade-offs. Watch out for NISQ algorithms tweaked for high-connectivity trapped ion platforms.
- Other architectures for ‘modest’ FTQC – watch out for work estimating the resources they require for high profile applications. For trapped ions, understanding the impact of higher connectivity vs slower gate speeds will be a key consideration.
- Innovative error correction – watch out for developments in error correction; could quantum LDPC codes turn heads?
- Computational complexity frontier – always keep in mind that the formal basis of this field rests on some big conjectures with plenty of flexibility for how things could turn out. Any developments could have big implications for the sector: the Aaronson-Ambainis conjecture; BQP vs NP-complete; BQP vs NP∩co-NP; or even P vs NP itself.
- Experimental complexity frontier – remember that every theory physicists have ever invented has ultimately been overturned. Quantum mechanics will be no different. Investors will hope each new processor works. It might actually be a bigger deal if one doesn’t.