Quantum Algorithms Outlook 2022

In contrast to the hype, the early applications of quantum computers look modest. But they do exist. The talented teams developing quantum algorithms still have a fight on their hands to bring forward the date of true broad quantum advantage. What might be achieved and when?

Quantum computers seek advantage from the unique quantum algorithms that they are able to run. In some cases these promise remarkable (exponential) speedups over conventional approaches; in other cases, more limited (quadratic) speedups. Sometimes, unique properties such as quantum randomness offer a different kind of advantage.

Quantum information processing has been studied actively for over 25 years. The progress made, even in the absence of real devices, is remarkable. (For a tutorial style introduction see ).

However, the longest studied algorithms typically require full scale FTQC machines which are still beyond the horizon. In 2021 much work again focussed on what might be accomplished with algorithms for near term NISQ devices, or on what might be achieved with early FTQC devices.

Rapidly increasing interest surrounding quantum computing has seen businesses ask key questions:

  • 1-2 years – Where will early quantum devices first enter routine business operational use?
    Popular hype can make it sound as though this has already happened. Fact Based Insight still isn’t aware of an example of routine production use in a commercial application.
  • 3-5 years – What significant business applications might be addressed in the NISQ era?
    While it’s reasonable to be optimistic about specific use cases, the goal of achieving broad quantum advantage across a range of business problems still has many formidable obstacles to overcome.
  • 7-10 years – What applications might be addressed by early FTQC devices?
    With the current generation of leading architectures, the answer is more modest than we might hope, but important applications are coming clearly into view.
  • Long term – Is the hype around quantum computing justified?
    Fact Based Insight’s view is that the long term prognosis remains exceptionally bright.

Are we there yet?

Quantum on the cloud is now nothing new. Access via IBM Quantum has been available since 2016. Demonstrations of ‘quantum supremacy’ on artificial problems have followed from Google in 2019 and USTC in 2020 and 2021.

A growing series of events have seen quantum pundits and industry early adopters come together to network and debate. The tone of discussion often seems to suggest that quantum applications are now rushing into deployment. The truth is much more modest.

D-Wave has implemented a notable number of proof of concept applications, including individual real-world demonstrations such as optimising bus routing in Lisbon in 2019. Many customers would certainly say they have already benefitted commercially by developing their teams and by taking quantum-inspired learnings into their business. However routine production use of a quantum application by a blue-chip end-user has remained elusive. 

Speaking at Qubits 2021, Catherine McGeoch (D-Wave Senior Scientist) commented “D-Wave will not claim quantum advantage until the quantum processor is routinely fast enough to overcome overhead costs. We’re not there yet, but we know how to get there”.

Quantum annealing could see an application get into production use soon; however, it also faces continuing competition from conventional devices running efficient implementations of quantum-inspired algorithms.

Google think they may be close to an application with ‘quantum scrambling’. The dynamics of how local disturbances in a quantum system propagate are not well understood. A better understanding here could have applications in theoretical physics (and perhaps, later, knock-on benefits more widely). These effects can be investigated using an experimental set-up similar to the random circuits framework pioneered by Google.

Phasecraft have published a proposal for how to simulate a kagome magnet, showing that a system with 50 qubits might be solved by quantum circuits with under 200 2Q gates. This seems on the verge of realisation with current generation NISQ devices. Understanding such spin-liquid ground states doesn’t have an immediate commercial application, but it is a well-known and significant area of interest in theoretical physics.

The Google and Phasecraft proposals are notable because they employ flexible gate-model quantum computing. However, if we allow applications in theoretical physics, then analogue quantum simulators based on neutral atoms have arguably been doing this for some years.

Cambridge Quantum may be on the verge of achieving a true commercial deployment with their Quantum Origin cybersecurity SaaS offering (formerly known as IronBridge). This provides random numbers for cryptographic keys (and for other applications requiring true unbiased randomness). By implementing strong real-time entropy verification (via a Bell-test) it delivers a feature not currently matched by any other QRNG, and one not possible on a classical device.
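
To illustrate the principle (a generic CHSH check in Python, not Quantum Origin's actual protocol), a Bell inequality violation certifies that the measurement statistics cannot have come from any pre-determined classical source, which is what software entropy verification relies on. A minimal sketch assuming ideal singlet-state correlations:

```python
import numpy as np

# Illustrative CHSH check (not Quantum Origin's actual protocol): a Bell
# violation S > 2 certifies that the outcomes cannot come from any
# deterministic classical source, the principle behind entropy verification.

def correlator(theta_a, theta_b):
    # Ideal quantum correlation E(a,b) = -cos(theta_a - theta_b) for a
    # singlet state; a real service would estimate this from measured counts.
    return -np.cos(theta_a - theta_b)

# Standard CHSH measurement angles
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

S = (correlator(a0, b0) - correlator(a0, b1)
     + correlator(a1, b0) + correlator(a1, b1))

print(f"CHSH value S = {abs(S):.3f}")   # ~2.828, the maximum quantum value
print("Non-classical randomness certified" if abs(S) > 2 else "No violation")
```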

In the medium-term this type of service will likely employ low-cost specialised devices. However, Quantum Origin is available commercially now (Fact Based Insight understands the pricing is surprisingly modest). Whoever buys and deploys it first will be able to say they took the first quantum application into regular production!

Cambridge Quantum has recently merged with Honeywell to form Quantinuum. Quantum Origin will initially be served from H-series Honeywell quantum computers.

Random numbers remain a strong early application opportunity. The important point is that algorithms can add value to raw randomness. In cryptography we generally want to minimise who we have to trust. Quantum Origin offers strong entropy verification in software. This means I no longer need to trust the manufacturer of the device, though I do still need to trust the operator. We should not be surprised to hear more and more about quantum random numbers.

Publicly certifiable random numbers – a further goal for services of this kind is entropy verification that can be done publicly and remotely. We know how to do this with FTQC combined with techniques from PQC . However, implementation on NISQ devices faces the obstacle that the best protocol we currently have requires a verification check that is itself a mathematically hard problem, making it impractical. Progress here would open up an interesting new range of applications, including in proof-of-stake blockchain protocols.

Looking for broad quantum advantage

Quantum computing majors, academic groups and a growing series of startups have been working hard to bring forward the date when we will see wider advantage from quantum computing. Variational quantum algorithms, quantum annealing, applications in Monte Carlo techniques and quantum machine learning have all been active areas of focus (for an overview of NISQ algorithms see ).

Variational quantum algorithms

VQAs, including VQE, VQS, QAOA, QCBM, QNN and their increasingly sophisticated refinements, have matured to emerge as the body of related techniques best placed to achieve quantum advantage using gate-model NISQ computers (for an overview see ).

Hybrid quantum-classical loop – practical implementations are seeking to balance the competing demands of the iterative processing loop: selecting a good model for the initial trial input (ansatz); selecting a cost-function that is quantum-hardware efficient but also suitably problem specific; mitigating noise and errors; efficiently measuring outputs; taking as much load as possible into classical post-processing; running an efficient classical optimisation procedure to drive each iteration.
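
As a minimal sketch of this loop, the toy example below state-vector simulates a one-parameter ansatz against an assumed one-qubit Hamiltonian and lets a classical optimiser drive the iterations; real VQE implementations have the same structure at much larger scale:

```python
import numpy as np
from scipy.optimize import minimize

# Toy hybrid loop: a one-parameter "ansatz" state, a fixed Hamiltonian as the
# cost function, and a classical optimiser driving each iteration. On real
# hardware the energy would be estimated from repeated measurements.

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])          # assumed toy 1-qubit Hamiltonian

def ansatz(theta):
    # trial state |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta[0])
    return float(psi @ H @ psi)      # expectation value <psi|H|psi>

result = minimize(energy, x0=[0.1], method="Nelder-Mead")
print("variational estimate:", result.fun)
print("exact ground state:  ", np.linalg.eigvalsh(H)[0])
```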

Variational headwinds

Work has increasingly focussed on identifying the practical resources required to run applications at sufficient scale and accuracy for real world problems of interest. Two challenges have emerged as significant obstacles: how to efficiently measure quantum outputs during each cycle, and how to drive the variational optimisation process.

VQE is a family of techniques that offer promise for performing calculations in, among other things, computational chemistry. Recent work has identified a key bottleneck in this process: to achieve the levels of accuracy typically required, repeated output measurements have to be made on each trial circuit. The high number of repetitions required threatens to push run-times beyond usability. While this issue has surfaced first for VQE, it has the potential to impact other variational algorithms as we better understand their need for accuracy.

Zapata completed work estimating runtimes for a series of short chain hydrocarbons (from methane to propyne). With a conventional VQE approach they estimate runtimes of 2 to 71 days for each required iteration of the optimisation. Multiple iterations would be required, and the times would likely be significantly longer for other molecules of interest. This makes VQE look unfeasible without some way to reduce the burden of measurement.
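
The underlying arithmetic is simple sampling statistics: the number of circuit repetitions needed scales with the square of (estimator spread divided by target accuracy). The numbers below are illustrative assumptions, not Zapata's figures, but they show how quickly run-times escalate:

```python
# Back-of-envelope shot-count arithmetic (illustrative numbers only): the
# number of circuit repetitions needed to estimate an energy to precision
# eps by direct sampling scales as (sigma / eps) ** 2.

eps = 1.6e-3            # target accuracy, ~1 kcal/mol in hartree
sigma = 10.0            # assumed std dev of the summed Pauli estimators (Ha)
t_circuit = 1e-3        # assumed time per circuit repetition (s)

shots = (sigma / eps) ** 2
runtime_days = shots * t_circuit / 86400
print(f"shots per energy evaluation: {shots:.2e}")
print(f"runtime per evaluation:      {runtime_days:.1f} days")
# Roughly 3.9e7 shots and half a day per evaluation, and a VQE optimisation
# needs many such evaluations: that is the bottleneck described in the text.
```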

Zapata have introduced the technique of robust amplitude estimation as a first step towards closing this gap. Amplitude estimation is a key technique for estimating observables on a quantum state, a critical component of VQE. Robust amplitude estimation helps us use this technique on a noisy quantum processor.

Qu & Co, a Dutch startup, has partnered with conventional computational chemistry software specialist Schrödinger. Their work benefits from strong insight into both the potential of quantum computing and the strengths and weaknesses of current classical approaches. This work questions whether a NISQ era implementation of VQE can hope to achieve chemical accuracy (typically viewed as 1 kcal/mol, or 1.6 mHa). A general observation is that many quantum demonstrations to date have confused accuracy with precision: they have calculated precise results but have simulated too few orbitals for true accuracy.

Zapata have developed a technique for ‘classically boosting’ VQE by allowing some orbitals to be calculated with conventional approximations, while only those with strong quantum correlations are treated on the quantum processor.

Another general challenge for variational algorithms is to ensure that the required classical optimisation can be efficiently completed. In many cases this is analogous to finding our way down a sloping landscape to find the lowest point (the optimal solution), and we risk finding our way into a hollow (a local minimum) that isn’t the best overall solution. If we find ourselves in a very flat part of the landscape (a barren plateau) then we may not easily know which way to move next. Device noise can also threaten to turn a subtle slope into barren terrain.

NP-hard – work in 2021 has firmed up our theoretical understanding of this problem. Unfortunately, in the general case, the optimisation challenge faced by VQAs is an NP-hard problem.

The threat is that the classical costs of optimization could outweigh the quantum speedup. That the worst case is hard shouldn’t concern us unduly. Particularly where we are simulating a physical quantum system, we can use physical or chemical insight to pick a trial solution and approximate Hamiltonian. We can have an intuition that our quantum system will help us find nature’s solution.

However, our intuition may be less strong when we are dealing with a general optimisation problem, as might be tackled by approaches such as QAOA. Here we may need more specific reasons to think that our choice of trial solution and cost function give us a streamlined optimisation path.

D-Wave have been quick to point out the support that this line of reasoning gives to QUBO and quantum annealing as a preferred strategy for optimisation problems in the NISQ era.

Entanglement induced barren plateaus – visible and hidden layers are the basic building blocks of conventional deep learning. Recent work shows how excess entanglement between visible and hidden units can lead to barren plateaus and make training fail.  While this is discouraging, it may also point to ways to avoid this problem.

Zapata have developed an open-source tool, orqviz, to help address the optimisation challenge by aiding the visualisation of the optimisation landscapes.

Variational progress

Several areas are being targeted for NISQ-era quantum advantage, in particular applications in chemistry and materials science. The underlying quantum nature of these systems helps support our belief that quantum computing can help us probe their properties.

Molecular Structure – VQE is used to calculate the ground state (or excited state) energy of molecules of interest. Rapid innovation in techniques is underway in this area (for an overview see ).

Zapata, Cambridge Quantum, Phasecraft, QC Ware, Rahko, QunaSys and others are all examples of startups that are using spun-out academic talent to drive innovation forward, each offering variations and refinements of VQE.

Cambridge Quantum and Roche have combined the conventional computational chemistry technique of density matrix embedding theory with VQE to examine protein-ligand binding energies. In a first of its kind demonstration, calculations were performed on IBM and Honeywell processors.

Materials Science – Conventional methods can usually only address materials with weak quantum correlations. Understanding regular lattice arrays is an important class of problems, often characterised by what is known as the Fermi-Hubbard model. This includes systems of high practical importance, such as understanding high temperature superconductivity or battery performance.
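
For reference, the Fermi-Hubbard Hamiltonian takes the standard form of a hopping term of strength t between neighbouring lattice sites plus an on-site interaction U:

$$
H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
$$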

Phasecraft have proposed an optimised variational algorithm to calculate ground state energies in a 2D Fermi-Hubbard model. A 5×5 instance (larger than can be solved exactly classically) could be tackled on a 50Q device with approximately 325 2Q gates (on a fully connected architecture).

Rahko have focussed first on how far state-of-the-art approaches, such as dynamical mean field theory, can be pushed on HPC platforms . They later plan to augment these with techniques based on VQE .

Molecular Dynamics – Conventional methods in computational chemistry struggle to deal with the time dynamics of electron excitations. Such dynamics could be relevant to applications in photovoltaics and light emitting diodes. VQS techniques seek to address this area.

Phasecraft’s work on simulating time-dynamics in the Fermi-Hubbard model on a 5×5 grid proposes just 3,209 2Q gates and 259 time-steps.  This goes below the conventional circuit model to use native qubit operations and builds error mitigation techniques into the algorithm.

Patents – In an interesting example of what it can mean to patent innovation in quantum algorithms, Phasecraft has patented their efficient approach to mapping (encoding) fermions to qubits for such quantum calculations.

Optimisation – Grover’s search algorithm can in principle be adapted to provide a quadratic speedup for many combinatorial optimisation problems. However, the high resource requirements put it well out of reach for NISQ devices (and as we will see, perhaps also out of reach for current FTQC architectures). QAOA is a VQA that has been investigated by many groups for gate-model based NISQ applications. There is no general proof that it offers an advantage over classical methods, so it’s important to have some intuition as to why a quantum implementation might help the specific problem at hand.

Hard vs easy – recent work has allowed us to more clearly understand the performance of QAOA for hard (e.g. 3-SAT) and easy (e.g. 2-SAT) problems. This could point the way to a better intuition on where it can offer benefits.

IBM have presented work exploring how combinatorial optimisation problems can be mapped onto local quantum Hamiltonians. This enables us to treat their solution as analogous to finding the ground state of the Hamiltonian, using techniques such as VQE or quantum phase estimation.

In the absence of full error correction, research groups typically apply other error mitigation techniques to eke the most out of NISQ processor capabilities. These techniques are often based on control hardware or control software. Algorithm design also has its part to play. Device-specific optimisations will likely be necessary. 

Phasecraft and Quantum Benchmark are working with IQC and UCL on error resilient algorithms as part of a joint UKRI/Canadian funding initiative.

Quantum annealing for combinatorial optimisation

A wide variety of business problems can be defined as combinatorial optimisation problems. These range from portfolio optimisation in financial services, to the travelling salesman problem (which generalises many common problems in transport, logistics and manufacturing).

QUBO is the combinatorial optimisation model native to quantum annealing based devices. Here we have an intuition that quantum tunnelling can help avoid the output being trapped in a sub-optimal solution. Recently it has been shown in principle that the architecture used by D-Wave (‘stoquastic’ annealing) can indeed offer a speedup (though the proof does assume a noiseless device and a favourable choice of problem). The academic debate has continued about what expectation we can have for the noisy, short coherence time qubits available in current devices.
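
As a concrete illustration of what a QUBO formulation looks like (an assumed toy example, brute-forced classically rather than run on an annealer), MaxCut on a small graph reduces to minimising a quadratic form over binary variables, which is exactly the input a quantum annealer or digital annealer accepts:

```python
import itertools
import numpy as np

# Toy QUBO sketch (assumed example, not a D-Wave workload): MaxCut on a
# 4-node graph written as minimisation of x^T Q x over binary x.

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

# MaxCut as QUBO: maximise the sum over edges of (x_i + x_j - 2 x_i x_j),
# i.e. minimise Q with Q[i,i] -= 1 per incident edge and Q[i,j] += 2.
Q = np.zeros((n, n))
for i, j in edges:
    Q[i, i] -= 1
    Q[j, j] -= 1
    Q[i, j] += 2

best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
cut = -int(np.array(best) @ Q @ np.array(best))
print("best partition:", best, "cut value:", cut)
```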

Paypal already uses deep learning techniques for fraud detection. However, feature selection for these algorithms is a significant combinatorial problem. Paypal is collaborating with D-Wave to compare QUBO based models and classical models for feature selection.

Multiverse Computing have previously explored financial services portfolio optimisation tasks with BBVA using both QAOA and QA, with the latter performing better . Recently they have extended this work, showing promising results optimising financial portfolios using realistic assets from the S&P 500. This uses D-Wave Hybrid and its Advantage quantum processors .

Digital annealers such as those from Fujitsu, Toshiba, Hitachi and NEC continue to provide quantum-inspired competition for these approaches.

Toshiba has been pursuing quantum-inspired advantage for problems using the simulated bifurcation algorithm. Its SBM is a software package suitable to run on conventional high performance hardware. Its use has been demonstrated for problems of up to 1 billion binary variables.

Work from Microsoft points to a scaling disadvantage that quantum annealing faces versus digital annealers due to the overheads caused by connectivity constraints when embedding QUBO problems in current architectures.

Zapata has developed a quantum enhanced optimization technique that employs generative modelling to seek better solutions to combinatorial optimization problems. The approach has the advantage of not being limited to QUBO problems and can make use of classical and quantum machine learning models. They have first demonstrated it as a quantum-inspired model for S&P 500 portfolio optimization.

Quantum speedups for Monte Carlo techniques

Conventional Monte Carlo techniques find wide application in business and industry. In particular, these techniques find prominent use in the financial services industry for derivatives pricing, credit valuations and risk management. Quantum Monte Carlo techniques promise to speed up these very useful calculations. These depend on the amplitude estimation primitive (for an overview see ).
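
The source of the promised speedup is the error scaling: classical Monte Carlo precision improves with the square root of the number of samples, while amplitude estimation improves linearly with the number of quantum oracle calls. A rough illustration (our own arithmetic, not tied to any particular pricing model):

```python
# Sketch of why amplitude estimation matters (illustrative arithmetic only):
# classical Monte Carlo error shrinks as 1/sqrt(N) samples, amplitude
# estimation as ~1/N oracle calls, so reaching precision eps needs roughly
# 1/eps^2 classical samples versus 1/eps quantum queries.

for eps in (1e-2, 1e-3, 1e-4):
    n_classical = int(1 / eps ** 2)
    n_quantum = int(1 / eps)
    print(f"eps={eps:g}: classical samples ~{n_classical:.0e}, "
          f"quantum oracle calls ~{n_quantum:.0e}")

# The catch (from the text): each quantum "call" is vastly slower than a
# classical sample, so the crossover depends on gate fidelity and clock speed.
```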

QC Ware have reduced the resource requirements necessary to start benefitting from QMC. This reduces circuit depth but increases iterations, a trade-off that decreases the quantum speedup but brings its realisation nearer.

QC Ware have patented their low-circuit depth amplitude estimation technique , and their efficient data loader technique .

QC Ware have completed a proof of concept demonstration on IonQ’s latest hardware. This used unary coding and 4Q with circuits of up to 92 2Q gates and a circuit depth of 62.

Algorithmic innovation is bringing quantum advantage nearer to realisation. However, substantial obstacles remain. QC Ware have estimated the resource requirements in terms of 2Q gate fidelity and effective clock speeds needed for benefits in realistic derivative pricing use cases. These suggest we still need several orders of magnitude further improvement in gate fidelities; in addition, clock speed may be an insurmountable challenge for current trapped ion architectures.

Goldman Sachs and IBM have calculated a threshold for quantum advantage in derivative pricing. This starts from the business insight that it is derivatives that have path-dependent sensitivity that will benefit first from quantum methods. They foresee business benefits when devices can offer 8000 logical qubits and sufficient logical fidelity to support 10^10 logical operations. This will require the use of error correction, and in this initial work the required logical clock rate of 10MHz looked very aggressive compared to what current FTQC architectures are expected to deliver.

In an interesting illustration of how this work is progressing, Goldman Sachs and IBM have presented new techniques to reduce the logical clock rate requirement to perhaps 30kHz (much closer to the 10kHz envisaged by early FTQC architectures).
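
A back-of-envelope calculation (our own, assuming the full 10^10 logical operations are executed serially) shows why the logical clock rate matters so much:

```python
# Rough runtime arithmetic (our own back-of-envelope, not Goldman/IBM's):
# how long 1e10 sequential logical operations take at different logical
# clock rates. It shows why 10 MHz looked aggressive against the ~10 kHz
# expected from early FTQC architectures, and why reducing the clock-rate
# requirement matters so much.

logical_ops = 1e10
for label, rate_hz in [("10 MHz", 1e7), ("30 kHz", 3e4), ("10 kHz", 1e4)]:
    t = logical_ops / rate_hz
    print(f"{label:>7}: {t:,.0f} s  (~{t / 86400:.1f} days)")

# 10 MHz -> ~17 minutes; 30 kHz -> ~3.9 days; 10 kHz -> ~11.6 days.
```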

Quantum machine learning

Conventional machine learning is already finding wide traction across business. Quantum machine learning, perhaps more than any other area of quantum computing, has enjoyed an accelerated hype cycle. To get beneath the hype, we have to focus on why we are expecting quantum to bring an advantage. (For an overview of QML in computational molecular biology see ; for an overview of QML in finance see ).

De-quantisation – progress has not always been smooth. The Recommendation System of Kerenidis and Prakash was initially thought to offer an exponential speedup, before improved classical algorithms reduced this to a (still substantial) polynomial one . The same de-quantisation approach has also affected quantum algorithms for other popular machine learning techniques including principal component analysis and nearest-centroid clustering .

Speeding up the maths

An obvious way quantum could help would simply be to speed up classical techniques. HHL allows a very general speedup in linear algebra, but only for natively quantum data and outputs. Grover’s algorithm allows a very general quadratic speedup in many unstructured search applications. But can these techniques really be used in practice?

The most persistent difficulty for most attempts to directly speed up conventional machine learning techniques is the requirement for technology to allow data to be loaded efficiently and then queried in superposition by the quantum device. We don’t yet know how to build such QRAM technology.

Google and MIT have recently published work showing explicitly how exponential speedups in training can be achieved for ‘wide and deep’ neural networks. Specifically, this leverages the neural tangent kernel model in deep learning . This work shows how to address three out of the four ‘read the fine print’ objections normally faced by QML . However, we still don’t know how to build QRAM hardware.

QC Ware’s patented data loader protocol is a notable tool that seeks to reduce, if not overcome, the QRAM bottleneck.

QC Ware have performed a proof of concept demonstration of the data loader technique to implement the popular machine learning primitive, nearest centroid classifier, on an 11Q IonQ device.

More space to compute

Rather than ‘longing for a QRAM miracle’, Matthias Troyer (Microsoft) encourages the field to set aside ‘big data’ problems and focus instead on ‘small data, big compute’ problems. Such problems seek to benefit from a large computational working space. This is something that a quantum computer can uniquely provide (due to the large Hilbert space of the qubit system).

Xanadu have pointed out that many of the most promising techniques for near term QML approaches are really best understood as types of what are called kernel methods in conventional machine learning.
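
A minimal sketch of this viewpoint (a toy, classically simulated feature map, not Xanadu's or IBM's actual construction): the quantum part only supplies a kernel function, and everything else is a standard classical support vector machine:

```python
import numpy as np
from sklearn.svm import SVC

# Toy "quantum model as kernel method" sketch: each data point x is encoded
# into a small quantum state |phi(x)>, the kernel is the state overlap
# k(x, y) = |<phi(x)|phi(y)>|^2, and the result is handed to a classical SVM.

def feature_state(x):
    # 2-qubit angle encoding: product of single-qubit rotations
    q = lambda a: np.array([np.cos(a / 2), np.sin(a / 2)])
    return np.kron(q(x[0]), q(x[1]))

def kernel_matrix(A, B):
    SA = np.array([feature_state(x) for x in A])
    SB = np.array([feature_state(x) for x in B])
    return np.abs(SA @ SB.T) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(40, 2))
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0.5).astype(int)   # toy labels

clf = SVC(kernel="precomputed").fit(kernel_matrix(X, X), y)
print("training accuracy:", clf.score(kernel_matrix(X, X), y))
```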

IBM claimed a notable milestone in 2021 by providing an example of a learning task that can offer an exponential speedup but only requires classical access to data (though it does require FTQC). This takes as its starting point a family of datasets constructed so they are difficult to classify with conventional algorithms (they are based on the discrete log problem). A quantum calculation is used as the kernel function for a conventional support vector machine.

IBM have also published work quantifying the higher dimensionality QNNs can achieve compared to comparable classical neural networks and the role of a hard-to-simulate-classically feature map in influencing trainability.

Pasqal have published a framework specifically tailored to exploit the reconfigurability of neutral atom devices for graph kernel representations.

A prominent class of problems that can be compactly expressed, but require large computation resources to solve numerically, are systems of nonlinear differential equations. Such equations occur in a wide variety of scientific and commercial applications where we need to model complex processes: from structural engineering to aerospace, from chemistry to biology, from finance to epidemiology.

Qu & Co have developed a new technique to tackle nonlinear differential equations on near-term quantum computers: differentiable quantum circuits (DQC). This approach trains a QNN to deal with the derivatives using the large available Hilbert space . Qu & Co have also extended this approach to stochastic differential equations . Qu & Co have filed a patent application covering their techniques.

Natural language processing has been an area of interest since the very conception of modern computing, and one where progress has been marked in recent years. It’s a great example of a problem that requires an exponentially growing working space (think of why we have dictionaries for words but not for sentences).

Cambridge Quantum have identified QNLP as a particular area of promise for QML. In 2021 they reported their first experimental results. Data sets of 130 sentences and 105 noun phrases were encoded using a 5Q IBM device.

QNLP makes use of the expanded computational space offered by quantum computers. Our intuition that this will be a productive approach is further strengthened by the striking parallels between the formalism Cambridge Quantum propose and the ZX-calculus representation of quantum mechanics. Perhaps, as Bob Coecke (Cambridge Quantum’s Chief Scientist) claims “language is quantum native”.

Uniquely quantum data

An increasingly important focus has been the intuition that QML should be able to out-perform classical machine learning when there are quantum correlations or quantum interference effects in the data set that need to be addressed. Work in 2021 has started to formalise and structure this intuition, both for learning tasks and specifically for generative models .

Caltech published work in 2021 that puts bounds around the power of different models of machine learning: one where the learning is classically driven but uses the measurement output of quantum systems (e.g. physics experiments, an analogue quantum simulator, or the iterations of a VQA), and one where quantum coherence is maintained during the learning process (a future technique that might one day be enabled by FTQC). A key result is that the classically driven ML can do very well, equalling the power of fully-quantum learning for ‘average case’ prediction accuracy. Fully quantum learning provides a further exponential advantage for ‘worst case’ prediction accuracy.

Google argues that the most general form of support quantum can offer for advances in fields such as chemistry is not just the theoretical simulation of a system, but could also include generating quantum data sets for classically driven machine learning.

QCBM – quantum circuit Born machines are a type of VQA that implements a generative model. Work published in 2021 demonstrated for the first time that a specific type of QCBM, an Ising Born machine, could perform a sampling task that is hard for any classical computer.

One place where we expect to find data that contains quantum correlations is in the output of quantum sensors. Elsewhere in the quantum technology sector many new quantum sensors are under active development. However, we sometimes forget that NMR and SQUID based devices have been in use for years (in medical applications NMR is referred to as MRI so as not to scare the horses).

Quantum advantage in learning from experiments – Work from Caltech and Google has demonstrated that there is an exponential advantage in learning the properties of quantum states. This work asks us to consider the Sycamore processor in two parts. In the first, its qubits represent the output of some hypothetical physics experiment (or the output of a series of quantum sensors). Our task is to learn some property of this state (to investigate the physics of the experiment, or to read out our network of sensors). The rest of the quantum processor performs entangling operations on these inputs before passing the output to a classical machine learning routine. This setup is able to learn properties of the state in exponentially fewer trials than any purely classical approach could manage.

Google has developed an algorithm for quantum-assisted learning and characterisation of the system detected in NMR experiments. They present resource estimates for NISQ and FTQC applications.

Qu & Co have also proposed the extension of their DQC technique to model discovery, where we want to find the differential equation underpinning input data.

In the most adventurous case, some speculate about a connection between consciousness and the collapse of the quantum wavefunction. Indeed ‘objective collapse’ theories have long been seen as a possible way to extend quantum mechanics . Hartmut Neven (Head of Google Quantum AI) has speculated that wavefunction collapse may be a useful way to understand the ‘agency’ we will increasingly need to assign to advanced AI systems . Perhaps more people will have to start taking note of this thinking.

Early fault-tolerant machines

Quadratic is not enough?

A classical NAND gate costs perhaps 10^-9 transistor-seconds; a quantum NAND gate (loosely speaking) costs 10 qubit-seconds. This huge 10^10 factor is a major obstacle for algorithms that provide only modest, quadratic, speedups. The problem is that to get to problem sizes where the quantum computer gains an advantage, we are already dealing with cases where the runtime, with full FTQC, would be impractically long.
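
A back-of-envelope crossover calculation using these assumed costs makes the problem explicit (illustrative numbers only):

```python
import math

# Back-of-envelope crossover for a quadratic speedup, using the rough cost
# ratio quoted above (illustrative units; classical cost ~ N * t_c, a
# Grover-style quantum algorithm ~ sqrt(N) * t_q).

t_c = 1e-9   # classical cost per operation ("transistor-seconds")
t_q = 10.0   # quantum cost per logical operation ("qubit-seconds")

# Break-even problem size: N * t_c = sqrt(N) * t_q  =>  sqrt(N) = t_q / t_c
N_break_even = (t_q / t_c) ** 2
cost_at_break_even = math.sqrt(N_break_even) * t_q
print(f"break-even problem size N ~ {N_break_even:.0e}")
print(f"cost at break-even ~ {cost_at_break_even:.0e} qubit-seconds "
      f"(~{cost_at_break_even / 3.15e7:.0f} years if run serially)")

# N ~ 1e20 and ~1e11 qubit-seconds: a quadratic speedup only pays off at
# problem sizes where the absolute runtime is already impractical.
```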

FTQC – The standard spec often discussed for early devices is 1M physical qubits with 99.9% 2Q fidelity running the surface code with nearest neighbour connectivity and a 1MHz code cycle rate. At this level of underlying error, this might be a 1000 logical qubit machine. However, key gates central to quantum advantage (for example T-gates or Toffoli gates) are very slow, requiring hundreds of logical qubits each. This gives rise to a huge ‘constant factor slowdown’ versus conventional computers.

What about other qubit architectures? These offer contrasting gate speeds, coherence times, fidelities and qubit connectivity. Interesting work in 2021 examined the impact of these different trade-offs for trapped ion architectures . On current approaches, code cycle times will typically be longer, though proponents of these architectures argue their greater ability to scale offsets this disadvantage. This certainly doesn’t counter the basic problem of the constant factor slowdown.

PsiQuantum have proposed a novel architecture called FBQC. Photonic clock times could be very fast. However, these devices will still be limited by the classical processing time needed for error correction decoding. We don’t know what their logical clock time will be. Fact Based Insight have asked PsiQuantum for guidance but they have declined to comment.

Amplitude amplification (for example as used in Grover search) is a quantum primitive with wide potential applicability across business optimisation problems, but it only offers a quadratic speedup. Quantum walks yield some examples where an exponential speedup is possible, but the cases with the most obviously wide business benefits (for example in speeding up Monte Carlo simulation) offer only a quadratic benefit. This appears to move a wide range of business applications outside the reach of FTQC, based on the existing architectures under development.

D-Wave has doubled-down on quantum annealing as its preferred approach for optimisation in both the NISQ and early FTQC era. On the other hand, it has announced it will build gate-model devices for Hamiltonian simulation applications.

However, we are also seeing innovation as the community explores ways to overcome this challenge.

Peter Johnson (Lead Research Scientist and co-founder at Zapata) comments “In the era of early fault-tolerant quantum computing, we will need to maximize the performance of these machines through the design of robust quantum algorithms. Such algorithms will still account for non-negligible amounts of error but will mitigate its effect on the computational output”. Such approaches could allow us to tackle problems that aren’t tractable with NISQ error rates, but with overheads short of full scale FTQC. Johnson points to recent work from across the community that shows the way forward for this approach.

Iordanis Kerenidis (Head of Quantum Algorithms International at QC Ware) points out that though some algorithms we previously thought had an exponential advantage have been ‘de-quantised’ by subsequent advances in classical methods, in practice they still enjoy quartic or better advantage in many linear algebra applications. Kerenidis’ famous recommendation system still enjoys an 8th power advantage.

Protein Folding and AlphaFold

Proteins are perhaps the most important biomolecules. Their physical shape and how this affects their interactions have profound consequences for biological function and drug targeting so this has been a much-researched problem. Solving the protein folding problem has been an often-touted goal for quantum computing.

Levinthal’s paradox – A short protein might easily have 3^198 possible bond angle combinations, yet once it forms as a long chain it adopts one unique preferred folded shape within milliseconds. Just trying each combination in turn would take longer than the lifetime of the universe!
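
The arithmetic behind the paradox is easy to check (illustrative sampling rate assumed):

```python
# Levinthal's paradox arithmetic (illustrative): even sampling one bond-angle
# combination per picosecond, enumerating 3^198 conformations would take
# astronomically longer than the age of the universe (~4.4e17 s).

conformations = 3 ** 198
rate_per_s = 1e12                      # assumed: one conformation per picosecond
years = conformations / rate_per_s / 3.15e7
print(f"conformations:     ~{float(conformations):.1e}")
print(f"time to enumerate: ~{years:.1e} years")

# Roughly 3e94 conformations and 1e75 years: nature clearly is not searching
# by brute force, and neither can a classical computer.
```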

This is the Protein Folding problem, or more correctly a collection of related problems including: how to predict the structure of a simple protein from its amino acid sequence; understanding protein conformations; understanding the dynamic pathways for folding.

From the perspective of quantum mechanics it’s easy to see that there is no paradox, just a problem that is intractable for classical computation but accessible to quantum computers.

For sceptical observers, 2021 has seen some negative pointers for much hyped quantum expectations. AlphaFold used a conventional AI approach to ‘solve’ protein folding. The pharma industry also showed that drug development timelines can be much accelerated when the industry is suitably focussed.

AlphaFold 2 succeeded in moving the state of the art in predicting protein folding structures to 92.4-95% GDT. For single protein chains, over 90% of the time it predicts with reasonable accuracy (<3-4 Å) . This is a dramatic result and DeepMind’s most striking success to date. Some argue it’s the most important result yet in the field of AI .

Dr Bhushan Bonde, speaking at Quantum.Tech 2021, unpicked hype and reality . Conventional quantum chemistry approximation techniques were a key tool in screening compounds for Covid-19 drugs. However, they consumed an enormous and impractical amount of compute resources. His simulations were frequently using all of the GPU capacity on Microsoft’s Azure cloud, running for weeks, and that was still not fast enough. Repeating such experiments has consequences for energy consumption, heat management (cooling computer hardware) and, more importantly, time efficiency. He also sees AlphaFold’s performance as still short of being truly biologically relevant for Pharma, where accuracy for drug binding pockets needs exact confirmation.

Bonde sees future AI advances and quantum computing techniques as complementary tools. Quantum computing offers a completely new tool for analytic solutions in quantum chemistry. It also promises to be one of the enablers that allows future machine learning systems to further enhance their performance, particularly when quantum correlations are present in the data. Bonde believes that we shouldn’t be surprised if we find these in biological systems – our leading theory of birds’ navigational sense is based on the ability of light-sensing cryptochrome proteins in the bird’s eye to exploit quantum entanglement effects.

IBM have been active in progressing quantum’s approach on Protein Folding, proposing a quantum-resource efficient variant of VQE that has already been used to ‘fold’ toy-case problems on their early devices.

Hamiltonian Simulation and the story of FeMoco

Hamiltonian simulation, simulating general quantum systems in physics, materials science or chemistry, has been a key use case for quantum computing right from its inception. Each year, the advanced materials and chemicals sector spends over $40B testing new materials for potential industrial use and big pharma over $210B on R&D . Quantum proponents point to big savings by increasing productivity in the R&D pipeline and reducing money wasted on failed clinical trials.

For an introduction to the terminology read Introduction to Hamiltonian Simulation.

Popular headlines have often captured advances in quantum computing hardware. It’s not so widely understood that advances in algorithms have also been dramatic. Simulating FeMoco has been a totemistic goal for the early quantum simulation community.

FeMoco (Fe7MoS9C) is the active site within the Nitrogenase enzyme responsible in nature for the catalytic conversion of nitrogen gas into ammonia (fertilizer). Because we don’t understand how cyanobacteria pull off this trick, we use the energy intensive Haber process to make the fertilizer upon which world agriculture depends. However, the Haber process has the worst environmental impact of any chemical industry process. It is estimated to be responsible for a full 1.2% of global CO2 emissions, more than the UK or France!

Detailed work on the quantum resources necessary to simulate FeMoco provides a window into algorithmic progress: 100M qubits (2014) to 20M qubits (2017) to 4M qubits (2020), to 2M qubits today . A lower resolution simulation has also been proposed for 300k-1M qubits. This progress makes simulation look within reach on the early FTQC machines being developed by several players. Having worked on this problem for many years, Ryan Babbush (Head of Quantum Algorithms at Google) thinks that these results now look close to optimal, at least given current simulation approaches and proposed error correcting architectures.

Schrödinger and Qu & Co have questioned whether simulating FeMoco to the level of accuracy currently envisaged is in fact enough. However, they also point to indications that other systems of interest may be addressable with fewer resources than thought. They point to Cr2, not itself a system of practical interest, as a potential cross-over point into the territory where quantum advantage will be possible.

Babbush is keen to open up a new front in the push for more efficient quantum simulation, commenting “the future of chemistry is 1st quantized!” .

In contrast to the 2nd quantised schemes more typically used in early work in Hamiltonian simulation (and in conventional computational chemistry), Babbush has proposed a new scheme for 1st quantised implementations and completed detailed resource estimates for error corrected calculations. These schemes offer similar accuracy to 2nd quantisation but can have significantly reduced resource requirements, as the comparison and rough sketch below illustrate.

  • 1st quantisation – qubits required ∝ number of active particles × log(number of orbitals)
  • 2nd quantisation – qubits required ∝ number of orbitals
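
To make the comparison concrete (our own illustrative numbers with constant factors ignored, not Google's resource estimates), consider a hypothetical system of 100 active electrons represented in 100,000 orbitals:

```python
import math

# Rough comparison of the two scalings above (illustrative, constants ignored):
# eta active electrons represented in N orbitals.

eta, N = 100, 100_000                               # assumed system size
qubits_first = eta * math.ceil(math.log2(N))        # ∝ eta * log2(N)
qubits_second = N                                   # one qubit per orbital
print(f"1st quantisation: ~{qubits_first:,} qubits")
print(f"2nd quantisation: ~{qubits_second:,} qubits")

# ~1,700 vs 100,000 logical qubits: 1st quantisation pays off when accuracy
# demands a large number of orbitals (a fine basis set).
```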

Another advantage of this formalism is that it promises to allow us to go beyond the Born-Oppenheimer approximation (where we treat the nuclei as classical point charges).

Smell and taste – isotopologues are molecules that differ only in their isotope composition. Such differences would not be expected to affect their chemical properties, however there are numerous examples of such changes affecting properties of taste and smell. Is this a quantum tunnelling assisted effect?

Overall, what FTQC use cases does Google see? In terms of physical qubits these might be:

  • 25k-50k qubits – Simulating fast scrambling (via FTQC). Potential applications in theoretical physics, more speculatively in other areas.
  • 50k-250k qubits – Simulating lattice spin models. Potential applications in analysing NMR spectra.
  • 250k-1M qubits – Simulating materials and molecules at low resolution
  • 1M-5M qubits – Simulating materials and molecules to high resolution

The long view

The grand unified theory of quantum algorithms

Potential business adopters often ask how many quantum algorithms there are. In one sense new algorithms are being published weekly by academic groups and a growing number of startups. However, new techniques typically leverage a much smaller number of underlying primitives.

Hamiltonian simulation is used directly for materials science and chemistry; it’s also at the heart of HHL and continuous-time quantum walks. QFT is used in phase estimation, in Shor’s algorithm, solving discrete log problems and others. Amplitude amplification is used in Grover’s search algorithm, discrete-time quantum walks and others. 

New work by QuSoft, Microsoft and MIT has looked at quantum algorithms in a new way. This ‘grand unified theory of quantum algorithms’ points to the quantum singular value transformation as a unifying underlying primitive for almost all known quantum algorithms. This new formalism promises to bring insight into the nature of quantum algorithms and where they can offer quantum advantage.

QSVT – Quantum gates evolve the quantum state of the qubits. This process is represented by a unitary matrix (a square normal matrix). Physicists are familiar with the useful role eigenvalues and eigenvectors play in analysing such systems. Singular value decomposition is the mathematical generalisation of eigen-decomposition to non-square matrices. Such matrices can be embedded within a larger unitary matrix. The QSVT technique allows us to apply a function to the singular values of the embedded matrix.
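
A minimal numerical illustration of the embedding idea (NumPy only; this shows the underlying linear algebra, not a quantum circuit, and the polynomial step of QSVT is described rather than implemented):

```python
import numpy as np

# Sketch of the idea behind QSVT: a non-square matrix A with spectral norm
# <= 1 can be embedded ("block-encoded") as the top-left block of a larger
# unitary U. QSVT then applies a chosen polynomial to the singular values of
# that embedded block using a sequence of such unitaries and phase rotations.

def psd_sqrt(M):
    # square root of a positive semi-definite symmetric matrix
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3))
A /= np.linalg.norm(A, 2) * 1.1          # enforce spectral norm < 1

U = np.block([
    [A,                              psd_sqrt(np.eye(2) - A @ A.T)],
    [psd_sqrt(np.eye(3) - A.T @ A),  -A.T                         ],
])

print("U is unitary:", np.allclose(U @ U.T, np.eye(5)))
print("singular values of A:", np.linalg.svd(A, compute_uv=False))
# QSVT manipulates exactly these singular values, which is what lets so many
# quantum algorithms be expressed in one framework.
```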

Physicists might say that the QSVT technique allows us to embed a more general class of problems within the quantum-native unitary structure of quantum computers. Mathematicians might say that QSVT provides clarity about how a wider class of mathematical functions can be mapped to mere physical devices.

BQP unbowed and unbound

How much optimism should we have about the long-term value of quantum computing? It’s worth recalling that we don’t have proof that quantum computers can solve problems which conventional computers can’t. We know how to use them to factor large numbers, but we don’t know that we won’t find a better way for conventional computers to tackle this problem. All we have is an intuition.

For some problems it’s easy to see where this intuition comes from. Whenever we want to simulate a real system that ultimately depends on quantum mechanics, it’s very difficult to even encode the Hamiltonian problem for a conventional computer. It’s ultimately an intuition that physicists have that there is no easy way round this. Because of its implications in materials science and chemistry this remains of exceptional importance and Fact Based Insight believes the long term consequences of this are still not widely enough appreciated.

But what should be our intuition about wider classes of problems? Here we need to look to computational complexity theory. At first glance the news is not good. We don’t yet have any absolute proof that there are problems that quantum computers can efficiently solve (BQP) that lie beyond the capabilities of classical computers (BPP). More optimistically, we also don’t know where the upper bound on the power of quantum computers lies (all we know is P ⊆ BPP ⊆ BQP ⊆ PP ⊆ P^#P ⊆ PSPACE ⊆ EXP; we don’t even know where important classes of problem such as NP, or its generalisation PH, intersect with this list).

Oracles – Computational complexity theorists have their own ways of generating intuition. When they can’t prove a result absolutely, they have often made illuminating progress using ‘relativised proofs’ that explore the consequences of assuming that a black-box (oracle) is available that we can ask (query) to provide a necessary input or solution. We can then make progress by studying how many times we have to call the oracle (the query complexity of the problem).

Such techniques don’t always point in the right direction (we know now that IP = PSPACE, MIP = NEXP, MIP∗ = RE). However, relativised proofs have again and again proved useful stepping stones for progress. In particular, quantum information theorists used such techniques along the road to discovering Shor’s algorithm and Grover’s algorithm. We should take this type of intuition seriously.

In the oracle model we can show that quantum computers can exceed what is efficiently possible classically (BPP^O ≠ BQP^O); the Bernstein-Vazirani and Simon’s algorithms do exactly that. This separation has recently been pushed much further to show that there is an oracle relative to which quantum machines have capabilities beyond any classical machine (BQP^O ⊄ PH^O) . We also know that this problem, Forrelation, illustrates a near maximal query complexity speedup .

In recent work, Scott Aaronson (UT Austin) and his students have extended this to prove a series of new results. This includes an answer to one long-standing question (we already knew NP^BPP ⊆ BPP^NP, we now know in contrast that relative to some oracles NP^BQP ⊄ BQP^NP). Interestingly, these relativising techniques seem to indicate that potential collapses and separations of classical complexity classes place surprisingly few constraints on the power of quantum computers (for example the quantum and classical versions of P vs NP are uncoupled). This could encourage an intuition that there are indeed classes of problems, yet to be fully defined, that only quantum computers can tackle. As Aaronson puts it, there is still plenty of room for BQP to perform ‘acrobatics’.

Complexity theory looks like it still has plenty to offer. Aaronson has proposed a list of key open questions related to quantum query complexity to stimulate progress.

Algorithms offering only a quadratic speedup face an uphill struggle to deliver benefits on the range of early fault tolerant machines currently envisaged. This has naturally put an emphasis on finding algorithms offering a greater speedup. The Aaronson-Ambainis conjecture (not proven but widely believed) tells us that strong (exponential) quantum speedups only occur for problems with structure (in a mathematical sense). Perhaps business intuition can help here too. Business problems often have lots of structure (in a real-world sense). However, perhaps we focus too much on the traditional categories of problems we have already defined, and not enough on where the capabilities of these new devices naturally take us? Does business fail to even recognise some of its biggest challenges and opportunities, as things that might be addressed by a different kind of maths?

There is every reason for us to believe in the remarkable ultimate potential of quantum computation. Our intuition supporting this should be strengthened, not diminished, by progress in 2021.

To watch in 2022

  • Quantum randomness – who will buy Quantinuum’s freshly minted random numbers?
  • Production annealing – will a D-Wave client take an application into regular production use?
  • VQE – will we see a ‘beyond classical’ demonstration on a NISQ device?
  • QAOA – can we develop a clear view of when this might offer an advantage?
  • Quantum Monte Carlo – can further innovation bring this over the line?
  • Quantum machine learning – watch out for applications in basic scientific research.
  • DQC – will we see concrete resource estimates for practical use cases?
  • QNLP – what roadmap can we expect for this exciting approach?
  • Quantum chemistry – what new possibilities will 1st quantised form offer?
  • European momentum – watch out for more from startups such as Beit, JoS Quantum, Fermioniq and others.
  • Robust quantum algorithms – watch for progress exploring the space beyond NISQ but short of full-scale FTQC.
  • Other architectures for NISQ – competing qubit technologies offer different performance trade-offs. Watch out for NISQ algorithms tweaked for high-connectivity trapped ion platforms.
  • Other architectures for ‘modest’ FTQC – watch out for work estimating the resources they require for high profile applications. For trapped ions, understanding the impact of higher connectivity vs slower gate speeds will be a key consideration.
  • Innovative error correction – watch out for developments in error correction; could quantum LDPC codes turn heads?
  • Computational complexity frontier – always keep in mind that the formal basis of this field rests on some big conjectures with plenty of flexibility for how things could turn out. Any developments could have big implications for the sector: the Aaronson-Ambainis conjecture; BQP vs NP-complete; BQP vs NP∩co-NP; or even P vs NP itself.
  • Experimental complexity frontier – remember that every theory physicists have ever invented has ultimately been overturned. Quantum mechanics will be no different. Investors will hope each new processor works. It might actually be a bigger deal if one doesn’t.

References

[1]
H.-S. Zhong et al., “Quantum computational advantage using photons,” Science, vol. 370, no. 6523, pp. 1460–1463, Dec. 2020, doi: 10.1126/science.abe8770. [Online]. Available: http://arxiv.org/abs/2012.01625. [Accessed: Dec. 16, 2021]
[2]
“Quantum.Tech 2021,” Quantum.Tech 2021. [Online]. Available: https://www.quantumtechlive.com/agenda-mc. [Accessed: Dec. 16, 2021]
[3]
“Phasecraft kicks off Innovate UK-Canada project to benchmark and overcome noise and errors in near-term quantum hardware devices.” [Online]. Available: https://www.phasecraft.io/news/were-excited-to-begin-a-new-project-facilitated-by. [Accessed: Dec. 16, 2021]
[4]
B. Zhang, A. Sone, and Q. Zhuang, “Quantum Computational Phase Transition in Combinatorial Problems,” arXiv:2109.13346 [cond-mat, physics:quant-ph], Dec. 2021 [Online]. Available: http://arxiv.org/abs/2109.13346. [Accessed: Dec. 16, 2021]
[5]
“Qubits 2021 - conference.” [Online]. Available: https://qubits2021.us2.pathable.com/. [Accessed: Oct. 18, 2021]
[6]
Google Quantum AI, Google’s perspective on the viable applications of early fault-tolerant Quantum computers - QSS 2021. (Jul. 30, 2021) [Online]. Available: https://www.youtube.com/watch?v=-fcQt5C2XGY. [Accessed: Dec. 16, 2021]
[7]
Munich Center for Quantum Science & Technology, QIP2021 Tutorial: Quantum algorithms (Andrew Childs). (Feb. 01, 2021) [Online]. Available: https://www.youtube.com/watch?v=M0e5gkf7QSQ. [Accessed: Dec. 16, 2021]
[8]
J. F. Gonthier, M. D. Radin, C. Buda, E. J. Doskocil, C. M. Abuan, and J. Romero, “Identifying challenges towards practical quantum advantage through resource estimation: the measurement roadblock in the variational quantum eigensolver,” arXiv:2012.04001 [quant-ph], Dec. 2020 [Online]. Available: http://arxiv.org/abs/2012.04001. [Accessed: Nov. 25, 2021]
[9]
K. Mizuta et al., “Deep variational quantum eigensolver for excited states and its application to quantum chemistry calculation of periodic materials,” arXiv:2104.00855 [cond-mat, physics:quant-ph], Aug. 2021 [Online]. Available: http://arxiv.org/abs/2104.00855. [Accessed: Dec. 16, 2021]
[10]
“Dynamical mean field theory on the cloud | Rahko.” [Online]. Available: https://rahko.ai/efficient-and-accurate-dynamical-mean-field-theory-on-the-cloud/. [Accessed: Dec. 16, 2021]
[11]
I. Rungger et al., “Dynamical mean field theory algorithm and experiment on quantum computers,” arXiv:1910.04735 [cond-mat, physics:quant-ph], Jan. 2020 [Online]. Available: http://arxiv.org/abs/1910.04735. [Accessed: Dec. 16, 2021]
[12]
H. Chen, M. Nusspickel, J. Tilly, and G. H. Booth, “A variational quantum eigensolver for dynamic correlation functions,” Phys. Rev. A, vol. 104, no. 3, p. 032405, Sep. 2021, doi: 10.1103/PhysRevA.104.032405. [Online]. Available: http://arxiv.org/abs/2105.01703. [Accessed: Dec. 16, 2021]
[13]
N. Heim, A. Ghosh, O. Kyriienko, and V. E. Elfving, “Quantum Model-Discovery,” arXiv:2111.06376 [quant-ph, stat], Nov. 2021 [Online]. Available: http://arxiv.org/abs/2111.06376. [Accessed: Dec. 16, 2021]
[14]
A. E. Paine, V. E. Elfving, and O. Kyriienko, “Quantum Quantile Mechanics: Solving Stochastic Differential Equations for Generating Time-Series,” arXiv:2108.03190 [cond-mat, physics:quant-ph, stat], Oct. 2021 [Online]. Available: http://arxiv.org/abs/2108.03190. [Accessed: Dec. 16, 2021]
[15]
O. Kyriienko, A. E. Paine, and V. E. Elfving, “Solving nonlinear differential equations with differentiable quantum circuits,” Phys. Rev. A, vol. 103, no. 5, p. 052416, May 2021, doi: 10.1103/PhysRevA.103.052416. [Online]. Available: http://arxiv.org/abs/2011.10395. [Accessed: Dec. 16, 2021]
[16]
J. J. M. Kirsopp et al., “Quantum Computational Quantification of Protein-Ligand Interactions,” arXiv:2110.08163 [physics, physics:quant-ph], Oct. 2021 [Online]. Available: http://arxiv.org/abs/2110.08163. [Accessed: Dec. 15, 2021]
[17]
B. Fuller et al., “Approximate Solutions of Combinatorial Problems via Quantum Relaxations,” arXiv:2111.03167 [math-ph, physics:quant-ph], Nov. 2021 [Online]. Available: http://arxiv.org/abs/2111.03167. [Accessed: Dec. 15, 2021]
[18]
N. Stamatopoulos, G. Mazzola, S. Woerner, and W. J. Zeng, “Towards Quantum Advantage in Financial Market Risk using Quantum Gradient Algorithms,” arXiv:2111.12509 [quant-ph, q-fin], Nov. 2021 [Online]. Available: http://arxiv.org/abs/2111.12509. [Accessed: Dec. 15, 2021]
[19]
J. Alcazar and A. Perdomo-Ortiz, “Enhancing Combinatorial Optimization with Quantum Generative Models,” arXiv:2101.06250 [quant-ph], Jan. 2021 [Online]. Available: http://arxiv.org/abs/2101.06250. [Accessed: Dec. 15, 2021]
[20]
M. D. Radin and P. Johnson, “Classically-Boosted Variational Quantum Eigensolver,” arXiv:2106.04755 [quant-ph], Jun. 2021 [Online]. Available: http://arxiv.org/abs/2106.04755. [Accessed: Dec. 15, 2021]
[21]
“Cambridge Quantum Launches Quantum Origin,” Cambridge Quantum, Dec. 07, 2021. [Online]. Available: https://cambridgequantum.com/quantum-origin-press-release/. [Accessed: Dec. 14, 2021]
[22]
Google Quantum AI, Day 1 Keynote - Quantum Summer Symposium 2021. (Jul. 30, 2021) [Online]. Available: https://www.youtube.com/watch?v=VGGHG1a98vA. [Accessed: Dec. 12, 2021]
[23]
L.-P. Henry, S. Thabet, C. Dalyac, and L. Henriet, “Quantum evolution kernel : Machine learning on graphs with programmable arrays of qubits,” Phys. Rev. A, vol. 104, no. 3, p. 032416, Sep. 2021, doi: 10.1103/PhysRevA.104.032416. [Online]. Available: http://arxiv.org/abs/2107.03247. [Accessed: Dec. 12, 2021]
[24]
Z. Brakerski, P. Christiano, U. Mahadev, U. Vazirani, and T. Vidick, “A Cryptographic Test of Quantumness and Certifiable Randomness from a Single Quantum Device,” Apr. 2018 [Online]. Available: https://arxiv.org/abs/1804.00640v4. [Accessed: Dec. 10, 2021]
[25]
S. Hameroff and R. Penrose, “Consciousness in the universe: A review of the ‘Orch OR’ theory,” Physics of Life Reviews, vol. 11, no. 1, pp. 39–78, Mar. 2014, doi: 10.1016/j.plrev.2013.08.002. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1571064513001188. [Accessed: Nov. 30, 2021]
[26]
M. S. Könz, W. Lechner, H. G. Katzgraber, and M. Troyer, “Scaling overhead of embedding optimization problems in quantum annealing,” PRX Quantum, vol. 2, no. 4, p. 040322, Nov. 2021, doi: 10.1103/PRXQuantum.2.040322. [Online]. Available: http://arxiv.org/abs/2103.15991. [Accessed: Nov. 29, 2021]
[1]
A. Gilyén, Y. Su, G. H. Low, and N. Wiebe, “Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics,” Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pp. 193–204, Jun. 2019, doi: 10.1145/3313276.3316366. [Online]. Available: http://arxiv.org/abs/1806.01838. [Accessed: Nov. 29, 2021]
[1]
E. Tang, “Quantum principal component analysis only achieves an exponential speedup because of its state preparation assumptions,” Phys. Rev. Lett., vol. 127, no. 6, p. 060503, Aug. 2021, doi: 10.1103/PhysRevLett.127.060503. [Online]. Available: http://arxiv.org/abs/1811.00414. [Accessed: Nov. 29, 2021]
[1]
E. Tang, “A quantum-inspired classical algorithm for recommendation systems,” Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pp. 217–228, Jun. 2019, doi: 10.1145/3313276.3316310. [Online]. Available: http://arxiv.org/abs/1807.04271. [Accessed: Nov. 29, 2021]
[1]
B. Coecke, G. de Felice, K. Meichanetzidis, and A. Toumi, “How to make qubits speak,” arXiv:2107.06776 [quant-ph], Jul. 2021 [Online]. Available: http://arxiv.org/abs/2107.06776. [Accessed: Nov. 29, 2021]
[1]
S. Bravyi, D. Gosset, D. Grier, and L. Schaeffer, “Classical algorithms for Forrelation,” arXiv:2102.06963 [math-ph, physics:quant-ph], Oct. 2021 [Online]. Available: http://arxiv.org/abs/2102.06963. [Accessed: Nov. 29, 2021]
[1]
J. M. Martyn, Z. M. Rossi, A. K. Tan, and I. L. Chuang, “A Grand Unification of Quantum Algorithms,” arXiv:2105.02859 [quant-ph], Aug. 2021 [Online]. Available: http://arxiv.org/abs/2105.02859. [Accessed: Nov. 29, 2021]
[1]
M. Schuld, “Supervised quantum machine learning models are kernel methods,” arXiv:2101.11020 [quant-ph, stat], Apr. 2021 [Online]. Available: http://arxiv.org/abs/2101.11020. [Accessed: Nov. 29, 2021]
[1]
S. Aaronson, “Read the fine print,” Nature Phys, vol. 11, no. 4, pp. 291–293, Apr. 2015, doi: 10.1038/nphys3272. [Online]. Available: https://www.nature.com/articles/nphys3272. [Accessed: Nov. 29, 2021]
[1]
A. Zlokapa, H. Neven, and S. Lloyd, “A quantum algorithm for training wide and deep classical neural networks,” arXiv:2107.09200 [quant-ph], Jul. 2021 [Online]. Available: http://arxiv.org/abs/2107.09200. [Accessed: Nov. 29, 2021]
[1]
C. Outeiral, M. Strahm, J. Shi, G. M. Morris, S. C. Benjamin, and C. M. Deane, “The prospects of quantum computing in computational molecular biology,” WIREs Comput Mol Sci, vol. 11, no. 1, 2021, doi: 10.1002/wcms.1481. [Online]. Available: http://arxiv.org/abs/2005.12792. [Accessed: Nov. 28, 2021]
[1]
J. R. McClean et al., “What the foundations of quantum computer science teach us about chemistry,” J. Chem. Phys., vol. 155, no. 15, p. 150901, Oct. 2021, doi: 10.1063/5.0060367. [Online]. Available: http://arxiv.org/abs/2106.03997. [Accessed: Nov. 28, 2021]
[1]
T. E. O’Brien et al., “Quantum computation of molecular structure using data from challenging-to-classically-simulate nuclear magnetic resonance experiments,” arXiv:2109.02163 [quant-ph], Oct. 2021 [Online]. Available: http://arxiv.org/abs/2109.02163. [Accessed: Nov. 28, 2021]
[1]
H. Neven, P. Read, and T. Rees, “Do Robots powered by a Quantum Processor have the Freedom to swerve?,” arXiv:2104.11591 [physics, physics:quant-ph], Apr. 2021 [Online]. Available: http://arxiv.org/abs/2104.11591. [Accessed: Nov. 28, 2021]
[1]
H.-Y. Huang et al., “Power of data in quantum machine learning,” Nat Commun, vol. 12, no. 1, p. 2631, 2021, doi: 10.1038/s41467-021-22539-9. [Online]. Available: http://arxiv.org/abs/2011.01938. [Accessed: Nov. 28, 2021]
[1]
X. Gao, E. R. Anschuetz, S.-T. Wang, J. I. Cirac, and M. D. Lukin, “Enhancing Generative Models via Quantum Correlations,” arXiv:2101.08354 [cond-mat, physics:quant-ph, stat], Jan. 2021 [Online]. Available: http://arxiv.org/abs/2101.08354. [Accessed: Nov. 27, 2021]
[1]
B. Coyle, D. Mills, V. Danos, and E. Kashefi, “The Born Supremacy: Quantum Advantage and Training of an Ising Born Machine,” npj Quantum Inf, vol. 6, no. 1, p. 60, 2020, doi: 10.1038/s41534-020-00288-9. [Online]. Available: http://arxiv.org/abs/1904.02214. [Accessed: Nov. 27, 2021]
[1]
S. Aaronson and A. Ambainis, “The Need for Structure in Quantum Speedups,” arXiv:0911.0996 [quant-ph], Feb. 2014 [Online]. Available: http://arxiv.org/abs/0911.0996. [Accessed: Nov. 26, 2021]
[1]
“[2111.10409] The Acrobatics of BQP.” [Online]. Available: https://arxiv.org/abs/2111.10409. [Accessed: Nov. 26, 2021]
[1]
R. Raz and A. Tal, “Oracle separation of BQP and PH,” in Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, New York, NY, USA, Jun. 2019, pp. 13–23, doi: 10.1145/3313276.3316315 [Online]. Available: https://doi.org/10.1145/3313276.3316315. [Accessed: Nov. 26, 2021]
[1]
T. Giurgica-Tiron et al., “Low depth amplitude estimation on a trapped ion quantum computer,” arXiv:2109.09685 [quant-ph], Sep. 2021 [Online]. Available: http://arxiv.org/abs/2109.09685. [Accessed: Nov. 25, 2021]
[1]
J. Tilly et al., “The Variational Quantum Eigensolver: a review of methods and best practices,” arXiv:2111.05176 [quant-ph], Nov. 2021 [Online]. Available: http://arxiv.org/abs/2111.05176. [Accessed: Nov. 25, 2021]
[1]
M. S. Rudolph et al., “ORQVIZ: Visualizing High-Dimensional Landscapes in Variational Quantum Algorithms,” arXiv:2111.04695 [quant-ph], Nov. 2021 [Online]. Available: http://arxiv.org/abs/2111.04695. [Accessed: Nov. 25, 2021]
[1]
S. Chakrabarti, R. Krishnakumar, G. Mazzola, N. Stamatopoulos, S. Woerner, and W. J. Zeng, “A Threshold for Quantum Advantage in Derivative Pricing,” Quantum, vol. 5, p. 463, Jun. 2021, doi: 10.22331/q-2021-06-01-463. [Online]. Available: http://arxiv.org/abs/2012.03819. [Accessed: Nov. 25, 2021]
[1]
V. E. Elfving et al., “How will quantum computers provide an industrially relevant computational advantage in quantum chemistry?,” arXiv:2009.12472 [physics, physics:quant-ph], Sep. 2020 [Online]. Available: http://arxiv.org/abs/2009.12472. [Accessed: Nov. 25, 2021]
[1]
R. Lorenz, A. Pearson, K. Meichanetzidis, D. Kartsaklis, and B. Coecke, “QNLP in Practice: Running Compositional Models of Meaning on a Quantum Computer,” arXiv:2102.12846 [quant-ph], Feb. 2021 [Online]. Available: http://arxiv.org/abs/2102.12846. [Accessed: Nov. 24, 2021]
[1]
M. Pistoia et al., “Quantum Machine Learning for Finance,” arXiv:2109.04298 [quant-ph], Sep. 2021 [Online]. Available: http://arxiv.org/abs/2109.04298. [Accessed: Nov. 24, 2021]
[1]
H. Goto et al., “High-performance combinatorial optimization based on classical mechanics,” Science Advances, vol. 7, no. 6, p. eabe7953, doi: 10.1126/sciadv.abe7953. [Online]. Available: https://www.science.org/doi/10.1126/sciadv.abe7953. [Accessed: Nov. 24, 2021]
[1]
I. Kerenidis, “Quantum Data Loader,” US20210319350A1Oct. 14, 2021 [Online]. Available: https://patents.google.com/patent/US20210319350A1/en?assignee=qc+ware+corp&oq=qc+ware+corp. [Accessed: Nov. 24, 2021]
[1]
A. Prakash and I. Kerenidis, “Method for amplitude estimation with noisy intermediate-scale quantum computers,” US20210287126A1Sep. 16, 2021 [Online]. Available: https://patents.google.com/patent/US20210287126A1/en?oq=16%2f892229+2020. [Accessed: Nov. 24, 2021]
[1]
Q. Zhu et al., “Quantum Computational Advantage via 60-Qubit 24-Cycle Random Circuit Sampling,” arXiv:2109.03494 [quant-ph], Sep. 2021 [Online]. Available: http://arxiv.org/abs/2109.03494. [Accessed: Nov. 19, 2021]
[1]
C. Ortiz Marrero, M. Kieferová, and N. Wiebe, “Entanglement-Induced Barren Plateaus,” PRX Quantum, vol. 2, no. 4, p. 040316, Oct. 2021, doi: 10.1103/PRXQuantum.2.040316. [Online]. Available: https://link.aps.org/doi/10.1103/PRXQuantum.2.040316. [Accessed: Nov. 09, 2021]
[1]
A. Bouland, W. van Dam, H. Joorati, I. Kerenidis, and A. Prakash, “Prospects and challenges of quantum finance,” arXiv:2011.06492 [quant-ph, q-fin], Nov. 2020 [Online]. Available: http://arxiv.org/abs/2011.06492. [Accessed: Nov. 09, 2021]
[1]
S. Aaronson, “Open Problems Related to Quantum Query Complexity,” arXiv:2109.06917 [quant-ph], Sep. 2021 [Online]. Available: http://arxiv.org/abs/2109.06917. [Accessed: Nov. 09, 2021]
[1]
M. Cerezo et al., “Variational Quantum Algorithms,” Nat Rev Phys, vol. 3, no. 9, pp. 625–644, 2021, doi: 10.1038/s42254-021-00348-9. [Online]. Available: http://arxiv.org/abs/2012.09265. [Accessed: Oct. 26, 2021]
[1]
L. Bittel and M. Kliesch, “Training variational quantum algorithms is NP-hard -- even for logarithmically many qubits and free fermionic systems,” Phys. Rev. Lett., vol. 127, no. 12, p. 120502, Sep. 2021, doi: 10.1103/PhysRevLett.127.120502. [Online]. Available: http://arxiv.org/abs/2101.07267. [Accessed: Oct. 26, 2021]
[1]
S. Palmer, S. Sahin, R. Hernandez, S. Mugel, and R. Orus, “Quantum Portfolio Optimization with Investment Bands and Target Volatility,” arXiv:2106.06735 [quant-ph, q-fin], Aug. 2021 [Online]. Available: http://arxiv.org/abs/2106.06735. [Accessed: Oct. 25, 2021]
[1]
F. D. Malone et al., “Towards the Simulation of Large Scale Protein-Ligand Interactions on NISQ-era Quantum Computers,” arXiv:2110.01589 [physics, physics:quant-ph], Oct. 2021 [Online]. Available: http://arxiv.org/abs/2110.01589. [Accessed: Oct. 25, 2021]
[1]
A. Katabarwa, A. Kunitsa, B. Peropadre, and P. Johnson, “Reducing runtime and error in VQE using deeper and noisier quantum circuits,” arXiv:2110.10664 [quant-ph], Oct. 2021 [Online]. Available: http://arxiv.org/abs/2110.10664. [Accessed: Oct. 25, 2021]
[1]
M. Benedetti, M. Fiorentini, and M. Lubasch, “Hardware-efficient variational quantum algorithms for time evolution,” Phys. Rev. Research, vol. 3, no. 3, p. 033083, Jul. 2021, doi: 10.1103/PhysRevResearch.3.033083. [Online]. Available: http://arxiv.org/abs/2009.12361. [Accessed: Oct. 25, 2021]
[1]
R. Toews, “AlphaFold Is The Most Important Achievement In AI—Ever,” Forbes. [Online]. Available: https://www.forbes.com/sites/robtoews/2021/10/03/alphafold-is-the-most-important-achievement-in-ai-ever/. [Accessed: Oct. 22, 2021]
[1]
“Global pharmaceutical R&D spending 2010-2024,” Statista. [Online]. Available: https://www.statista.com/statistics/309466/global-r-and-d-expenditure-for-pharmaceuticals/. [Accessed: Oct. 22, 2021]
[1]
J. Jumper et al., “Highly accurate protein structure prediction with AlphaFold,” Nature, vol. 596, no. 7873, pp. 583–589, Aug. 2021, doi: 10.1038/s41586-021-03819-2. [Online]. Available: https://www.nature.com/articles/s41586-021-03819-2. [Accessed: Oct. 21, 2021]
[1]
A. Robert, P. K. Barkoutsos, S. Woerner, and I. Tavernelli, “Resource-Efficient Quantum Algorithm for Protein Folding,” npj Quantum Inf, vol. 7, no. 1, p. 38, 2021, doi: 10.1038/s41534-021-00368-4. [Online]. Available: http://arxiv.org/abs/1908.02163. [Accessed: Oct. 21, 2021]
[1]
J. Lee et al., “Even more efficient quantum computations of chemistry through tensor hypercontraction,” PRX Quantum, vol. 2, no. 3, p. 030305, Jul. 2021, doi: 10.1103/PRXQuantum.2.030305. [Online]. Available: http://arxiv.org/abs/2011.03494. [Accessed: Oct. 21, 2021]
[1]
J. L. Bosse and A. Montanaro, “Probing ground state properties of the kagome antiferromagnetic Heisenberg model using the Variational Quantum Eigensolver,” arXiv:2108.08086 [quant-ph], Oct. 2021 [Online]. Available: http://arxiv.org/abs/2108.08086. [Accessed: Oct. 21, 2021]
[1]
Y. Su, D. W. Berry, N. Wiebe, N. Rubin, and R. Babbush, “Fault-Tolerant Quantum Simulations of Chemistry in First Quantization,” arXiv:2105.12767 [physics, physics:quant-ph], Oct. 2021 [Online]. Available: http://arxiv.org/abs/2105.12767. [Accessed: Oct. 20, 2021]
[1]
R. Babbush, J. McClean, M. Newman, C. Gidney, S. Boixo, and H. Neven, “Focus beyond quadratic speedups for error-corrected quantum advantage,” PRX Quantum, vol. 2, no. 1, p. 010103, Mar. 2021, doi: 10.1103/PRXQuantum.2.010103. [Online]. Available: http://arxiv.org/abs/2011.04149. [Accessed: Oct. 20, 2021]
[1]
X. Mi et al., “Information Scrambling in Computationally Complex Quantum Circuits,” arXiv:2101.08870 [cond-mat, physics:hep-th, physics:quant-ph], Jan. 2021 [Online]. Available: http://arxiv.org/abs/2101.08870. [Accessed: Oct. 19, 2021]
[1]
C. Cade, L. Mineh, A. Montanaro, and S. Stanisic, “Strategies for solving the Fermi-Hubbard model on near-term quantum computers,” arXiv:1912.06007 [quant-ph], Nov. 2020, doi: 10.1103/PhysRevB.102.235122. [Online]. Available: http://arxiv.org/abs/1912.06007. [Accessed: Oct. 18, 2021]
[1]
J. D. KLASSEN and C. N. C. DERBY, “Low-weight fermion-to-qubit encoding,” WO2021160989A1Aug. 19, 2021 [Online]. Available: https://patents.google.com/patent/WO2021160989A1/en?assignee=phasecraft&oq=phasecraft. [Accessed: Oct. 18, 2021]
[1]
L. Clinton, J. Bausch, and T. Cubitt, “Hamiltonian Simulation Algorithms for Near-Term Quantum Hardware,” Nat Commun, vol. 12, no. 1, p. 4989, 2021, doi: 10.1038/s41467-021-25196-0. [Online]. Available: http://arxiv.org/abs/2003.06886. [Accessed: Oct. 18, 2021]
[1]
M. Webber, V. Elfving, S. Weidt, and W. K. Hensinger, “The Impact of Hardware Specifications on Reaching Quantum Advantage in the Fault Tolerant Regime,” arXiv:2108.12371 [quant-ph], Sep. 2021 [Online]. Available: http://arxiv.org/abs/2108.12371. [Accessed: Sep. 30, 2021]
[1]
K. Bharti et al., “Noisy intermediate-scale quantum (NISQ) algorithms,” arXiv:2101.08448 [cond-mat, physics:quant-ph], Jan. 2021 [Online]. Available: http://arxiv.org/abs/2101.08448. [Accessed: Jan. 25, 2021]
[1]
Z. Brakerski, V. Koppula, U. Vazirani, and T. Vidick, “Simpler Proofs of Quantumness,” arXiv:2005.04826 [quant-ph], May 2020 [Online]. Available: http://arxiv.org/abs/2005.04826. [Accessed: Dec. 15, 2020]
[1]
A. Gilyén and U. Vazirani, “(Sub)Exponential advantage of adiabatic quantum computation with no sign problem,” arXiv:2011.09495 [quant-ph], Nov. 2020 [Online]. Available: http://arxiv.org/abs/2011.09495. [Accessed: Dec. 15, 2020]
[1]
Y. Liu, S. Arunachalam, and K. Temme, “A rigorous and robust quantum speed-up in supervised machine learning,” arXiv:2010.02174 [quant-ph], Nov. 2020 [Online]. Available: http://arxiv.org/abs/2010.02174. [Accessed: Dec. 09, 2020]
[1]
S. Mugel et al., “Dynamic Portfolio Optimization with Real Datasets Using Quantum Processors and Quantum-Inspired Tensor Networks,” arXiv:2007.00017 [quant-ph, q-fin], Jun. 2020 [Online]. Available: http://arxiv.org/abs/2007.00017. [Accessed: Dec. 04, 2020]
[1]
T. Giurgica-Tiron, I. Kerenidis, F. Labib, A. Prakash, and W. Zeng, “Low depth algorithms for quantum amplitude estimation,” arXiv:2012.03348 [quant-ph], Dec. 2020 [Online]. Available: http://arxiv.org/abs/2012.03348. [Accessed: Dec. 10, 2020]
[1]
S. Johri et al., “Nearest Centroid Classification on a Trapped Ion Quantum Computer,” arXiv:2012.04145 [quant-ph], Dec. 2020 [Online]. Available: http://arxiv.org/abs/2012.04145. [Accessed: Dec. 10, 2020]
[1]
F. Arute et al., “Supplementary information for ‘Quantum supremacy using a programmable superconducting processor,’” Nature, vol. 574, no. 7779, pp. 505–510, 2019, doi: 10.1038/s41586-019-1666-5. [Online]. Available: http://arxiv.org/abs/1910.11333. [Accessed: Jun. 26, 2020]
[1]
H. Bernien et al., “Probing many-body dynamics on a 51-atom quantum simulator,” Nature, vol. 551, no. 7682, pp. 579–584, 2017, doi: 10.1038/nature24622. [Online]. Available: http://arxiv.org/abs/1707.04344. [Accessed: Jun. 23, 2020]

Quick Navigation

Quantum Outlook 2022 – Hardware / Algorithms / Software / Internet

David Shaw

About the Author

David Shaw has worked extensively in consulting, market analysis & advisory businesses across a wide range of sectors including Technology, Healthcare, Energy and Financial Services. He has held a number of senior executive roles in public and private companies. David studied Physics at Balliol College, Oxford and has a PhD in Particle Physics from UCL. He is a member of the Institute of Physics.
