Quantum software – beneath the quantum hype

Commercial activity around quantum computing has risen to fever pitch over the last year. Many point to the traditional cycle of hype and consolidation in tech markets. Some have been quick to call ‘bullshit’ on much of the activity at the current commercial frontier. Others predict a coming quantum winter. How can investors and early adopters understand what progress is really being made?

Investors in quantum software startups and potential early adopters need foresight on key questions facing the sector:

  • How do the growing number of early devices really compare?
  • Can a hybrid quantum algorithm offer commercial value in the early NISQ era?
  • Are the big-ticket long-term promises of FTQC really getting any closer?
  • Does it make sense to work on software for hardware that doesn’t yet exist?

After visiting QCTIP 2019 last year, Fact Based Insight has again used the recent QCTIP 2020 to explore these questions.

Many great conferences have emerged and continue to emerge in the quantum space. There is a continuing role for specialist academic events and a growing one for business-oriented events. Fact Based Insight has found QCTIP useful because, while it retains the tradition of academic rigour, it is self-consciously focussed on the issues that make a difference in the practical application of the technology.

QCTIP 2020 could not go ahead as originally planned in Cambridge UK due to the ongoing coronavirus crisis. To its great credit, the Riverlane team was able to move the event to a virtual format at short notice. Many delegates will want innovative features such as Q&A via Slack and short poster videos to continue at future events. Riverlane CEO Steve Brierley will be proud of his staff, but will no doubt want to see them return to their day job of building ground-breaking quantum algorithms!

Algorithms for the NISQ era

Since the Google Sycamore system claimed the milestone of quantum supremacy, there has been a surge of interest in what might be achievable with early devices. However, qubit technology is still immature and quantum gate fidelities remain limited. In the NISQ era the common assumption is that we will only be able to perform a short ‘low depth’ quantum circuit before errors pile up and overwhelm the calculation. Variational quantum algorithms have emerged as a leading approach to working within these limitations.

Variational quantum algorithms are a hybrid quantum/classical approach. The core of the algorithm is a short calculation performed on a quantum processor. This is repeatedly run as a sub-routine by a conventional computer, which varies the input parameters until the desired result is achieved. Techniques being explored include QAOA [ ], VQE [ ], VQS [ ], QNN [ ] and others. All of these have potential commercial applications, but only if they can be realised for problems of sufficient scale, outside the reach of conventional computers.
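
To make the hybrid loop concrete, here is a minimal sketch of the variational pattern in Python. The one-qubit Hamiltonian, Ry ansatz and choice of optimiser are our own illustrative assumptions; the quantum subroutine is replaced by a statevector calculation.

```python
import numpy as np
from scipy.optimize import minimize

Z = np.array([[1, 0], [0, -1]], dtype=complex)  # toy one-qubit Hamiltonian

def quantum_subroutine(theta):
    # Stands in for the short circuit run on the quantum processor:
    # prepare Ry(theta)|0> and measure the energy <psi|Z|psi>.
    psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)], dtype=complex)
    return float(np.real(psi.conj() @ Z @ psi))

# The conventional computer repeatedly calls the subroutine, varying the
# input parameter until the energy is minimised.
result = minimize(quantum_subroutine, x0=[0.1], method="COBYLA")
print(result.x, result.fun)  # expect theta ~ pi, energy ~ -1
```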

The proponents of quantum computing frequently point to potential applications in the optimisation problems ubiquitous in business, improved quantum simulation for chemistry and material science (and the industries that depend on them), together with new power in machine learning and in quantitative finance. QCTIP heard about some of the latest developments across all these areas.

Optimisation

Matthew Harrigan (Google) treated QCTIP to a presentation of recent results implementing QAOA for three different classes of problem on the Google Sycamore system [ ]. It turns out Sycamore really can do more than produce random numbers (albeit not yet faster than conventional machines). Unsurprisingly, the best performance is achieved on a simple problem that maps directly to the native hardware layout of the device (in Sycamore’s case, a 2D grid of nearest-neighbour connected qubits). However, problems on a simple grid are already easy to solve conventionally.

To explore more challenging cases, with potentially wider and more industrially relevant applications, Google also looked at two other problems from the standard computer science repertoire: 3-regular MaxCut (example applications in integrated circuit design and communications networks) and the fully connected Sherrington-Kirkpatrick model (example applications in spin glass/Ising models). The performance of Sycamore exceeds that of any other published quantum results. However, performance still tails off rapidly with increasing problem size due to the overheads of working around the device’s limited native connectivity. Harrigan sums up that real challenges remain in scaling up QAOA, when mapped to an actual device, to problem sizes that are industrially relevant.

QAOA Benchmark – Given its relative simplicity, and because it provides a good general test of a device’s gate set and connectivity, Google point to the usefulness of QAOA as a system-level benchmark for near-term devices (they are already able to present Sycamore’s performance alongside results on other superconducting qubit, trapped ion and photonic devices). Fact Based Insight also believes that the ‘landscape’ pictures naturally produced as an offshoot of this work will become an increasingly popular way to communicate with the non-specialist community, at least until increasing problem size puts them out of reach.
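
To illustrate both QAOA and the ‘landscape’ idea, the toy sketch below (our own construction, not Google’s code) computes the depth-1 QAOA energy landscape for MaxCut on a triangle graph by direct statevector simulation.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def op(P, i, n):
    # Pauli P acting on qubit i of an n-qubit register
    return reduce(np.kron, [P if j == i else I2 for j in range(n)])

n, edges = 3, [(0, 1), (1, 2), (0, 2)]  # triangle graph
C = sum(0.5 * (np.eye(2**n) - op(Z, i, n) @ op(Z, j, n)) for i, j in edges)
B = sum(op(X, i, n) for i in range(n))  # transverse-field mixer
plus = np.ones(2**n) / np.sqrt(2**n)    # |+++> initial state

def energy(gamma, beta):
    # depth-1 QAOA state: e^{-i beta B} e^{-i gamma C} |+++>
    psi = expm(-1j * beta * B) @ (expm(-1j * gamma * C) @ plus)
    return float(np.real(psi.conj() @ C @ psi))

# scan the (gamma, beta) 'landscape'; the true MaxCut value here is 2
grid = np.linspace(0, np.pi, 25)
print(max((energy(g, b), g, b) for g in grid for b in grid))
```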

Mario Szegedy (Alibaba) described how the Alibaba Cloud Quantum development platform utilises Alibaba’s massive cloud resources to provide a powerful simulator for quantum development purposes. Empirical tests demonstrate it could simulate the Google Sycamore chip to a circuit depth of 14 in 171 days.

Szegedy has explored adapting QAOA to another computer science classic, the graph isomorphism problem (example applications in image processing and protein structure analysis). Despite progress with a wide variety of problem instances, the approach does seem to confirm the existence of worst-case counter-examples [ ]. Szegedy comments: “These instances do not only look indistinguishable to the graph isomorphism solver, but they also turn out to resist known variational optimisation techniques. This brings the generality of QAOA itself into question.”

Quantum Simulation

Simon Benjamin (Oxford Univ. & Quantum Motion) outlined interesting recent developments in our understanding of the mechanics of variational optimisation of quantum circuits, including a newly realised connection with theory from conventional machine learning.

Benjamin has previously studied the use of VQS to simulate how real-world quantum systems evolve in time. This led to the observation that the mathematical trick of ‘imaginary time evolution’ was a surprisingly good way of finding molecular ground state energies (outperforming VQE using conventional gradient descent) [  ].
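
The trick itself is easy to demonstrate classically: evolving with e^(-Hτ) instead of the physical e^(-iHt) exponentially damps every component of the state except the ground state. A toy sketch with an arbitrary two-level Hamiltonian of our own choosing:

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])  # arbitrary toy Hamiltonian
psi = np.array([1.0, 0.0])               # initial trial state
for _ in range(50):
    psi = expm(-0.1 * H) @ psi           # one imaginary-time step, tau = 0.1
    psi /= np.linalg.norm(psi)           # renormalise: evolution is non-unitary
# energy converges to the exact ground state energy
print(psi @ H @ psi, np.linalg.eigvalsh(H)[0])
```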

More recently it was realised that this success reflects an underlying connection to the concept of ‘natural gradient descent’ studied in conventional machine learning [ ]. Extending the concept of quantum natural gradient descent has allowed Benjamin to propose a new generalised variational method [ ]. Because it no longer assumes perfect (unitary) circuits, it may offer an advantage over other techniques in the presence of noise.

Benjamin also highlighted that error mitigation techniques are likely to be a requirement in the NISQ era. Numerical studies of two prominent techniques (error extrapolation and quasi-probability decomposition) point to just how sensitive their practicality is to the underlying quantum gate error rates. Even with today’s best error rates, calculations at interesting scale (e.g. 80+ qubits at 80+ gate depth) look out of reach. However, if error rates could be reduced by a factor of 10 (very challenging, but not inconceivable) the prospects for NISQ calculations would be transformed [ ].
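
As a flavour of the first technique, zero-noise (Richardson) extrapolation runs the same circuit at artificially amplified noise levels and extrapolates back to the zero-noise limit. The measured values below are hypothetical numbers for illustration:

```python
import numpy as np

# Hypothetical noisy expectation values measured at artificially amplified
# noise levels c (e.g. by stretching or repeating gates); c = 1 is the
# hardware's native noise level.
c = np.array([1.0, 1.5, 2.0])
measured = np.array([0.81, 0.73, 0.66])

# Richardson-style polynomial extrapolation back to the c -> 0 limit
coeffs = np.polyfit(c, measured, deg=2)
print(np.polyval(coeffs, 0.0))  # -> 1.0, the error-mitigated estimate
```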

UK NQCC – Benjamin has recently been appointed to the interim leadership team of the UK’s new National Quantum Computing Centre. He takes the role of Deputy Director of Research. We can no doubt expect a strong programme of variational algorithm work.

Lana Mineh (Phasecraft) discussed the application of VQE to solving problems in a framework commonly used in materials science (the Fermi-Hubbard model – a simplified lattice of interacting particles used to explore problems such as the magnetic and superconducting properties of metals). Simulation suggests that a NISQ device able to retain coherence for a circuit depth of 300-500 (on a fully connected architecture) could achieve quantum advantage over the best exact conventional calculation [ ].

1D Fermi-Hubbard model benchmark – Zapata Computing have proposed using the 1D FHM as an application specific benchmark for near-term quantum devices [ ]. This side-steps the problem of validation as the 1D case can be solved conventionally. A specific implementation is proposed for the Google Sycamore system.  Fact Based Insight also envisages that managing such cross-platform benchmarks will be a good way of demonstrating the wider utility of Orquestra, Zapata’s workflow oriented quantum software platform for enterprise users.

Ophelia Crawford (Riverlane) focussed on strategies to make the quantum calculation at the heart of VQE more efficient. A key step is optimising the structure of the quantum circuit to maximise what the rules of quantum mechanics allow to be measured in parallel. Their approach, called Sorted Insertion, compares favourably with other known techniques [ ].
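
In outline, Sorted Insertion sorts the Hamiltonian’s Pauli terms by coefficient magnitude and greedily inserts each into the first measurement group it is compatible with. The sketch below is a simplification using qubit-wise commutation and an invented toy Hamiltonian; the paper itself also treats general commutation:

```python
def qubitwise_commutes(p, q):
    # Two Pauli strings can be measured together if, on every qubit,
    # their letters are equal or one of them is the identity.
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def sorted_insertion(terms):
    # terms: dict mapping Pauli string -> coefficient
    groups = []
    # place the largest-weight terms first
    for p, c in sorted(terms.items(), key=lambda kv: -abs(kv[1])):
        for g in groups:
            if all(qubitwise_commutes(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

H = {"ZZI": 0.8, "IZZ": 0.8, "XII": 0.3, "IXI": 0.3, "ZIZ": 0.5}
# each group can be measured in a single circuit setting
print(sorted_insertion(H))  # [['ZZI', 'IZZ', 'ZIZ'], ['XII', 'IXI']]
```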

Machine Learning

QCTIP 2019 had heard extensively about quantum machine learning with keynote addresses from both Iordanis Kerenidis (PCQC & QC Ware) and Hartmut Neven (Google). This year growing activity in this area was reflected in the QCTIP 2020 poster session.

BPIFrance – Iordanis Kerenidis and QC Ware have just been awarded a Concours d’Innovation i-Nov award to accelerate quantum machine learning in France. QC Ware have previously worked with Goldman Sachs in this area and will use the award to expand the capabilities of its Forge quantum software platform.

Carlos Bravo-Prieto (Barcelona Supercomputing Centre) presented a poster detailing work on a Variational Quantum Linear Solver (VQLS) with potential to solve linear algebra problems on near-term devices. This approach has been demonstrated solving a linear system of size 32 x 32 on a Rigetti 16Q Aspen quantum back-end [ ].

Logan Wright (Cornell) presented a poster with an interesting perspective on quantum neural networks, using information theory to look at them in terms of the capacity of information they can store (and hence the point at which training data forces them to generalise). This confirms the intuition that quantum neural networks with just classical parameters (such as parameterised quantum circuits) will only be able to access general quantum speedups in the underlying linear algebra [ ]. More general quantum networks continue to offer greater potential on natively quantum data (e.g. data produced by another quantum circuit, or by a quantum sensor).

For more on why quantum sensors may soon not be as niche as they sound read Beating quantum winter – opportunities further up the quantum value chain.

Quantitative Finance

Adrián Pérez-Salinas (Barcelona Supercomputing Centre) presented a poster on implementing options pricing using quantum amplitude estimation. This is an approach that IBM have also investigated with JP Morgan Chase on the IBM Q Tokyo system. Pérez-Salinas shows how unary coding rather than the more usual binary can be used to trade-off increased qubit count for decreased gate count and improved error robustness [ ].
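
The trade-off itself is simple arithmetic: for a pricing model discretised into N levels, binary encoding needs ⌈log2 N⌉ qubits where unary needs N, in exchange for shallower circuits. A trivial illustration with an arbitrarily chosen N:

```python
import math

levels = 64                                  # discretised price levels (arbitrary)
print(math.ceil(math.log2(levels)), levels)  # binary: 6 qubits; unary: 64 qubits
```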

Fact Based Insight sees an impressive cluster of activity developing in Spain. In addition to the activities of the BSC represented at QCTIP, there is Multiverse Computing, a quantum startup focused on the financial sector; Entanglement Partners, which focuses on consulting in quantum information and cyber-security; AMETIC, which helps steer the EU Flagship as part of its strategic advisory board; QWA, with its Quantum4Quants working groups; and the Qubit Institute, which focuses on practical training in quantum technologies.

The NISQ software stack

The NISQ software environment needs to focus on squeezing maximum performance out of the limited resources available. A great deal of commercial activity is already flowing into this space.

The compiler is the single most important tool in the conventional software developer’s armoury. QCTIP heard in particular about some of the latest efforts to develop optimising quantum compilers. These face unique challenges and opportunities across circuit optimisation, placement and routing of qubits, and error compensation.

The first consideration is processing the high-level circuit to see if it can be re-ordered and simplified. This is more complex, but potentially more fruitful, than the conventional case because certain quantum gates ‘commute’, meaning they can be re-ordered to facilitate simplification. Furthermore, the idealised gates of an algorithm may have to be mapped to the actual gates natively supported by the target device. Given current device characteristics, the objective is typically to minimise the number of two-qubit gates that must be physically executed.
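
As a toy illustration using IBM’s Qiskit framework (introduced below), an optimising compiler pass should strip out the deliberately redundant gate pairs in this hand-built circuit:

```python
from qiskit import QuantumCircuit, transpile

qc = QuantumCircuit(2)
qc.h(0); qc.h(0)           # two Hadamards compose to the identity
qc.cx(0, 1); qc.cx(0, 1)   # as do two identical CNOTs
qc.rz(0.3, 1)

# optimisation level 3 applies Qiskit's most aggressive simplification passes
opt = transpile(qc, basis_gates=["u3", "cx"], optimization_level=3)
print(qc.size(), opt.size())  # gate count before and after optimisation
```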

Raban Iten (IBM) presented a new efficient scheme for matching and replacing small template circuits within a larger circuit. Work is currently underway to implement this approach in IBM’s Qiskit so that its practical performance can be evaluated [ ].

Qiskit is IBM’s quantum software development framework and enjoys unrivalled user penetration across the wider quantum community (it has been downloaded over 300,000 times). Together with the online circuit composer, its educationally oriented cousin, it is currently being used to run about 400 million quantum circuit shots on a typical day on the IBM Q cloud. Though a complete solution, Qiskit is not a closed ecosystem. The IBM Q Showcase emphasises contributions from the community from Q-CTRL, CQC, Quantum Benchmark, Max Kelsen and Xanadu.

John van de Wetering (Radboud University Nijmegen) showed how ZX-calculus can be used to assist in the identification of circuit simplifications. This works by transforming the circuits into equivalent ZX-diagrams to which a simpler set of re-writing rules can be applied [ ].
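
A minimal sketch of this workflow, assuming the open-source PyZX library associated with this line of work:

```python
import pyzx as zx

circ = zx.generate.cliffordT(4, 60)  # a random 4-qubit Clifford+T circuit
g = circ.to_graph()                  # convert to an equivalent ZX-diagram
zx.simplify.full_reduce(g)           # apply the ZX re-writing rules
opt = zx.extract_circuit(g)          # turn the reduced diagram back into a circuit
print(len(circ.gates), len(opt.gates))  # gate count before and after
```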

t|ket> is a retargetable compiler system (supporting input from multiple programming languages and generating output for execution on multiple quantum hardware back-ends). A current feature is a clever heuristic approach to placement and routing of qubits to overcome physical qubit connectivity constraints in the target device (it earned a significant plaudit by being referenced in this regard in Google’s recent paper on QAOA on Google Sycamore [ ]). The latest 0.5 release adds support for processors from AQT and Honeywell, and for the Microsoft Q# simulator. For the medium term, Fact Based Insight also notes that the father of t|ket>, CQC’s Ross Duncan, is a leading proponent of the ZX-calculus. In future we expect to see further ZX-inspired optimisation in t|ket>.

Joseph Emerson (Quantum Benchmark) showed how the technique of cycle benchmarking can be used not just to measure errors on active qubits, but also those induced on unrelated idle qubits [ ]. Crucially, once these errors have been characterised at the individual device level, compensating single-qubit gates can be inserted at the compiler level. Quantum Benchmark identifies this ‘error adaptive compiling’ as a potential key NISQ technique. Emerson presented data from across a variety of IBM 5Q chips, with benefits of up to a 10 times suppression in error, extending the native capabilities of the hardware.

Randomized benchmarking has been the de facto standard for hardware groups presenting results on gate fidelity [ ]. But as Emerson told QCTIP “randomized benchmarking only shows the tip of the iceberg. It’s the actual cross-talk and correlated errors that are the true challenge”. Cycle benchmarking seeks to address this gap.

For the medium term, Emerson sees cycle benchmarking as particularly pertinent to the challenge of suppressing errors as devices scale-up by introducing multiplexed qubit addressing and control.

True-Q is Quantum Benchmark’s low-level error analysis tool-set and quantum compiler.  It features the randomized compiling technique originally developed by Quantum Benchmark to overcome systematic control errors. This has now been expanded with the new error adaptive compiling technique. True-Q is being successfully used across both superconducting qubit and trapped ion devices.
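
The principle behind randomized compiling can be checked numerically: dress each two-qubit gate with random Paulis and the matching compensating Paulis so that the ideal circuit is unchanged, while coherent errors are averaged into better-behaved random noise. A toy numpy verification of the identity (our own sketch, not True-Q code):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
PAULIS = [I2, X, Y, Z]

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

rng = np.random.default_rng(1)
# random Pauli 'twirl' layer inserted before the CNOT
P = np.kron(PAULIS[rng.integers(4)], PAULIS[rng.integers(4)])
# compensating layer after the CNOT: Q = C P C^dagger
Q = CNOT @ P @ CNOT.conj().T

# Q is itself a tensor product of Paulis (up to sign), and Q C P = C,
# so inserting the pair leaves the ideal circuit unchanged.
print(np.allclose(Q @ CNOT @ P, CNOT))  # True
```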

When we operate beyond quantum supremacy we can no longer directly check our calculations on a conventional computer. Instead we will want an error-aware compiler to provide some certification of the hardware output based on the measured error characteristics of the device. Fact Based Insight believes True-Q may be well placed to lead the way on this.

NISQ hardware

Popular coverage has often focused on the superconducting qubit hardware used by early leaders such as Google, IBM and Rigetti. However rivals are pursuing a range of different approaches including trapped ion, integrated photonic, silicon spin, neutral atom, topological and other qubit technologies. Each points to their own unique advantages. The list of early quantum hardware providers continues to grow.

The QCTIP hardware panel discussion explored the challenges being faced in scaling up devices across technologies and their key pros and cons.

  • Building better two-qubit gates – Ilana Wisby (OQC) summed up “Over the last year, leading hardware players have backed off chasing higher and higher headline qubit numbers; instead they are prioritising quality”. This focus reflects the struggle to maintain two-qubit gate fidelities across larger devices. Exactly what types of two-qubit gates you can execute also matters. “It’s an engineering and material science problem and there’s lots of R&D still to do”. Wisby hints that OQC are about to announce performance results for their own coaxmon superconducting qubit tech. Laser-driven trapped ion gates still hold the record for gate performance in the lab.
  • Qubit connectivity – Winfried Hensinger (Universal Quantum) drew attention to some of the wider benefits of trapped ion technology, for example the potential for easier all-to-all qubit connectivity. The Sussex blueprint [ ], to which Universal Quantum is working, uses global microwave fields to drive gates and promises extended connectivity via ion shuffling. Hensinger also hints that a big announcement is due shortly.
  • Control multiplexing – Fernando Gonzalez-Zalba (Hitachi Europe) is working to develop silicon spin qubit technology and emphasises the compatibility of this approach with existing CMOS fabrication technology. This helps with a key issue faced by many “We need to solve the problem of addressing millions of qubits. Being able to build monolithically in silicon is a key approach to address this”. In a big announcement breaking just after QCTIP, teams in UNSW [ ] and QuTech [ ] have operated silicon spin qubits in the 1.1-1.5 K range, a significantly easier cryogenic regime in which to operate a CMOS control system.
  • Special purpose architectures – Anthony Laing (Univ. of Bristol & Duality Quantum Photonics) is building special purpose quantum computers using integrated photonics technology. “Duality is targeting industrial applications and wants to deliver quantum advantage in those applications in 3-5 years”. This works by deliberately mapping the device architecture to the problem of interest (e.g. the bosonic statistics of photons in wave-guides is analogous to vibrational excitations in molecules, which can be intimately tied to driving reaction rates [ ]).
    Nor is integrated photonics just for special-purpose devices. PsiQ has raised $215m of funding and is seeking to build a 1 million qubit device in 5-8 years.

The proliferation of devices, with technologies and architectures promising a range of different capabilities, is sharply increasing the sector’s focus on benchmarks that allow different devices to be compared.

Quantum Volume (QV) – Since 2017 IBM have promoted this measure for those wanting a single figure-of-merit with which to compare different devices [ ]. It captures the challenge of implementing a random set of generic two-qubit gate operations between randomly selected qubits. It naturally builds in the degree to which device control and low-level compilation can compensate for gate errors and compose gates in terms of the device’s native gate set and connectivity. By design it gives equal weight to the number of qubits that can be managed and the depth of circuit evolutions that can be performed. Crudely speaking, it captures the largest ‘square-shaped’ circuit that the device can on average successfully run.
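
A minimal sketch of the resulting figure-of-merit, using hypothetical heavy-output data and ignoring the statistical confidence requirements of the full protocol:

```python
# Hypothetical heavy-output probabilities for square model circuits of
# width/depth n; real data would be averaged over many random circuits.
heavy = {2: 0.84, 3: 0.79, 4: 0.71, 5: 0.64}
passing = [n for n, p in heavy.items() if p > 2 / 3]  # IBM's pass threshold
print(2 ** max(passing))  # Quantum Volume = 16
```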

Honeywell have recently used QV in press releases surrounding the trapped ion devices they have unveiled. Universal Quantum has been able to use QV to illustrate the architectural trade-off between different routing strategies on their planned devices [ ]. Dedicated technologists will always want the ‘complete detail’ on underlying performance specs. However, Fact Based Insight believes QV could prove useful for those wanting a simple single-number summary, particularly if it proves a useful way to broadly relate NISQ algorithms to the scale of device they require.

Prospects for NISQ quantum advantage

Fact Based Insight sees very real progress in NISQ algorithms over the last year. There are also growing indications that any true quantum advantage will require significant improvements in the performance specs of the hardware currently available.

In his well-respected blog, Scott Aaronson (Univ. of Texas at Austin) recently sounded a warning on what he identifies as a fundamental trap in quantum algorithms research: “to treat as promising that a quantum algorithm works at all, or works better than some brute-force classical algorithm, without asking yourself whether there are any indications that your approach will ever be able to exploit interference amplitudes to outperform the best classical algorithm”.

Fact Based Insight wholeheartedly supports entrepreneurs who are happy to push ahead of academic theory. However, it would be good to see more of them trying to convey the intuition on which their approach is based.

A growing number of vendors are pointing to the opportunity for businesses to see benefits from quantum-inspired algorithms running on conventional hardware. Microsoft points to trials on Azure Quantum of MRI pulse sequence optimisation and of vehicle scheduling logistics. Hitachi points to financial sector users of its CMOS Classical Annealing Machine. Fact Based Insight sees this as a valid approach, especially where businesses can learn from thinking about their problems in new ways and where a future upgrade path to quantum advantage may be available.

Algorithms for FTQC

The long-term promise of quantum computers rests on their ability to run certain quantum algorithms that in some cases offer remarkable (exponential) speedups over the best known conventional algorithms [ ]. Unfortunately the large scale FTQC systems required to complete these calculations are commonly thought to be still 10-20 years away. Caltech professor John Preskill talks of “the quantum chasm” the field must cross to move from hundreds of qubits to the millions required. Some algorithms also require the development of a new quantum memory technology, QRAM.

Key quantum algorithms with (almost) exponential speedup:
  • Phase estimation – used as a primitive in other algorithms, including quantum simulation.
  • Quantum Fourier transform – used in Shor’s algorithm for integer factorisation.
  • Harrow-Hassidim-Lloyd (HHL) – used for solving linear systems.

Key quantum algorithms with quadratic speedup:
  • Amplitude amplification – used in Grover’s search algorithm (see the sketch below).
  • Quantum walks – used to accelerate classical random walks (some exponential speedups).
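
As a reminder of where the quadratic speedup comes from, the sketch below simulates Grover search over N = 64 items: roughly (π/4)√N iterations of oracle plus diffusion, versus around N/2 classical queries.

```python
import numpy as np

n = 6; N = 2 ** n; marked = 13          # search space of 64 items, one marked
s = np.ones(N) / np.sqrt(N)             # uniform superposition
psi = s.copy()
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    psi[marked] *= -1                   # oracle: phase-flip the marked item
    psi = 2 * s * (s @ psi) - psi       # diffusion: inversion about the mean
print(iterations, abs(psi[marked]) ** 2)  # 6 iterations; success prob ~ 0.997
```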

In some areas this is a threat: quantum computers will one day be able to break the public key cryptography on which current Internet security depends. But the upside is massive. Many point to the potential for large scale quantum simulation to drive a new era of discovery in chemistry and materials science, with big impacts in sectors from agriculture to pharmaceuticals. Work continues apace.

Cryptography

Craig Gidney (Google) has earned the nickname ‘The Compiler’ within the quantum community. He does not want to accelerate the day future quantum computers could break the current Internet, but he points to the importance of having a robust understanding of what exactly it would take. Gidney presented detailed work on how to physically implement and optimise the required Shor’s algorithm calculations [ ].
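
For intuition, only the order-finding core of Shor’s algorithm is quantum; the surrounding scaffolding is classical. The toy sketch below factors 15, brute-forcing the order where a quantum computer would use phase estimation:

```python
from math import gcd

N, a = 15, 7   # number to factor and a coprime base
# Order finding: the step a quantum computer performs with phase estimation.
# Here we brute-force the smallest r with a^r = 1 (mod N).
r = 1
while pow(a, r, N) != 1:
    r += 1
# Shor's classical post-processing: for even r, gcd(a^(r/2) +/- 1, N)
# yields non-trivial factors (with good probability over the choice of a).
assert r % 2 == 0
x = pow(a, r // 2, N)
print(r, gcd(x - 1, N), gcd(x + 1, N))  # r = 4, factors 3 and 5
```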

Many previous forward-looking run-time assessments of large scale quantum algorithms have focused solely on a presumed key bottleneck step (magic state production). Gidney looked at all steps of the calculation, throwing up a range of interesting additional insights: minimising the number of logical qubits is not always optimal, and arithmetic optimisations inspired by conventional computing techniques can be even more important than specifically quantum know-how.

The work assumes hardware performance stats that look like a natural evolution of Google’s current technology. The projected calculation would take 8 hours on a machine with 20 million physical qubits (about 7 ‘megaqubit-days’). Increasing the size of the key in conventional public key encryption technology doesn’t help that much.

While estimates for this code-breaking calculation have come down by a factor of 100 over the last 5 years, Internet users can take some comfort that this new estimate is in line with other less detailed estimates in use over the last 18 months. It could still be disrupted, however, by a technology achieving materially different performance from Gidney’s assumptions: relatively noisy qubits, only nearest-neighbour qubit connectivity, and fault tolerance implemented via the surface code.

Hidden Subgroup Problem – Integer factorisation and the discrete logarithm problem are at the heart of conventional public key cryptography. Each can be reduced to solving the hidden subgroup problem for finite Abelian groups, and it’s that problem that Shor’s algorithm solves. Lattice cryptography is a leading contender to form the basis of new post-quantum crypto protocols. These rely on the hardness of the shortest vector problem. This reduces to the hidden subgroup problem for the non-Abelian dihedral group for which no efficient quantum algorithm is known.

For more on the world’s preparations to meet this challenge read Quantum Safe Cryptography – waiting to save the world.

Quantum Simulation

QCTIP 2019 had heard Andrew Childs (QuICS) survey the state of the art in quantum simulation techniques. One indication of continuing progress in this field is to look back at the challenges Childs set for improving quantum simulation algorithms: tighter error bounds, faster methods for structured problems and more efficient methods of implementing QSP. QCTIP 2020 saw significant progress in each of these areas.

Product formulas (Trotterisation) are one of the main quantum algorithmic approaches for the simulation of quantum chemistry. Yuan Su (Univ. of Maryland) described an analysis of Trotter error, backing up empirical results that have shown the strong relative performance of product formulas over rival approaches such as Taylor series [ ]. Yingkai Ouyang (Univ. of Sheffield) introduced SparSto, a streamlined form of this approach that simplifies the calculation (sparsification) while also using randomising (stochastic) techniques to suppress errors [ ].
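
The idea behind product formulas, and their error, is easy to check numerically: for non-commuting terms A and B, (e^(-iAt/n) e^(-iBt/n))^n approaches e^(-i(A+B)t) as n grows. A toy check with Pauli matrices:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = X + Z                      # toy Hamiltonian with non-commuting terms
t = 1.0
exact = expm(-1j * t * H)
for n in (1, 4, 16, 64):
    step = expm(-1j * (t / n) * X) @ expm(-1j * (t / n) * Z)
    trotter = np.linalg.matrix_power(step, n)
    # spectral-norm error shrinks roughly as t^2 / n (first-order Trotter)
    print(n, np.linalg.norm(trotter - exact, 2))
```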

Brigitta Whaley (Berkeley CQIC) emphasised recent progress with another technique, quantum signal processing (qubitisation). This is arguably the most natural approach to simulating quantum systems. However, a previous obstacle to the wider application of this technique has been the difficulty of efficiently estimating the phase factors required to configure the quantum calculation. Whaley described a new method for solving this problem [ ]. Interestingly, this also has potential application to solving linear systems (with potentially wide application in machine learning).

Tackling the problem from a more general direction, Barbara Terhal (QuTech) pointed to the importance of phase estimation as an underlying quantum algorithm. “The real workhorse of full scale quantum algorithms”, it is used as a sub-routine in many widely discussed algorithms, for example Shor’s factoring algorithm, the HHL linear algebra algorithm and quantum simulation in general. Terhal discussed the latest techniques to speed up its evaluation [ ].
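
In outline, phase estimation writes an eigenphase φ of a unitary into m ancilla qubits, so the measured outcome k satisfies k/2^m ≈ φ with high probability. A direct numpy evaluation of the textbook outcome distribution (our own illustration):

```python
import numpy as np

phi, m = 0.303, 8       # true eigenphase and number of ancilla qubits
M = 2 ** m
j = np.arange(M)
# amplitude on outcome k after the inverse QFT: (1/M) sum_j e^{2 pi i j (phi - k/M)}
amps = np.array([np.exp(2j * np.pi * j * (phi - k / M)).sum() / M
                 for k in range(M)])
probs = np.abs(amps) ** 2
k_best = int(np.argmax(probs))
print(k_best / M, probs[k_best])  # estimate within 2**-m of phi
```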

While nothing overturns the view that these techniques require large scale FTQC, it is clear that very active progress is continuing on the details of how these techniques can best be realised in practice.

The FTQC Software Stack

In theory we already know how to stop errors piling up in longer calculations: we can use quantum error correcting codes to map multiple noisy physical qubits into a smaller number of reliable logical qubits; with these we construct fault tolerant logical gate operations. The rub is that the overheads are very high and we don’t yet have the know-how to build the large scale systems required.

The Surface Code – the most widely cited approach to quantum fault tolerance
– Works for physical qubits and gates with errors below a threshold of about 1%.
– Only requires gates between ‘nearest neighbour’ qubits in a 2D lattice.
– T-gates cannot be implemented natively; instead ‘magic state’ injection is used to complete a universal gate set. This is a likely bottleneck step for practical calculations.
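
The flavour of fault tolerance can be seen even in the simplest possible code. The sketch below uses a distance-d bit-flip repetition code (far simpler than the surface code) to show logical errors being suppressed as d grows, provided the physical error rate is below threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

def logical_error_rate(p, d, shots=200_000):
    # distance-d repetition code under i.i.d. bit flips with majority-vote
    # decoding: a logical error needs more than d//2 of the d qubits to flip
    flips = rng.random((shots, d)) < p
    return (flips.sum(axis=1) > d // 2).mean()

for p in (0.2, 0.1, 0.01):   # physical error rates
    print(p, [round(logical_error_rate(p, d), 5) for d in (3, 5, 7)])
```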

In presenting his work on integer factorisation, Craig Gidney observed “One of the consequences of making real physical estimates is you realise just how incredibly expensive fault tolerance is. If there is a quantum winter this may be a big reason why it happens”. Specifically, the reaction time of the conventional computation required to ‘decode’ the fault tolerance protocol and apply corrections in real time is a limiting factor in Gidney’s detailed work. He naturally asks the question – with only current error correction techniques and the overhead they imply, would quantum algorithms offering ‘only’ quadratic speedup be useful in practice? Indeed this challenge has recently been flagged by Earl Campbell and Ashley Montanaro, both chairing sessions at QCTIP [ ].

In work that starts to rise to this challenge, Nicolas Delfosse (Microsoft) presented detailed work on how conventional computer architecture techniques can be used to optimise a schematic design for the Union-Find decoder, a popular approach for surface code implementation. Oriented towards an eventual ASIC based implementation, the design retains the accuracy of Union-Find while maintaining the required latency and scalability [ ].

The surface code is not the only game in town. Aleksander Kubica (Perimeter Institute) summarised the latest progress with color codes. Previously a serious obstacle was inefficient decoding times. Kubica showed how surface code decoding techniques (including the Union-Find decoder) can be adapted to allow efficient color code decoding. This brings the 2D color code close to the performance of the surface code in terms of error threshold [ ]. It offers the potential advantage of only requiring 3-way qubit connectivity (versus the 4-way assumed in most surface code implementations). If we could move to more advanced schemes based on 3D color codes, we could also potentially avoid the need for magic states.

Lower qubit connectivity might not sound like a good thing, but as Christopher Chamberland (AWS, but presenting work from his time at IBM) pointed out, certain qubit technologies (e.g. fixed-frequency transmon superconducting qubits) can suffer from a ‘frequency crowding’ effect that makes lower connectivity much easier to fabricate while still minimising cross-talk. Chamberland presented three codes designed to work with such architectures: the heavy-hexagon code, the heavy-square code [ ] and a triangular color code [ ]. Using the Restriction decoder, he showed that these have thresholds only modestly worse than the standard surface code. Fact Based Insight observes that the layout of the recently released IBM 53Q Rochester system is exactly the heavy-hexagonal lattice required to implement the heavy-hexagon code and the triangular color code.

Fact Based Insight believes further innovation in this area will have a significant part to play in the long-term development of quantum computing.

For more on the latest developments in this area read Quantum error correction – from geek to superhero.

Engineering future success

In his keynote address, Matthias Troyer (Microsoft) encouraged the audience to see the big picture. New physics drives new technology but this is a long journey. Analogue quantum simulators based on ultra-cold atoms have outperformed conventional computers for over ten years. Digital quantum computers will ultimately do much more but existing qubit technologies are too noisy.

In the short-term Troyer pointed to the opportunity for clients to see benefits now from quantum-inspired algorithms running on conventional hardware, but with the additional upside that future quantum advantage could follow.

Troyer emphasised the important role that software has to play in this process. Pointing out that advances in algorithms have often outpaced Moore’s Law, he gave advice on the type of teams companies need to build: “It’s not just important to solve the problem in principle; you have to work out how to solve it in practice. Good software engineers focussed on this task are essential.”

QCTIP 2021 will take place in Bristol, UK. Fact Based Insight is already looking forward to it!

Actions for Business

Businesses unsure of how and when to engage with the quantum revolution should consider:

  • What is the potential impact of quantum computing on my sector? Is it likely to be incremental or disruptive?
  • What are my competitors doing? How and on what timescale do investors expect us to respond?
  • Those seeking benefits on a 1-2 year horizon should focus on quantum-inspired approaches, particularly where this gives them experience thinking about their problems in new ways. A potential upgrade path when quantum advantage is available is a useful bonus.
  • Those happy to focus on a 3-5 year horizon have a wide selection of partners and tools with which to work. They need to be ready however for the possibility that useful quantum advantage may not be delivered in this timescale.
  • Those looking to the full promise of FTQC should be ready to wait 10-20 years for direct end-use benefits to be realised. However, they may see significant indirect benefits along the way.
  • Working across these strategies, do I have the right internal staff skills and external partners to seize these opportunities and respond to new threats in good time?

Investors seeking opportunities in quantum software:

  • Be ready for a growing distinction between hardware and software focussed on NISQ opportunities versus that prioritising the long-term FTQC prize. Be ready for senior staff to disagree over the way forward.
  • There is no guarantee that significant commercial applications will emerge in the NISQ era. Investors need to be ready to wait a long time for returns.
  • There is no guarantee that applications will not emerge in the NISQ era! Executive teams need to be ready to adapt to emerging opportunities or risk being eclipsed by more nimble rivals.
  • This stuff isn’t easy. Perhaps more than any other sector, the expertise of extremely bright individuals will be at an absolute premium. Business and financial models need to reflect this.

References

[1] A. W. Cross, L. S. Bishop, S. Sheldon, P. D. Nation, and J. M. Gambetta, “Validating quantum computers using randomized model circuits,” Phys. Rev. A, vol. 100, no. 3, p. 032328, Sep. 2019, doi: 10.1103/PhysRevA.100.032328. Available: https://link.aps.org/doi/10.1103/PhysRevA.100.032328. [Accessed: Apr. 22, 2020]
[2] A. Erhard et al., “Characterizing large-scale quantum computers via cycle benchmarking,” Nat. Commun., vol. 10, no. 1, p. 5347, 2019, doi: 10.1038/s41467-019-13068-7. Available: http://arxiv.org/abs/1902.08543. [Accessed: Apr. 22, 2020]
[3] B. Lekitsch et al., “Blueprint for a microwave trapped ion quantum computer,” Science Advances, vol. 3, no. 2, p. e1601540, Feb. 2017, doi: 10.1126/sciadv.1601540. Available: https://advances.sciencemag.org/content/3/2/e1601540. [Accessed: Apr. 22, 2020]
[4] K. Wright et al., “Benchmarking an 11-qubit quantum computer,” Nature Communications, vol. 10, no. 1, pp. 1–6, Nov. 2019, doi: 10.1038/s41467-019-13534-2. Available: https://www.nature.com/articles/s41467-019-13534-2. [Accessed: Apr. 22, 2020]
[5] M. Webber, S. Herbert, S. Weidt, and W. Hensinger, “Efficient Qubit Routing for a Globally Connected Trapped Ion Quantum Computer,” arXiv:2002.12782 [quant-ph], Feb. 2020. Available: http://arxiv.org/abs/2002.12782. [Accessed: Apr. 22, 2020]
[6] C. Cade, L. Mineh, A. Montanaro, and S. Stanisic, “Strategies for solving the Fermi-Hubbard model on near-term quantum computers,” arXiv:1912.06007 [quant-ph], Dec. 2019. Available: http://arxiv.org/abs/1912.06007. [Accessed: Apr. 22, 2020]
[7] S. Ramos-Calderer et al., “Quantum unary approach to option pricing,” arXiv:1912.01618 [quant-ph], Dec. 2019. Available: http://arxiv.org/abs/1912.01618. [Accessed: Apr. 22, 2020]
[8] L. G. Wright and P. L. McMahon, “The Capacity of Quantum Neural Networks,” arXiv:1908.01364 [physics, physics:quant-ph], Aug. 2019. Available: http://arxiv.org/abs/1908.01364. [Accessed: Apr. 22, 2020]
[9] C. Bravo-Prieto, R. LaRose, M. Cerezo, Y. Subasi, L. Cincio, and P. J. Coles, “Variational Quantum Linear Solver: A Hybrid Algorithm for Linear Systems,” arXiv:1909.05820 [quant-ph], Sep. 2019. Available: http://arxiv.org/abs/1909.05820. [Accessed: Apr. 22, 2020]
[10] P.-L. Dallaire-Demers, M. Stęchły, J. F. Gonthier, N. T. Bashige, J. Romero, and Y. Cao, “An application benchmark for fermionic quantum simulations,” arXiv:2003.01862 [quant-ph], Mar. 2020. Available: http://arxiv.org/abs/2003.01862. [Accessed: Apr. 22, 2020]
[11] R. Duncan, A. Kissinger, S. Perdrix, and J. van de Wetering, “Graph-theoretic Simplification of Quantum Circuits with the ZX-calculus,” arXiv:1902.03178 [quant-ph], Mar. 2020. Available: http://arxiv.org/abs/1902.03178. [Accessed: Apr. 22, 2020]
[12] C. Chamberland, A. Kubica, T. J. Yoder, and G. Zhu, “Triangular color codes on trivalent graphs with flag qubits,” New J. Phys., vol. 22, p. 023019, Feb. 2020, doi: 10.1088/1367-2630/ab68fd. Available: https://iopscience.iop.org/article/10.1088/1367-2630/ab68fd. [Accessed: Apr. 22, 2020]
[13] C. Chamberland, G. Zhu, T. J. Yoder, J. B. Hertzberg, and A. W. Cross, “Topological and Subsystem Codes on Low-Degree Graphs with Flag Qubits,” Phys. Rev. X, vol. 10, no. 1, p. 011022, Jan. 2020, doi: 10.1103/PhysRevX.10.011022. Available: https://link.aps.org/doi/10.1103/PhysRevX.10.011022. [Accessed: Apr. 22, 2020]
[14] A. Kubica and N. Delfosse, “Efficient color code decoders in d ≥ 2 dimensions from toric code decoders,” arXiv:1905.07393 [quant-ph], May 2019. Available: http://arxiv.org/abs/1905.07393. [Accessed: Apr. 22, 2020]
[15] R. Iten, D. Sutter, and S. Woerner, “Efficient template matching in quantum circuits,” arXiv:1909.05270 [quant-ph], Sep. 2019. Available: http://arxiv.org/abs/1909.05270. [Accessed: Apr. 22, 2020]
[16] P. Das et al., “A Scalable Decoder Micro-architecture for Fault-Tolerant Quantum Computing,” arXiv:2001.06598 [quant-ph], Jan. 2020. Available: http://arxiv.org/abs/2001.06598. [Accessed: Apr. 22, 2020]
[17] C. Gidney and M. Ekerå, “How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits,” arXiv:1905.09749 [quant-ph], Dec. 2019. Available: http://arxiv.org/abs/1905.09749. [Accessed: Apr. 22, 2020]
[18] E. Campbell, A. Khurana, and A. Montanaro, “Applying quantum algorithms to constraint satisfaction problems,” Quantum, vol. 3, p. 167, Jul. 2019, doi: 10.22331/q-2019-07-18-167. Available: http://arxiv.org/abs/1810.05582. [Accessed: Apr. 22, 2020]
[19] T. E. O’Brien, B. Tarasinski, and B. M. Terhal, “Quantum phase estimation of multiple eigenvalues for small-scale (noisy) experiments,” New J. Phys., vol. 21, no. 2, p. 023022, Feb. 2019, doi: 10.1088/1367-2630/aafb8e. Available: https://doi.org/10.1088/1367-2630/aafb8e. [Accessed: Apr. 22, 2020]
[20] Y. Dong, X. Meng, K. B. Whaley, and L. Lin, “Efficient Phase Factor Evaluation in Quantum Signal Processing,” arXiv:2002.11649 [physics, physics:quant-ph], Feb. 2020. Available: http://arxiv.org/abs/2002.11649. [Accessed: Apr. 22, 2020]
[21] Y. Ouyang, D. R. White, and E. T. Campbell, “Compilation by stochastic Hamiltonian sparsification,” Quantum, vol. 4, p. 235, Feb. 2020, doi: 10.22331/q-2020-02-27-235. Available: http://arxiv.org/abs/1910.06255. [Accessed: Apr. 22, 2020]
[22] A. M. Childs, Y. Su, M. C. Tran, N. Wiebe, and S. Zhu, “A Theory of Trotter Error,” arXiv:1912.08854 [cond-mat, physics:physics, physics:quant-ph], Jan. 2020. Available: http://arxiv.org/abs/1912.08854. [Accessed: Apr. 22, 2020]
[23] A. Montanaro, “Quantum algorithms: an overview,” npj Quantum Inf., vol. 2, no. 1, p. 15023, 2016, doi: 10.1038/npjqi.2015.23. Available: http://arxiv.org/abs/1511.04206. [Accessed: Apr. 22, 2020]
[24] L. Petit et al., “Universal quantum logic in hot silicon qubits,” Nature, vol. 580, no. 7803, pp. 355–359, Apr. 2020, doi: 10.1038/s41586-020-2170-7. Available: https://www.nature.com/articles/s41586-020-2170-7. [Accessed: Apr. 22, 2020]
[25] C. H. Yang et al., “Operation of a silicon quantum processor unit cell above one kelvin,” Nature, vol. 580, no. 7803, pp. 350–354, Apr. 2020, doi: 10.1038/s41586-020-2171-6. Available: https://www.nature.com/articles/s41586-020-2171-6. [Accessed: Apr. 22, 2020]
[26] C. Sparrow et al., “Simulating the vibrational quantum dynamics of molecules using photonics,” Nature, vol. 557, no. 7707, pp. 660–667, May 2018, doi: 10.1038/s41586-018-0152-9. Available: https://www.nature.com/articles/s41586-018-0152-9. [Accessed: Apr. 22, 2020]
[27] O. Crawford, B. van Straaten, D. Wang, T. Parks, E. Campbell, and S. Brierley, “Efficient quantum measurement of Pauli operators in the presence of finite sampling error,” arXiv:1908.06942 [quant-ph], Apr. 2020. Available: http://arxiv.org/abs/1908.06942. [Accessed: Apr. 22, 2020]
[28] S. Endo, S. C. Benjamin, and Y. Li, “Practical Quantum Error Mitigation for Near-Future Applications,” Phys. Rev. X, vol. 8, no. 3, p. 031027, Jul. 2018, doi: 10.1103/PhysRevX.8.031027. Available: https://link.aps.org/doi/10.1103/PhysRevX.8.031027. [Accessed: Apr. 22, 2020]
[29] B. Koczor and S. C. Benjamin, “Quantum natural gradient generalised to non-unitary circuits,” arXiv:1912.08660 [quant-ph], Mar. 2020. Available: http://arxiv.org/abs/1912.08660. [Accessed: Apr. 22, 2020]
[30] M. Szegedy, “What do QAOA energies reveal about graphs?,” arXiv:1912.12277 [quant-ph], Dec. 2019. Available: http://arxiv.org/abs/1912.12277. [Accessed: Apr. 22, 2020]
[31] F. Arute et al., “Quantum Approximate Optimization of Non-Planar Graph Problems on a Planar Superconducting Processor,” arXiv:2004.04197 [quant-ph], Apr. 2020. Available: http://arxiv.org/abs/2004.04197. [Accessed: Apr. 22, 2020]
[32] X. Yuan, S. Endo, Q. Zhao, Y. Li, and S. Benjamin, “Theory of variational quantum simulation,” Quantum, vol. 3, p. 191, Oct. 2019, doi: 10.22331/q-2019-10-07-191. Available: http://arxiv.org/abs/1812.08767. [Accessed: Apr. 22, 2020]
[33] E. Farhi and H. Neven, “Classification with Quantum Neural Networks on Near Term Processors,” arXiv:1802.06002 [quant-ph], Aug. 2018. Available: http://arxiv.org/abs/1802.06002. [Accessed: Apr. 21, 2020]
[34] E. Farhi, J. Goldstone, and S. Gutmann, “A Quantum Approximate Optimization Algorithm,” arXiv:1411.4028 [quant-ph], Nov. 2014. Available: http://arxiv.org/abs/1411.4028. [Accessed: Apr. 21, 2020]
[35] A. Peruzzo et al., “A variational eigenvalue solver on a photonic quantum processor,” Nature Communications, vol. 5, no. 1, pp. 1–7, Jul. 2014, doi: 10.1038/ncomms5213. Available: https://www.nature.com/articles/ncomms5213. [Accessed: Apr. 21, 2020]
[36] J. Stokes, J. Izaac, N. Killoran, and G. Carleo, “Quantum Natural Gradient,” arXiv:1909.02108 [quant-ph, stat], Dec. 2019. Available: http://arxiv.org/abs/1909.02108. [Accessed: Apr. 21, 2020]
[37] F. Arute et al., “Quantum supremacy using a programmable superconducting processor,” Nature, vol. 574, no. 7779, pp. 505–510, Oct. 2019, doi: 10.1038/s41586-019-1666-5. Available: https://www.nature.com/articles/s41586-019-1666-5. [Accessed: Apr. 21, 2020]
David Shaw

About the Author

David Shaw has worked extensively in consulting, market analysis & advisory businesses across a wide range of sectors including Technology, Healthcare, Energy and Financial Services. He has held a number of senior executive roles in public and private companies. David studied Physics at Balliol College, Oxford and has a PhD in Particle Physics from UCL. He is a member of the Institute of Physics. Follow David on Twitter and LinkedIn