Introduction to Hamiltonian simulation

Simulating other quantum systems was the founding idea behind quantum computing, and in many eyes is still its number one killer app. Even without the maths, it helps to understand some of the jargon used in this process.

‘Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.’ – Richard Feynman [2]

Classical mechanics gives us the equations of motion of a system based on Newton’s celebrated law, F = ma. Later, equivalent formulations of the maths by Lagrange and Hamilton helped extend classical mechanics to more generalised coordinate systems. It’s these mathematical structures that were adapted for the purposes of early quantum mechanics.

In quantum mechanics the dynamics of a system are specified by a piece of maths referred to as the ‘Hamiltonian’.  This is such a central concept in quantum mechanics that it crops up, confusingly, with multiple different usages in quantum computing.

Hamiltonian – the actual physics of the quantum computing hardware itself. Often discussed, for example, in the analysis of hardware fidelity.

Hamiltonian – the qubit evolution that a particular quantum algorithm is equivalent to.

Hamiltonian – the physics of another system we are trying to simulate.

In this introduction we are focussing on the terminology used in quantum algorithms that seek to simulate other systems in chemistry and materials science. In practice, most of these questions are dominated by the physics of ‘electrons moving around’. Regardless of whether we are trying to calculate the ground state energy of a molecule or study how a system evolves over time, many of the concepts remain the same. Regardless of whether we are implementing conventional computational quantum chemistry on a classical computer, VQE on a NISQ device or a full-scale Hamiltonian simulation on a fault-tolerant quantum computer (FTQC), much of the same jargon is used. (For a full introduction see [1].)

Even without delving into the full maths, Fact Based Insight believes it’s useful to understand some of the jargon used at each stage in the process.

Step 1. Define the Hamiltonian to be simulated

The first choice is to decide which formulation of quantum mechanics is the most convenient for us to use to specify the Hamiltonian we want to simulate.

1st quantisation – In place of the classical equation of motion, quantum mechanics gives us the time-dependent Schrödinger equation iħ ∂Ψ/∂t = ĤΨ, where Ψ is the quantum wavefunction and Ĥ, the Hamiltonian, determines how it changes over time. The Hamiltonian also tells us the allowed energies of the system through the time-independent equation ĤΨ = EΨ. In this formulation, Ψ tells us which particle is in which state.
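
To make this concrete, here is a minimal sketch (in Python, using numpy and scipy) of how the Hamiltonian both drives the time evolution of a wavefunction and fixes its allowed energies. The two-level Hamiltonian is purely illustrative and we set ħ = 1.

```python
import numpy as np
from scipy.linalg import expm

# A toy 2x2 Hermitian Hamiltonian (illustrative only); units chosen so that hbar = 1
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

psi0 = np.array([1.0, 0.0], dtype=complex)   # initial wavefunction

t = 0.3
U = expm(-1j * H * t)        # formal solution of i dPsi/dt = H Psi
psi_t = U @ psi0             # wavefunction evolved to time t

# The same H also fixes the allowed energies via H Psi = E Psi
energies, eigenstates = np.linalg.eigh(H)
print(psi_t)
print(energies)
```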

2nd quantisation – Later, starting with Dirac, a more sophisticated representation of quantum mechanics was developed. In this, states become our focus and Ψ tells us how many particles there are in each state. Historically this representation was central to the development of quantum field theories.

In the context of quantum computing, references to 1st or 2nd quantisation normally refer to which mathematical description is most convenient or efficient, rather than being a comment about the physics being simulated. It’s pertinent to understand that 2nd quantisation has been the dominant method adopted by conventional quantum chemistry simulations.
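
As a toy illustration of the occupation-number idea, the sketch below (Python; the three-orbital system is hypothetical) simply enumerates the 2nd quantised basis states, each labelled by how many electrons sit in each spin-orbital.

```python
from itertools import product

# Hypothetical example: 3 spin-orbitals. Each 2nd quantised basis state records
# how many electrons occupy each orbital (0 or 1, because electrons are fermions).
orbitals = 3
for index, occupations in enumerate(product([0, 1], repeat=orbitals)):
    label = "".join(str(n) for n in occupations)
    print(f"|{label}>  {sum(occupations)} electron(s)  basis index {index}")
```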

Step 2. Select the discretisation method – grid or basis sets

Quantum mechanics describes all of the continuous position and momentum information we can know about a system of particles (for example, electrons surrounding the nucleus of an atom). Useful decompositions naturally arise, for example electron orbitals in atoms, molecules or crystals. The Fermi-Hubbard model is a simpler approximation, treating the system as a discrete grid of interacting sites.

The wavefunction is just a function like any other (though it does use complex numbers). It can be calculated at a discrete set of grid points, or it can be represented as the sum of other functions (a basis set) that have convenient shapes. Conventionally, a plane-wave basis set has been viewed as a natural fit for problems with a repeating lattice structure, such as found in many materials. Gaussian basis sets have often been useful for approximating shapes such as molecular orbitals.
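
The sketch below (Python with numpy; the target function and the Gaussian centres are made up purely for illustration) contrasts the two approaches: sampling a function on a grid of points versus expanding it in a small Gaussian basis set fitted by least squares.

```python
import numpy as np

# Option 1: sample a (made-up) wavefunction-like target on a uniform grid
x = np.linspace(-5, 5, 200)
target = np.exp(-((x - 1.0) ** 2)) + 0.5 * np.exp(-((x + 2.0) ** 2))

# Option 2: expand the same function in a small basis of Gaussians at fixed centres
centres = np.linspace(-4, 4, 9)
basis = np.exp(-((x[:, None] - centres[None, :]) ** 2))   # shape (200, 9)
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)   # best-fit expansion coefficients
approx = basis @ coeffs

print("max grid/basis mismatch:", np.max(np.abs(approx - target)))
```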

Step 3. Map system wavefunction to qubits

As we’ll see, popular accounts of why quantum computing offers an advantage for quantum chemistry often implicitly focus on this step, because it has some important implications.

It was realised early on in the development of quantum mechanics that wavefunctions for multi-electron systems have to be written as a superposition of all possible permutations of the identical electrons. In particular, this wavefunction must be antisymmetric (it changes sign) under the exchange of any two electrons. In terms of quantum physics, this is the defining property of a class of particles called fermions. Electrons are a type of fermion. Physicists and algorithm authors (and even now patent lawyers) will often use the term fermion rather than electron to indicate the wider generality of a result.

If you try to write down and store all the required permutations of the electron orbitals on a conventional computer, the memory required grows exponentially with the size of the system. This is where qubits offer a crucial advantage.

Molecule        Chemical Formula   Classical Bits   Qubits
Water           H₂O                10⁴              14
Ethanol         C₂H₆O              10¹²             42
Acetaminophen   C₈H₉NO₂            10³⁶             120
Caffeine        C₈H₁₀N₄O₂          10⁴⁸             160
Sucrose         C₁₂H₂₂O₁₁          10⁸²             274
Penicillin      C₁₆H₁₈N₂NaO₄S      10⁸⁶             286
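
The figures above follow from a simple counting argument: a general state of n qubits contains roughly 2ⁿ complex amplitudes, which is what a classical computer would have to store explicitly. A minimal sketch (Python; order-of-magnitude only):

```python
# A general n-qubit state has about 2**n complex amplitudes; a classical computer
# would need to store them all explicitly, whereas a quantum register needs n qubits.
for name, qubits in [("Water", 14), ("Ethanol", 42), ("Caffeine", 160)]:
    amplitudes = 2 ** qubits
    print(f"{name}: {qubits} qubits ~ 10^{len(str(amplitudes)) - 1} classical amplitudes")
```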

Early work on quantum simulation has often focussed on reducing the number of qubits required to encode the system wavefunction. More advanced techniques can also borrow ideas from the theory of error-correcting codes to build in error mitigation at this step.
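
As a flavour of the simplest such encoding, the sketch below (Python with numpy; the helper function is our own, purely for illustration) maps an occupation-number state onto one qubit per spin-orbital, with occupied → |1⟩ and empty → |0⟩.

```python
import numpy as np

def occupation_to_qubit_state(occupations):
    """Computational-basis state vector for a list of 0/1 orbital occupations."""
    zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    state = np.array([1.0])
    for n in occupations:
        # one qubit per spin-orbital: occupied -> |1>, empty -> |0>
        state = np.kron(state, one if n else zero)
    return state

psi = occupation_to_qubit_state([1, 0, 1])   # electrons in the 1st and 3rd orbitals
print(np.flatnonzero(psi))                   # [5] -> binary 101
```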

However, this is not the whole story.

Step 4. Map system Hamiltonian to qubit Hamiltonian

This step is often glossed over in the popular quantum simulation story. Simulating a system Hamiltonian is not necessarily easy on a quantum computer. In fact, in the general case finding the ground state of a Hamiltonian is known to be a hard problem even for a quantum computer (technically we say it is QMA-complete, the quantum analogue of NP-complete).
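
Despite this, simple instances of the mapping itself are easy to write down. The sketch below (Python with numpy) shows the best-known fermion-to-qubit transformation, the Jordan–Wigner mapping, in its simplest possible case: the occupation-number operator for a single orbital becomes (I − Z)/2 on one qubit.

```python
import numpy as np

a = np.array([[0, 1],
              [0, 0]], dtype=complex)     # fermionic annihilation operator, one mode
number_op = a.conj().T @ a                # a†a counts whether the orbital is occupied

I = np.eye(2)
Z = np.diag([1.0, -1.0])
qubit_op = (I - Z) / 2                    # its Jordan-Wigner image on one qubit

print(np.allclose(number_op, qubit_op))   # True
```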

Regardless of whether we want to find the ground state (lowest energy state) of a molecule, or study how a system evolves dynamically over time, we need to apply the evolution generated by the simulated Hamiltonian as a series of steps. The original technique developed for doing this is called the product formula approach (also known colloquially as Trotterisation). More advanced techniques such as Taylor series expansion, qubitization and quantum signal processing are also studied, as they offer theoretical performance advantages (though this hasn’t always been borne out in practice).
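
A minimal numerical sketch of the product formula idea (Python with numpy and scipy; the two non-commuting Hamiltonian terms are toy 2×2 matrices) shows how alternating many small steps approximates the full evolution, with an error that shrinks as the number of steps grows.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
A, B = 0.7 * X, 1.3 * Z                    # two non-commuting Hamiltonian terms
t, steps = 1.0, 100
dt = t / steps

exact = expm(-1j * (A + B) * t)            # the evolution we actually want
step = expm(-1j * A * dt) @ expm(-1j * B * dt)   # one small Trotter step
trotter = np.linalg.matrix_power(step, steps)    # repeat it many times

print("Trotter error:", np.linalg.norm(exact - trotter))   # shrinks as steps grow
```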

Increasingly, work on NISQ devices has focussed minds on minimising the number of gates that must be executed to complete the calculation of each step. In parallel, work on algorithms for FTQC machines running surface code error correction has confirmed that gates requiring magic states (such as T gates) are a key bottleneck. Again, a key consideration is how many such gates your Hamiltonian circuit requires.

For example, by adapting the most common approaches used by conventional computational chemistry, for a system described by N orbitals we end up with a qubit Hamiltonian whose complexity scales as N⁴. Recent work is increasingly focussing on how this complexity can be reduced. Often this necessitates changes in how we treat the earlier steps in the simulation.
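
In 2nd quantisation the electronic Hamiltonian contains a term for every combination of four orbital indices (the so-called two-electron integrals), which is where the N⁴ comes from. A back-of-the-envelope sketch (Python; the counts are upper bounds that ignore the symmetries which reduce the prefactor):

```python
# Count the terms in the second-quantised electronic Hamiltonian for N orbitals:
# N^2 one-body terms (h_pq a†p aq) and N^4 two-body terms (h_pqrs a†p a†q ar as).
for N in (10, 50, 100):
    one_body = N ** 2
    two_body = N ** 4
    print(f"N = {N:3d} orbitals: ~{one_body:>9,} one-body terms, ~{two_body:>13,} two-body terms")
```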

Overall, Hamiltonian simulation is a very important application for quantum computers. Even without knowing the full maths, familiarity with the jargon can be useful.

References

[1] S. McArdle, S. Endo, A. Aspuru-Guzik, S. Benjamin, and X. Yuan, “Quantum computational chemistry,” Rev. Mod. Phys., vol. 92, no. 1, p. 015003, Mar. 2020, doi: 10.1103/RevModPhys.92.015003. Available: http://arxiv.org/abs/1808.10402. [Accessed: Oct. 23, 2021]
[2] R. P. Feynman, “Quantum mechanical computers,” Found. Phys., vol. 16, no. 6, pp. 507–531, 1986, doi: 10.1007/BF01886518. Available: http://link.springer.com/10.1007/BF01886518. [Accessed: Feb. 09, 2021]
David Shaw

About the Author

David Shaw has worked extensively in consulting, market analysis & advisory businesses across a wide range of sectors including Technology, Healthcare, Energy and Financial Services. He has held a number of senior executive roles in public and private companies. David studied Physics at Balliol College, Oxford and has a PhD in Particle Physics from UCL. He is a member of the Institute of Physics. Follow David on Twitter and LinkedIn
