The references for this lecture are given in the reference list.

Quantum Mechanics (QM) was developed after it was realized that what we
now call *Classical Mechanics* was not applicable to the microscopic
world. In particular, Classical Mechanics failed to account for the
spectroscopic observation of the *discrete energy levels* in atoms
and molecules, the *photoelectric effect*, and *black body radiation*,
among other observations. Efforts of the scientific community to explain
these observations eventually resulted in today's formulation of QM.

According to QM, a given system (an atom or molecule, say) is
characterised by a *Hamiltonian* H, and by a *wave function*
ψ(**r**) (generally a complex function),
the two being related through the Schrödinger equation:
Hψ = Eψ,
where E is the energy associated with ψ.
The Schrödinger equation is of fundamental importance; we will be
referring to it all the time. The wave function is also crucial, as,
according to the principles of QM, it contains all the information that
we can possibly extract from the system. In mathematical language, the
Hamiltonian is an *operator*, an entity which transforms a function
into another function. An operator acting on a function can (for example)
scale it by a constant factor, multiply by another function, take the
derivative or integrate it.
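To make this concrete, here is a small sketch (in Python with NumPy; the language choice and the function names are ours, for illustration only) of three operators acting on the function f(x) = sin(x): scaling by a constant, multiplication by another function, and differentiation.

```python
import numpy as np

# Three simple operators acting on a function f. Each one takes a function
# and returns a new function, which is exactly what "operator" means here.
# (The function names are our own, chosen for this illustration.)

def scale(f, c):
    """Operator that scales a function by a constant factor c."""
    return lambda x: c * f(x)

def multiply(f, g):
    """Operator that multiplies f by another function g."""
    return lambda x: g(x) * f(x)

def differentiate(f, h=1e-5):
    """Operator that differentiates f (central-difference approximation)."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

# Example: differentiating sin gives cos, to within the finite-difference error.
df = differentiate(np.sin)
print(abs(df(0.7) - np.cos(0.7)) < 1e-8)  # prints True
```

Note that each operator returns a function, not a number: operators transform functions into functions.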

The Schrödinger equation is called an *eigenvalue* equation,
which means that, given H, there are certain functions u_{n}, called the
*eigenfunctions* of H, which when transformed by the Hamiltonian
result in the same function u_{n}, multiplied by a constant
number E_{n}, the eigenvalue associated with u_{n}. In the particular
case of the Hamiltonian, the eigenvalues are real numbers, because the
Hamiltonian is a *Hermitian* operator (you may want to revise at
this stage the definition of Hermitian operators). Some
operator and matrix
pages prepared for John's Quantum Physics Course may be relevant here.
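The statement that a Hermitian operator has real eigenvalues is easy to check numerically. Below is a sketch (Python/NumPy, with a random 4×4 matrix standing in for a Hamiltonian; not part of the lecture material):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random complex matrix A, then form H = A + A^dagger, which is
# Hermitian by construction (H equals its own conjugate transpose).
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = A + A.conj().T

# eigvals() makes no symmetry assumption, yet for a Hermitian matrix the
# eigenvalues it returns have (numerically) zero imaginary part.
eigenvalues = np.linalg.eigvals(H)
print(np.allclose(eigenvalues.imag, 0.0))  # prints True
```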

We say that u_{n} characterizes a
*state* of the system, and that E_{n} is the energy of that
state. Depending on the system, and hence on its Hamiltonian, the eigenvalues
E_{n} may vary discretely and/or continuously. Discrete eigenvalues
correspond to bound states, and continuously varying eigenvalues correspond
to unbound states. Some systems, such as the harmonic oscillator or the
particle in a box with infinite potential walls (see Lecture 2), have only
discrete eigenvalues. Others, like the free electron gas, have only
continuously varying eigenvalues, while yet others (atoms, molecules, etc.)
have a mixture of discrete and continuous spectra of eigenvalues.

Solving a problem in QM means solving the corresponding Schrödinger equation. However, there are only a handful of cases for which this can be done analytically (i.e. with pencil and paper). Two such cases are the harmonic oscillator and the hydrogen atom, but in general for real situations one has to use numerical approximations. An appreciation of these numerical methods, and associated practice with computing, form an important part of this course.

It makes sense to speak of the states of a given system without
making any reference to the way in which we choose to represent them. To do this
we make use of Dirac's notation, by which a state characterised by the wave function
u_{n}(**r**) (in real space) is denoted by |u_{n}> or
simply by |n>. |n> is the *ket* notation of state n. We also need
to manipulate the complex conjugate of kets, for example when calculating
expectation values, and these are denoted by <n|, and are called *bra*.
Therefore in terms of bras and kets, the Schrödinger equation is
written:

H|n> = E_{n}|n>.
(1.1)
The abstract representation of wave functions (states) in terms of bras and kets brings out a parallelism with vectors. In vector algebra we can manipulate vectors in an abstract way, without making use of a specific frame of reference; for specific problems certain frames of reference may be more useful than others. In QM it is the same: we can manipulate bras and kets, and only use a specific representation when it happens to be more useful for the problem at hand.

The parallelism between kets (and bras) and vectors becomes even more apparent
after the introduction of *orthonormal basis sets*. An orthonormal
basis set is a collection of kets {|n>} which are all
normalised, i.e. <n|n> = 1, and mutually orthogonal, i.e.
<n|m> = 0 for n not equal to m. Orthogonality implies, in particular,
that the kets are linearly independent: no ket from the set can be
expressed as a linear combination of the other kets in the set.
You will sometimes see this pair of conditions written in terms of the
Kronecker delta δ as

<n|m> = δ_{nm}.
(1.2)

Orthonormal basis sets are useful because in the world of bras and kets they play the same role as frames of reference play in vector algebra. To give a specific example let us assume that we have an orthonormal basis set {|n>} which spans a given space. Contained within this space we have a state |f>. We can then express |f> in terms of the set {|n>} as

|f> = Σ_{n} <n|f> |n>,
(1.3)

where the sum extends over all elements of the basis set. Remember that in
vector algebra a vector **v** is expressed in terms of the set of unit
vectors {**e**_{n}} forming a frame of reference as

**v** = Σ_{n} (**e**_{n} · **v**) **e**_{n}.
(1.4)

The term contained within brackets represents the scalar product of vector
**e**_{n} with vector **v**. The parallelism between kets
and vectors is thus obvious, as well described by Sutton. Note that we need you to
read the relevant parts of chapters 1 & 2 following this lecture, as explained in the
reference list.
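The expansion (1.4) is simple to verify numerically. In the sketch below (Python/NumPy; the basis and the vector are arbitrary illustrative choices, not from the lecture), an orthonormal set of 3-vectors plays the role of {|n>} and the scalar products play the role of the coefficients <n|f>:

```python
import numpy as np

# An orthonormal basis of 3-vectors: the columns of a random orthogonal
# matrix Q (obtained from a QR factorisation) play the role of {|n>}.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
basis = [Q[:, n] for n in range(3)]

v = np.array([1.0, 2.0, -0.5])  # plays the role of the state |f>

# The expansion coefficients <n|f> are just the scalar products e_n . v ...
coeffs = [e @ v for e in basis]

# ... and summing coeff_n * e_n reconstructs v, exactly as in eq. (1.4).
reconstructed = sum(c * e for c, e in zip(coeffs, basis))
print(np.allclose(reconstructed, v))  # prints True
```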

But what about operators? Are operators also
isomorphic with vectors? No, operators are represented as *matrices*, which
is not so surprising if one bears in mind that a matrix transforms one vector
into another vector (remember that an operator transforms a ket into another
ket!). This matrix/vector formulation of QM may sound abstract and convoluted, but
it is in fact extremely useful, and constitutes the starting point
by which most practical problems are solved, as we shall see.

Let us consider again the Schrödinger equation H|n> = E_{n}|n>.
When we want to solve this equation for a specific system, we usually know
the Hamiltonian, but the eigenvectors |n> and eigenvalues
E_{n} are unknown and our task is to find them. We then proceed
as follows:

- We choose an appropriate basis set of functions. This choice is generally physically motivated. For example, if we want to solve the Schrödinger equation for a molecule, an appropriate basis set may be the set of atomic eigenstates of the atoms constituting the molecule.
- We evaluate the Hamiltonian matrix elements <n|H|m> for every n, m in the basis set. Note that these matrix elements are just numbers, and they can be evaluated in whichever representation is most convenient for the problem.
- Then, with the Hamiltonian in matrix form, solving the Schrödinger
equation is equivalent to the matrix operation called
*diagonalisation*, which is a standard matrix problem for which many practical algorithms exist. This process returns the eigenvalues and eigenvectors (expressed in the basis set chosen) of the Hamiltonian. Computers are very good at solving problems involving vectors and matrices, and this is why the matrix/vector representation of QM is so useful.
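As a sketch of the last step (in Python with NumPy; the grid basis and all parameters are our own illustrative choices), take the particle in a box with infinitely high walls (see Lecture 2), with ħ = m = 1 and box length L = 1. Representing the kinetic energy operator on a grid gives a tridiagonal Hamiltonian matrix, and a single diagonalisation call returns its eigenvalues and eigenvectors:

```python
import numpy as np

# Particle in a 1D box of length L = 1 with infinitely high walls, in units
# where hbar = m = 1. Discretise space on N interior grid points and
# represent the kinetic energy operator -1/2 d^2/dx^2 by the standard
# finite-difference (tridiagonal) matrix.
N = 200
h = 1.0 / (N + 1)
diag = np.full(N, 1.0 / h**2)
off = np.full(N - 1, -0.5 / h**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

# eigh diagonalises a Hermitian matrix: it returns the eigenvalues in
# ascending order and the eigenvectors as the columns of the second array.
eigenvalues, eigenvectors = np.linalg.eigh(H)

# The exact spectrum is E_n = n^2 pi^2 / 2, so the lowest eigenvalue should
# be close to pi^2/2 ~ 4.9348 (the grid introduces a small error).
print(eigenvalues[0])
```

The numerical ground-state energy agrees with the analytic value π²/2 to a few parts in 10⁵ on this grid, and improves as N grows.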

In the next section we will illustrate all this machinery with a specific example, and then pose some problems for you to do.

The following treatment is of course quite simplified, but it serves as an illustration of the process. Initially, consider an H atom (proton+electron) in its ground state (i.e. the electron in the 1s atomic state) and a proton, very far away from each other. As we bring the H atom and the proton closer together, the electron will start feeling the presence of the extra proton, and the total energy of the system will be lower if the electron is located in those areas of space in which it is closest to both protons simultaneously. This is the essence of covalent bonding; now we see how this intuitive picture arises in QM.

It is reasonable, as a first approximation, to adopt a basis set which consists
only of two elements (two kets), namely two 1s atomic orbitals, one
centred on the H atom, the other centred on the proton (at this stage
you might like to revise the form of 1s functions, which is discussed
for example in Chapter 1 of Sutton's book, as well as in any Quantum Mechanics
or Physical Chemistry textbook). So let us now construct the matrix form
of the Hamiltonian in this basis. For the H^{+}_{2} system
the Hamiltonian has the following form:

H = T + V_{1}(**r**) + V_{2}(**r**),
(1.5)

where T is the *kinetic energy* operator for the electron,
V_{1}(**r**) is the *Coulomb potential energy* operator
describing the interaction of the electron with proton 1, and the last term,
V_{2}(**r**) is just the same thing but with proton 2.

We will label our basis functions simply as |1> (1s function centred on proton 1) and |2> (1s function centred on proton 2). Then we have:

<1|H|1> = <1|T + V_{1}(**r**)|1> + <1|V_{2}(**r**)|1> = E_{1s} + V.
(1.6)

Here, E_{1s} is the energy of the ground state of the isolated hydrogen
atom, and V is the energy of interaction of the electron with the second
proton.

The second matrix element, <1|H|2>, will look like this:
<1|H|2> = <1|T + V_{2}(**r**)|2> +
<1|V_{1}(**r**)|2>.
But notice that the first term on the right hand side is zero, because we
are assuming that |1> and |2> form an orthonormal set, and thus
<1|T + V_{2}(**r**)|2> = <1|E_{1s}
|2> = E_{1s} <1|2> = 0.
Therefore we have

<1|H|2> = <1|V_{1}(**r**)|2> = W.
(1.7)

Likewise, it is easy to see that the other remaining integrals are

<2|H|1> = <1|H|2> = W,
(1.8)
<2|H|2> = <1|H|1> = E_{1s} + V.
(1.9)

So we now have the matrix form of the Hamiltonian for the
H^{+}_{2} molecule. Now, let's turn to the wave function;
we still don't know what this is, but we do know that it will be
expressed in terms of our basis set as
|ψ> = C_{1} |1> + C_{2} |2>,
i.e. as a linear combination of our chosen basis set, the two 1s functions
centred on either proton. And we also know that the wave function will be the
solution of the Schrödinger equation. So let us write down the
Schrödinger equation in matrix form:

( E_{1s} + V      W      ) ( C_{1} )       ( C_{1} )
(     W      E_{1s} + V  ) ( C_{2} )  =  E ( C_{2} ).

Diagonalising this 2×2 matrix gives the two eigenvalues

E_{b} = E_{1s} + V + W,
(1.10a)
E_{a} = E_{1s} + V - W.
(1.10b)

Thus we have obtained, in equations (1.10), the two eigenvalues of the system.

The corresponding eigenvectors are

|ψ_{b}> = N (|1> + |2>),
(1.11a)
|ψ_{a}> = N (|1> - |2>),
(1.11b)

where N is a normalisation constant (equal to 2^{-½}).
Because W is negative, E_{b} is the lower of the two eigenvalues,
i.e. it is the energy of the ground state.
|ψ_{b}>, the ground state
wave function, is symmetric: it has the same sign around both protons, and
the electron density it describes is enhanced in the region between them.

For the excited state, by contrast, |ψ_{a}> is antisymmetric: it changes
sign between the protons, with a node midway between them.
There are some differences between the above treatment and that presented by Sutton on pages 25-31. Make sure that you are clear about these differences via problem 1.4.2.
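The 2×2 problem above is small enough to check by machine as well as by hand. In the sketch below (Python/NumPy; the values chosen for E_{1s}, V and W are made-up illustrative numbers in eV, not results derived in the lecture), diagonalising the Hamiltonian matrix reproduces the eigenvalues (1.10) and eigenvectors (1.11):

```python
import numpy as np

# Made-up illustrative parameters (in eV): E_1s for hydrogen, and the
# interaction integrals V and W of equations (1.6) and (1.7). W < 0.
E1s, V, W = -13.6, -2.0, -1.0

# Hamiltonian matrix in the basis {|1>, |2>}, equations (1.6)-(1.9).
H = np.array([[E1s + V, W],
              [W, E1s + V]])

eigenvalues, eigenvectors = np.linalg.eigh(H)

# eigh returns the eigenvalues in ascending order, so with W negative the
# first entry is the bonding energy E_b = E_1s + V + W and the second is
# the antibonding energy E_a = E_1s + V - W.
print(np.allclose(eigenvalues, [E1s + V + W, E1s + V - W]))  # prints True

# The bonding eigenvector is (|1> + |2>)/sqrt(2), up to an overall sign.
print(np.allclose(np.abs(eigenvectors[:, 0]), 1 / np.sqrt(2)))  # prints True
```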

The worked-out example for the H_{2}^{+} molecule can serve
as a template for the *heteronuclear diatomic* molecule. The process
of solution is essentially identical, but the atomic levels are now different.
Nevertheless, the fact that the nuclei are not the same has some profound
consequences, which can result, in the extreme case, in ionic bonding in
the molecule.

We suggest that you try to work out problem 1.4.3 for the heteronuclear diatomic molecule for yourself, and then consider the consequences of your findings on the nature of the chemical bond.
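If you want to check your algebra for problem 1.4.3 afterwards, the heteronuclear 2×2 Hamiltonian can also be diagonalised numerically. The sketch below (Python/NumPy, with made-up on-site energies in arbitrary units; it illustrates the qualitative trend, not the algebraic solution the problem asks for) shows that the ground-state eigenvector concentrates on the lower-energy site:

```python
import numpy as np

# Heteronuclear 2x2 model: the on-site (diagonal) energies now differ.
# All numbers are made-up illustrative values in arbitrary units.
E1, E2, W = -2.0, -1.0, -0.2

H = np.array([[E1, W],
              [W, E2]])
eigenvalues, eigenvectors = np.linalg.eigh(H)

# The ground-state eigenvector is no longer (|1> + |2>)/sqrt(2): its weight
# concentrates on the lower-energy site, the onset of ionic character.
c1, c2 = np.abs(eigenvectors[:, 0])
print(c1 > c2)  # prints True: the electron favours the lower-energy site
```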

*There seems to be some shifty footwork going on: we discuss the
molecular ion, while Sutton seems to be discussing the molecule. See
if you can get clear what is going on, and if not pose a question in
class.*
