I will assume you have read the first three chapters of “Modern Quantum Chemistry” by Szabo and Ostlund, or a similar book, or have taken an introductory course in computational quantum chemistry. I will be referring to said book throughout the post. The book is very cheap (£10 on Amazon) and is a good investment.
I’ll also assume you have had a bit of practice coding in Python, and know the basics, like how for loops work, etc.
I will go through the important maths again here and there. This is to function as a reminder, and it will not make sense if you are reading about this for the first time. If this is the case, simply refer to the book.
In many lines of the code, I will put a page reference to Szabo & Ostlund (I will just refer to the page number mostly).
Advice for following this tutorial
I suggest you have a jupyter notebook open, and simply code along. If you don’t understand a line of code, test it in another line to pick apart its function.
If you get stuck/some code doesn’t work
In this case, I recommend going to my GitHub and downloading the Jupyter Notebook for this. Then, replace the code giving an error with the code from the notebook.
Summary of the practical steps
On pp146 of Szabo & Ostlund (I will stop referring to the name of the book from here on), we will find a summary of the key steps in Hartree-Fock, and go through these.
We will need some imports for this. Go ahead and install these if you haven’t already.
1a) Specify a molecule (a set of nuclear coordinates, atomic numbers, and a number of electrons).
.XYZ files are a common file type to store data about chemical structure. As an example, take the .XYZ file of pyridine.
Let’s write a function to read in this type of file.
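A minimal sketch of such a reader is below. The function name `read_xyz` and the choice to return `(symbol, coordinates)` tuples are my own; adapt as you like.

```python
import numpy as np

def read_xyz(path):
    """Read an .xyz file and return a list of (symbol, xyz-array) tuples."""
    atoms = []
    with open(path) as f:
        n_atoms = int(f.readline())   # line 1: number of atoms
        f.readline()                  # line 2: comment line, ignored
        for _ in range(n_atoms):
            parts = f.readline().split()
            symbol = parts[0]
            coords = np.array([float(x) for x in parts[1:4]])
            atoms.append((symbol, coords))
    return atoms
```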
We will now read in the XYZ file for HeH:
You can simply make this file in any text editor (remember to name the file HeH.xyz). Note, we are using the experimentally determined bond length, in atomic units.
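Assuming the experimental HeH+ bond length of 1.4632 atomic units (the value used in Szabo & Ostlund’s HeH+ example), the file might look like this:

```
2
HeH (bond length in atomic units)
He 0.0 0.0 0.0
H  0.0 0.0 1.4632
```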
We will choose the number of electrons in the system later on. We can set this to whatever we like.
1b) Specify a basis set. Also, set the number of electrons for the system (pp152)
We want to represent our Slater-like orbitals as linear combinations of Gaussian orbitals so that the integrals can be performed easily. For a discussion turn to pp152 of Szabo and Ostlund. In brief, a Gaussian can be specified by two parameters: its center and its exponent. Furthermore, since we are representing Slater orbitals as a sum of Gaussian orbitals, we need contraction coefficients. The exponents and contraction coefficients are optimized by a least-squares fitting procedure. More information here: Hehre, Stewart, Pople, 1969.
The zeta coefficients are the exponents of the Slater orbitals, and they have been optimized by the variational principle. They are in essence an effective nuclear charge of an atom. They have been historically estimated using Slater’s rules, which you might come across in an undergraduate Chemistry course.
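A sketch of the STO-3G data we need is below. The exponents and contraction coefficients are the published STO-3G values for a zeta = 1.0 Slater 1s orbital, and the zeta values for H and He are the standard ones tabulated in Szabo & Ostlund (pp185-186); a Gaussian fitted to a zeta = 1 Slater orbital is rescaled to a different zeta by multiplying its exponent by zeta squared.

```python
import numpy as np

# STO-3G fit of a 1s Slater orbital with zeta = 1.0 (S&O pp185)
alpha = np.array([0.109818, 0.405771, 2.227660])   # Gaussian exponents
d     = np.array([0.444635, 0.535328, 0.154329])   # contraction coefficients

# Slater exponents (effective nuclear charges) for our two atoms
zeta = {'H': 1.24, 'He': 2.0925}

# Rescale the exponents for a given zeta: alpha -> alpha * zeta^2
alpha_He = alpha * zeta['He'] ** 2
alpha_H  = alpha * zeta['H'] ** 2
```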
Some more book-keeping is below. Most importantly, here is where we store the number of electrons. The storage of atom charges is required for calculation of the potential energy (although this is not that important per se since the potential energy just raises the overall energy by a constant value).
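A minimal version of that book-keeping, assuming we are treating the HeH+ cation (two electrons; the variable names are my own):

```python
# HeH+ has two electrons (He contributes 2, H contributes 1, charge +1)
N_electrons = 2

# Nuclear charges, needed for the nuclear-attraction integrals and for
# the nuclear-nuclear repulsion at the end
charge_dict = {'H': 1, 'He': 2}
```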
2. Computing all the required integrals in the Gaussian basis
2.1) Writing definitions for integrals between the Gaussian functions
We want to form the Fock matrix in the basis of our atomic orbitals. But our atomic orbitals are linear combinations of Gaussian orbitals. The integrals between individual Gaussian orbitals can be calculated easily, and their derivations are given in the back of the book (pp410).
2.2) The product of two Gaussians is a Gaussian (pp410)
This lovely property allows easy calculation of integrals. Let’s write a function that takes in two Gaussians and spits out a new Gaussian.
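Here is one way to write it, for s-type Gaussians. The return format (composite exponent, squared separation, prefactor K, new centre) is my own choice:

```python
import numpy as np

def gauss_product(a, Ra, b, Rb):
    """Product of two normalised s Gaussians: exponent a at centre Ra,
    exponent b at centre Rb. Returns (p, diff, K, Rp):
      p    - exponent of the product Gaussian, a + b
      diff - squared distance |Ra - Rb|^2
      K    - prefactor, with both normalisation constants absorbed
      Rp   - centre of the product Gaussian."""
    p = a + b
    diff = np.linalg.norm(Ra - Rb) ** 2
    # Normalisation of an s Gaussian is (2*alpha/pi)^(3/4); fold both into K
    N = (4 * a * b / np.pi ** 2) ** 0.75
    K = N * np.exp(-a * b / p * diff)
    Rp = (a * Ra + b * Rb) / p
    return p, diff, K, Rp
```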
Note that in our code, we have absorbed the normalizing factors into K, and thus do not need to worry about normalisation.
2.3) The Overlap and Kinetic integrals between two Gaussians (pp411)
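A sketch of both integrals for normalised s Gaussians, with the normalisation constants folded into the prefactor K as discussed above:

```python
import numpy as np

def overlap(a, Ra, b, Rb):
    """<Ga|Gb> between two normalised s Gaussians (S&O pp411)."""
    p = a + b
    K = (4 * a * b / np.pi ** 2) ** 0.75 \
        * np.exp(-a * b / p * np.linalg.norm(Ra - Rb) ** 2)
    return K * (np.pi / p) ** 1.5

def kinetic(a, Ra, b, Rb):
    """<Ga| -(1/2) del^2 |Gb> between two normalised s Gaussians (S&O pp411)."""
    p = a + b
    diff = np.linalg.norm(Ra - Rb) ** 2
    K = (4 * a * b / np.pi ** 2) ** 0.75 * np.exp(-a * b / p * diff)
    return (a * b / p) * (3 - 2 * a * b * diff / p) * K * (np.pi / p) ** 1.5
```

As a sanity check, a normalised Gaussian should overlap with itself to give 1, and its kinetic-energy expectation value should be 3a/2.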
2.4) The Potential integral, the Multi-electron Tensor and Boys Integral (pp412)
To get the potential integral and multi-electron tensor, we need to define a variant of the Boys function, which in turn (for this case) is related to the error function.
For higher orbitals (2p, 3d, etc.) we can’t express the Boys function in terms of the error function, and different methods are required; this has been the subject of considerable study. Carrying on, we can now give the potential and multi-electron integrals:
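A sketch of all three, again for normalised s Gaussians with the normalisation folded into the K prefactors (the standard library’s `math.erf` is enough here):

```python
import numpy as np
from math import erf

def boys_F0(t):
    """Boys function F0(t); for s orbitals it reduces to an error function."""
    if t < 1e-12:
        return 1.0                                  # limit as t -> 0
    return 0.5 * np.sqrt(np.pi / t) * erf(np.sqrt(t))

def potential(a, Ra, b, Rb, Rc, Zc):
    """Nuclear-attraction integral <Ga| -Zc/|r - Rc| |Gb> (S&O pp412)."""
    p = a + b
    Rp = (a * Ra + b * Rb) / p
    K = (4 * a * b / np.pi ** 2) ** 0.75 \
        * np.exp(-a * b / p * np.linalg.norm(Ra - Rb) ** 2)
    return (-2 * np.pi * Zc / p) * K * boys_F0(p * np.linalg.norm(Rp - Rc) ** 2)

def repulsion(a, Ra, b, Rb, c, Rc, d, Rd):
    """Two-electron integral (ab|cd) over four s Gaussians (S&O pp413)."""
    p, q = a + b, c + d
    Rp = (a * Ra + b * Rb) / p
    Rq = (c * Rc + d * Rd) / q
    Kab = (4 * a * b / np.pi ** 2) ** 0.75 \
        * np.exp(-a * b / p * np.linalg.norm(Ra - Rb) ** 2)
    Kcd = (4 * c * d / np.pi ** 2) ** 0.75 \
        * np.exp(-c * d / q * np.linalg.norm(Rc - Rd) ** 2)
    return (2 * np.pi ** 2.5 / (p * q * np.sqrt(p + q))) * Kab * Kcd \
        * boys_F0(p * q / (p + q) * np.linalg.norm(Rp - Rq) ** 2)
```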
3. Computing integrals in the atomic orbital basis
This is probably the trickiest part of this tutorial, and care needs to be taken thinking about the for loops. The idea is that at each stage of the for loop, we store information so that we don’t have to keep calling the same things over and over again. It doesn’t matter here, because we are doing a very simple calculation. But it would matter for more expensive calculations.
We will iterate first through the atoms. On each atom, we iterate through its orbitals. Finally, for each orbital, we iterate through its three Gaussians. We perform this triple iteration over each atom.
In this simple case, we could have just summed over the three Gaussians on each atom directly (because each atom has only 1 atomic orbital). But by doing it this way, we can easily extend our program to solve more complicated molecules, which we will do in a future tutorial.
You might need to spend some time convincing yourself of this, or (even better) try to code it out yourself.
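The loop structure can be sketched as follows, using the overlap matrix as an example; the kinetic, potential and two-electron tensors are built with exactly the same nesting. The STO-3G data and the HeH+ geometry are the assumed values from earlier.

```python
import numpy as np

def overlap(a, Ra, b, Rb):
    # normalised s-Gaussian overlap (see section 2.3)
    p = a + b
    K = (4 * a * b / np.pi ** 2) ** 0.75 \
        * np.exp(-a * b / p * np.linalg.norm(Ra - Rb) ** 2)
    return K * (np.pi / p) ** 1.5

# Assumed STO-3G data: one 1s orbital per atom, three Gaussians per orbital
alpha = np.array([0.109818, 0.405771, 2.227660])
d     = np.array([0.444635, 0.535328, 0.154329])
zeta  = {'He': 2.0925, 'H': 1.24}
atoms = [('He', np.zeros(3)), ('H', np.array([0.0, 0.0, 1.4632]))]

nbasis = len(atoms)                 # one basis function per atom here
S = np.zeros((nbasis, nbasis))
for i, (sym_i, R_i) in enumerate(atoms):        # atoms (and their one orbital)
    for p in range(3):                           # Gaussians of orbital i
        a = alpha[p] * zeta[sym_i] ** 2
        for j, (sym_j, R_j) in enumerate(atoms):
            for q in range(3):                   # Gaussians of orbital j
                b = alpha[q] * zeta[sym_j] ** 2
                S[i, j] += d[p] * d[q] * overlap(a, R_i, b, R_j)
```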
If you got through that, great! The rest of the algorithm is relatively straightforward.
Lastly, since the kinetic and potential energy integrals aren’t affected by the iterative process, we can just assign a variable to the sum of them, Hcore.
4. Symmetric Orthogonalisation of the Basis (pp144)
If we remember the Hartree-Fock equations in a basis (the Roothaan equations, FC = SCε), we cannot solve them as an ordinary eigenvalue problem because of the overlap matrix S.
We can, however, transform into an orthogonal basis. There are several ways to find a matrix that will orthogonalize the basis set, but we will use symmetric orthogonalization. We note that since S is Hermitian (symmetric in the case of real orbitals), S can always be diagonalized, the proof of which is in any linear algebra text. We can write:

S = U s U†

where s is a diagonal matrix (of the eigenvalues of S). Then we can define:

X = S^(-1/2) = U s^(-1/2) U†
It is easy to show that:

X† S X = (U s^(-1/2) U†) S (U s^(-1/2) U†) = U s^(-1/2) s s^(-1/2) U† = 1
If we then rotate our orbital matrix with X we obtain:

C = X C′
Substituting the above into the Roothaan equations yields:

F X C′ = S X C′ ε
Left-multiplying with the Hermitian transpose of X we obtain:

(X† F X) C′ = (X† S X) C′ ε, that is, F′ C′ = C′ ε, where F′ = X† F X.
Now we can easily solve the Roothaan equations by diagonalizing F′. Below is a code implementation to obtain X.
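A sketch of the construction, assuming S is the overlap matrix assembled in step 3 (since S is real symmetric we can use `eigh`, which also returns the eigenvalues already sorted):

```python
import numpy as np

def symmetric_orthogonalisation(S):
    """Return X = S^(-1/2) = U diag(s^(-1/2)) U^T for symmetric S."""
    s, U = np.linalg.eigh(S)
    return U @ np.diag(s ** -0.5) @ U.T
```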
5. The Hartree-Fock Algorithm
We are finally in a position to write the iterative algorithm
The reason why Hartree-Fock is iterative is that the Fock matrix depends on the molecular orbitals. That is to say, you can’t get the answer without the answer. Of course, you can take a guess at the answer and solve the Roothaan equations; the solution you get will be better than your previous guess.
To decide when to stop making more guesses, we can check how much the orbital matrix has changed since the last guess. This would be completely valid, but, by convention, we use the density matrix instead (pp138).
One really important thing about the density matrix is to remember the sum is only over the occupied orbitals (in the closed shell case). It can, thus, be interpreted as a bond order matrix as well. Let’s write a function to check the difference between the two most recent guesses of the density matrix.
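A sketch of both pieces; the RMS-difference convergence measure follows the form used in Szabo & Ostlund, and the function names are my own:

```python
import numpy as np

def density_matrix(C, n_electrons):
    """P[mu, nu] = 2 * sum over occupied MOs of C[mu, i] * C[nu, i]
    (closed shell: each of the lowest n_electrons/2 orbitals holds 2 electrons)."""
    n_occ = n_electrons // 2
    C_occ = C[:, :n_occ]
    return 2 * C_occ @ C_occ.T

def P_change(P, P_old):
    """RMS change between successive density-matrix guesses."""
    B = P.shape[0]
    return np.sqrt(np.sum((P - P_old) ** 2) / B ** 2)
```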
We can now initiate a while loop that keeps repeating until convergence.
5.1 Take a guess at P
Remember that we’ve defined B as our basis-set size, which is 2 in this case. We will also store the subsequent guesses of P to see how fast we converge.
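A simple starting point is the null matrix, which amounts to taking Hcore as the first Fock matrix:

```python
import numpy as np

B = 2                      # basis-set size: one 1s function on He, one on H
P = np.zeros((B, B))       # null-matrix initial guess
P_list = []                # store successive guesses to watch convergence
```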
5.2 Initiate the while loop
5.3 Compute the Fock matrix with the initial guess of P (pp140–141)
A question the hawk-eyed might ask is why exactly one instance of P comes out of the sum in G (and not two, or none at all). This is because we want to find F in the basis of atomic orbitals: the Coulomb and exchange operators are defined in the basis of molecular orbitals, which we must expand in terms of our atomic basis, and that expansion brings in one factor of the density matrix. If the operators were defined in the atomic basis, we would not need any instance of P.
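The build can be sketched as below, assuming Hcore and the multi-electron tensor from step 3 (stored in chemists’ notation, so `ee[mu, nu, sig, lam]` holds (μν|σλ)); the term in brackets is Coulomb minus half exchange (S&O eq. 3.154):

```python
import numpy as np

def fock_matrix(Hcore, ee, P):
    """F = Hcore + G, with G[mu, nu] = sum over lam, sig of
    P[lam, sig] * ( (mu nu|sig lam) - 0.5 * (mu lam|sig nu) )."""
    B = Hcore.shape[0]
    G = np.zeros((B, B))
    for mu in range(B):
        for nu in range(B):
            for lam in range(B):
                for sig in range(B):
                    G[mu, nu] += P[lam, sig] * (ee[mu, nu, sig, lam]
                                                - 0.5 * ee[mu, lam, sig, nu])
    return Hcore + G
```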
5.4 Symmetric Orthogonalisation, as discussed before
The second block of code makes sure the eigenvalues, and hence the columns of the orbital matrix, are sorted in ascending order. This is not the case by default with `np.linalg.eig`, and if this step is skipped, the density matrix will be computed wrongly in the next part.
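The sorting step can be sketched like this (the matrix here is just a stand-in for the transformed Fock matrix F′):

```python
import numpy as np

Fprime = np.array([[0.5, 0.2], [0.2, -0.3]])   # stand-in for X.T @ F @ X

evals, evecs = np.linalg.eig(Fprime)
# np.linalg.eig does not return eigenvalues in ascending order, so sort them
order = np.argsort(evals)
evals = evals[order]
evecs = evecs[:, order]        # reorder eigenvector columns to match
```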
5.5 Form the new Density matrix and check for convergence
Note the use of the convergence-checking function we wrote earlier. If we have converged, the while loop will break and we can simply read off our energies and orbital matrix.
5.6 Print results and we are done!
We can go ahead and enter this:
The result (if everything has gone correctly) should be this:
You might be wondering why the anti-bonding orbital energy is still negative. That’s because we haven’t added the nuclear-nuclear repulsion. We can do that easily:
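In atomic units this is just the sum of Z_A Z_B / R_AB over all pairs of nuclei; a sketch, reusing the assumed HeH+ geometry and charges from earlier:

```python
import numpy as np

def nuclear_repulsion(atoms, charges):
    """Sum of Z_A * Z_B / |R_A - R_B| over all pairs A < B (atomic units)."""
    E = 0.0
    for i, (sym_i, R_i) in enumerate(atoms):
        for j in range(i + 1, len(atoms)):
            sym_j, R_j = atoms[j]
            E += charges[sym_i] * charges[sym_j] / np.linalg.norm(R_i - R_j)
    return E

atoms = [('He', np.zeros(3)), ('H', np.array([0.0, 0.0, 1.4632]))]
E_nn = nuclear_repulsion(atoms, {'He': 2, 'H': 1})
```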
We can see the anti-bonding orbital is raised in energy much more than the bonding orbital is lowered. This is especially pronounced here because the species is positively charged, though it holds less strongly in the general case.
Thanks for following through. Leave a comment if you got stuck anywhere and I’ll try to answer it. In the next tutorial, we will see what other properties we can obtain beyond the orbital energies (population analysis, dipole moment, etc.). In the tutorial after that, we will see what the main issue with Hartree-Fock is, and how we can improve on its predictions with second-order perturbation theory (Møller-Plesset theory). In the tutorial after that, we will see how to treat p and d orbitals. Combined, this will let us accurately calculate the electronic structure of much larger organic molecules.