2015-07-14

# Introduction to Electronic Structure Calculations: The variational principle

## Introduction

This post is the first part in an upcoming series explaining how to perform electronic structure calculations. The focus of this series is how to set up an electronic structure calculation from scratch. We are not going to use any of the available quantum chemical codes, but write our very own code. This way, we will obtain a better understanding of how quantum chemical calculations are performed. I will present each topic by first providing the underlying theory and then illustrating that theory with a set of examples.

The outline of this series is as follows: We start off by explaining some basics such as the (linear) variational principle, the Hartree product and the Slater determinant. From there, we derive the Hartree-Fock equations and show how to set up a simple calculation for the H₂ molecule. Next, we explain basis sets, Gaussian integrals and the Hartree-Fock self-consistent field procedure. Finally, we talk about atomic and molecular orbitals, the construction of MO diagrams and the visualization of orbitals.

For this series, I am going to assume that you have a basic understanding of quantum mechanics. You know what a wave function is and its statistical interpretation. Furthermore, I assume that you have passed introductory courses in calculus and linear algebra.

## The variational principle

Given a normalized wave function $\lvert\phi\rangle$ that satisfies the appropriate boundary conditions, the expectation value of the Hamiltonian is an upper bound to the exact ground state energy. In other words, the lowest energy we can find gives us the best wave function. Thus,

if $\langle \phi \rvert \phi \rangle=1$, then $\langle \phi \lvert \hat{H} \rvert \phi \rangle \geq E_{0}$.

#### Example 1: The energy of the 1s orbital in hydrogen

Let's make this concrete with an example. We wish to obtain the best value of $\alpha$ for the trial function $\lvert \phi \rangle = N \exp \left( - \alpha r^{2} \right)$ (where $N$ is a normalization constant) that describes the 1s orbital of the H-atom. The Hamiltonian for the H-atom in atomic units is: $$\hat{H} = -\frac{1}{2} \nabla^{2} - \frac{1}{r}.$$ The first step is to find $E$ as a function of $\alpha$. Therefore, we need to evaluate the following integral: $$E = \langle \phi \lvert \hat{H} \rvert \phi \rangle = \int_{0}^{\infty} \textrm{d}r \int_{0}^{2\pi} \textrm{d} \theta \int_{0}^{\pi} \textrm{d} \varphi \, r^{2} \sin \varphi N \exp \left( - \alpha r^{2} \right) \left[ -\frac{1}{2} \nabla^{2} - \frac{1}{r} \right] N \exp \left( - \alpha r^{2} \right) .$$ Note that because we are integrating in spherical coordinates, we have introduced the Jacobian $$J = r^{2} \sin \varphi .$$ Furthermore, the radial part of the Laplacian $\nabla^{2}$ in spherical coordinates has the following form: $$\nabla^{2} = r^{-2} \frac{\textrm{d}}{\textrm{d} r} \left( r^{2} \frac{\textrm{d}}{\textrm{d} r} \right)$$ (we can ignore the angular parts of the Laplacian, because the 1s trial wave function is spherically symmetric).

The above integral can be divided into two separate integrals of the following forms $$I_1 = - \left( \frac{4 \pi N^{2}}{2} \right) \int_0^\infty r^2 \exp \left( -\alpha r^2 \right) \left[ r^{-2} \frac{\textrm{d}}{\textrm{d} r} \left( r^{2} \frac{\textrm{d}}{\textrm{d} r} \right) \exp \left( -\alpha r^2 \right) \right] \textrm{d}r$$ and $$I_2 = - \left( \frac{4 \pi N^{2}}{2} \right) \int_0^\infty r^2 \exp \left( -\alpha r^2 \right) \frac{1}{r} \exp \left( -\alpha r^2 \right) \textrm{d}r .$$ Since the wave function has no angular dependencies, the azimuthal ($\theta$) and polar ($\varphi$) parts have already been integrated in the above equations. These give the factor $4\pi$. To solve the first integral ($I_{1}$), we need to apply the Laplacian. This gives $$I_1 = - \left( \frac{4 \pi N^{2}}{2} \right) \int_0^\infty \exp \left( -\alpha r^2 \right) \left[ -6 \alpha r^{2} + 4 \alpha^{2}r^{4} \right] \exp \left( -\alpha r^2 \right) \textrm{d}r .$$ Slightly rearranging this equation gives us $$I_1 = - 2 \pi N^{2} \int_0^\infty \left( 4 \alpha^{2}r^{4} - 6 \alpha r^{2} \right) \exp \left( -2 \alpha r^2 \right) \textrm{d}r .$$ This integral can be solved by performing integration by parts, but here we will just use the following general formulas: $$\int_{0}^{\infty} \textrm{d}r \, r^{2m} \exp \left(-\alpha r^{2} \right) = \frac{(2m)!\pi^{1/2}}{2^{2m+1}m!\alpha^{m+1/2}}$$ and $$\int_{0}^{\infty} \textrm{d}r \, r^{2m+1} \exp \left(-\alpha r^{2} \right) = \frac{m!}{2\alpha^{m+1}} .$$ Applying these formulas to $I_1$ yields $$I_{1} = -2 \pi N^{2} \left[ 4 \alpha^{2} \frac{4! \pi^{1/2}}{2^5 2! (2 \alpha)^{5/2}} - 6 \alpha \frac{2!\pi^{1/2}}{2^3 1! (2\alpha)^{3/2}} \right] = N^{2} \frac{3 \pi^{3/2}} {4 \sqrt{2\alpha}}$$ and for $I_{2}$: $$I_{2} = -4 \pi N^{2} \int_{0}^{\infty} r \exp \left( -2 \alpha r^{2} \right) \textrm{d}r = -4 \pi N^{2} \frac{1}{4 \alpha} = \frac{-\pi}{\alpha} N^{2} .$$ Summing these two parts gives $$\langle \phi \lvert H \rvert \phi \rangle = I_1 + I_2 = N^2 \left( \frac{3 \pi ^{3/2}}{4 \sqrt{2 \alpha}} - \frac{\pi}{\alpha} \right) .$$ Using the same procedure, we can also evaluate the integral $\langle \phi \rvert \phi \rangle$: $$\langle \phi \rvert \phi \rangle = 4 \pi N^{2} \int_{0}^{\infty} r^2 \exp \left( -2 \alpha r^2 \right) \textrm{d}r = N^{2} \frac{\pi^{3/2}}{2 \sqrt{2}\alpha^{3/2}} .$$ In order to evaluate the expectation value of the energy $E$ for the trial wave function $\lvert \phi \rangle$, we need to evaluate $$E = \frac{\langle \phi \lvert \hat{H} \rvert \phi \rangle }{\langle \phi \rvert \phi \rangle} = \frac{3 \alpha}{2} - \frac{2 \sqrt{2} \alpha ^{1/2}}{\pi ^{1/2}} .$$ The reason we need to divide by $\langle \phi \rvert \phi \rangle$ is that $\lvert \phi \rangle$ is not normalized yet. If $\lvert \phi \rangle$ were normalized, the integral $\langle \phi \rvert \phi \rangle$ would be equal to one and drop out. The best value for $\alpha$ can then be found by differentiating $E$ with respect to $\alpha$ and finding the value of $\alpha$ where the derivative is zero: $$\frac{\textrm{d}E}{\textrm{d}\alpha} = 0 .$$ Thus $$\frac{\textrm{d}E}{\textrm{d}\alpha} = \frac{3}{2} - \frac{\sqrt{2}}{\pi ^{1/2} \alpha^{1/2}} = 0$$ and $$\alpha = \frac{8}{9 \pi}.$$ This gives us the following upper bound for the energy $E$: $$E = \frac{3}{2} \frac{8}{9 \pi} - \frac{2 \sqrt{2} \left( \frac{8}{9 \pi} \right)^{1/2}}{\pi ^{1/2}} = -\frac{4}{3 \pi} \approx -0.4244 .$$

The exact energy of the 1s orbital in hydrogen is -0.5 HT (hartree), with the corresponding wave function $\lvert \phi \rangle = \frac{1}{\sqrt{\pi}} \exp \left( - r \right)$.
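The optimization above is easy to verify numerically. Below is a minimal sketch (assuming Python with NumPy and SciPy is available) that minimizes $E(\alpha)$ and compares the result with the analytic values:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def energy(alpha):
    # E(alpha) = 3*alpha/2 - 2*sqrt(2*alpha/pi), as derived in example 1
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

# minimize E(alpha) over a bracket that safely contains the optimum
res = minimize_scalar(energy, bounds=(1e-6, 5.0), method='bounded')

print(res.x)           # numerical optimum, close to 8/(9*pi) ~ 0.2829
print(energy(res.x))   # upper bound to E0, close to -4/(3*pi) ~ -0.4244
```

In line with the variational principle, the optimized energy still lies above the exact value of -0.5 HT.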

## The linear variational principle

Although the exact answer for the energy of the 1s orbital in hydrogen is known, let us assume that we do not know it and would like to obtain it. One way is to pick various trial wave functions with one independent variable and optimize that variable. The lowest energy we find then gives us the best approximation of the ground state wave function. This approach would be quite tedious (and how would you know which trial wave function to choose in the first place anyway?).

It would be nice if we could have just a single trial wave function with a lot of 'variational freedom'. Here, I propose to use a trial wave function that is a linear combination of fixed wave functions, like so:

$$\lvert \phi \rangle = \sum_{i=1}^N c_{i} \lvert \psi_{i} \rangle$$

If we can assume that the wave functions $\lvert \psi_i \rangle$ are orthonormal in the sense that $\langle \psi_i \lvert \psi_j \rangle = \delta_{ij}$, then the energy is

$$E = \sum_{ij} c_{i} c_{j} \langle \psi_i \lvert \hat{H} \rvert \psi_j \rangle = \sum_{ij} c_{i}c_{j} H_{ij}$$

and we are tasked with finding the best set of coefficients $c_{i}$, i.e. the set that minimizes $E$. It is exactly these coefficients (and the accompanying wave functions) that provide us with the necessary variational freedom. (Note that $H_{ij} = \langle \psi_i \lvert \hat{H} \rvert \psi_j \rangle$.)

The procedure to find this set is Lagrange's method of undetermined multipliers. Herein

$$L \left( c_1, c_2, \cdots, E \right) = \langle \phi \lvert \hat{H} \rvert \phi \rangle - E \left(\langle \phi \lvert \phi \rangle - 1 \right)$$ and we need to find that particular set of $c_k$ where

$$\frac{\partial L}{\partial c_{k}} = 0$$

for each $k$.
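To see where the next equation comes from, it helps to write the Lagrangian out in terms of the coefficients (assuming real coefficients and an orthonormal basis, so that $\langle \phi \lvert \phi \rangle = \sum_{i} c_{i}^{2}$):

$$L = \sum_{ij} c_{i} c_{j} H_{ij} - E \left( \sum_{i} c_{i}^{2} - 1 \right).$$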

For each $k$, this gives the following equation

$$\sum_{j} c_j H_{kj} + \sum_i c_i H_{ik} - 2 E c_{k} = 0 .$$

Since $\langle \psi_a \lvert \hat{H} \rvert \psi_b \rangle = \langle \psi_b \lvert \hat{H} \rvert \psi_a \rangle = H_{ab} = H_{ba}$, we can simplify the above equation to

$$\sum_{j} H_{kj} c_j - E c_{k} = 0.$$

The above expression is just the matrix equation

$$H\vec{c} = E\vec{c} .$$

This last equation probably looks very familiar to you: it is the matrix eigenvalue problem, a very typical problem in linear algebra. To solve it, one needs to diagonalize the $H$ matrix. This gives a set of eigenvalues and corresponding eigenvectors. The eigenvector corresponding to the lowest eigenvalue contains the best set of $c_k$ we are interested in.

#### Example 2: The 2x2 linear variational problem for finding the energy of the 1s orbital of hydrogen

To illustrate the above procedure, we are going to find the best solution for the 1s orbital of the hydrogen atom using a trial wave function that is a linear combination of two orthonormal wave functions. The two orthonormal wave functions are $$\psi_{1} = \frac{1}{\pi^{3/4}} \exp \left( -r^2 / 2 \right)$$ and $$\psi_{2} = \frac{2\pi^{1/4}r - 4/\pi^{1/4}}{\sqrt{2\pi(3\pi-8)}} \exp \left( -r^2 / 2 \right).$$ Because of the orthonormality condition, the following integrals hold: $$\langle \psi_{1} \lvert \psi_{2} \rangle = 0$$ and $$\langle \psi_{1} \lvert \psi_{1} \rangle = \langle \psi_{2} \lvert \psi_{2} \rangle = 1.$$ Applying the Hamiltonian to these wave functions, we obtain the following 2x2 H-matrix: $$H = \begin{bmatrix} \langle \psi_{1} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{1} \lvert \hat{H} \rvert \psi_{2} \rangle \\\langle \psi_{2} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{2} \lvert \hat{H} \rvert \psi_{2} \rangle\end{bmatrix} = \begin{bmatrix} -0.378379 & -0.0185959 \\ -0.0185959 & 1.09531 \end{bmatrix}$$ Note that the H-matrix is symmetric: we can exchange $\lvert \psi_{1} \rangle$ and $\lvert \psi_{2} \rangle$ and obtain the same result. Furthermore, note that although $\langle \psi_{1} \lvert \psi_{2} \rangle = 0$, this does not imply that $\langle \psi_{1} \lvert \hat{H} \rvert \psi_{2} \rangle$ is also zero: orthogonality is generally lost once the Hamiltonian is applied to one of the wave functions.

In order to find the best energy and approximation to the ground state wave function, we need to diagonalize the H-matrix. This can easily be done by hand, or using the very handy Wolfram Alpha website. The result we obtain is $$D = \begin{bmatrix} 1.09554 & 0 \\ 0 & -0.378614 \end{bmatrix}$$ for the diagonal (eigenvalue) matrix with corresponding $$X = \begin{bmatrix} -0.0126156 & -0.99992 \\ 0.99992 & -0.0126156 \end{bmatrix}$$ for the eigenvector matrix. The first column of $X$ contains the coefficients corresponding to the first eigenvalue (energy) in the $D$-matrix and the second column of $X$ those for the second eigenvalue. Since the second eigenvalue is the lower one ($-0.378614 < 1.09554$), the second column of $X$ contains the coefficients we are looking for. Thus, the best approximation to the ground state wave function is: $$\lvert \phi \rangle = c_{1} \lvert \psi_{1} \rangle + c_{2} \lvert \psi_{2} \rangle = -0.99992 \lvert \psi_{1} \rangle - 0.0126156 \lvert \psi_{2} \rangle.$$ The corresponding energy for this wave function is -0.378614 HT. This result is actually worse than the one obtained in example 1 (which was -0.4244 HT). The reason is that a linear combination of only two fixed functions offers less variational freedom than varying a coefficient inside the exponent. To get better results, we would need to use a larger set of wave functions in the linear combination.
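Instead of diagonalizing by hand or with Wolfram Alpha, we can use a linear algebra library. A minimal sketch (assuming NumPy) that reproduces the numbers above:

```python
import numpy as np

# the 2x2 H-matrix from example 2
H = np.array([[-0.378379, -0.0185959],
              [-0.0185959,  1.09531]])

# eigh is the appropriate routine for real symmetric matrices;
# it returns the eigenvalues in ascending order
eigenvalues, eigenvectors = np.linalg.eigh(H)

E0 = eigenvalues[0]       # lowest eigenvalue, ~ -0.378614
c = eigenvectors[:, 0]    # corresponding coefficients
print(E0, c)
```

Note that an eigenvector is only defined up to an overall sign, so the coefficients may come out as $(0.99992, 0.0126156)$ rather than the negated pair; both describe the same state.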

In Example 2 it was shown how we can obtain the best energy given a trial wave function which is a linear combination of a set of wave functions. Let us call these wave functions in the linear combination the basis functions. In order for the procedure in example 2 to work, the basis functions need to be orthonormal to each other. Finding a set of orthonormal functions by hand is somewhat tedious (try it for yourself if you do not believe me!). It would therefore be very handy if we could apply the above methodology (albeit somewhat changed) using a set of non-orthonormal basis functions.

I already showed you how you can use Lagrange's method of undetermined multipliers in order to derive the matrix equation $Hc = Ec$. When the basis functions are no longer orthonormal, this matrix equation changes slightly and becomes

$$H\vec{c} = ES\vec{c}.$$

Herein, $S$ is termed the overlap matrix, where each element is

$$S_{ij} = \langle \psi_{i} \lvert \psi_{j} \rangle.$$

The introduction of the overlap matrix $S$ makes the matrix diagonalization a tiny bit more complicated. There are several procedures for obtaining the eigenvalues and -vectors. The procedure we are going to use follows the canonical orthogonalization scheme. It works as follows

1. Construct the overlap matrix $S$.
2. Diagonalize the overlap matrix $S$ to obtain the eigenvalues of the overlap matrix and its corresponding eigenvector matrix $U$.
3. Construct a transformation matrix $X = U s^{-1/2}$, where $s$ is the diagonal matrix containing the eigenvalues of $S$.
4. Transform the Hamiltonian matrix to the canonically orthonormalized basis set $H' = X^{\dagger} H X$.
5. Diagonalize the transformed Hamiltonian matrix to obtain $D$ and $C'$ (where $C'$ contains the transformed coefficients).
6. Obtain the coefficients of the original basis by calculating $C = XC'$.
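The six steps above translate almost one-to-one into code. A minimal sketch (assuming NumPy; the function name `solve_gen_eig` is merely illustrative):

```python
import numpy as np

def solve_gen_eig(H, S):
    """Solve H c = E S c via canonical orthogonalization (steps 1-6 above)."""
    s, U = np.linalg.eigh(S)          # step 2: diagonalize the overlap matrix
    X = U @ np.diag(s ** -0.5)        # step 3: transformation matrix X = U s^(-1/2)
    Hp = X.T @ H @ X                  # step 4: transform the Hamiltonian
    E, Cp = np.linalg.eigh(Hp)        # step 5: diagonalize the transformed Hamiltonian
    C = X @ Cp                        # step 6: back-transform the coefficients
    return E, C                       # eigenvalues ascending; coefficients in columns of C
```

In a production code one would additionally discard columns of $X$ belonging to very small overlap eigenvalues, which is precisely where canonical orthogonalization helps against a (nearly) linearly dependent basis.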

The above probably looks a bit complicated, but really, it is not. Let me elaborate by showing you another example.

#### Example 3: Finding the energy of the 1s orbital of hydrogen using a non-orthonormal set of basis functions

The two functions we are going to use are $$\psi_{1} = \exp \left( -r^{2} / 2 \right)$$ and $$\psi_{2} = r \exp \left( -r^{2} / 2 \right).$$ (Step 1) Their corresponding overlap matrix is $$S = \begin{bmatrix} \langle \psi_{1} \lvert \psi_{1} \rangle & \langle \psi_{1} \lvert \psi_{2} \rangle \\\langle \psi_{2} \lvert \psi_{1} \rangle & \langle \psi_{2} \lvert \psi_{2} \rangle\end{bmatrix} = \begin{bmatrix} 5.56833 & 6.2832 \\ 6.2832 & 8.35249 \end{bmatrix}.$$ From this matrix we can clearly see that these functions are neither orthogonal to each other nor normalized.

(Step 2) Diagonalizing the overlap matrix yields the following eigenvalue and eigenvector matrices: $$s = \begin{bmatrix} 13.396 & 0 \\ 0 & 0.524846 \end{bmatrix}$$ and $$U = \begin{bmatrix} 0.625975 & -0.779843 \\ 0.779843 & 0.625975 \end{bmatrix}.$$ (Step 3) We can obtain the transformation matrix by calculating $$X = U s^{-1/2} = \begin{bmatrix} 0.625975 & -0.779843 \\ 0.779843 & 0.625975 \end{bmatrix} \begin{bmatrix} 1/\sqrt{13.396} & 0 \\ 0 & 1/\sqrt{0.524846} \end{bmatrix} = \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix}.$$ The Hamiltonian matrix is $$H = \begin{bmatrix} \langle \psi_{1} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{1} \lvert \hat{H} \rvert \psi_{2} \rangle \\\langle \psi_{2} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{2} \lvert \hat{H} \rvert \psi_{2} \rangle\end{bmatrix} = \begin{bmatrix} -2.10694 & -2.42674 \\ -2.42674 & -1.4109 \end{bmatrix}.$$ Because the $\lvert \psi_{1} \rangle$ in this example differs from the $\lvert \psi_{1} \rangle$ of example 2 only by a normalization constant, the matrix element $\langle \psi_{1} \lvert \hat{H} \rvert \psi_{1} \rangle$ differs only by the square of that constant, which is exactly the overlap $\langle \psi_{1} \lvert \psi_{1} \rangle$. Indeed, dividing the two gives $-2.10694 / 5.56833 \approx -0.378379$, the value found in example 2. Note that no such simple relation holds for the second basis function: the $\lvert \psi_{2} \rangle$ of example 2 contains an additional constant term (required for orthogonality) and is therefore not just a rescaled version of $r \exp \left( -r^{2}/2 \right)$.

(Step 4) Applying the transformation matrix to the Hamiltonian matrix gives $$H' = X^{\dagger} H X = \begin{bmatrix} 0.171029 & 0.213069 \\ -1.07644 & 0.864054 \end{bmatrix} \begin{bmatrix} -2.10694 & -2.42674 \\ -2.42674 & -1.4109 \end{bmatrix} \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix}$$ $$H' = X^{\dagger} H X = \begin{bmatrix} -0.302548 & 0.32611 \\ 0.32611 & 1.01951 \end{bmatrix}.$$ (Step 5) Diagonalizing the transformed Hamiltonian matrix gives $$D = \begin{bmatrix} 1.09557 & 0 \\ 0 & -0.378614 \end{bmatrix}$$ with corresponding eigenvector matrix $$C' = \begin{bmatrix} 0.227151 & -0.973859 \\ 0.973859 & 0.227151 \end{bmatrix}.$$ (Step 6) Finally, in order to get the coefficients in the original basis, we need to calculate $$C = XC' = \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix} \begin{bmatrix} 0.227151 & -0.973859 \\ 0.973859 & 0.227151 \end{bmatrix} = \begin{bmatrix} -1.00945 & -0.411073 \\ 0.889866 & -0.0112284 \end{bmatrix}.$$ The lowest eigenvalue found in the diagonal matrix $D$ is -0.378614 HT. The corresponding wave function is given by the coefficients of the corresponding column vector: $$\lvert \phi \rangle = -0.411073 \cdot \exp \left(-r^2 / 2 \right) - 0.0112284 \cdot r \exp \left(-r^2 / 2 \right).$$ This is effectively the same answer as found in example 2, as can be seen from the radial distribution plot below! That is not very surprising: both attempts at finding the best wave function (example 2 as well as this example) use a zeroth and first order polynomial in front of the same Gaussian exponent. If we compare the radial distribution function of the wave function found here with the one obtained in example 2, we see that the two curves overlap. This does not necessarily imply that the underlying contributions of the basis functions are the same.
If you examine the coefficients and the corresponding basis functions, you will notice a particular difference. The contribution of each basis function in each set is visualized in the graph below, and we can clearly see that the contributions differ. However, if we sum up the contributions of the two basis functions for each linear combination, we find that the wave functions of example 2 and this example are in essence the same function. Thus, we obtain the same result in both cases.
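That the two linear combinations are indeed the same function can be checked numerically on a grid. A small sketch (assuming NumPy; for $\psi_{1}$ of example 2 the unit-normalized Gaussian $\pi^{-3/4} \exp \left( -r^{2}/2 \right)$ is used):

```python
import numpy as np

r = np.linspace(0.0, 6.0, 601)
g = np.exp(-r**2 / 2.0)

# the orthonormal basis functions of example 2
psi1 = g / np.pi**0.75
psi2 = (2.0 * np.pi**0.25 * r - 4.0 / np.pi**0.25) \
       / np.sqrt(2.0 * np.pi * (3.0 * np.pi - 8.0)) * g

phi_ex2 = -0.99992 * psi1 - 0.0126156 * psi2    # result of example 2
phi_ex3 = -0.411073 * g - 0.0112284 * r * g     # result of example 3

# the two wave functions coincide to within the rounding of the coefficients
print(np.max(np.abs(phi_ex2 - phi_ex3)))
```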

Although we now have the tools to find the best approximation of the ground state energy and its corresponding wave function using any set of basis functions, the results so far were not very satisfactory. Using only a single trial wave function, as in example 1, gave better results. In order to improve on the previous results, we need to expand our set of basis functions. A larger basis set should in principle give a better result.

#### Example 4: The energy of the 1s orbital of hydrogen using a basis set including polynomials up to 4th order

For our basis set, we are going to use the following set of basis functions: $$\lvert \psi_{i} \rangle = r^{i} \exp \left( -r^2/2 \right)$$ for $i = 0,1,2,3,4$. The overlap matrix $S$ then looks like: $$S = \begin{bmatrix} 5.56833 & 6.28319 & 8.35249 & 12.5664 & 20.8812 \\ 6.28319 & 8.35249 & 12.5664 & 20.8812 & 37.6991 \\ 8.35249 & 12.5664 & 20.8812 & 37.6991 & 73.0843 \\ 12.5664 & 20.8812 & 37.6991 & 73.0843 & 150.796 \\ 20.8812 & 37.6991 & 73.0843 & 150.796 & 328.879 \end{bmatrix}$$ and the Hamiltonian matrix $H$ is $$H = \begin{bmatrix} -2.10694 & -2.42674 & -4.19506 & -8.35249 & -17.7867 \\ -2.42674 & -1.4109 & -2.06931 & -5.25794 & -14.598 \\ -4.19506 & -2.06931 & -1.08169 & -2.03167 & -8.98742 \\ -8.35249 & -5.25794 & -2.03167 & 1.45319 & 2.31392 \\ -17.7867 & -14.598 & -8.98742 & 2.31392 & 22.7788 \end{bmatrix}$$ Using the same procedure as in example 3, we obtain for the lowest eigenvalue of the transformed Hamiltonian matrix an energy of -0.487773 HT. This result is a significant improvement and comes very close to the exact energy of -0.5 HT!
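All of the matrix elements in this example can be generated numerically instead of by hand. A sketch (assuming Python with NumPy/SciPy; the radial Laplacian of $r^{j} \exp \left( -r^{2}/2 \right)$ has been worked out analytically inside the integrand, and `scipy.linalg.eigh` solves the generalized eigenvalue problem $H\vec{c} = ES\vec{c}$ directly):

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eigh

n = 5  # basis functions r^i * exp(-r^2/2) for i = 0..4

def overlap(i, j):
    # S_ij = 4*pi * int_0^inf r^(i+j+2) exp(-r^2) dr
    return 4.0 * np.pi * quad(lambda r: r**(i + j + 2) * np.exp(-r**2), 0.0, np.inf)[0]

def hamiltonian(i, j):
    # kinetic part: nabla^2 (r^j e^{-r^2/2}) = (j(j+1) r^{j-2} - (2j+3) r^j + r^{j+2}) e^{-r^2/2};
    # after multiplying with the bra and the Jacobian r^2, all powers of r are non-negative
    def integrand(r):
        kin = -0.5 * (j * (j + 1) * r**(i + j) - (2 * j + 3) * r**(i + j + 2) + r**(i + j + 4))
        pot = -r**(i + j + 1)  # from the -1/r Coulomb term
        return 4.0 * np.pi * (kin + pot) * np.exp(-r**2)
    return quad(integrand, 0.0, np.inf)[0]

S = np.array([[overlap(i, j) for j in range(n)] for i in range(n)])
H = np.array([[hamiltonian(i, j) for j in range(n)] for i in range(n)])

E, C = eigh(H, S)   # generalized eigenvalue problem H c = E S c
print(E[0])         # lowest eigenvalue, ~ -0.4878
```

Increasing `n` should reproduce the 10-function result quoted below, although for large `n` the overlap matrix becomes nearly singular and some care with numerical precision is needed.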

Further improvement by expanding the basis set to 10 basis functions gives a result of -0.498481 HT.

## Summary

In this blog post I have shown that the variational principle can be employed to find the best approximation to the ground state energy. By introducing some form of variational freedom in a trial wave function and then minimizing the corresponding energy, that best approximation can be found.

In the linear variational method, the variational freedom is provided by the specific set of basis functions used; the method finds the coefficients of the linear combination of these basis functions that minimize the energy. Here, I have shown that a sufficiently large set of basis functions, each composed of a Gaussian multiplied by a polynomial term of increasing order, is able to approach the true ground state energy of the hydrogen atom.

### Acknowledgements

Finally, I would like to acknowledge Bart Zijlstra for thoroughly proofreading this post and pointing out the mistakes.

#### Drop a line

Question:
What is the answer to Ten + Nine?

### Related posts

49736
9
28-07-2014
7637
1
22-03-2015
3695
0
27-12-2017
3517
0
29-12-2017
• • History

## Request History

4 previous requests available
====
• Session

• 0(null)
====
• Request

## Request

#### Cake Params

• plugin(null)
• controllerposts
• actionview
• named(array)
• Introduction to Electronic Structure Calculations The variational principle
• pass(array)
• 029

#### Post data

No post data.

#### Query string

No querystring data.

#### Current Route

• keys(array)
• 0controller
• 1action
• options(array)
• defaultRoute(true)
• defaults(array)
• plugin(null)
• template/:controller/:action/*
====
• Sql Log

## Sql Logs

#### default

##### Total Time: 4 ms Total Queries: 22 queries
Query Affected Num. rows Took (ms) Actions
SELECT COUNT(*) AS count FROM ivofilot_nl.posts AS Post WHERE Post.id = 29 1 1 0
SELECT Post.active, Post.id FROM ivofilot_nl.posts AS Post WHERE Post.id = 29 LIMIT 1 1 1 0
SELECT Comment.email, Comment.comment, Comment.id, Comment.post_id, Comment.created FROM ivofilot_nl.comments AS Comment WHERE Comment.post_id = (29) 0 0 0
SELECT Tag.id, Tag.name, Tag.icon, Tag.color, PostsTag.post_id, PostsTag.tag_id FROM ivofilot_nl.tags AS Tag JOIN ivofilot_nl.posts_tags AS PostsTag ON (PostsTag.post_id = 29 AND PostsTag.tag_id = Tag.id) 2 2 0
UPDATE posts SET watched=watched+1 WHERE id='29' 1 1 0
SELECT Comment.id, Comment.email, Comment.comment, Comment.post_id, Comment.parent_id, Comment.lft, Comment.rght, Comment.active, Comment.code, Comment.deleted, Comment.created, Comment.modified, Post.id, Post.title, Post.content, Post.watched, Post.active, Post.created, Post.modified, ParentComment.id, ParentComment.email, ParentComment.comment, ParentComment.post_id, ParentComment.parent_id, ParentComment.lft, ParentComment.rght, ParentComment.active, ParentComment.code, ParentComment.deleted, ParentComment.created, ParentComment.modified FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 29 AND Comment.active = '1' 0 0 0
SELECT Tag.id, Tag.name, Tag.icon, Tag.color FROM ivofilot_nl.tags AS Tag inner JOIN ivofilot_nl.posts_tags AS PostsTag ON (Tag.id = PostsTag.tag_id) inner JOIN ivofilot_nl.posts AS Post ON (PostsTag.post_id = Post.id) WHERE Post.id = 29 2 2 0
SELECT Post.id, Post.title, Post.content, Post.watched, Post.active, Post.created, Post.modified, PostsTag.post_id, PostsTag.tag_id FROM ivofilot_nl.posts AS Post JOIN ivofilot_nl.posts_tags AS PostsTag ON (PostsTag.tag_id IN (25, 23) AND PostsTag.post_id = Post.id) 14 14 1 maybe slow
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 29 1 1 1
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 36 1 1 1
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 27 1 1 0
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 28 1 1 1
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 21 1 1 0
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 24 1 1 0
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 29 1 1 0
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 31 1 1 0
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 33 1 1 0
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 20 1 1 0
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 36 1 1 0
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 35 1 1 0
SELECT Post.id, Post.title, Post.content, Post.watched, Post.active, Post.created, Post.modified FROM ivofilot_nl.posts AS Post WHERE Post.id = 29 LIMIT 1 1 1 0
SELECT COUNT(*) AS count FROM ivofilot_nl.comments AS Comment LEFT JOIN ivofilot_nl.posts AS Post ON (Comment.post_id = Post.id) LEFT JOIN ivofilot_nl.comments AS ParentComment ON (Comment.parent_id = ParentComment.id) WHERE Comment.post_id = 29 1 1 0

#### Query Explain:

Click an "Explain" link above, to see the query explanation.

====
• Timer

## Memory

Peak Memory Use 3.81 MB

Message Memory use
Component initialization 792 KB
Controller action start 838 KB
Controller render start 1.52 MB
View render complete 1.92 MB

## Timers

Total Request Time: 175 (ms)

Message Time in ms Graph
Core Processing (Derived from $_SERVER["REQUEST_TIME"]) 12.37 Event: Controller.initialize 0.07 Event: Controller.startup 3.39 Controller action 65.54 Event: Controller.beforeRender 19.35 » Processing toolbar data 19.29 Rendering View 35.14 » Event: View.beforeRender 0.02 » Rendering APP/View/Posts/view.ctp 33.65 » » Rendering APP/View/Elements/code_highlighting.ctp 0.76 » » Rendering APP/View/Elements/post.commentform.ctp 13.74 » » » Rendering APP/View/Elements/post.comment.captcha.ctp 0.89 » » Rendering APP/View/Elements/post.comments.ctp 0.45 » » Rendering APP/View/Elements/post.relatedpost.ctp 1.04 » Event: View.afterRender 0.02 » Event: View.beforeLayout 0.02 » Rendering APP/View/Layouts/default.ctp 0.63 » » Rendering APP/View/Elements/navbar.ctp 0.15 » » Rendering APP/View/Elements/footer.ctp 0.13 » » » Rendering APP/View/Elements/biography.ctp 0.04 Event: View.afterLayout 0.00 ==== • Log ## Logs There were no log entries made this request ==== • Variables ## View Variables • meta_keywordsQuantum Chemistry, Computational Chemistry, Blog, Post, Ivo Filot • meta_descriptionIntroduction to Electronic Structure Calculations: The variational principle • title_for_layoutIntroduction to Electronic Structure Calculations: The variational principle • tags(array) • 0(array) • Tag(array) • id25 • nameQuantum Chemistry • iconfa fa-graduation-cap • colorinfo • Post(array) • 0(array) • id29 • titleIntroduction to Electronic Structure Calculations: The variational principle • content# Introduction to Electronic Structure Calculations: The variational principle ## Introduction This post is the first part in an upcoming series explaining how to perform electronic structure calculations. The focus of this series is how to set-up an electronic structure calculation from scratch. We are not going to use any of the available quantum chemical codes, but write our very own code. 
This way, we will obtain a better understanding of how quantum chemical calculations are being performed. The way I am going to present the topic is by first providing the underlying theory and then further explaining the theory by giving a set of examples. The outline of this series is as follows: We start of with explaining some basics such as the (linear) variational principle, the Hartree product and the Slater determinant. From there on, we derive the Hartree-Fock equations and show how to set-up a simple calculation for the H<sub>2</sub> molecule. Next, we explain about basis sets, Gaussian integrals and the Hartree-Fock self-consistent field procedure. Finally, we talk about atomic and molecular orbitals, the construction of MO diagrams and the visualization of orbitals. For this series, I am going to assume that you have a basic understanding of quantum mechanics. You know what a wave function is and its statistical interpretation. Furthermore, I assume that you have passed introductory courses in calculus and linear algebra. ## The variational principle Given a normalized wave function$\lvert\phi\rangle$that satisfies the appropriate boundary conditions, then the expectation value of the Hamiltonian is an upper bound to the exact ground state energy. In other words, the lowest energy we can find, gives us the best wave function. Thus, if$ \langle \phi \rvert \phi \rangle=1 $, then$ \langle \phi \lvert \hat{H} \rvert \phi \rangle \geq E_{0} $. <div class="bs-callout bs-callout-primary"> <h4>Example 1: The energy of the 1s orbital in hydrogen</h4> Let's exemplify this by providing an example. In the following example, we wish to obtain the best value for$ \alpha $for the trial function$ \lvert \phi \rangle = N \exp \left( - \alpha r^{2} \right) $(where$N$is a normalization constant) that describes the 1s orbital in the H-atom. 
The Hamiltonian for the H-atom in <a href="https://en.wikipedia.org/wiki/Atomic_units">atomic units</a> is: $$\hat{H} = -\frac{1}{2} \nabla^{2} - \frac{1}{r}.$$ The first step is to find$E$as a function of$\alpha$. Therefore, we need to evaluate the following integral $$E = \langle \phi \lvert \hat{H} \rvert \phi \rangle = \int_{0}^{\infty} \textrm{d}r \int_{0}^{2\pi} \textrm{d} \theta \int_{0}^{\pi} \textrm{d} \varphi \, r^{2} \sin \varphi N \exp \left( - \alpha r^{2} \right) \left[ -\frac{1}{2} \nabla^{2} - \frac{1}{r} \right] N \exp \left( - \alpha r^{2} \right) .$$ Note that because we are integrating in <a href="http://mathworld.wolfram.com/SphericalCoordinates.html">spherical coordinates</a>, we have introduced the <a href="http://mathworld.wolfram.com/Jacobian.html">Jacobian</a> $$J = r^{2} \sin \varphi .$$ Furthermore, the radial part of the <a href="http://mathworld.wolfram.com/Laplacian.html">Laplacian</a>$\nabla^{2}$in spherical coordinates has the following form: $$\nabla^{2} = r^{-2} \frac{\textrm{d}}{\textrm{d} r} \left( r^{2} \frac{\textrm{d}}{\textrm{d} r} \right)$$ (we can ignore the angular parts of the Laplacian, because the 1s trial wave function is spherically symmetrical)<br><br> The above integral can be divided into two seperate integrals of the following forms $$I_1 = - \left( \frac{4 \pi N^{2}}{2} \right) \int_0^\infty r^2 \exp \left( -\alpha r^2 \right) \left[ r^{-2} \frac{\textrm{d}}{\textrm{d} r} \left( r^{2} \frac{\textrm{d}}{\textrm{d} r} \right) \exp \left( -\alpha r^2 \right) \right] \textrm{d}r$$ and $$I_2 = - \left( \frac{4 \pi N^{2}}{2} \right) \int_0^\infty r^2 \exp \left( -\alpha r^2 \right) \frac{1}{r} \exp \left( -\alpha r^2 \right) \textrm{d}r .$$ Since the wavefunction has no angular dependencies, the azimuthal ($\theta$) and polar ($\varphi$) parts have already been integrated in the above equations. These give the factor \pi$. To solve the first integral ($I_{1}$), we need to apply the Laplacian. 
This gives $$I_1 = - \left( \frac{4 \pi N^{2}}{2} \right) \int_0^\infty \exp \left( -\alpha r^2 \right) \left[ 4 \alpha^{2}r^{4} - 6 \alpha r^{2} \right] \exp \left( -\alpha r^2 \right) \textrm{d}r .$$ Slightly rearranging this equation gives us $$I_1 = - 2 \pi N^{2} \int_0^\infty \left( 4 \alpha^{2}r^{4} - 6 \alpha r^{2} \right) \exp \left( -2 \alpha r^2 \right) \textrm{d}r .$$ This integral can be solved by performing <a href="https://en.wikipedia.org/wiki/Integration_by_parts">integration by parts</a>, but here we will just use the following general formulas: $$\int_{0}^{\infty} \textrm{d}r \, r^{2m} \exp \left(-\alpha r^{2} \right) = \frac{(2m)!\pi^{1/2}}{2^{2m+1}m!\alpha^{m+1/2}}$$ and $$\int_{0}^{\infty} \textrm{d}r \, r^{2m+1} \exp \left(-\alpha r^{2} \right) = \frac{m!}{2\alpha^{m+1}} .$$ Applying these formulas to $I_1$ yields $$I_{1} = -2 \pi N^{2} \left[ 4 \alpha^{2} \frac{4! \pi^{1/2}}{2^5 \, 2! \, (2 \alpha)^{5/2}} - 6 \alpha \frac{2!\,\pi^{1/2}}{2^3 \, 1! \, (2\alpha)^{3/2}} \right] = N^{2} \frac{3 \pi^{3/2}} {4 \sqrt{2\alpha}}$$ and for $I_{2}$: $$I_{2} = -4 \pi N^{2} \int_{0}^{\infty} r \exp \left( -2 \alpha r^{2} \right) \textrm{d}r = -4 \pi N^{2} \frac{1}{4 \alpha} = -\frac{\pi}{\alpha} N^{2} .$$ Summing these two parts gives $$\langle \phi \lvert \hat{H} \rvert \phi \rangle = I_1 + I_2 = N^2 \left( \frac{3 \pi ^{3/2}}{4 \sqrt{2 \alpha}} - \frac{\pi}{\alpha} \right) .$$ Using the same procedure, we can also evaluate the integral $\langle \phi \rvert \phi \rangle$.
$$\langle \phi \rvert \phi \rangle = 4 \pi N^{2} \int_{0}^{\infty} r^2 \exp \left( -2 \alpha r^2 \right) \textrm{d}r = N^{2} \frac{\pi^{3/2}}{2 \sqrt{2}\,\alpha^{3/2}} .$$ To evaluate the expectation value of the energy $E$ for the trial wave function $\lvert \phi \rangle$, we need to evaluate $$E = \frac{\langle \phi \lvert \hat{H} \rvert \phi \rangle }{\langle \phi \rvert \phi \rangle} = \frac{3 \alpha}{2} - \frac{2 \sqrt{2} \alpha ^{1/2}}{\pi ^{1/2}} .$$ The reason we need to divide by $\langle \phi \rvert \phi \rangle$ is that $\lvert \phi \rangle$ is not normalized yet. If $\lvert \phi \rangle$ were normalized, the integral $\langle \phi \rvert \phi \rangle$ would be equal to one and thus drop out. The best value of $\alpha$ can then be found by differentiating $E$ with respect to $\alpha$ and finding the value of $\alpha$ where the derivative is zero: $$\frac{\textrm{d}E}{\textrm{d}\alpha} = \frac{3}{2} - \frac{\sqrt{2}}{\pi ^{1/2} \alpha^{1/2}} = 0 ,$$ which gives $$\alpha = \frac{8}{9 \pi}.$$ This gives us the following upper bound for the energy $E$: $$E = \frac{3}{2} \cdot \frac{8}{9 \pi} - \frac{2 \sqrt{2} \left( \frac{8}{9 \pi} \right)^{1/2}}{\pi ^{1/2}} = -\frac{4}{3 \pi} \approx -0.4244 .$$ </div>

The exact energy of the 1s orbital in hydrogen is -0.5 HT with the corresponding wave function $\lvert \phi \rangle = \frac{1}{\sqrt{\pi}} \exp \left( - r \right)$.

## The linear variational principle

Although the exact answer for the energy of the 1s orbital in hydrogen is known, let us assume that we do *not* know it and would like to obtain it. One way is to pick various trial wave functions with one independent variable and optimize that variable. The lowest energy we find then gives us the best approximation of the ground state wave function. This approach would be quite tedious (and how would you know which trial wave function to choose in the first place anyway?).
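As an aside, the one-parameter minimization of example 1 is easy to verify numerically. Below is a minimal sketch (assuming NumPy; this script is my own addition, not part of the original derivation) that simply scans $E(\alpha)$ on a dense grid:

```python
import numpy as np

# E(alpha) for the trial function N*exp(-alpha*r^2), as derived in example 1:
# E = 3*alpha/2 - 2*sqrt(2*alpha/pi)
alpha = np.linspace(0.01, 1.0, 100000)
E = 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

i = np.argmin(E)
print(alpha[i], E[i])  # about 0.2829 (= 8/(9*pi)) and -0.4244 (= -4/(3*pi))
```

A grid scan is of course cruder than setting the derivative to zero, but it confirms the analytic optimum without any calculus.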
It would be nice if we could have just a single trial wave function with a lot of 'variational freedom'. Here, I propose to use a trial wave function that is a linear combination of fixed wave functions, like so: $$\lvert \phi \rangle = \sum\_{i=1}^N c_{i} \lvert \psi\_{i} \rangle .$$ If we assume that the wave functions $\lvert \psi\_i \rangle$ are orthonormal in the sense that $\langle \psi\_i \lvert \psi\_j \rangle = \delta\_{ij}$, then the energy is $$E = \langle \phi \lvert \hat{H} \rvert \phi \rangle = \sum\_{ij} c\_{i} c\_{j} \langle \psi\_i \lvert \hat{H} \rvert \psi\_j \rangle = \sum\_{ij} c\_{i}c\_{j} H\_{ij}$$ and we are tasked with finding the set of coefficients $c\_{i}$ that minimizes $E$. It is exactly these coefficients (and accompanying wave functions) that provide us with the necessary variational freedom. (Note that $H\_{ij} = \langle \psi\_i \lvert \hat{H} \rvert \psi\_j \rangle$.) The procedure to find this set uses [Lagrange's method of undetermined multipliers](https://en.wikipedia.org/wiki/Lagrange_multiplier). Herein $$L \left( c\_1, c\_2, \cdots, E \right) = \langle \phi \lvert \hat{H} \rvert \phi \rangle - E \left(\langle \phi \lvert \phi \rangle - 1 \right)$$ and we need to find the particular set of $c\_k$ where $$\frac{\partial L}{\partial c\_{k}} = 0$$ for each $k$. For each $k$, this gives the following equation $$\sum\_{j} c\_j H\_{kj} + \sum\_i c\_i H\_{ik} - 2 E c\_{k} = 0 .$$ Since $\langle \psi\_a \lvert \hat{H} \rvert \psi\_b \rangle = \langle \psi\_b \lvert \hat{H} \rvert \psi\_a \rangle$, i.e. $H\_{ab} = H\_{ba}$, we can simplify the above equation to $$\sum\_{j} H\_{kj} c\_j - E c\_{k} = 0.$$ The above expression is just the matrix equation $$H\vec{c} = E\vec{c} .$$ This last equation probably looks very familiar to you: it is the [matrix eigenvalue problem](http://mathworld.wolfram.com/Eigenvalue.html), a very typical problem in linear algebra. To solve the matrix eigenvalue problem, one needs to diagonalize the $H$ matrix.
This gives a set of eigenvalues and corresponding eigenvectors. The eigenvector corresponding to the lowest eigenvalue is the set of $c\_k$ we are interested in. <div class="bs-callout bs-callout-primary"> <h4>Example 2: The 2x2 linear variational problem for finding the energy of the 1s orbital of hydrogen</h4> To illustrate the above procedure, we are going to find the best solution for the 1s orbital in the hydrogen atom using a trial wave function that is a linear combination of two orthonormal wave functions. The two orthonormal wave functions are $$\psi_{1} = \frac{1}{\pi^{3/4}} \exp \left( -r^2 / 2 \right)$$ and $$\psi_{2} = \frac{2\pi^{1/4}r - 4/\pi^{1/4}}{\sqrt{2\pi(3\pi-8)}} \exp \left( -r^2 / 2 \right).$$ Because of the orthonormality condition, the following integrals hold: $$\langle \psi_{1} \lvert \psi_{2} \rangle = 0$$ and $$\langle \psi_{1} \lvert \psi_{1} \rangle = \langle \psi_{2} \lvert \psi_{2} \rangle = 1.$$ Applying the Hamiltonian to these wave functions, we obtain the following 2x2 H-matrix: $$H = \begin{bmatrix} \langle \psi_{1} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{1} \lvert \hat{H} \rvert \psi_{2} \rangle \\ \langle \psi_{2} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{2} \lvert \hat{H} \rvert \psi_{2} \rangle \end{bmatrix} = \begin{bmatrix} -0.378379 & -0.0185959 \\ -0.0185959 & 1.09531 \end{bmatrix}$$ Note that the H-matrix is symmetric: we can exchange $\lvert \psi_{1} \rangle$ and $\lvert \psi_{2} \rangle$ and obtain the same result. Furthermore, note that although $\langle \psi_{1} \lvert \psi_{2} \rangle = 0$, this does not mean that the integral is also zero when the Hamiltonian operator is applied to either of the wave functions.<br><br> In order to find the best energy and approximation to the ground state wave function, we need to diagonalize the H-matrix.
This can easily be done by hand, or using the <a href="https://www.wolframalpha.com/input/?i=Diagonalize%5B%7B%7B-0.378379%2C+-0.0185959%7D%2C+%7B-0.0185959%2C+1.09531%7D%7D%5D">very handy Wolfram Alpha website</a>. The result we obtain is $$D = \begin{bmatrix} 1.09554 & 0 \\ 0 & -0.378614 \end{bmatrix}$$ for the diagonal (eigenvalue) matrix with corresponding $$X = \begin{bmatrix} -0.0126156 & -0.99992 \\ 0.99992 & -0.0126156 \end{bmatrix}$$ for the eigenvector matrix. The first column of $X$ contains the coefficients corresponding to the first eigenvalue (energy) in the $D$-matrix and the second column of $X$ those for the second eigenvalue. Since the second eigenvalue is lower ($-0.378614 < 1.09554$), the second column of $X$ contains the coefficients we are looking for. Thus, the best approximation to the ground state wave function is: $$\lvert \phi \rangle = c_{1} \lvert \psi_{1} \rangle + c_{2} \lvert \psi_{2} \rangle = -0.99992 \lvert \psi_{1} \rangle - 0.0126156 \lvert \psi_{2} \rangle.$$ The corresponding energy for this wave function is -0.378614 HT. This result is actually worse than what we obtained using the procedure of example 1 (which gave -0.4244 HT). The reason is that the variational freedom of a linear combination of only two wave functions is actually less than that of varying a coefficient inside an exponent. To get better results, we would need to use a larger set of wave functions in the linear combination. </div>

In example 2 it was shown how we can obtain the best energy given a trial wave function that is a linear combination of a set of wave functions. Let us call the wave functions in the linear combination the basis functions. In order for the procedure in example 2 to work, the basis functions need to be orthonormal to each other. Finding a set of orthonormal functions by hand is somewhat tedious (try it for yourself if you do not believe me!).
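For readers who prefer code over Wolfram Alpha, the diagonalization of example 2 takes only a few lines with NumPy (a sketch of my own, not part of the original post). `numpy.linalg.eigh` is tailored to symmetric matrices such as our H-matrix and returns the eigenvalues in ascending order, so the ground state comes first; note that the overall sign of an eigenvector is arbitrary.

```python
import numpy as np

# the 2x2 H-matrix from example 2
H = np.array([[-0.378379, -0.0185959],
              [-0.0185959,  1.09531]])

# eigh handles real symmetric matrices; eigenvalues come out in ascending order
E, X = np.linalg.eigh(H)

print(E[0])     # lowest energy, about -0.378614
print(X[:, 0])  # coefficients c1, c2 (overall sign is arbitrary)
```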
It would therefore be very handy if we could apply the above methodology (albeit somewhat modified) using a set of non-orthonormal basis functions. I already showed you how to use Lagrange's method of undetermined multipliers to derive the matrix equation $H\vec{c} = E\vec{c}$. When the basis functions are no longer orthonormal, this matrix equation changes slightly and becomes $$H\vec{c} = ES\vec{c}.$$ Herein, $S$ is termed the overlap matrix, where each element is $$S\_{ij} = \langle \psi\_{i} \lvert \psi\_{j} \rangle.$$ The introduction of the overlap matrix $S$ makes the matrix diagonalization a tiny bit more complicated. There are several procedures for obtaining the eigenvalues and -vectors. The procedure we are going to use follows the canonical orthogonalization scheme. It works as follows:

1. Construct the overlap matrix $S$.
2. Diagonalize the overlap matrix $S$ to obtain its eigenvalues (collected in the diagonal matrix $s$) and the corresponding eigenvector matrix $U$.
3. Construct a transformation matrix $X = U s^{-1/2}$.
4. Transform the Hamiltonian matrix to the canonically orthonormalized basis set: $H' = X^{\dagger} H X$.
5. Diagonalize the transformed Hamiltonian matrix to obtain $D$ and $C'$ (where $C'$ contains the transformed coefficients).
6. Obtain the coefficients in the original basis by calculating $C = XC'$.

The above probably looks a bit complicated, but really, it is not. Let me elaborate by showing you another example.
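The six steps translate almost one-to-one into code. Below is a minimal sketch (assuming NumPy; the function name is my own) that solves $H\vec{c} = ES\vec{c}$ for any symmetric $H$ and overlap matrix $S$, demonstrated on the 2x2 matrices worked out in example 3. The ordering of the eigenvalues may differ from the worked example, but the energies are the same.

```python
import numpy as np

def canonical_orthogonalization(H, S):
    """Solve H c = E S c using the six-step canonical orthogonalization scheme."""
    s, U = np.linalg.eigh(S)            # step 2: diagonalize S (step 1 is building S)
    X = U @ np.diag(1.0 / np.sqrt(s))   # step 3: X = U s^(-1/2)
    Hp = X.T @ H @ X                    # step 4: H' = X^T H X
    E, Cp = np.linalg.eigh(Hp)          # step 5: diagonalize H'
    C = X @ Cp                          # step 6: back-transform the coefficients
    return E, C

# overlap and Hamiltonian matrices of example 3 (non-orthonormal Gaussian basis)
S = np.array([[5.56833, 6.2832], [6.2832, 8.35249]])
H = np.array([[-2.10694, -2.42674], [-2.42674, -1.4109]])

E, C = canonical_orthogonalization(H, S)
print(E[0])  # about -0.378614, reproducing examples 2 and 3
```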
<div class="bs-callout bs-callout-primary"> <h4>Example 3: Finding the energy of the 1s orbital of hydrogen using a non-orthonormal set of basis functions</h4> The two functions we are going to use are $$\psi_{1} = \exp \left( -r^{2} / 2 \right)$$ and $$\psi_{2} = r \exp \left( -r^{2} / 2 \right).$$ (<b>Step 1</b>) Their corresponding overlap matrix is $$S = \begin{bmatrix} \langle \psi_{1} \lvert \psi_{1} \rangle & \langle \psi_{1} \lvert \psi_{2} \rangle \\ \langle \psi_{2} \lvert \psi_{1} \rangle & \langle \psi_{2} \lvert \psi_{2} \rangle \end{bmatrix} = \begin{bmatrix} 5.56833 & 6.2832 \\ 6.2832 & 8.35249 \end{bmatrix}.$$ From this matrix we can clearly see that these functions are neither orthogonal to each other nor normalized. <br><br> (<b>Step 2</b>) Diagonalizing the overlap matrix yields the following diagonal and eigenvector matrices: $$s = \begin{bmatrix} 13.396 & 0 \\ 0 & 0.524846 \end{bmatrix}$$ and $$U = \begin{bmatrix} 0.625975 & -0.779843 \\ 0.779843 & 0.625975 \end{bmatrix}.$$ (<b>Step 3</b>) We obtain the transformation matrix by calculating $$X = U s^{-1/2} = \begin{bmatrix} 0.625975 & -0.779843 \\ 0.779843 & 0.625975 \end{bmatrix} \begin{bmatrix} 1/\sqrt{13.396} & 0 \\ 0 & 1/\sqrt{0.524846} \end{bmatrix} = \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix}.$$ The Hamiltonian matrix is $$H = \begin{bmatrix} \langle \psi_{1} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{1} \lvert \hat{H} \rvert \psi_{2} \rangle \\ \langle \psi_{2} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{2} \lvert \hat{H} \rvert \psi_{2} \rangle \end{bmatrix} = \begin{bmatrix} -2.10694 & -2.42674 \\ -2.42674 & -1.4109 \end{bmatrix}.$$ Because our wave function $\lvert \psi_{1} \rangle$ in this example differs only by a normalization constant from the wave function $\lvert \psi_{1} \rangle$ in example 2, the value of the Hamiltonian element $H_{00}$ should differ only by a factor equal to the square of the normalization constant.
In other words, if we divide $H_{00}$ by $S_{00}$ (which represents exactly that factor), we obtain the value of $H_{00}$ in example 2: $-2.10694 / 5.56833 \approx -0.378379$. (Note that the same does not hold for $H_{11}$: the wave function $\lvert \psi_{2} \rangle$ of example 2 contains both a constant and a term linear in $r$ in front of the Gaussian, so it differs from $\lvert \psi_{2} \rangle$ in this example by more than a normalization constant.)<br><br> (<b>Step 4</b>) Applying the transformation matrix to the Hamiltonian matrix gives $$H' = X^{\dagger} H X = \begin{bmatrix} 0.171029 & 0.213069 \\ -1.07644 & 0.864054 \end{bmatrix} \begin{bmatrix} -2.10694 & -2.42674 \\ -2.42674 & -1.4109 \end{bmatrix} \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix}$$ $$H' = \begin{bmatrix} -0.302548 & 0.32611 \\ 0.32611 & 1.01951 \end{bmatrix}.$$ (<b>Step 5</b>) Diagonalizing the transformed Hamiltonian matrix gives $$D = \begin{bmatrix} 1.09557 & 0 \\ 0 & -0.378614 \end{bmatrix}$$ with corresponding eigenvector matrix $$C' = \begin{bmatrix} 0.227151 & -0.973859 \\ 0.973859 & 0.227151 \end{bmatrix}.$$ (<b>Step 6</b>) Finally, to get the coefficients in the original basis, we calculate $$C = XC' = \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix} \begin{bmatrix} 0.227151 & -0.973859 \\ 0.973859 & 0.227151 \end{bmatrix} = \begin{bmatrix} -1.00945 & -0.411073 \\ 0.889866 & -0.0112284 \end{bmatrix}.$$ The lowest eigenvalue found from the diagonal matrix $D$ is -0.378614 HT. The corresponding wave function is found by taking the coefficients of the corresponding column vector, giving $$\lvert \phi \rangle = -0.411073 \cdot \exp \left(-r^2 / 2 \right) - 0.0112284 \cdot r \exp \left(-r^2 / 2 \right).$$ This is effectively the same answer as found in example 2, as can be seen from the radial distribution plot (below)! This is of course not very surprising: the variational freedom in both approximations comes from the polynomial in front of the exponent.
Both attempts at finding the best wave function (example 2 as well as this example) use a zeroth and first order polynomial in front of the exponent. If we compare the radial distribution function of the wave function obtained here with the one obtained in example 2, we see that these functions overlap.<br><br> <img src="http://www.ivofilot.nl/images/24/bp_qc1_fig1.png"><br><br> This does not necessarily imply that the underlying contributions of the basis functions are the same. If you examine the coefficients and the corresponding basis functions, you will notice a particular difference. The contribution of each basis function in each set is visualized in the graph below. We can clearly see that the contributions differ.<br><br> <img src="http://www.ivofilot.nl/images/25/bp_qc1_fig2.png"><br><br> If we now sum up the contributions of the two basis functions for each linear combination, we see that the wave functions of example 2 and this example are in essence the same.<br><br> <img src="http://www.ivofilot.nl/images/26/bp_qc1_fig3.png"><br><br> Thus, we obtain the same result for each wave function. </div>

Although we now have the tools to find the best approximation of the ground state energy and its corresponding wave function using any set of basis functions, the result so far is not very satisfactory. Using only a single trial wave function, as for instance in example 1, gave better results. In order to improve on the previous results, we need to expand our set of basis functions. A larger basis set should in principle give a better result.

<div class="bs-callout bs-callout-primary"> <h4>Example 4: The energy of the 1s orbital of hydrogen using a basis set including polynomials up to 4th order</h4> For our basis set, we are going to use the following set of basis functions: $$\lvert \psi_{i} \rangle = r^{i} \exp \left( -r^2/2 \right)$$ for $i = 0,1,2,3,4$.
The overlap matrix $S$ then looks like: $$S = \begin{bmatrix} 5.56833 & 6.28319 & 8.35249 & 12.5664 & 20.8812 \\ 6.28319 & 8.35249 & 12.5664 & 20.8812 & 37.6991 \\ 8.35249 & 12.5664 & 20.8812 & 37.6991 & 73.0843 \\ 12.5664 & 20.8812 & 37.6991 & 73.0843 & 150.796 \\ 20.8812 & 37.6991 & 73.0843 & 150.796 & 328.879 \end{bmatrix}$$ and the Hamiltonian matrix $H$ is $$H = \begin{bmatrix} -2.10694 & -2.42674 & -4.19506 & -8.35249 & -17.7867 \\ -2.42674 & -1.4109 & -2.06931 & -5.25794 & -14.598 \\ -4.19506 & -2.06931 & -1.08169 & -2.03167 & -8.98742 \\ -8.35249 & -5.25794 & -2.03167 & 1.45319 & 2.31392 \\ -17.7867 & -14.598 & -8.98742 & 2.31392 & 22.7788 \end{bmatrix}$$ Using the same procedure as in example 3, we obtain for the lowest eigenvalue of the transformed Hamiltonian matrix the energy of <b>-0.487773 HT</b>. This result is a significant improvement and comes very close to the exact energy of -0.5 HT! <br><br> Further improvement by expanding the basis set to 10 basis functions gives a result of -0.498481 HT. </div>

## Summary

In this blog post I have shown that in order to find the best approximation to the ground state energy, the variational principle can be employed. By introducing some kind of variational freedom in a trial wave function and then minimizing the corresponding energy, the best approximation to the ground state energy can be found. In the linear variational method, the variational freedom is provided by the specific set of basis functions used, and the coefficients of the linear combination of those basis functions that minimize the energy are found by matrix diagonalization. Here, I have shown that a sufficiently large set of basis functions composed of a Gaussian part (the exponent) multiplied by a polynomial of increasing order is able to approach the true ground state energy of the hydrogen atom.
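Example 4 can be reproduced end-to-end with a short script. The sketch below (assuming NumPy; my own addition, not part of the original post) builds $S$ and $H$ in closed form using the two Gaussian integral formulas from example 1 — applying the Hamiltonian to $\psi_j = r^{j} \exp(-r^2/2)$ by hand gives the polynomial shown in the comment — and then runs the canonical orthogonalization scheme of example 3:

```python
import math
import numpy as np

def gauss_int(k):
    """Closed form of the radial integral int_0^inf r^k exp(-r^2) dr."""
    if k % 2 == 0:                      # even power: (2m)! sqrt(pi) / (2^(2m+1) m!)
        m = k // 2
        return math.factorial(2 * m) * math.sqrt(math.pi) / (2**(2 * m + 1) * math.factorial(m))
    m = (k - 1) // 2                    # odd power: m! / 2
    return math.factorial(m) / 2.0

N = 5  # basis functions psi_j = r^j exp(-r^2/2), j = 0..4
S = np.zeros((N, N))
H = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        # overlap element: 4*pi * int r^2 psi_i psi_j dr
        S[i, j] = 4.0 * math.pi * gauss_int(i + j + 2)
        # H psi_j = [-j(j+1)/2 r^(j-2) + (2j+3)/2 r^j - r^(j+2)/2 - r^(j-1)] exp(-r^2/2)
        H[i, j] = 4.0 * math.pi * (-j * (j + 1) / 2.0 * gauss_int(i + j)
                                   + (2 * j + 3) / 2.0 * gauss_int(i + j + 2)
                                   - 0.5 * gauss_int(i + j + 4)
                                   - gauss_int(i + j + 1))

# canonical orthogonalization (steps 2-6 of the scheme described earlier)
s, U = np.linalg.eigh(S)
X = U @ np.diag(1.0 / np.sqrt(s))
E, Cp = np.linalg.eigh(X.T @ H @ X)

print(E[0])  # about -0.487773
```

Increasing `N` enlarges the basis and pushes the energy further down toward the exact value.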
### Acknowledgements

Finally, I would like to acknowledge <a href="https://www.linkedin.com/pub/bart-zijlstra/74/682/29b/de">Bart Zijlstra</a> for thoroughly proofreading this post and pointing out mistakes.
# Introduction to Electronic Structure Calculations: Many-electron wave functions

## Introduction

This post is the second part in the Introduction to Electronic Structure Calculations series. In [our previous post](http://www.ivofilot.nl/posts/view/29), we discussed the minimization problem for one electron. Here, we are going to expand this to many electrons by introducing the many-electron wave function.

## The many-electron Hamiltonian

In the Born-Oppenheimer approximation, we consider the electrons to be moving in the field of fixed nuclei. This gives the following Hamiltonian that describes the motion of $N$ electrons in the field of $M$ nuclei: $$H\_{\textrm{elec}} = - \sum\_{i=1}^{N} \frac{1}{2} \nabla\_{i}^{2} - \sum\_{i=1}^{N} \sum\_{A=1}^{M} \frac{Z\_{A}}{r\_{i,A}} + \sum\_{i=1}^{N} \sum\_{j>i}^{N} \frac{1}{r\_{i,j}}$$ The first term in this equation describes the kinetic energy of the electrons, the second term the electron-nuclei interactions and the final term the electron-electron repulsion. Of course, the total energy of a molecule also depends on the nuclei-nuclei interactions, but because we consider these nuclei to have fixed positions, this is just a single term that is added to the electronic energy afterwards.

## The many-electron wave function

The wave function for a single electron (or single particle) is fairly easy to understand. But what does the wave function for many electrons look like? Let us propose that a many-electron wave function is just a simple product of one-electron wave functions, termed the Hartree product: $$\lvert \Psi \rangle = \prod\_{i=1}^{N} \lvert \psi\_{i} \rangle$$ For this many-electron wave function to be a valid candidate for the many-electron Hamiltonian, it needs to be an eigenfunction. We can test this by letting the many-electron Hamiltonian operate on the many-electron wave function and seeing if we can extract the eigenvalue (which should be the energy).
If our many-electron Hamiltonian were just a sum of one-electron Hamiltonians (neglecting the electron-electron repulsion term for the moment), we would simply extract for each electron its energy and leave the rest unchanged. In other words, if $h\_{i}$ is the one-electron Hamiltonian and $$H\_{\textrm{elec}} = \sum\_{i}^{N}h\_{i},$$ then $$H\_{\textrm{elec}} \lvert \Psi \rangle = \left[ \sum\_{i}^{N} \epsilon\_{i} \right] \lvert \Psi \rangle.$$ Thus the total energy of the many-electron wave function is the sum of the energies of its constituent one-electron wave functions.

Regrettably, there is a fundamental problem with the above wave function. The Hartree product is an uncorrelated or independent-electron wave function. For example, the probability of finding electron one in volume element $\textrm{d}\vec{x}\_{1}$ and electron two in volume element $\textrm{d}\vec{x}\_{2}$ at the same time is just the product of the two one-electron probabilities: $$P\_{1,2}(x\_{1}, x\_{2}) = \left| \psi\_{1} (x\_{1}) \right|^{2} \cdot \left| \psi\_{2} (x\_{2}) \right|^{2}$$ This is of course unrealistic, because electron one will repel electron two when it comes too close, so clearly these probabilities should be correlated in some way. There is another problem with the Hartree product: it takes no account of the indistinguishability of electrons, but specifically distinguishes electron one as occupying orbital one and electron two as occupying orbital two. The anti-symmetry principle does not distinguish between identical electrons and requires that electronic wave functions be anti-symmetric (change sign) with respect to the interchange of any two electrons. The way to resolve these problems is by introducing a new wave function that does take the anti-symmetry principle into account, but before that is possible, I have to introduce the concept of spin. From here on, any wave function has both a spatial and a spin part.
The spatial part is described by the spatial function $\psi(\vec{x})$ (just as before), and the spin part is described by $\alpha(\omega)$ for spin-up and by $\beta(\omega)$ for spin-down; $\omega$ is here an unspecified spin variable. Furthermore, the spin functions are orthonormal, such that $$\langle \alpha \lvert \alpha \rangle = \langle \beta \lvert \beta \rangle = 1 \quad \textrm{and} \quad \langle \alpha \lvert \beta \rangle = 0.$$ From here on, we denote a spin-orbital as $$\chi(x)$$ where $x$ represents the three spatial coordinates $\vec{x}$ and the spin coordinate $\omega$. The new many-electron wave function that I am proposing can be formed by using a Slater determinant. The Slater determinant looks as follows: $$\Psi (x\_{1}, x\_{2}, \cdots , x\_{N}) = \frac{1}{\sqrt{N!}} \left| \begin{matrix} \chi\_{1}(x\_{1}) & \chi\_{2}(x\_{1}) & \cdots & \chi\_{N}(x\_{1}) \\ \chi\_{1}(x\_{2}) & \chi\_{2}(x\_{2}) & \cdots & \chi\_{N}(x\_{2}) \\ \vdots & & & \vdots \\ \chi\_{1}(x\_{N}) & \chi\_{2}(x\_{N}) & \cdots & \chi\_{N}(x\_{N}) \end{matrix} \right|$$ Let us verify that this wave function is a suitable candidate by considering the two-electron case. <div class="bs-callout bs-callout-primary"> <h4>Example 1: The two-electron Slater determinant</h4> From the equation for the Slater determinant, we find that the two-electron wave function is $$\Psi(x_{1},x_{2}) = \frac{1}{\sqrt{2}} \left[ \chi_{1}(x_{1})\chi_{2}(x_{2}) - \chi_{2}(x_{1})\chi_{1}(x_{2}) \right]$$ From this equation, we can easily see that if we interchange electrons one and two, we obtain the same wave function with a minus sign in front of it: $$\Psi(x_{1},x_{2}) = -\Psi(x_{2},x_{1})$$ In other words, this function meets the anti-symmetry criterion.
Furthermore, if we assume that the spin-orbitals in the Slater determinant form an orthonormal set, then the resulting wave function is normalized: $$\langle \Psi(x_{1},x_{2}) \lvert \Psi(x_{1},x_{2}) \rangle = \frac{1}{2} \left[ \langle \chi_{1} \lvert \chi_{1}\rangle \langle \chi_{2} \lvert \chi_{2} \rangle - 2 \langle \chi_{1} \lvert \chi_{2}\rangle \langle \chi_{2} \lvert \chi_{1} \rangle + \langle \chi_{1} \lvert \chi_{1}\rangle \langle \chi_{2} \lvert \chi_{2} \rangle \right] = \frac{1}{2} \left[1 - 0 + 1 \right] = 1$$ If two electrons occupy the same spin-orbital (a forbidden situation), then the Slater determinant automatically gives zero: $$\Psi(x_{1},x_{2}) = \frac{1}{\sqrt{2}} \left[ \chi_{1}(x_{1})\chi_{1}(x_{2}) - \chi_{1}(x_{1})\chi_{1}(x_{2}) \right] = 0$$ Finally, the probability of finding electron one in spatial orbital one and electron two in spatial orbital two, where both electrons have the same spin, is $$P_{1,2}(x_{1}, x_{2}) = \frac{1}{2} \left[ \left| \psi_{1} (x_{1}) \right|^{2} \left| \psi_{2}(x_{2}) \right|^{2} + \left| \psi_{2} (x_{1}) \right|^{2} \left| \psi_{1}(x_{2}) \right|^{2} - \psi_{1}^{*} (x_{1})\psi_{2}(x_{1})\psi_{2}^{*} (x_{2})\psi_{1}(x_{2}) - \psi_{1} (x_{1})\psi_{2}^{*}(x_{1})\psi_{2} (x_{2})\psi_{1}^{*}(x_{2}) \right]$$ In the above expression, we have obtained two extra terms that make the probabilities correlated. This kind of correlation is termed exchange correlation. The chance of finding electron two around electron one (or vice versa) is lower (note that the extra terms are negative) if the two electrons have the same spin. A Fermi hole is said to exist around each electron. </div> Using our new many-electron anti-symmetric wave function and our many-electron Hamiltonian, we have all the necessary ingredients to calculate the energy of a many-electron system. Let us see what happens if we plug an anti-symmetrized two-electron wave function into a single-nucleus two-electron Hamiltonian. In other words, we are going to attempt to calculate the energy of the helium atom.
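The algebraic properties of the two-electron Slater determinant from example 1 are easy to demonstrate numerically. In the sketch below (assuming NumPy; a toy of my own in which the "spin-orbitals" are just random vectors, since anti-symmetry is a purely algebraic property), swapping the two electron coordinates flips the sign of $\Psi$, and placing both electrons in the same spin-orbital gives zero:

```python
import numpy as np

rng = np.random.default_rng(0)
chi = rng.normal(size=(2, 4))  # chi[i, x]: spin-orbital i sampled at 4 points x

def slater(i, j, x1, x2):
    """Two-electron Slater determinant built from spin-orbitals i and j."""
    return (chi[i, x1] * chi[j, x2] - chi[j, x1] * chi[i, x2]) / np.sqrt(2)

print(slater(0, 1, 2, 3), -slater(0, 1, 3, 2))  # equal: Psi(x1,x2) = -Psi(x2,x1)
print(slater(0, 0, 2, 3))                       # 0.0: same spin-orbital twice
```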
$$H\_{\textrm{elec}} \lvert \Psi \rangle = H\_{\textrm{elec}} \frac{1}{\sqrt{2}} \left[ \chi\_{1}(x\_{1})\chi\_{2}(x\_{2}) - \chi\_{2}(x\_{1})\chi\_{1}(x\_{2}) \right]$$ For two electrons and one nucleus, our Hamiltonian has the following form: $$H\_{\textrm{elec}} = - \sum\_{i=1}^{2} \frac{1}{2} \nabla\_{i}^{2} - \sum\_{i=1}^{2} \frac{Z\_{A}}{r\_{i,A}} + \sum\_{i=1}^{2} \sum\_{j>i}^{2} \frac{1}{r\_{i,j}}.$$ We can rewrite this Hamiltonian to emphasize its one- and two-electron components as $$H\_{\textrm{elec}} = T(1) + T(2) + V(1) + V(2) + R(1,2).$$ Here, $T(1)$ denotes the kinetic energy operator acting on electron one and $T(2)$ the kinetic energy operator acting on electron two. Similarly, $V(1)$ and $V(2)$ denote the potential energy operators for electrons one and two, respectively. The last term is the two-electron repulsion operator. Importantly, this operator acts on two electrons instead of one. Let us now evaluate what happens when these operators act on the wave function. When the kinetic operator $T(1)$ acts on the two-electron wave function, it extracts the kinetic energy of electron one from the spin-orbital in which electron one resides. Thus, $$T(1) \frac{1}{\sqrt{2}} \left[ \chi\_{1}(x\_{1})\chi\_{2}(x\_{2}) - \chi\_{2}(x\_{1})\chi\_{1}(x\_{2}) \right] = \frac{1}{\sqrt{2}} \left[ E^{(1)}\_{k}\chi\_{1}(x\_{1})\chi\_{2}(x\_{2}) - E^{(2)}\_{k}\chi\_{2}(x\_{1})\chi\_{1}(x\_{2}) \right]$$ Here $E^{(1)}\_{k}$ stands for the kinetic energy of spin-orbital one. To get the expectation value of the kinetic energy of electron one, we need to multiply the above result by $\langle \Psi \rvert$ and integrate over all space.
What we obtain is: $$\langle \frac{1}{\sqrt{2}} \left[ \chi\_{1}(x\_{1})\chi\_{2}(x\_{2}) - \chi\_{2}(x\_{1})\chi\_{1}(x\_{2}) \right] \lvert \frac{1}{\sqrt{2}} \left[ E^{(1)}\_{k}\chi\_{1}(x\_{1})\chi\_{2}(x\_{2}) - E^{(2)}\_{k}\chi\_{2}(x\_{1})\chi\_{1}(x\_{2}) \right] \rangle$$ After some collection of terms and rearranging, we obtain an integral similar to the one we have seen before in example 1: $$\frac{1}{2} \left[ E^{(1)}\_{k}\langle \chi\_{1} \lvert \chi\_{1}\rangle \langle \chi\_{2} \lvert \chi\_{2} \rangle - (E^{(1)}\_{k} + E^{(2)}\_{k}) \langle \chi\_{1} \lvert \chi\_{2}\rangle \langle \chi\_{2} \lvert \chi\_{1} \rangle + E^{(2)}\_{k}\langle \chi\_{1} \lvert \chi\_{1}\rangle \langle \chi\_{2} \lvert \chi\_{2} \rangle \right].$$ Because the spin-orbitals are orthonormal, the middle term between the brackets is zero and the final result is $$\frac{1}{2} \left( E^{(1)}\_{k} + E^{(2)}\_{k} \right).$$ In other words, we obtain the sum of the kinetic energies of all the spin-orbitals in the Slater determinant divided by the number of electrons in the system. We obtain exactly the same result if we use the operator $T(2)$. Hence, the total kinetic energy of our wave function is just the sum of the kinetic energies of all the spin-orbitals in the Slater determinant: $$\langle \Psi \lvert T(1) + T(2) \lvert \Psi \rangle = \frac{1}{2} \left( E^{(1)}\_{k} + E^{(2)}\_{k} \right) + \frac{1}{2} \left( E^{(1)}\_{k} + E^{(2)}\_{k} \right) = E^{(1)}\_{k} + E^{(2)}\_{k}$$ Similarly, if we examine the effect of the potential energy operator, we find $$\langle \Psi \lvert V(1) + V(2) \lvert \Psi \rangle = \frac{1}{2} \left( E^{(1)}\_{P} + E^{(2)}\_{P} \right) + \frac{1}{2} \left( E^{(1)}\_{P} + E^{(2)}\_{P} \right) = E^{(1)}\_{P} + E^{(2)}\_{P}.$$
# Animated gifs of molecular orbitals of CO
# Introduction

The electron density of a molecular orbital is basically a three-dimensional scalar field, which can be visualized using both [contour plots](http://www.itl.nist.gov/div898/handbook/eda/section3/contour.htm) and [isosurfaces](https://en.wikipedia.org/wiki/Isosurface). Typically, isosurfaces are employed, as they more clearly show the three-dimensional character of the molecular orbital. However, such an isosurface is nothing more than the [set of points in 3D space that share the same value](http://paulbourke.net/geometry/polygonise/). Thus, in some sense, there is a loss of information, especially if there are interesting features within the three-dimensional object spanned by the isosurface.

# Animated gifs

A very simple way to resolve this is to use contour plots. As mentioned, the downside of contour plots is that we are no longer able to effectively show the three-dimensional character, as we are simply [projecting planes within the scalar field](http://paulbourke.net/papers/conrec/) and colorizing each point on the plane based on the value of that point in the scalar field. However, we are able to generate a series of contour plots for a set of relevant planes within the three-dimensional space. Using the [previously demonstrated EDP program](http://www.ivofilot.nl/posts/view/27/Visualising+the+electron+density+of+the+binding+orbitals+of+the+CO+molecule+using+VASP), we can efficiently create these contour plots from the command line. In turn, using [ImageMagick](https://www.imagemagick.org) and a small Python script as shown below, we can construct [animated gifs](https://en.wikipedia.org/wiki/GIF) that loop over the generated set of contour plots.
<pre><code class="python">#!/usr/bin/env python
from subprocess import call
import numpy

# input file identifier
filestr = "CO_5SIGMA"

# output file name
output = "co_5sigma"

# generate slices and create a contour plot for each
for z in numpy.linspace(2.0, 8.0, 50):
    filename = "%s_%f.png" % (filestr, z)
    call(["/home/ivo/Documents/PROGRAMMING/CPP/edp/build/edp",
          "-i", "/scratch/ivo/vasp/CO/PARCHG_5SIGMA",
          "-s", "100",
          "-v", "1,0,0",
          "-w", "0,0,1",
          "-p", "0,%f,0" % z,
          "-o", filename])
    call(["convert", filename, "-resize", "25%", filename.replace(".png", ".gif")])

# combine everything inside a single animated gif
call(["convert", "-delay", "0", "-loop", "0", "%s*.gif" % filestr, "%s.gif" % output])
</code></pre>

# Showcase

As usual, we use the electron density of the binding molecular orbitals of CO to demonstrate the tool. Below, you can see that the animated gifs loop over a set of contour planes. One of the things that are now easily visible are the regions of near-zero electron density at the center of the 1π and 5σ orbitals, which would normally not be visible when using isosurfaces.

<table class="table"> <tr> <td> <b>3σ</b><br> <img src="http://www.ivofilot.nl/images/38/co_3sigma.gif"> </td> <td> <b>4σ</b><br> <img src="http://www.ivofilot.nl/images/39/co_4sigma.gif"> </td> </tr> <tr> <td> <b>1π</b><br> <img src="http://www.ivofilot.nl/images/40/co_1pi.gif"> </td> <td> <b>5σ</b><br> <img src="http://www.ivofilot.nl/images/41/co_5sigma.gif"> </td> </tr> </table>

What other molecular orbitals would you like to see? Share your thoughts in the comments!
*Published 2017-12-29, last modified 2018-01-01.*
*Tags: Computational Chemistry*

---

**Visualising the electron density of the binding orbitals of the CO molecule using VASP**
<div class="alert alert-info" role="alert"><i class="fa fa-github"></i> Looking for the Github repository of EDP?<br><i class="fa fa-link"></i> <a href="https://github.com/ifilot/edp">https://github.com/ifilot/edp</a> </div>

# 1. Introduction

Let us start by stating what we would like to obtain. We wish to visualise the electron density of the molecular orbitals of CO using VASP. We wish to create a surface cut rather than a 3D image containing an isosurface, as the former more clearly conveys the electron density distribution. In this short tutorial, I will show you how to create the image below:

<img src="http://www.ivofilot.nl/images/19/CO_molecular_orbitals_plot.jpg" class="img img-responsive img-rounded">

<b>Figure 1:</b> <i>The electron density plots of the molecular orbitals of CO. The yellow line in the lower-left image is added to indicate the nodal plane.</i>

The molecular orbitals of CO are formed from the atomic orbitals of C and O, and can be identified from the molecular orbital diagram of CO as shown below. Each of these orbitals has specific symmetry characteristics, and these characteristics give rise to the names of the molecular orbitals. The molecular orbital lowest in energy, which is a non-bonding molecular orbital that only contains core electrons, is termed the 1σ orbital. The σ denotes the symmetry characteristics of the orbital: σ-symmetry means that the orbital can be rotated arbitrarily around the bonding axis of C and O. Hence, there are an infinite number of mirror planes parallel to this rotation axis. The four molecular orbitals lowest in energy all have this particular symmetry. The fifth molecular orbital, however, has π-symmetry and is hence denoted as the 1π molecular orbital. π-symmetric orbitals have a nodal plane on the bonding axis and hence only have a C<sub>2</sub> rotation operation. Consequently, they also have only two mirror planes parallel to this bonding axis.
The electron density plots shown above are in agreement with these symmetry characteristics.

<img src="http://www.ivofilot.nl/images/18/COMOdiagram.jpg">

<b>Figure 2:</b> <i>The occupied orbitals of CO, that are constructed from the valence atomic orbitals of C and O, are in order of low to high energy: 3σ, 4σ, 1π and 5σ.</i>

# 2. VASP calculations

In order to construct the electron density, we first have to perform a series of VASP calculations:

* Perform a structure optimisation
* Perform a single point calculation and calculate the DOS for the final state (printed in DOSCAR)
* Identify the molecular orbitals using the DOS plot (select the "energy range")
* Perform for each identified molecular orbital another calculation where the partial charge corresponding to the energy range is saved (printed in PARCHG)

If you know how to do this, skip to the next section; else, continue reading.

## 2.1 POSCAR and INCAR files

Our POSCAR file looks as follows:

<pre>
<code class="text">
CO
1.0000000000000000
10.0000000000000000 0.0000000000000000 0.0000000000000000
0.0000000000000000 10.0000000000000000 0.0000000000000000
0.0000000000000000 0.0000000000000000 10.0000000000000000
1 1
Direct
0.5000000000000000 0.5000000000000000 0.500000
0.5000000000000000 0.5000000000000000 0.617000
</code>
</pre>

and the INCAR file as:

<pre>
<code class="text">
SYSTEM = CO
LREAL = .FALSE.
LWAVE = .TRUE.
LCHARG = .TRUE.
PREC = normal
ENCUT = 400
ISMEAR = 0
SIGMA = 0.05
EDIFF = 1E-5
EDIFFG = -1E-4
NSW = 100
ISIF = 2
IBRION = 2
ISPIN = 1
POTIM = 0.5
IALGO = 38
NELMIN = 6
NELM = 40
EMIN = -30
EMAX = 15
NEDOS = 4500
NGX = 150
NGY = 150
NGZ = 150
</code>
</pre>

The KPOINTS file specifies the gamma-point (1x1x1 <i>k</i>-points) and the POTCAR file corresponds to one with the potentials of C and O (in that order). I am not going to discuss all the directives in detail (the VASP manual is really good at that), but I will explain a couple of them.
The INCAR file tells the program to optimise the geometry (IBRION = 2). The WAVECAR and CHGCAR files will be generated and a DOSCAR file will be made with a resolution of 4500 data points on the interval [-30 eV; 15 eV]. The mesh of the CHGCAR is 150x150x150 grid points. After the optimisation run, change NSW to 0 and set ISTART = 1. Copy the CONTCAR to POSCAR and start the calculation again. (This way, we have the DOS of the final state and not an averaged DOS.) The generated DOSCAR file has 6 header lines followed by 4500 lines containing the DOS and the integrated DOS as a function of the energy. To extract that part (ignoring the header lines) and save it to DOS_total, we run:

<pre>
<code class="bash">
sed -n '7,4506p' DOSCAR > DOS_total
</code>
</pre>

This file can be readily visualised using your favourite program (Excel, Origin, GnuPlot, your pick...). I chose Origin and made the following graph:

<img src="http://www.ivofilot.nl/images/16/CO_dos.jpg">

<b>Figure 3:</b> <i>The density of states (DOS) and integrated DOS of CO. Each of the peaks corresponds to a molecular orbital.</i>

In this graph, I have depicted the molecular orbitals 3σ, 4σ, 1π and 5σ. The corresponding energy intervals (in eV) are (roughly):

<table class="table table-striped">
<tr> <th>3σ</th> <td>-30</td> <td>-27</td> </tr>
<tr> <th>4σ</th> <td>-15</td> <td>-13</td> </tr>
<tr> <th>1π</th> <td>-13</td> <td>-10</td> </tr>
<tr> <th>5σ</th> <td>-10</td> <td>-5</td> </tr>
</table>

For each of the energy intervals, we are going to run VASP again with the following example INCAR file. This INCAR file is for the energy interval [-30; -27.5], but you can choose any interval you like. The LPARD directive instructs the program to generate a PARCHG file of the partial charge corresponding to the wavefunctions between -30 and -27.5 eV.

<pre>
<code class="text">
SYSTEM = CO
LREAL = .FALSE.
LWAVE = .TRUE.
LCHARG = .TRUE.
PREC = medium
ENCUT = 400
ISMEAR = 0
SIGMA = 0.05
EDIFF = 1E-5
EDIFFG = -1E-4
NSW = 100
ISIF = 2
IBRION = 2
ISPIN = 1
POTIM = 0.5
IALGO = 38
NELMIN = 6
NELM = 40
EMIN = -30
EMAX = 15
NEDOS = 4500
NGX = 150
NGY = 150
NGZ = 150
LPARD = .TRUE.
EINT = -30 -27.5
</code>
</pre>

The resulting PARCHG files are stored separately and will be used for the visualisation in the next section.

# 3. Visualising electron density

To visualise the electron density, we are going to use the EDP program, which is freely available via [its Github repository](https://github.com/ifilot/edp). You can [compile the program yourself](https://github.com/ifilot/edp/blob/master/README.md#compilation) or [obtain it from the repository (if you are running Debian)](https://github.com/ifilot/edp/blob/master/README.md#obtaining-edp-from-the-repository).

## 3.1 Visualising electron density with EDP

In mathematical terms, the electron density is a scalar field. In other words, it is a scalar value that depends on the position in the unit cell. Although the electron density is a smooth function, it is stored at particular grid points in the CHGCAR and PARCHG files. The resolution of that grid can be set by specifying NGX, NGY and NGZ in the INCAR file. The purpose of EDP is to plot the electron density on a so-called cutting plane. The end result is therefore a heat map (as shown above). In order to specify the cutting plane, the user is required to provide two vectors and one point. In our example, CO is placed along the z-axis at the centre of a cubic unit cell of 10x10x10 angstrom. The centre of the unit cell is at position (5.0, 5.0, 5.0). If we want to project the electron density on an (<i>xz</i>)-plane cutting through the point (5.0, 5.0, 5.0), then the command is:

<pre>
<code class="bash">
./bin/edp -i path/to/CHGCAR -v 1,0,0 -w 0,0,1 -p 5,5,5 -s 100 -o test.png
</code>
</pre>

The -v and -w tags specify the vectors and -p specifies the point.
The -s tag sets the resolution of the final image; a value of 100 means that the final image has 100 px per angstrom. Finally, the -i and -o tags designate the input file (a CHGCAR or PARCHG file) and the output file (a .png image). Maybe you are wondering how the values are generated that do not lie exactly at the grid points specified in the CHGCAR file. To plot such values, a trilinear interpolation scheme is used. A detailed description of that algorithm is given [here](http://en.wikipedia.org/wiki/Trilinear_interpolation).

## 3.2 Plotting the orbitals

Plotting the four binding orbitals has now become a simple matter of running the program for each of the PARCHG files corresponding to the molecular orbitals. To exemplify:

<pre>
<code class="bash">
./bin/edp -i path/to/PARCHG_3s -v 1,0,0 -w 0,0,1 -p 5,5,5 -s 100 -o 3s.png
./bin/edp -i path/to/PARCHG_4s -v 1,0,0 -w 0,0,1 -p 5,5,5 -s 100 -o 4s.png
./bin/edp -i path/to/PARCHG_1p -v 1,0,0 -w 0,0,1 -p 5,5,5 -s 100 -o 1p.png
./bin/edp -i path/to/PARCHG_5s -v 1,0,0 -w 0,0,1 -p 5,5,5 -s 100 -o 5s.png
</code>
</pre>

Finally, you can collect all images and create a nice compilation in your favourite photo-editing program (such as what I did in Figure 1).

## 3.3 Concluding remarks

Obviously, you can use the above procedure to visualise any kind of molecule. Especially simple molecules such as N<sub>2</sub> or H<sub>2</sub> are easy to visualise. Less obvious is that you can also use the above procedure to visualise the molecular orbitals of a substrate adsorbed on a catalytic surface, or the electron density difference between CO in its molecular state and C and O in their atomic states. It all depends on the way the CHGCAR and PARCHG files are created. I hope this tutorial was informative. If you have used this tutorial and found it useful for your work, please drop a line or send me an e-mail. Especially if you have made some nice pictures.
:-)

# 4. Comments

[Bart Zijlstra](https://www.linkedin.com/pub/bart-zijlstra/74/682/29b/de) has made a very nice analysis of the results of the electron density visualisation routine as a function of the number of k-points and the smearing value. The result can be seen below. Increasing the number of k-points shows a more detailed result, yet there is no significant change in the characteristic features.

<img src="http://www.ivofilot.nl/images/20/elec_dens_analysis.jpg" class="img img-responsive img-rounded" />
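As a footnote to section 3.1, the trilinear interpolation that EDP uses to evaluate the density between grid points can be illustrated with a short stand-alone sketch. This is a minimal version for a single unit cube (the actual EDP implementation is written in C++ and works on the full CHGCAR grid, so treat the function below as illustrative only):

```python
def trilinear(c, x, y, z):
    """Interpolate a scalar field inside a unit cube.

    c[i][j][k] holds the field value at corner (i, j, k), i, j, k in {0, 1};
    (x, y, z) are fractional coordinates in [0, 1].
    """
    # Interpolate along x on the four cube edges parallel to the x-axis ...
    c00 = c[0][0][0] * (1 - x) + c[1][0][0] * x
    c01 = c[0][0][1] * (1 - x) + c[1][0][1] * x
    c10 = c[0][1][0] * (1 - x) + c[1][1][0] * x
    c11 = c[0][1][1] * (1 - x) + c[1][1][1] * x
    # ... then along y ...
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    # ... and finally along z.
    return c0 * (1 - z) + c1 * z

# Corner values of a field that is linear in each coordinate: f = x + 2y + 4z.
# Trilinear interpolation reproduces such a field exactly.
corners = [[[i + 2 * j + 4 * k for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
print(trilinear(corners, 0.5, 0.5, 0.5))  # cube midpoint: 3.5
```

The same three-stage reduction (x, then y, then z) is what produces the smooth heat maps despite the density only being stored on a 150x150x150 mesh.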
*Published 2015-04-28, last modified 2017-12-31.*
---

**How to build VASP 5.3.5 using the GNU compiler on Linux Ubuntu 14.04 LTS**
This tutorial will explain how to install VASP 5.3.5 on a Linux Ubuntu system. I used the latest version at hand (14.04 LTS) and started with a fresh install (in a VirtualBox environment). This tutorial is to a large extent based [on the previous work of Peter Larsson](https://www.nsc.liu.se/~pla/blog/2013/11/05/compile-vasp-on-ubuntu/). This tutorial also works for Debian 8. For Debian 7, you need to use different packages in step 1, but the same procedure holds. For installing VASP 4.6.38, you can refer to [my previous blogpost](http://www.ivofilot.nl/posts/view/21/How+to+build+VASP+4.6.38+using+the+GNU+compiler). In that post, I got some comments regarding how to install VASP 5; this post is therefore also (more-or-less) an answer to those questions.

This tutorial proceeds in 4 steps:

* <a href="#step1">Installing the necessary packages</a>
* <a href="#step2">Building OpenBLAS</a>
* <a href="#step3">Building ScaLAPACK</a>
* <a href="#step4">Building VASP</a>

<a name="step1"></a>
# Step 1: Installing the necessary packages

<pre>
<code class="bash">
sudo apt-get install build-essential gfortran libopenmpi-dev libopenmpi1.6 openmpi1.6 libfftw3-double3 libfftw3-single3 libfftw3-dev git
</code>
</pre>

<a name="step2"></a>
# Step 2: Building OpenBLAS

From here on, I will assume that everything is compiled in its own build directory. For convenience, let's declare a variable to hold this path.

<pre>
<code class="bash">
export BUILDPATH=$HOME/vaspbuild
mkdir $BUILDPATH
cd $BUILDPATH
</code>
</pre>

The latest version of OpenBLAS can be readily obtained via the command below:

<pre>
<code class="bash">
git clone https://github.com/xianyi/OpenBLAS.git --depth 1
</code>
</pre>

To compile OpenBLAS, run:

<pre>
<code class="bash">
cd $BUILDPATH/OpenBLAS
make FC=gfortran CC=gcc USE_THREAD=0
</code>
</pre>

This will generate a file called <b>libopenblas.a</b>. We will use this file for the compilation of scaLAPACK as well as VASP.
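Before moving on, it can save time to verify that the build really produced a usable static archive. Static libraries start with the ar archive signature `!<arch>\n`, which is easy to check. The snippet below is just a convenience sketch (the `is_static_library` helper and the path are illustrative; adjust them to your own BUILDPATH):

```python
import os

AR_MAGIC = b"!<arch>\n"  # signature at the start of every static (.a) archive

def is_static_library(path):
    """Return True if path exists and starts with the ar archive magic."""
    if not os.path.isfile(path):
        return False
    with open(path, "rb") as f:
        return f.read(len(AR_MAGIC)) == AR_MAGIC

# Example: check the freshly built OpenBLAS library (False if not built yet)
buildpath = os.path.join(os.environ.get("HOME", ""), "vaspbuild")
print(is_static_library(os.path.join(buildpath, "OpenBLAS", "libopenblas.a")))
```

Running `file libopenblas.a` on the command line gives the same information; the point is simply to catch a truncated or failed build before linking ScaLAPACK against it.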
<a name="step3"></a>
# Step 3: Building ScaLAPACK

<pre>
<code class="bash">
cd $BUILDPATH
wget http://www.netlib.org/scalapack/scalapack-2.0.2.tgz
tar -xvzf scalapack-2.0.2.tgz
cd $BUILDPATH/scalapack-2.0.2
cp SLmake.inc.example SLmake.inc
</code>
</pre>

We are going to make a few modifications to SLmake.inc in order to properly link everything to our version of OpenBLAS. To do so, let BLASLIB refer to our OpenBLAS library. Because OpenBLAS also contains the LAPACK routines, point LAPACKLIB to BLASLIB as well.

<pre>
<code class="bash">
BLASLIB = -L/home/$USER/vaspbuild/OpenBLAS -lopenblas
LAPACKLIB = $(BLASLIB)
LIBS = $(LAPACKLIB) $(BLASLIB)
</code>
</pre>

Then, in order to compile everything, simply type

<pre>
<code class="bash">
make
</code>
</pre>

This will generate a library file called <b>libscalapack.a</b>. Now, we have everything to build VASP.

<a name="step4"></a>
# Step 4: Building VASP

I assume you have extracted the two VASP packages (vasp5.lib.tar.gz and vasp5.tar.gz) in the build folder. Compiling the library part of VASP is easy.

<pre>
<code class="bash">
cd $BUILDPATH/vasp.5.lib
cp Makefile.linux_gfortran Makefile
make
</code>
</pre>

Building the other part is somewhat more difficult and requires a bit of tweaking and fixing. Start by copying the template makefile.

<pre>
<code class="bash">
cd $BUILDPATH/vasp.5.3
cp Makefile.linux_gfortran Makefile
</code>
</pre>

Next, we are going to change some directives in the makefile. Open the Makefile with your favorite text editor. First, remove the <i>-C</i> part from the CPP line, like so:

<pre>
<code class="makefile">
CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -traditional >$*$(SUFFIX)
</code>
</pre>

Next, we are going to comment out the last line of the FPP block.
<pre>
<code class="makefile">
# this release should be fpp clean
# we now recommend fpp as preprocessor
# if this fails go back to cpp
# CPP_=fpp -f_com=no -free -w0 $*.F $*$(SUFFIX)
</code>
</pre>

Scroll down to the FFLAGS directive and change it to:

<pre>
<code class="makefile">
FFLAGS = -ffree-form -ffree-line-length-0 
</code>
</pre>

<b>Note that there has to be a space after the 0!!</b>

Now scroll further down to the part mentioning <b>MPI section</b>. Uncomment the FC and FCL lines and change <i>mpif77</i> to <i>mpif90</i>, like so:

<pre>
<code class="makefile">
FC=mpif90
FCL=$(FC)
</code>
</pre>

Uncomment the whole block for CPP and add <i>-DMPI</i> and <i>-DscaLAPACK</i> to the block, like so:

<pre>
<code class="makefile">
CPP = $(CPP_) -DHOST=\"gfortran\" \
      -DCACHE_SIZE=4096 -DMINLOOP=1 -DNGZhalf \
      -DMPI_BLOCK=8000 -DMPI -DscaLAPACK
</code>
</pre>

Add our version of scaLAPACK to the SCA variable and create references to our version of BLAS and LAPACK:

<pre>
<code class="makefile">
SCA=/home/$USER/vaspbuild/scalapack-2.0.2/libscalapack.a
BLAS=/home/$USER/vaspbuild/OpenBLAS/libopenblas.a
LAPACK=/home/$USER/vaspbuild/OpenBLAS/libopenblas.a
</code>
</pre>

Uncomment the lines for the LIB and the (second!) FFT3D variables in the <b>libraries for MPI</b> block. Refer to the fftw3 library of the OS.
(last part of the FFT3D line)

<pre>
<code class="makefile">
LIB = -fall-intrinsics \
      -L../vasp.5.lib -ldmy \
      ../vasp.5.lib/linpack_double.o $(LAPACK) \
      $(SCA) $(BLAS)

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
# FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o
# alternatively: fftw.3.1.X is slightly faster and should be used if available
FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o /usr/lib/x86_64-linux-gnu/libfftw3.a
</code>
</pre>

When you now start to compile VASP by typing make, you will encounter an error like the following:

<pre>
<code class="bash">
us.f90:1403.10:
      USE us
          1
Error: 'setdij' of module 'us', imported at (1), is also the name of the current program unit
</code>
</pre>

The way to resolve this error is by changing the names. We are going to do that by applying a patch to this file. You can download the patch file [here](http://www.ivofilot.nl/downloads/vasp535.patch). Just place it in the same directory as where the VASP sources reside and apply the patch by typing:

<pre>
<code class="bash">
wget http://www.ivofilot.nl/downloads/vasp535.patch
patch -p0 < vasp535.patch
</code>
</pre>

This will change the names automatically. You will receive a message such as

<pre>
<code class="makefile">
patching file us.F
</code>
</pre>

Upon typing make, the compilation will resume. The next error we are going to encounter is

<pre>
<code class="bash">
fftmpiw.f90:70: Error: Can't open included file 'fftw3.f'
</code>
</pre>

The way to resolve this error is by copying our system's version of fftw3.f to the source folder, like so:

<pre>
<code class="bash">
cp /usr/include/fftw3.f .
</code>
</pre>

(Note that all .f files get removed after a make clean.) Now, everything should compile perfectly. Type make again and the vasp executable will be generated.
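The manual Makefile edits described above (switching mpif77 to mpif90, dropping the -C flag from the cpp line, adding -DMPI and -DscaLAPACK to the CPP block) also lend themselves to scripting if you rebuild often. A minimal sketch — the `patch_makefile` helper and the patterns are illustrative and assume the stock makefile.linux_gfortran as quoted in this post, so always diff the result by hand:

```python
import re

def patch_makefile(text):
    """Apply the VASP 5.3.5 Makefile edits from this tutorial to text."""
    # use mpif90 instead of mpif77 for the MPI build
    text = text.replace("mpif77", "mpif90")
    # drop the -C flag from the cpp invocation
    text = text.replace("cpp -P -C -traditional", "cpp -P -traditional")
    # add the MPI and scaLAPACK preprocessor defines to the CPP block
    text = re.sub(r"-DMPI_BLOCK=8000", "-DMPI_BLOCK=8000 -DMPI -DscaLAPACK", text)
    return text

# toy fragment standing in for the real Makefile
demo = "FC=mpif77\nCPP_ = cpp -P -C -traditional\nCPP = -DMPI_BLOCK=8000\n"
print(patch_makefile(demo))
```

A `sed -i` one-liner per edit achieves the same; the script form just keeps all edits in one reviewable place.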
*Published 2015-06-29, last modified 2015-06-30.*
---

**How to build VASP 4.6.38 using the GNU compiler**
This tutorial will explain how to install VASP 4.6.38 from scratch on a Linux system. For VASP 5, [please see this blogpost](http://www.ivofilot.nl/posts/view/28/How+to+build+VASP+5.3.5+using+the+GNU+compiler+on+Linux+Ubuntu).

# Step 0: Preparing

[VASP](http://www.vasp.at/) relies on a couple of libraries which you have to install. You can install these system-wide (assuming you have root privileges) or locally. For the sake of this tutorial, I will assume that you do not have root privileges and install everything locally. If you want to install things system-wide though, just change the --prefix tags to something like /opt/software/version-x and perform the last make install or cp step as a root user. Furthermore, I assume you have some basic understanding of make and have some basic software such as wget, tar, git, and grep installed (most systems have this software installed by default).

A couple of things before I start:

* I assume you have a valid VASP license and that you are using VASP according to that license.
* Although it is very hard to break a Linux system as a regular (i.e. non-administrative) user, people keep surprising me by actually having done so. So use this guide **at your own risk**. If you break something, you have to suffer the consequences. This is especially true if you are going to do things as a superuser.
* Last but not least: read this guide really carefully. If you cannot figure things out, read it again. If you run into something, remember that Google is your friend. If you are stuck at some point though and you are absolutely 100% sure that you have done everything in your power to try to resolve the issue yourself, feel free to leave a comment. But no promises!
:-)

This tutorial proceeds in 6 steps:

* <a href="#step1">Building our own gcc (optional)</a>
* <a href="#step2">Building OpenMPI</a>
* <a href="#step3">Building OpenBLAS</a>
* <a href="#step4">Building ScaLAPACK</a>
* <a href="#step5">Building VASP</a>

If you run into problems, please have a look at this section:

* <a href="#step6">Troubleshooting</a>

OK, let's start! Make sure you have a sufficient amount of space available in your /home folder and start by creating a build and compile directory:

<pre>
<code class="bash">
mkdir -pv $HOME/vasp/{libraries,compilation,software}
INSTALL=$HOME/vasp
</code>
</pre>

<a name="step1"></a>
# Step 1: Install gcc-4.8.3 (optional)

In my [previous post](http://www.ivofilot.nl/posts/view/20/Optimizing+VASP), I mentioned that gcc-4.8.3 gave the best performance for my type of processor. Therefore, I would like to show here how to compile your own compiler. If you don't think this is necessary or if it sounds overcomplicated to you, feel free to skip this step. Start by grabbing the tarball of gcc-4.8.3.

<pre>
<code class="bash">
mkdir -pv $INSTALL/compilation/gcc
cd $INSTALL/compilation/gcc
wget ftp://ftp.nluug.nl/mirror/languages/gcc/releases/gcc-4.8.3/gcc-4.8.3.tar.bz2
tar -xvjf gcc-4.8.3.tar.bz2
</code>
</pre>

gcc-4.8.3 relies on a couple of other packages, which you can conveniently grab by executing the following commands:

<pre>
<code class="bash">
cd gcc-4.8.3
./contrib/download_prerequisites
</code>
</pre>

gcc needs to be compiled in a separate folder, so create a separate build folder (the name is arbitrary), configure, make and make install. I assume that you have at least a quad-core machine, so we are going to compile with five threads. Feel free to change the -j5 flag to something more appropriate for your machine.
<pre>
<code class="bash">
mkdir -v $INSTALL/compilation/gcc/build
cd $INSTALL/compilation/gcc/build
../gcc-4.8.3/configure --enable-languages=c,c++,fortran \
  --enable-shared --enable-linker-build-id --with-system-zlib \
  --without-included-gettext --enable-threads=posix \
  --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug \
  --enable-libstdcxx-time=yes --enable-gnu-unique-object \
  --enable-plugin --with-tune=generic --enable-checking=release \
  --disable-multilib --prefix=$INSTALL/software/gcc/4.8.3
make -j5
</code>
</pre>

To make sure that gcc-4.8.3 is going to be used for compiling the libraries and the VASP program, **prepend** the bin folder to the PATH like so:

<pre>
<code class="bash">
PATH=$INSTALL/software/gcc/4.8.3/bin:$PATH
</code>
</pre>

jhpowell_at_odu_dot_edu (see the comment below) suggested to also append the lib and lib64 folders of gcc-4.8.3 to the LD_LIBRARY_PATH variable, like so:

<pre>
<code class="bash">
LD_LIBRARY_PATH=$INSTALL/software/gcc/4.8.3/lib:$INSTALL/software/gcc/4.8.3/lib64:$LD_LIBRARY_PATH
</code>
</pre>

To check if your gcc compiler is correctly added to the PATH, test the following:

<pre>
<code class="bash">
which gcc
which gfortran
</code>
</pre>

It should output something like:

<pre>
<code class="bash">
/home/user/vasp/software/gcc/4.8.3/bin/gcc
/home/user/vasp/software/gcc/4.8.3/bin/gfortran
</code>
</pre>

Finally, it is time to test your freshly built compiler by creating a very small program:

<pre>
<code class="bash">
cd $INSTALL/software
echo "int main(){return 0;}" > test.c
gcc test.c -o test
./test && echo 'Success!' || echo 'Something went wrong'
</code>
</pre>

If you get the 'Success!' statement, feel free to remove the files:

<pre>
<code class="bash">
rm -v test.c test
</code>
</pre>

<a name="step2"></a>
# Step 2: Install OpenMPI

Download the latest tarball of the stable release.
<pre>
<code class="bash">
cd $INSTALL/compilation
wget http://www.open-mpi.org/software/ompi/v1.8/downloads/openmpi-1.8.1.tar.bz2
tar -xvjf openmpi-1.8.1.tar.bz2
cd openmpi-1.8.1
./configure --prefix=$INSTALL/libraries/openmpi/1.8.1
</code>
</pre>

At this point, you probably want to make sure that the proper version of gcc is going to be used, especially if you have spent some effort and time building one yourself. To do so, check for the OPAL_CC_ABSOLUTE and OMPI_FC_ABSOLUTE patterns in config.log like so:

<pre>
<code class="bash">
grep -E "OPAL_CC_ABSOLUTE=|OMPI_FC_ABSOLUTE=" config.log
</code>
</pre>

which should give you something like:

<pre>
<code class="bash">
OMPI_FC_ABSOLUTE='/home/user/vasp/software/gcc/4.8.3/bin/gfortran'
OPAL_CC_ABSOLUTE='/home/user/vasp/software/gcc/4.8.3/bin/gcc'
</code>
</pre>

If everything looks OK, it is time to build and install the package.

<pre>
<code class="bash">
make -j5
make install -j5
</code>
</pre>

Testing OpenMPI is a bit more difficult than testing GCC; moreover, it really isn't that necessary. From personal experience, I found that if OpenMPI compiles without any errors, it will run just fine.

<a name="step3"></a>
# Step 3: Install OpenBLAS

In my [previous post](http://www.ivofilot.nl/posts/view/20/Optimizing+VASP), I mentioned that OpenBLAS gave better performance than ATLAS; as such, we are going to install OpenBLAS. Grab the latest version from Github:

<pre>
<code class="bash">
cd $INSTALL/compilation
git clone https://github.com/xianyi/OpenBLAS.git --depth 1
cd OpenBLAS
</code>
</pre>

We should use the non-threaded variant of OpenBLAS; as such, build the library using

<pre>
<code class="bash">
make USE_THREAD=0
</code>
</pre>

Finally, install the library:

<pre>
<code class="bash">
make PREFIX=$INSTALL/libraries/openblas/1.13 install
</code>
</pre>

<a name="step4"></a>
# Step 4: Install ScaLAPACK

Grab the package from the repository and extract it.
ScaLAPACK reads the compilation instructions from SLmake.inc, which does not exist yet. Copy SLmake.inc.example to SLmake.inc and refer to our previously built OpenBLAS library. In the listing below, we use sed to do this for us. By adding the bin folder of our OpenMPI installation to the PATH, we will use our own mpicc compiler for building ScaLAPACK. After properly configuring, we can now use make to build the library.

<pre>
<code class="bash">
cd $INSTALL/compilation
wget http://www.netlib.org/scalapack/scalapack-2.0.2.tgz
tar -xvzf scalapack-2.0.2.tgz
cd scalapack-2.0.2
PATH=$PATH:$INSTALL/libraries/openmpi/1.8.1/bin
OPENBLASPATH=$INSTALL/libraries/openblas/1.13/lib
cp SLmake.inc.example SLmake.inc
sed -i "s,^BLASLIB.*$,BLASLIB = ${OPENBLASPATH}/libopenblas.a,g" SLmake.inc
sed -i "s,^LAPACKLIB.*$,LAPACKLIB = ${OPENBLASPATH}/libopenblas.a,g" SLmake.inc
make
</code>
</pre>

Now, we copy the freshly built libscalapack.a file to our custom installation directory.

<pre>
<code class="bash">
mkdir -pv $INSTALL/libraries/scalapack/2.0.2/lib
cp -v libscalapack.a $INSTALL/libraries/scalapack/2.0.2/lib
</code>
</pre>

<a name="step5"></a>
# Step 5: Building VASP 4.6.38

Finally, we are ready to build VASP. VASP is built in two steps: first, we have to build a VASP library, and then we can build the VASP executable. Start by downloading the two corresponding TAR files from the VASP repository (or get them from your system administrator).

<pre>
<code class="bash">
mkdir -pv $INSTALL/compilation/vasp
# I assume that you have placed your vasp.4.6.tar.gz and vasp.4.lib.tar.gz files here
tar -xvzf vasp.4.6.tar.gz
tar -xvzf vasp.4.lib.tar.gz
cd vasp.4.lib
cp makefile.linux_gfortran Makefile
</code>
</pre>

At this point, make sure that which gfortran refers to the correct version of gfortran; that is, either the one you compiled previously, or your system's one!
If you have compiled gfortran yourself and which gfortran refers to the wrong version, execute the command below:

<pre>
<code class="bash">
PATH=$INSTALL/software/gcc/4.8.3/bin:$PATH
</code>
</pre>

If everything is set up correctly, you are ready to build the VASP library.

<pre>
<code class="bash">
make
</code>
</pre>

Now it is time to build the VASP executable. Start by going to the source folder, copy the example makefile.linux_gfortran to Makefile and edit this file.

<pre>
<code class="bash">
cd ../vasp.4.6
cp makefile.linux_gfortran Makefile
nano Makefile
</code>
</pre>

We have to change a couple of directives in this file. To start, we are going to link to OpenBLAS.

<pre>
<code class="makefile">
# Atlas based libraries
#ATLASHOME= /usr/lib/blas/threaded-atlas
# BLAS= -L/usr/lib/blas/atlas -lblas  #!! this line has to be commented out
#BLAS= -L$(ATLASHOME) -lf77blas -latlas

# use specific libraries (default library path points to other libraries)
#BLAS= $(ATLASHOME)/libf77blas.a $(ATLASHOME)/libatlas.a

# use the mkl Intel libraries for p4 (www.intel.com)
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lpthread

# LAPACK, simplest use vasp.4.lib/lapack_double
#LAPACK= ../vasp.4.lib/lapack_double.o

# use atlas optimized part of lapack
#LAPACK= ../vasp.4.lib/lapack_atlas.o -llapack -lblas

# use the mkl Intel lapack
#LAPACK= -lmkl_lapack
#LAPACK= -L/usr/lib/lapack/atlas -llapack  #!! this line has to be commented out

# Use our own OpenBLAS
LAPACK=/home/user/vasp/libraries/openblas/1.13/lib/libopenblas.a
</code>
</pre>

Take care to change user to the proper folder name! We are going to use MPI, so set the proper compiler below.
<pre>
<code class="makefile">
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following lines
#-----------------------------------------------------------------------
FC=mpif90
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf   charge density reduced in Z direction
# wNGZhalf  gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------
CPP = $(CPP_) -DMPI -DHOST=\"LinuxPgi\" \
      -Dkind8 -DNGZhalf -DCACHE_SIZE=2000 -DPGF90 -Davoidalloc -DRPROMU_DGEMV \
      -DscaLAPACK  #!! note the addition here

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------
BLACS=/usr/local/BLACS_lam
SCA_= /usr/local/SCALAPACK_lam
SCA= $(SCA_)/scalapack_LINUX.a $(SCA_)/pblas_LINUX.a $(SCA_)/tools_LINUX.a \
     $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI$
SCA=/home/user/vasp/libraries/scalapack/2.0.2/lib/libscalapack.a

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------
LIB = -L../vasp.4.lib -ldmy \
      ../vasp.4.lib/linpack_double.o $(LAPACK) \
      $(SCA) $(BLAS)

# FFT: only option fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o
</code>
</pre>

Now we are ready to build vasp:

<pre>
<code class="bash">
make
</code>
</pre>

<a name="step6"></a>
### Troubleshooting

If you get an error message such as:

<pre>
<code class="fortran">
base.f:1.1:
/* Copyright (C) 1991-2014 Free Software
Foundation, Inc.
 1
Error: Invalid character in name at (1)
</code>
</pre>

then you have to remove the -C directive from the CPP line in the Makefile as below and run make again.

<pre>
<code class="makefile">
#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE 6.X, maybe some Red Hat distributions:
#!! I removed the -C directive in the line below
CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -traditional >$*$(SUFFIX)
</code>
</pre>

So far, I have only seen this error occur on Arch Linux, most likely because this is a very cutting-edge distro. For Debian and Ubuntu, you should be fine. For any other distro, I am not aware.

#### Update (18-sep-2014)

If you encounter the error below while compiling VASP

<pre>
<code class="bash">
/opt/scalapack/2.0.2-gcc-4.8.3/lib/libscalapack.a(PB_Cztypeset.o): In function `PB_Cztypeset':
PB_Cztypeset.c:(.text+0x233): undefined reference to `zgeru_'
PB_Cztypeset.c:(.text+0x249): undefined reference to `zher_'
PB_Cztypeset.c:(.text+0x275): undefined reference to `zsymm_'
PB_Cztypeset.c:(.text+0x28b): undefined reference to `zsyrk_'
PB_Cztypeset.c:(.text+0x296): undefined reference to `zherk_'
PB_Cztypeset.c:(.text+0x2a1): undefined reference to `zsyr2k_'
/opt/scalapack/2.0.2-gcc-4.8.3/lib/libscalapack.a(zvvdotu.o): In function `zvvdotu_':
zvvdotu.f:(.text+0x2b): undefined reference to `zdotu_'
collect2: error: ld returned 1 exit status
Makefile:264: recipe for target 'vasp' failed
make: *** [vasp] Error 1
</code>
</pre>

then you have to change the order in which the libraries are linked, like so:

<pre>
<code class="makefile">
LIB = -L../vasp.4.lib -ldmy \
      ../vasp.4.lib/linpack_double.o \
      $(SCA) $(BLAS) $(LAPACK)
</code>
</pre>
---

2015-03-22

# How to build VASP 4.6.38 using the Intel compiler

# Introduction

In [my previous post](http://www.ivofilot.nl/posts/view/21/How+to+build+VASP+4.6.38+using+the+GNU+compiler), I explained how to build VASP using the GNU compilers. Here, I will show you how to build VASP using the Intel compilers. In principle, the Intel compilers should give better performance for VASP on Intel processors.

# Step 0: Install the Intel Compilers

Install the Intel Compiler Suite in /opt/intel (this is the default setting). Please refer to the installation instructions given by Intel.

# Step 1: Load the Intel Compilers

<pre>
<code class="bash">
source /opt/intel/composer_xe_2013_sp1/bin/compilervars.sh intel64
</code>
</pre>

# Step 2: Compile the Intel FFTW wrappers

<pre>
<code class="bash">
cd /opt/intel/composer_xe_2013_sp1/mkl/interfaces/fftw3xf
make libintel64
</code>
</pre>

# Step 3: Compile the VASP libraries

<pre>
<code class="bash">
tar -xvzf vasp.4.lib.tar.gz
cd vasp.4.lib
cp makefile.linux_ifc_P4 Makefile
</code>
</pre>

and change the file so that FC links to ifort

<pre>
<code class="makefile">
.SUFFIXES: .inc .f .F
#-----------------------------------------------------------------------
# Makefile for Portland Group F90/HPF compiler
# the makefile was tested only under Linux on Intel platforms
# however it might work on other platforms as well
#
# this release of vasp.4.lib contains lapack v2.0
# this can be compiled with pgf90 compiler if the option -O1 is used
#
# Mind: one user reported that he had to copy preclib.F diolib.F
# dlexlib.F and drdatab.F to the directory vasp.4.4, compile the files
# there and link them directly into vasp
# for no obvious reason these files could not be linked from the library
#
#-----------------------------------------------------------------------
# C-preprocessor
CPP     = gcc -E -P -C $*.F >$*.f
FC=ifort
CFLAGS
 = -O
FFLAGS  = -O0 -FI
FREE    = -FR

DOBJ    = preclib.o timing_.o derrf_.o dclock_.o diolib.o dlexlib.o drdatab.o
</code>
</pre>

and type

<pre>
<code class="bash">
make
</code>
</pre>

# Step 4: Compile VASP

<pre>
<code class="bash">
source /opt/intel/impi/4.1.1.036/bin64/mpivars.sh
tar -xvzf vasp.4.6.tar.gz
cd vasp.4.6
cp makefile.linux_ifc_P4 Makefile
</code>
</pre>

Alter the Makefile on the following points and finally run make

<pre>
<code class="bash">
make
</code>
</pre>

When you get this error

<pre>
<code class="bash">
mpif77 -fc=ifort -FR -names lowercase -assume byterecl -O3 -xAVX -c fftmpiw.f90
fftmpiw.f90(55): error #5102: Cannot open include file 'fftw3.f'
      include 'fftw3.f'
--------------^
fftmpiw.f90(89): error #5102: Cannot open include file 'fftw3.f'
      include 'fftw3.f'
--------------^
fftmpiw.f90(110): error #5102: Cannot open include file 'fftw3.f'
      include 'fftw3.f'
--------------^
fftmpiw.f90(220): error #5102: Cannot open include file 'fftw3.f'
      include 'fftw3.f'
--------------^
compilation aborted for fftmpiw.f90 (code 1)
make: *** [fftmpiw.o] Error 1
</code>
</pre>

copy the header file

<pre>
<code class="bash">
cp /opt/intel/mkl/include/fftw/fftw3.f .
</code>
</pre>

and run make again

<pre>
<code class="bash">
make
</code>
</pre>

# Step 5: Testing our compilation and comparing against the GNU compilation

We have used three different models to check the computational efficiency of the Intel compilation versus that of the GNU compilation:

* A single CO molecule in a large box (gamma-point calculation)
* A bulk Cu system
* A Fe<sub>5</sub>C<sub>2</sub> (Hagg carbide) system

The results are the following:

<table class="table">
<thead>
<th>System</th>
<th>gcc-4.8.4 VASP 4.6.38</th>
<th>Intel VASP 4.6.38</th>
</thead>
<tr> <td>CO</td> <td>240.483</td> <td>240.683</td> </tr>
<tr> <td>Cu</td> <td>54.623</td> <td>34.994</td> </tr>
<tr> <td>Fe<sub>5</sub>C<sub>2</sub></td> <td>907.617</td> <td>656.705</td> </tr>
</table>

Interestingly, the Intel compilation does not give a significantly different timing for the gamma-point calculation (CO molecule). However, the speed improvement for the other calculations is **roughly 30%**!

---

2015-07-14

# Introduction to Electronic Structure Calculations: The variational principle

## Introduction

This post is the first part in an upcoming series explaining how to perform electronic structure calculations. The focus of this series is how to set up an electronic structure calculation from scratch. We are not going to use any of the available quantum chemical codes, but write our very own code. This way, we will obtain a better understanding of how quantum chemical calculations are performed. The way I am going to present the topic is by first providing the underlying theory and then further explaining the theory by giving a set of examples.
The outline of this series is as follows: We start off by explaining some basics such as the (linear) variational principle, the Hartree product and the Slater determinant. From there on, we derive the Hartree-Fock equations and show how to set up a simple calculation for the H<sub>2</sub> molecule. Next, we discuss basis sets, Gaussian integrals and the Hartree-Fock self-consistent field procedure. Finally, we talk about atomic and molecular orbitals, the construction of MO diagrams and the visualization of orbitals.

For this series, I am going to assume that you have a basic understanding of quantum mechanics: you know what a wave function is and its statistical interpretation. Furthermore, I assume that you have passed introductory courses in calculus and linear algebra.

## The variational principle

Given a normalized wave function $\lvert\phi\rangle$ that satisfies the appropriate boundary conditions, the expectation value of the Hamiltonian is an upper bound to the exact ground state energy. In other words, the lowest energy we can find gives us the best wave function. Thus,

if $\langle \phi \rvert \phi \rangle=1$, then $\langle \phi \lvert \hat{H} \rvert \phi \rangle \geq E_{0}$.

<div class="bs-callout bs-callout-primary">
<h4>Example 1: The energy of the 1s orbital in hydrogen</h4>
Let's exemplify this by providing an example. In the following example, we wish to obtain the best value for $\alpha$ for the trial function $\lvert \phi \rangle = N \exp \left( - \alpha r^{2} \right)$ (where $N$ is a normalization constant) that describes the 1s orbital in the H-atom. The Hamiltonian for the H-atom in <a href="https://en.wikipedia.org/wiki/Atomic_units">atomic units</a> is: $$\hat{H} = -\frac{1}{2} \nabla^{2} - \frac{1}{r}.$$ The first step is to find $E$ as a function of $\alpha$.
Therefore, we need to evaluate the following integral $$E = \langle \phi \lvert \hat{H} \rvert \phi \rangle = \int_{0}^{\infty} \textrm{d}r \int_{0}^{2\pi} \textrm{d} \theta \int_{0}^{\pi} \textrm{d} \varphi \, r^{2} \sin \varphi N \exp \left( - \alpha r^{2} \right) \left[ -\frac{1}{2} \nabla^{2} - \frac{1}{r} \right] N \exp \left( - \alpha r^{2} \right) .$$ Note that because we are integrating in <a href="http://mathworld.wolfram.com/SphericalCoordinates.html">spherical coordinates</a>, we have introduced the <a href="http://mathworld.wolfram.com/Jacobian.html">Jacobian</a> $$J = r^{2} \sin \varphi .$$ Furthermore, the radial part of the <a href="http://mathworld.wolfram.com/Laplacian.html">Laplacian</a> $\nabla^{2}$ in spherical coordinates has the following form: $$\nabla^{2} = r^{-2} \frac{\textrm{d}}{\textrm{d} r} \left( r^{2} \frac{\textrm{d}}{\textrm{d} r} \right)$$ (we can ignore the angular parts of the Laplacian, because the 1s trial wave function is spherically symmetrical).<br><br> The above integral can be divided into two separate integrals of the following forms $$I_1 = - \left( \frac{4 \pi N^{2}}{2} \right) \int_0^\infty r^2 \exp \left( -\alpha r^2 \right) \left[ r^{-2} \frac{\textrm{d}}{\textrm{d} r} \left( r^{2} \frac{\textrm{d}}{\textrm{d} r} \right) \exp \left( -\alpha r^2 \right) \right] \textrm{d}r$$ and $$I_2 = - 4 \pi N^{2} \int_0^\infty r^2 \exp \left( -\alpha r^2 \right) \frac{1}{r} \exp \left( -\alpha r^2 \right) \textrm{d}r$$ (note that the factor $\tfrac{1}{2}$ belongs to the kinetic term only). Since the wave function has no angular dependencies, the azimuthal ($\theta$) and polar ($\varphi$) parts have already been integrated out in the above equations. These give the factor $4\pi$. To solve the first integral ($I_{1}$), we need to apply the Laplacian.
This gives $$I_1 = - \left( \frac{4 \pi N^{2}}{2} \right) \int_0^\infty \exp \left( -\alpha r^2 \right) \left[ 4 \alpha^{2} r^{4} - 6 \alpha r^{2} \right] \exp \left( -\alpha r^2 \right) \textrm{d}r .$$ Slightly rearranging this equation gives us $$I_1 = - 2 \pi N^{2} \int_0^\infty \left( 4 \alpha^{2}r^{4} - 6 \alpha r^{2} \right) \exp \left( -2 \alpha r^2 \right) \textrm{d}r .$$ This integral can be solved by performing <a href="https://en.wikipedia.org/wiki/Integration_by_parts">integration by parts</a>, but here we will just use the following general formulas: $$\int_{0}^{\infty} \textrm{d}r \, r^{2m} \exp \left(-\alpha r^{2} \right) = \frac{(2m)!\pi^{1/2}}{2^{2m+1}m!\alpha^{m+1/2}}$$ and $$\int_{0}^{\infty} \textrm{d}r \, r^{2m+1} \exp \left(-\alpha r^{2} \right) = \frac{m!}{2\alpha^{m+1}} .$$ Applying these formulas for $I_1$ yields $$I_{1} = -2 \pi N^{2} \left[ 4 \alpha^{2} \frac{4! \pi^{1/2}}{2^5 2! (2 \alpha)^{5/2}} - 6 \alpha \frac{2!\pi^{1/2}}{2^3 1! (2\alpha)^{3/2}} \right] = N^{2} \frac{3 \pi^{3/2}} {4 \sqrt{2\alpha}}$$ and for $I_{2}$: $$I_{2} = -4 \pi N^{2} \int_{0}^{\infty} r \exp \left( -2 \alpha r^{2} \right) \textrm{d}r = -4 \pi N^{2} \frac{1}{4 \alpha} = -\frac{\pi}{\alpha} N^{2} .$$ Summing these two parts gives $$\langle \phi \lvert H \rvert \phi \rangle = I_1 + I_2 = N^2 \left( \frac{3 \pi ^{3/2}}{4 \sqrt{2 \alpha}} - \frac{\pi}{\alpha} \right) .$$ Using the same procedure, we can also evaluate the integral $\langle \phi \rvert \phi \rangle$.
$$\langle \phi \rvert \phi \rangle = 4 \pi N^{2} \int_{0}^{\infty} r^2 \exp \left( -2 \alpha r^2 \right) \textrm{d}r = N^{2} \frac{\pi^{3/2}}{2 \sqrt{2}\alpha^{3/2}}$$ In order to evaluate the expectation value of the energy $E$ for the trial wave function $\lvert \phi \rangle$, we need to evaluate $$E = \frac{\langle \phi \lvert \hat{H} \rvert \phi \rangle }{\langle \phi \rvert \phi \rangle} = \frac{3 \alpha}{2} - \frac{2 \sqrt{2} \alpha ^{1/2}}{\pi ^{1/2}} .$$ The reason we need to divide by $\langle \phi \rvert \phi \rangle$ is that $\lvert \phi \rangle$ is not normalized yet. If $\lvert \phi \rangle$ were normalized, the integral $\langle \phi \rvert \phi \rangle$ would equal one and thus drop out. The best value for $\alpha$ can then be found by differentiating $E$ with respect to $\alpha$ and finding the value of $\alpha$ where the derivative is zero. Setting $$\frac{\textrm{d}E}{\textrm{d}\alpha} = \frac{3}{2} - \frac{\sqrt{2}}{\pi ^{1/2} \alpha^{1/2}} = 0$$ yields $$\alpha = \frac{8}{9 \pi}.$$ This gives us the following upper bound for the energy $E$: $$E = \frac{3}{2} \frac{8}{9 \pi} - \frac{2 \sqrt{2} \left( \frac{8}{9 \pi} \right)^{1/2}}{\pi ^{1/2}} = -\frac{4}{3 \pi} \approx -0.4244 .$$ </div> The exact energy for the 1s orbital in hydrogen is -0.5 HT, with the corresponding wave function $\lvert \phi \rangle = \frac{1}{\sqrt{\pi}} \exp \left( - r \right)$.

## The linear variational principle

Although the exact answer for the energy of the 1s orbital in hydrogen is known, let us assume that we do *not* know it and would like to obtain it. One way is to pick various trial wave functions with one independent variable and optimize that variable. The lowest energy we find then gives us the best approximation of the ground state wave function. This approach would be quite tedious (and how would you know which trial wave function to choose in the first place anyway?).
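As a quick sanity check (my own addition, not part of the original derivation), the expression $E(\alpha) = \frac{3\alpha}{2} - 2\sqrt{2\alpha/\pi}$ from Example 1 can also be minimized numerically. A minimal sketch in plain Python:

```python
import math

def energy(alpha):
    """Variational energy of the trial function N*exp(-alpha*r^2), in Hartree."""
    return 1.5 * alpha - 2.0 * math.sqrt(2.0 * alpha / math.pi)

# Minimize E(alpha) with a golden-section search on [0.01, 2];
# E(alpha) is convex on this interval, so the search converges to the minimum.
gr = (math.sqrt(5.0) - 1.0) / 2.0
a, b = 0.01, 2.0
while b - a > 1e-10:
    c, d = b - gr * (b - a), a + gr * (b - a)
    if energy(c) < energy(d):
        b = d
    else:
        a = c
alpha_opt = 0.5 * (a + b)

print(alpha_opt)          # ~ 8/(9*pi) ≈ 0.2829
print(energy(alpha_opt))  # ~ -4/(3*pi) ≈ -0.4244
```

The numerical optimum reproduces the analytical result $\alpha = 8/(9\pi)$ and $E = -4/(3\pi)$ derived above.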
It would be nice if we could have just a single trial wave function that has a lot of 'variational freedom'. Here, I propose to use a trial wave function that is a linear combination of fixed wave functions, like so: $$\lvert \phi \rangle = \sum_{i=1}^N c_{i} \lvert \psi_{i} \rangle$$ If we assume that the wave functions $\lvert \psi_i \rangle$ are orthonormal in the sense that $\langle \psi_i \lvert \psi_j \rangle = \delta_{ij}$, then the energy is $$E = \sum_{ij} c_{i} c_{j} \langle \psi_i \lvert \hat{H} \rvert \psi_j \rangle = \sum_{ij} c_{i}c_{j} H_{ij}$$ and we are tasked with finding the best set of coefficients $c_{i}$ that minimizes $E$. It is exactly these coefficients (and the accompanying wave functions) that provide us with the necessary variational freedom. (Note that $H_{ij} = \langle \psi_i \lvert \hat{H} \rvert \psi_j \rangle$.) The procedure to find this set is by using [Lagrange's method of undetermined multipliers](https://en.wikipedia.org/wiki/Lagrange_multiplier). Herein $$L \left( c_1, c_2, \cdots, E \right) = \langle \phi \lvert \hat{H} \rvert \phi \rangle - E \left(\langle \phi \lvert \phi \rangle - 1 \right)$$ and we need to find that particular set of $c_k$ where $$\frac{\partial L}{\partial c_{k}} = 0$$ for each $k$. For each $k$, this gives the following equation $$\sum_{j} c_j H_{kj} + \sum_i c_i H_{ik} - 2 E c_{k} = 0.$$ Since $\langle \psi_a \lvert \hat{H} \rvert \psi_b \rangle = \langle \psi_b \lvert \hat{H} \rvert \psi_a \rangle$, i.e. $H_{ab} = H_{ba}$, we can simplify the above equation to $$\sum_{j} H_{kj} c_j - E c_{k} = 0.$$ The above expression is just the matrix equation $$H\vec{c} = E\vec{c} .$$ This last equation probably looks very familiar to you and relates to the [matrix eigenvalue problem](http://mathworld.wolfram.com/Eigenvalue.html). It is a very typical problem in linear algebra. To solve the matrix eigenvalue problem, one needs to diagonalize the $H$ matrix.
This gives a set of eigenvalues and corresponding eigenvectors. The eigenvector corresponding to the lowest eigenvalue is the best set of $c_k$ we are interested in. <div class="bs-callout bs-callout-primary"> <h4>Example 2: The 2x2 linear variational problem for finding the energy of the 1s orbital of hydrogen</h4> To illustrate the above procedure, we are going to find the best solution for the 1s orbital in the hydrogen atom using a trial wave function that is a linear combination of two orthonormal wave functions. The two orthonormal wave functions are $$\psi_{1} = \frac{2}{\pi^{3/4}} \exp \left( -r^2 / 2 \right)$$ and $$\psi_{2} = \frac{2\pi^{1/4}r - 4/\pi^{1/4}}{\sqrt{2\pi(3\pi-8)}} \exp \left( -r^2 / 2 \right).$$ Because of the orthonormality condition, the following integrals hold: $$\langle \psi_{1} \lvert \psi_{2} \rangle = 0$$ and $$\langle \psi_{1} \lvert \psi_{1} \rangle = \langle \psi_{2} \lvert \psi_{2} \rangle = 1.$$ Applying the Hamiltonian to these wave functions, we obtain the following 2x2 H-matrix: $$H = \begin{bmatrix} \langle \psi_{1} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{1} \lvert \hat{H} \rvert \psi_{2} \rangle \\ \langle \psi_{2} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{2} \lvert \hat{H} \rvert \psi_{2} \rangle \end{bmatrix} = \begin{bmatrix} -0.378379 & -0.0185959 \\ -0.0185959 & 1.09531 \end{bmatrix}$$ Note that the H-matrix is symmetric. This is because we can exchange $\lvert \psi_{1} \rangle$ and $\lvert \psi_{2} \rangle$ and obtain the same result. Furthermore, note that although $\langle \psi_{1} \lvert \psi_{2} \rangle = 0$, the corresponding integral is not necessarily zero once the Hamiltonian operator is applied to either of the wave functions.<br><br> In order to find the best energy and approximation to the ground state wave function, we need to diagonalize the H-matrix.
This can be easily done by hand, or using the <a href="https://www.wolframalpha.com/input/?i=Diagonalize%5B%7B%7B-0.378379%2C+-0.0185959%7D%2C+%7B-0.0185959%2C+1.09531%7D%7D%5D">very handy Wolfram Alpha website</a>. The result we obtain is $$D = \begin{bmatrix} 1.09554 & 0 \\ 0 & -0.378614 \end{bmatrix}$$ for the diagonal (eigenvalue) matrix with corresponding $$X = \begin{bmatrix} -0.0126156 & -0.99992 \\ 0.99992 & -0.0126156 \end{bmatrix}$$ for the eigenvector matrix. The first column of $X$ contains the coefficients corresponding to the first eigenvalue (energy) in the $D$-matrix and the second column of $X$ those for the second eigenvalue. Since the second eigenvalue is the lower one ($-0.378614 < 1.09554$), the second column of $X$ contains the coefficients we are looking for. Thus, the best approximation to the ground state wave function is: $$\lvert \phi \rangle = c_{1} \lvert \psi_{1} \rangle + c_{2} \lvert \psi_{2} \rangle = -0.99992 \lvert \psi_{1} \rangle - 0.0126156 \lvert \psi_{2} \rangle.$$ The corresponding energy for this wave function is -0.378614 HT. This result is actually worse than what we obtained using the procedure of example 1 (which was -0.4244 HT). The reason is that the variational freedom of a linear combination of only two wave functions is actually less than that of a trial function in which we can vary a coefficient inside the exponent. To get better results, we would need to use a larger set of wave functions in the linear combination. </div> In Example 2 it was shown how we can obtain the best energy given a trial wave function which is a linear combination of a set of wave functions. Let us call the wave functions in the linear combination the basis functions. In order for the procedure in example 2 to work, the basis functions need to be orthonormal to each other. Finding a set of orthonormal functions by hand is somewhat tedious (try it for yourself if you do not believe me!).
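As an aside (my addition, not part of the original post), the diagonalization of Example 2 can also be reproduced in a couple of lines, assuming NumPy is available:

```python
import numpy as np

# H-matrix of Example 2 (orthonormal basis), values as given above.
H = np.array([[-0.378379, -0.0185959],
              [-0.0185959, 1.09531]])

# eigh is intended for symmetric matrices and returns the eigenvalues in
# ascending order, with the eigenvectors in the corresponding columns.
E, C = np.linalg.eigh(H)

print(E[0])     # lowest eigenvalue, i.e. the variational energy
print(C[:, 0])  # coefficients c1, c2 (the overall sign is arbitrary)
```

The lowest eigenvalue comes out at about -0.378614 HT, matching the result above.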
It would therefore be very handy if we could apply the above methodology (albeit somewhat changed) using a set of non-orthonormal basis functions. I already showed you how you can use Lagrange's method of undetermined multipliers to derive the matrix equation $H\vec{c} = E\vec{c}$. When the basis functions are no longer orthonormal, this matrix equation changes slightly and becomes $$H\vec{c} = ES\vec{c}.$$ Herein, $S$ is termed the overlap matrix, where each element of the matrix is $$S_{ij} = \langle \psi_{i} \lvert \psi_{j} \rangle.$$ The introduction of the overlap matrix $S$ makes the matrix diagonalization a tiny bit more complicated. There are several procedures for obtaining the eigenvalues and eigenvectors. The procedure we are going to use follows the canonical orthogonalization scheme. It works as follows:

1. Construct the overlap matrix $S$.
2. Diagonalize the overlap matrix $S$ to obtain its eigenvalues (collected in the diagonal matrix $s$) and the corresponding eigenvector matrix $U$.
3. Construct a transformation matrix $X = U s^{-1/2}$.
4. Transform the Hamiltonian matrix to the canonically orthonormalized basis set: $H' = X^{\dagger} H X$.
5. Diagonalize the transformed Hamiltonian matrix to obtain $D$ and $C'$ (where $C'$ holds the transformed coefficients).
6. Obtain the coefficients in the original basis by calculating $C = XC'$.

The above probably looks a bit complicated, but really, it is not. Let me elaborate by showing you another example.
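The six steps above can be sketched in a few lines of NumPy (my own illustration; the matrices are taken from Example 3 below, so the numbers can be checked against it):

```python
import numpy as np

# Overlap and Hamiltonian matrices in a non-orthonormal basis
# (the values of Example 3 below).
S = np.array([[5.56833, 6.2832],
              [6.2832,  8.35249]])
H = np.array([[-2.10694, -2.42674],
              [-2.42674, -1.4109]])

s, U = np.linalg.eigh(S)             # step 2: diagonalize S
X = U @ np.diag(1.0 / np.sqrt(s))    # step 3: X = U s^(-1/2)
Hp = X.T @ H @ X                     # step 4: H' = X^T H X (real matrices)
E, Cp = np.linalg.eigh(Hp)           # step 5: diagonalize H'
C = X @ Cp                           # step 6: back-transform the coefficients

print(E[0])     # lowest eigenvalue (eigh sorts ascending)
print(C[:, 0])  # corresponding coefficients in the original basis
```

Because `eigh` sorts the eigenvalues in ascending order, the ground state is in column 0 here, whereas the worked example below happens to list it second.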
<div class="bs-callout bs-callout-primary"> <h4>Example 3: Finding the energy of the 1s orbital of hydrogen using a non-orthonormal set of basis functions</h4> The two functions we are going to use are $$\psi_{1} = \exp \left( -r^{2} / 2 \right)$$ and $$\psi_{2} = r \exp \left( -r^{2} / 2 \right).$$ (<b>Step 1</b>) Their corresponding overlap matrix is $$S = \begin{bmatrix} \langle \psi_{1} \lvert \psi_{1} \rangle & \langle \psi_{1} \lvert \psi_{2} \rangle \\ \langle \psi_{2} \lvert \psi_{1} \rangle & \langle \psi_{2} \lvert \psi_{2} \rangle \end{bmatrix} = \begin{bmatrix} 5.56833 & 6.2832 \\ 6.2832 & 8.35249 \end{bmatrix}.$$ From this matrix we can clearly see that these functions are neither orthogonal to each other nor normalized. <br><br>(<b>Step 2</b>) Diagonalizing the overlap matrix yields the following diagonal (eigenvalue) and eigenvector matrices $$s = \begin{bmatrix} 13.396 & 0 \\ 0 & 0.524846 \end{bmatrix}$$ and $$U = \begin{bmatrix} 0.625975 & -0.779843 \\ 0.779843 & 0.625975 \end{bmatrix}.$$ (<b>Step 3</b>) We can obtain the transformation matrix by calculating $$X = U s^{-1/2} = \begin{bmatrix} 0.625975 & -0.779843 \\ 0.779843 & 0.625975 \end{bmatrix} \begin{bmatrix} 1/\sqrt{13.396} & 0 \\ 0 & 1/\sqrt{0.524846} \end{bmatrix} = \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix}.$$ The Hamiltonian matrix is $$H = \begin{bmatrix} \langle \psi_{1} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{1} \lvert \hat{H} \rvert \psi_{2} \rangle \\ \langle \psi_{2} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{2} \lvert \hat{H} \rvert \psi_{2} \rangle \end{bmatrix} = \begin{bmatrix} -2.10694 & -2.42674 \\ -2.42674 & -1.4109 \end{bmatrix}.$$ Because our wave function $\lvert \psi_{1} \rangle$ in this example differs only by a normalization constant from the wave function $\lvert \psi_{1} \rangle$ in example 2, the value for the Hamiltonian element $H_{00}$ should only differ by a factor equal to the square of the normalization constant.
In other words, if we divide $H_{00}$ by the value of $S_{00}$ (which represents exactly that factor), we recover the value of $H_{00}$ in example 2. Note that no such simple relation holds for $H_{11}$, because $\lvert \psi_{2} \rangle$ of example 2 is a linear combination of both of the present basis functions rather than a rescaled version of $r \exp \left( -r^{2}/2 \right)$.<br><br> (<b>Step 4</b>) Applying the transformation matrix to the Hamiltonian matrix gives $$H' = X^{\dagger} H X = \begin{bmatrix} 0.171029 & 0.213069 \\ -1.07644 & 0.864054 \end{bmatrix} \begin{bmatrix} -2.10694 & -2.42674 \\ -2.42674 & -1.4109 \end{bmatrix} \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix}$$ $$H' = X^{\dagger} H X = \begin{bmatrix} -0.302548 & 0.32611 \\ 0.32611 & 1.01951 \end{bmatrix}.$$ (<b>Step 5</b>) Diagonalizing the transformed Hamiltonian matrix gives $$D = \begin{bmatrix} 1.09557 & 0 \\ 0 & -0.378614 \end{bmatrix}$$ with corresponding eigenvector matrix $$C' = \begin{bmatrix} 0.227151 & -0.973859 \\ 0.973859 & 0.227151 \end{bmatrix}.$$ (<b>Step 6</b>) Finally, in order to get the coefficients in the original basis, we need to calculate $$C = XC' = \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix} \begin{bmatrix} 0.227151 & -0.973859 \\ 0.973859 & 0.227151 \end{bmatrix} = \begin{bmatrix} -1.00945 & -0.411073 \\ 0.889866 & -0.0112284 \end{bmatrix}.$$ The lowest eigenvalue found in the diagonal matrix $D$ is -0.378614 HT. The corresponding wave function can be found by taking the coefficients of the corresponding column vector, giving $$\lvert \phi \rangle = -0.411073 \cdot \exp \left(-r^2 / 2 \right) - 0.0112284 \cdot r \exp \left(-r^2 / 2 \right).$$ This is effectively the same answer as found in example 2, as can be seen from the radial distribution plot below! This is of course not very surprising: the variational freedom in both approximations comes from taking different polynomials in front of the exponent.
Both attempts at finding the best wave function (example 2 as well as this example) use a zeroth and first order polynomial in front of the exponent. If we compare the radial distribution function of the wave function obtained here with the one obtained in example 2, we can see that these functions overlap.<br><br> <img src="http://www.ivofilot.nl/images/24/bp_qc1_fig1.png"><br><br> This does not necessarily imply that the underlying contributions of the basis functions are the same. If you examine the coefficients and the corresponding basis functions, you will notice a particular difference. The contribution of each basis function in each set is visualized in the graph below. We can clearly see that the contributions differ.<br><br> <img src="http://www.ivofilot.nl/images/25/bp_qc1_fig2.png"><br><br> If we now sum up the contributions of the two basis functions for each linear combination, we note that the two wave functions of example 2 and this example are in principle the same.<br><br> <img src="http://www.ivofilot.nl/images/26/bp_qc1_fig3.png"><br><br> Thus, we obtain the same result for each wave function. </div> Although we now have the tools to find the best approximation of the ground state energy and its corresponding wave function using any set of basis functions, the result so far has not been very satisfactory. Using only a single trial wave function, as for instance given in example 1, gave better results. In order to improve on the previous results, we need to expand our set of basis functions. A larger basis set should in principle give a better result. <div class="bs-callout bs-callout-primary"> <h4>Example 4: The energy of the 1s orbital of hydrogen using a basis set including polynomials up to 4th order</h4> For our basis set, we are going to use the following set of basis functions: $$\lvert \psi_{i} \rangle = r^{i} \exp \left( -r^2/2 \right)$$ for $i = 0,1,2,3,4$.
The overlap matrix $S$ then looks like: $$S = \begin{bmatrix} 5.56833 & 6.28319 & 8.35249 & 12.5664 & 20.8812 \\ 6.28319 & 8.35249 & 12.5664 & 20.8812 & 37.6991 \\ 8.35249 & 12.5664 & 20.8812 & 37.6991 & 73.0843 \\ 12.5664 & 20.8812 & 37.6991 & 73.0843 & 150.796 \\ 20.8812 & 37.6991 & 73.0843 & 150.796 & 328.879 \end{bmatrix}$$ and the Hamiltonian matrix $H$ is $$H = \begin{bmatrix} -2.10694 & -2.42674 & -4.19506 & -8.35249 & -17.7867 \\ -2.42674 & -1.4109 & -2.06931 & -5.25794 & -14.598 \\ -4.19506 & -2.06931 & -1.08169 & -2.03167 & -8.98742 \\ -8.35249 & -5.25794 & -2.03167 & 1.45319 & 2.31392 \\ -17.7867 & -14.598 & -8.98742 & 2.31392 & 22.7788 \end{bmatrix}$$ Using the same procedure as in example 3, we obtain for the lowest eigenvalue of the transformed Hamiltonian matrix the energy of <b>-0.487773 HT</b>. This result is a significant improvement and comes very close to the exact energy of -0.5 HT! <br><br> Further expanding the basis set to 10 basis functions gives a result of -0.498481 HT. </div>

## Summary

In this blog post I have shown that, in order to find the best approximation to the ground state energy, the variational principle can be employed. By introducing some kind of variational freedom in a trial wave function and then minimizing its corresponding energy, the best approximation to the ground state energy can be found. In the linear variational method, the variational freedom is provided by the specific set of basis functions used, and the method yields the coefficients of the linear combination of said basis functions that minimize the energy. Here, I have shown that a sufficiently large set of basis functions composed of a Gaussian part (the exponent) multiplied by a polynomial of increasing order is able to approach the true ground state energy of the hydrogen atom.
### Acknowledgements Finally, I would like to acknowledge <a href="https://www.linkedin.com/pub/bart-zijlstra/74/682/29b/de">Bart Zijlstra</a> for thoroughly proofreading this post and pointing out the mistakes.
---

# Introduction to Electronic Structure Calculations: Many-electron wave functions

## Introduction

This post is the second part in the Introduction to Electronic Structure Calculations series. In [our previous post](http://www.ivofilot.nl/posts/view/29), we discussed the minimization problem for one electron. Here, we are going to extend this to many electrons by introducing the many-electron wave function.

## The many-electron Hamiltonian

In the Born-Oppenheimer approximation, we consider the electrons to be moving in the field of fixed nuclei. This gives the following Hamiltonian describing the motion of $N$ electrons in the field of $M$ nuclei: $$H_{\textrm{elec}} = - \sum_{i=1}^{N} \frac{1}{2} \nabla_{i}^{2} - \sum_{i=1}^{N} \sum_{A=1}^{M} \frac{Z_{A}}{r_{i,A}} + \sum_{i=1}^{N} \sum_{j>i}^{N} \frac{1}{r_{i,j}}$$ The first term in this equation describes the kinetic energy of the electrons, the second term the electron-nuclei interactions and the final term the electron-electron repulsion. Of course, the total energy of a molecule also depends on the nuclei-nuclei interactions, but because we consider these nuclei to have fixed positions, this is just a single term that is added to the electronic energy afterwards.

## The many-electron wave function

The wave function for a single electron (or single particle) is fairly easy to understand. But what does the wave function for many electrons look like? Let us propose that a many-electron wave function is just a simple product of one-electron wave functions, termed the Hartree product: $$\lvert \Psi \rangle = \prod_{i=1}^{N} \lvert \psi_{i} \rangle$$ For this many-electron wave function to be a valid candidate, it needs to be an eigenfunction of the many-electron Hamiltonian. We can test this by letting the many-electron Hamiltonian operate on the many-electron wave function and seeing if we can extract the eigenvalue (which should be the energy).
If we, for the moment, neglect the electron-electron repulsion term, our many-electron Hamiltonian is just a sum of one-electron Hamiltonians, and we simply extract for each electron its energy and leave the rest unchanged. In other words, if $h_{i}$ is the one-electron Hamiltonian and $$H_{\textrm{elec}} = \sum_{i}^{N} h_{i},$$ then $$H_{\textrm{elec}} \lvert \Psi \rangle = \left[ \sum_{i}^{N} \epsilon_{i} \right] \lvert \Psi \rangle.$$ Thus the total energy of the many-electron wave function is the sum of the energies of its constituent one-electron wave functions. Regrettably, there is a fundamental problem with the above wave function. The Hartree product is an uncorrelated or independent-electron wave function. For example, the probability of finding electron one in volume element $\textrm{d}\vec{x}_{1}$ and electron two in volume element $\textrm{d}\vec{x}_{2}$ at the same time is just the product of the two one-electron probabilities: $$P_{1,2}(x_{1}, x_{2}) = \left| \psi_{1}(x_{1}) \right|^{2} \cdot \left| \psi_{2}(x_{2}) \right|^{2}$$ This is of course unrealistic, because electron one will repel electron two when it comes too close, and clearly these probabilities should be correlated in some way. There is another problem with the Hartree product: it takes no account of the indistinguishability of electrons, but specifically distinguishes electron one as occupying orbital one and electron two as occupying orbital two. The antisymmetry principle does not distinguish between identical electrons and requires that electronic wave functions be antisymmetric (change sign) with respect to the interchange of any two electrons. The way to resolve these problems is by introducing a new wave function that does take the antisymmetry principle into account, but before that is possible, I have to introduce the concept of spin. From here on, any wave function has both a spatial and a spin part.
The spatial part is described by the spatial function $\psi(\vec{x})$ (just as before), and the spin part is described by $\alpha(\omega)$ for spin-up and by $\beta(\omega)$ for spin-down. Here, $\omega$ is the (otherwise unspecified) spin variable. Furthermore, the spin functions are orthonormal, such that $$\langle \alpha \lvert \beta \rangle = \delta_{\alpha \beta}.$$ From here on, we denote a spin-orbital as $$\chi(x),$$ where $x$ represents the three spatial coordinates $\vec{x}$ and one spin coordinate $\omega$. The new many-electron wave function that I am proposing can be formed by using a Slater determinant. The Slater determinant looks as follows: $$\Psi (x_{1}, x_{2}, \cdots , x_{N}) = \frac{1}{\sqrt{N!}} \left| \begin{matrix} \chi_{1}(x_{1}) & \chi_{2}(x_{1}) & \cdots & \chi_{N}(x_{1}) \\ \chi_{1}(x_{2}) & \chi_{2}(x_{2}) & \cdots & \chi_{N}(x_{2}) \\ \vdots & & & \vdots \\ \chi_{1}(x_{N}) & \chi_{2}(x_{N}) & \cdots & \chi_{N}(x_{N}) \end{matrix} \right|$$ Let us verify that this wave function is a suitable candidate by considering the two-electron case. <div class="bs-callout bs-callout-primary"> <h4>Example 1: The two-electron Slater determinant</h4> From the equation for the Slater determinant, we find that the two-electron wave function is $$\Psi(x_{1},x_{2}) = \frac{1}{\sqrt{2}} \left[ \chi_{1}(x_{1})\chi_{2}(x_{2}) - \chi_{2}(x_{1})\chi_{1}(x_{2}) \right]$$ From this equation, we can easily see that if we interchange electrons one and two, we obtain the same wave function with a minus sign in front of it: $$\Psi(x_{1},x_{2}) = -\Psi(x_{2},x_{1})$$ In other words, this function meets the antisymmetry criterion.
Furthermore, if we assume that the spin-orbitals in the Slater determinant form an orthonormal set, then the resulting wave function is normalized: $$\langle \Psi(x_{1},x_{2}) \lvert \Psi(x_{1},x_{2}) \rangle = \frac{1}{2} \left[ \langle \chi_{1} \lvert \chi_{1}\rangle \langle \chi_{2} \lvert \chi_{2} \rangle - 2 \langle \chi_{1} \lvert \chi_{2}\rangle \langle \chi_{2} \lvert \chi_{1} \rangle + \langle \chi_{1} \lvert \chi_{1}\rangle \langle \chi_{2} \lvert \chi_{2} \rangle \right] = \frac{1}{2} \left[1 - 0 + 1 \right] = 1$$ If two electrons occupy the same spin-orbital (a forbidden situation), then the Slater determinant automatically gives zero: $$\Psi(x_{1},x_{2}) = \frac{1}{\sqrt{2}} \left[ \chi_{1}(x_{1})\chi_{1}(x_{2}) - \chi_{1}(x_{1})\chi_{1}(x_{2}) \right] = 0$$ Finally, if the two electrons occupy different spatial orbitals but have the same spin, the probability of finding electron one at $x_{1}$ and electron two at $x_{2}$ is $$P_{1,2}(x_{1}, x_{2}) = \frac{1}{2} \left[ \left| \psi_{1} (x_{1}) \right|^{2} \left| \psi_{2}(x_{2}) \right|^{2} + \left| \psi_{2} (x_{1}) \right|^{2} \left| \psi_{1}(x_{2}) \right|^{2} - \psi_{1}^{*} (x_{1})\psi_{2}(x_{1})\psi_{2}^{*} (x_{2})\psi_{1}(x_{2}) - \psi_{2}^{*} (x_{1})\psi_{1}(x_{1})\psi_{1}^{*} (x_{2})\psi_{2}(x_{2}) \right]$$ In the above expression, we have obtained two extra terms that make the probabilities correlated. This kind of correlation is termed exchange correlation. The probability of finding electron two near electron one (or vice versa) is lowered (note that the extra terms are negative) if the two electrons have the same spin. A Fermi hole is said to exist around each electron. </div> Using our new many-electron anti-symmetric wave function and our many-electron Hamiltonian, we have all the necessary ingredients to calculate the energy of a many-electron system. Let us see what happens if we plug an anti-symmetrized two-electron wave function into a single-nucleus two-electron Hamiltonian. In other words, we are going to attempt to calculate the energy of the helium atom. 
$$H\_{\textrm{elec}} \lvert \Psi \rangle = H\_{\textrm{elec}} \frac{1}{\sqrt{2}} \left[ \chi\_{1}(x\_{1})\chi\_{2}(x\_{2}) - \chi\_{2}(x\_{1})\chi\_{1}(x\_{2}) \right]$$ For two electrons and one nucleus, our Hamiltonian has the following form $$H\_{\textrm{elec}} = - \sum\_{i=1}^{2} \frac{1}{2} \nabla\_{i}^{2} - \sum\_{i=1}^{2} \frac{Z\_{A}}{r\_{i,A}} + \sum\_{i=1}^{2} \sum\_{j>i}^{2} \frac{1}{r\_{i,j}},$$ where $Z\_{A} = 2$ for the helium nucleus. We can rewrite this Hamiltonian to emphasize its one- and two-electron components as $$H\_{\textrm{elec}} = T(1) + T(2) + V(1) + V(2) + R(1,2).$$ Here, $T(1)$ denotes the kinetic energy operator acting on electron one and $T(2)$ the kinetic energy operator acting on electron two. Similarly, $V(1)$ and $V(2)$ denote the potential energy operators for electrons one and two, respectively. The last term is the two-electron repulsion operator. Importantly, this operator acts on two electrons instead of one. Let us now evaluate what happens when these operators act on the wave function. When the kinetic operator $T(1)$ acts on the two-electron wave function, it extracts the kinetic energy of electron one in the specific spin-orbital that electron one resides in. Thus, $$T(1) \frac{1}{\sqrt{2}} \left[ \chi\_{1}(x\_{1})\chi\_{2}(x\_{2}) - \chi\_{2}(x\_{1})\chi\_{1}(x\_{2}) \right] = \frac{1}{\sqrt{2}} \left[ E^{(1)}\_{k}\chi\_{1}(x\_{1})\chi\_{2}(x\_{2}) - E^{(2)}\_{k}\chi\_{2}(x\_{1})\chi\_{1}(x\_{2}) \right]$$ Here, $E^{(1)}\_{k}$ stands for the kinetic energy of spin-orbital one. To get the expectation value for the kinetic energy of electron one, we need to project the above result onto $\langle \Psi \rvert$ and integrate over all space. 
What we obtain is: $$\langle \frac{1}{\sqrt{2}} \left[ \chi\_{1}(x\_{1})\chi\_{2}(x\_{2}) - \chi\_{2}(x\_{1})\chi\_{1}(x\_{2}) \right] \lvert \frac{1}{\sqrt{2}} \left[ E^{(1)}\_{k}\chi\_{1}(x\_{1})\chi\_{2}(x\_{2}) - E^{(2)}\_{k}\chi\_{2}(x\_{1})\chi\_{1}(x\_{2}) \right] \rangle$$ After collecting terms and rearranging, we obtain an integral similar to the one we have seen before in Example 1: $$\frac{1}{2} \left[ E^{(1)}\_{k}\langle \chi\_{1} \lvert \chi\_{1}\rangle \langle \chi\_{2} \lvert \chi\_{2} \rangle - (E^{(1)}\_{k} + E^{(2)}\_{k}) \langle \chi\_{1} \lvert \chi\_{2}\rangle \langle \chi\_{2} \lvert \chi\_{1} \rangle + E^{(2)}\_{k}\langle \chi\_{1} \lvert \chi\_{1}\rangle \langle \chi\_{2} \lvert \chi\_{2} \rangle \right].$$ Because the spin-orbitals are orthonormal, the center term between the brackets is zero and the final result is $$\frac{1}{2} (E^{(1)}\_{k} + E^{(2)}\_{k}).$$ Thus, we obtain the sum of the kinetic energies of all the spin-orbitals in the Slater determinant, divided by the number of electrons in the system. We would have obtained exactly the same result had we used the operator $T(2)$. In other words, the total kinetic energy of our wave function is just the sum of the kinetic energies of all the spin-orbitals in the Slater determinant: $$\langle \Psi \lvert T(1) + T(2) \lvert \Psi \rangle = \frac{1}{2} (E^{(1)}\_{k} + E^{(2)}\_{k}) + \frac{1}{2} (E^{(1)}\_{k} + E^{(2)}\_{k}) = E^{(1)}\_{k} + E^{(2)}\_{k}$$ Similarly, if we examine the effect of the potential energy operator, we find $$\langle \Psi \lvert V(1) + V(2) \lvert \Psi \rangle = \frac{1}{2} (E^{(1)}\_{P} + E^{(2)}\_{P}) + \frac{1}{2} (E^{(1)}\_{P} + E^{(2)}\_{P}) = E^{(1)}\_{P} + E^{(2)}\_{P}.$$
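The two-electron results above are easy to check numerically. The sketch below is my own illustration (not part of the original post): it models spin-orbitals as orthonormal vectors on a discrete six-point basis, with a random Hermitian matrix playing the role of a one-electron Hamiltonian, and verifies the anti-symmetry, normalization and Pauli properties of the Slater determinant as well as the result $\langle \Psi \lvert h(1) + h(2) \rvert \Psi \rangle = \epsilon_{1} + \epsilon_{2}$:

```python
import numpy as np

rng = np.random.default_rng(42)

# A random Hermitian matrix plays the role of a one-electron Hamiltonian h;
# its orthonormal eigenvectors play the role of spin-orbitals chi_i with
# energies eps_i (a discrete toy model, purely for illustration)
A = rng.normal(size=(6, 6))
h = (A + A.T) / 2
eps, U = np.linalg.eigh(h)
chi1, chi2 = U[:, 0], U[:, 1]

def slater(chi_a, chi_b):
    """Two-electron Slater determinant Psi(x1, x2), stored as a matrix Psi[x1, x2]."""
    return (np.outer(chi_a, chi_b) - np.outer(chi_b, chi_a)) / np.sqrt(2)

psi = slater(chi1, chi2)

# Anti-symmetry: Psi(x1, x2) = -Psi(x2, x1)
assert np.allclose(psi, -psi.T)

# Normalization: <Psi|Psi> = 1
assert np.isclose(np.sum(psi**2), 1.0)

# Double occupation of one spin-orbital gives the zero function
assert np.allclose(slater(chi1, chi1), 0.0)

# <Psi|h(1)|Psi> and <Psi|h(2)|Psi> each yield (eps_1 + eps_2) / 2,
# so the total one-electron energy is eps_1 + eps_2
e1 = np.sum(psi * (h @ psi))   # h(1) acts on the first coordinate
e2 = np.sum(psi * (psi @ h))   # h(2) acts on the second (h is symmetric)
assert np.isclose(e1, (eps[0] + eps[1]) / 2)
assert np.isclose(e1 + e2, eps[0] + eps[1])
```

The factor of one half per operator matches the derivation above: each one-electron operator "sees" each spin-orbital half of the time, and summing over both electrons recovers the full sum of orbital energies.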
• created: 2015-12-13 00:00:00
• modified: 2015-12-13 11:04:56
• id: 31
• title: How to compile VASP 5.4.1 for Linux Debian using the GNU compiler
• content: This tutorial will explain how to install VASP 5.4.1 on a Linux Debian system. The developers of VASP have completely remade the compilation procedure. Personally, I believe they have done a marvelous job and the new compilation system will make compiling VASP significantly easier. <i class="fa fa-info-circle"></i> For installing VASP 4.6.38, you can refer to [my previous blogpost](http://www.ivofilot.nl/posts/view/21/How+to+build+VASP+4.6.38+using+the+GNU+compiler). <i class="fa fa-info-circle"></i> If you are interested in compiling VASP 5.3.5 using the old procedure, have a look at [this tutorial](http://www.ivofilot.nl/posts/view/28/How+to+build+VASP+5.3.5+using+the+GNU+compiler+on+Linux+Ubuntu+14.04+LTS). # Step 1: Acquiring the necessary packages <pre> <code class="bash">
sudo apt-get install build-essential libopenmpi-dev libfftw3-dev libblas-dev liblapack-dev libscalapack-mpi-dev libblacs-mpi-dev
</code> </pre> # Step 2: Extracting the VASP source tarball Extract the VASP sources and go to the root directory of the VASP package. <pre> <code class="bash">
tar -xvzf vasp.5.4.1.tar.gz
cd vasp.5.4.1
</code> </pre> # Step 3: Setting up the build procedure You need to copy the makefile corresponding to your build architecture. Here, I assume we are going to use the GNU compiler, so you need to copy the makefile for gfortran: <pre> <code class="bash">
cp -v arch/makefile.include.linux_gfortran makefile.include
</code> </pre> # Step 4: Configuring the makefile You need to tweak some directives in the makefile to point the compiler to the right libraries and header files. 
For my Linux Debian system, I have used the following directives: <pre> <code class="makefile">
# Precompiler options
CPP_OPTIONS= -DMPI -DHOST=\"IFC91_ompi\" -DIFC \
             -DCACHE_SIZE=4000 -Davoidalloc \
             -DMPI_BLOCK=8000 -DscaLAPACK -Duse_collective \
             -DnoAugXCmeta -Duse_bse_te \
             -Duse_shmem -Dtbdyn

CPP        = gcc -E -P -C $*$(FUFFIX) >$*$(SUFFIX) $(CPP_OPTIONS)

FC         = mpif90
FCL        = mpif90

FREE       = -ffree-form -ffree-line-length-none

FFLAGS     =
OFLAG      = -O2
OFLAG_IN   = $(OFLAG)
DEBUG      = -O0

LIBDIR     = /usr/lib/x86_64-linux-gnu
BLAS       = -L$(LIBDIR) -lblas
LAPACK     = -L$(LIBDIR) -llapack
BLACS      = -L$(LIBDIR) -lblacs-openmpi -lblacsCinit-openmpi
SCALAPACK  = -L$(LIBDIR) -lscalapack-openmpi $(BLACS)

OBJECTS    = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o \
             /usr/lib/x86_64-linux-gnu/libfftw3.a
INCS       = -I/usr/include

LLIBS      = $(SCALAPACK) $(LAPACK) $(BLAS)

OBJECTS_O1 += fft3dfurth.o fftw3d.o fftmpi.o fftmpiw.o chi.o
OBJECTS_O2 += fft3dlib.o

# For what used to be vasp.5.lib
CPP_LIB    = $(CPP)
FC_LIB     = $(FC)
CC_LIB     = gcc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB   = $(FREE)
OBJECTS_LIB= linpack_double.o getshmem.o

# Normally no need to change this
SRCDIR     = ../../src
BINDIR     = ../../bin
</code> </pre> # Step 5: Compiling You are now ready to compile VASP! <pre> <code class="bash">
make std
</code> </pre> # Troubleshooting When compiling, I encountered the same error as for VASP 5.3.5, relating to us.f90: <pre> <code class="bash">
us.f90:1403.10:

    USE us
         1
Error: 'setdij' of module 'us', imported at (1), is also the name of the current program unit
</code> </pre> The way to resolve this error is by changing the names, which we do by applying a patch to this file. You can download the patch file [here](http://www.ivofilot.nl/downloads/vasp535.patch). The patch was created for VASP 5.3.5, but works fine for VASP 5.4.1. 
Just place it in the directory where the VASP sources reside (src) and apply the patch by typing: <pre> <code class="bash">
wget http://www.ivofilot.nl/downloads/vasp535.patch
patch -p0 < vasp535.patch
</code> </pre> This will change the names automatically. You will receive a message such as <pre> <code class="bash">
patching file us.F
</code> </pre> Upon typing make again, the compilation will resume.
• created: 2015-11-03 12:00:00
• modified: 2018-01-08 18:14:25
• id: 33
• title: How to compile VASP 5.4.1 on a MacBook running OS X El Capitan
• content: This tutorial will explain how to install [VASP](https://www.vasp.at/) 5.4.1 on a MacBook. As a small warning, note that this procedure will take quite some time, so make sure you do not need your MacBook for a couple of hours. # Step 0: Directories and settings We are going to compile the source in the /Users/<username>/Downloads folder and install everything in /Users/<username>/vaspbuild. Note that you need to replace <username> with your current login (i.e. the name of your user account). Feel free to change any of these paths to suit your needs. Furthermore, in this tutorial, I will embed the [ScaLAPACK](http://www.netlib.org/scalapack/), [OpenBLAS](http://www.openblas.net/) and [FFTW](http://www.fftw.org/) libraries in the executable. # Step 1: Installing the tool-chain To install VASP, we are going to use [MacPorts](https://www.macports.org) for our tool-chain. First, install MacPorts; their website provides a [set of instructions](https://www.macports.org/install.php). Thereafter, run the following command to install GCC 4.9 with gfortran  sudo port install gcc49 +gfortran  and install cmake as well, as we are going to use it for building ScaLAPACK.  sudo port install cmake  # Step 2: OpenMPI Next, we are going to install OpenMPI. Go to [their website](https://www.open-mpi.org/software/ompi) and pick the latest version. 
At the time of this writing, this is version 1.10.3, which can be downloaded using [this link](https://www.open-mpi.org/software/ompi/v1.10/downloads/openmpi-1.10.3.tar.bz2). Extract the archive  tar -xvjf openmpi-1.10.3.tar.bz2  go to the folder and configure it  cd openmpi-1.10.3   ./configure --prefix=/Users/<username>/vaspbuild/openmpi-1.10.3  and compile and install it  make -j5 && make -j5 install  To use this version of OpenMPI for the compilation of VASP and ScaLAPACK, execute the following two commands  export PATH=/Users/<username>/vaspbuild/openmpi-1.10.3/bin:$PATH   export LD_LIBRARY_PATH=/Users/<username>/vaspbuild/openmpi-1.10.3/lib:$LD_LIBRARY_PATH  # Step 3: OpenBLAS Go to the website of [OpenBLAS](http://www.openblas.net/) and download the latest version. At the time of this writing, this is version 0.2.18. Unzip the archive  unzip OpenBLAS-0.2.18.zip  and go to the folder  cd OpenBLAS-0.2.18  and compile the library with  make USE_THREAD=0  and install it  make PREFIX=/Users/<username>/vaspbuild/openblas-0.2.18 install  More information about compiling and installing OpenBLAS can be found [here](https://github.com/xianyi/OpenBLAS/wiki/User-Manual). # Step 4: ScaLAPACK <div class="alert alert-info alert-hover">Note: Make sure you have adjusted your $PATH and $LD_LIBRARY_PATH to use the version of OpenMPI we just built.</div> Download the latest version of ScaLAPACK from [their website](http://www.netlib.org/scalapack/) and extract the tarball.  
tar -xvzf scalapack.tgz  Create a separate build folder  mkdir spbuild && cd spbuild  and execute cmake  cmake ../scalapack-2.0.2  We need to make a few adjustments to CMakeCache.txt: On line 18, replace <pre> <code class="cmake"> BLAS_Accelerate_LIBRARY:FILEPATH=/System/Library/Frameworks/Accelerate.framework </code> </pre> with <pre> <code class="cmake"> BLAS_Accelerate_LIBRARY:FILEPATH=/Users/<username>/vaspbuild/openblas-0.2.18/lib/libopenblas.dylib </code> </pre> and, similarly, around line 253 we have to replace <pre> <code class="cmake"> LAPACK_Accelerate_LIBRARY:FILEPATH=/System/Library/Frameworks/Accelerate.framework </code> </pre> with <pre> <code class="cmake"> LAPACK_Accelerate_LIBRARY:FILEPATH=/Users/<username>/vaspbuild/openblas-0.2.18/lib/libopenblas.dylib </code> </pre> Now we are ready to compile  make -j5  We install the library manually by copying it to an installation folder  mkdir -pv /Users/<username>/vaspbuild/scalapack-2.0.2/lib   cp -v lib/libscalapack.a /Users/<username>/vaspbuild/scalapack-2.0.2/lib  # Step 5: FFTW The last library we need to compile is FFTW. The procedure is fairly similar to how we compiled OpenMPI. As usual, go to [the website of FFTW](http://www.fftw.org/download.html) and download the latest version. Extract the archive  tar -xvzf fftw-3.3.4.tar.gz  and go to the folder. 
Therein, configure, build and install the library  ./configure --prefix=/Users/<username>/vaspbuild/fftw-3.3.4   make -j5 && make -j5 install  # Step 6: VASP <div class="alert alert-info alert-hover">Note: Make sure you have adjusted your $PATH and $LD_LIBRARY_PATH to use the version of OpenMPI we just built.</div> Extract the VASP tarball, go to the source folder and copy the makefile.include for gfortran  cp arch/makefile.include.linux_gfortran makefile.include  We need to adjust part of the makefile.include so that it looks as follows: <pre> <code class="makefile">
BLAS       =
LAPACK     = /Users/<username>/vaspbuild/openblas-0.2.18/lib/libopenblas.a
BLACS      =
SCALAPACK  = /Users/<username>/vaspbuild/scalapack-2.0.2/lib/libscalapack.a

OBJECTS    = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o \
             /Users/<username>/vaspbuild/fftw-3.3.4/lib/libfftw3.a
INCS       = -I/Users/<username>/vaspbuild/fftw-3.3.4/include
</code> </pre> The first two references ensure that our recently built versions of OpenBLAS and ScaLAPACK are used to build VASP, and the last few lines grab the header and library files from FFTW. To compile the standard version (the non-gamma-point version), run  make std  For the gamma-point version, you need to run  make gam  The binary files are written to the bin folder. To check that our VASP executables depend only on our OpenMPI build (besides the compiler runtime and system libraries), run otool -L bin/vasp_std and/or otool -L bin/vasp_gam. 
For me, the result looks like this: <pre> <code lang="bash">
My-MacBook-Air:vasp.5.4.1 ivofilot$ otool -L bin/vasp_std
bin/vasp_std:
    /Users/ivofilot/vaspbuild/openmpi-1.10.3/lib/libmpi_usempif08.11.dylib (compatibility version 13.0.0, current version 13.1.0)
    /Users/ivofilot/vaspbuild/openmpi-1.10.3/lib/libmpi_mpifh.12.dylib (compatibility version 13.0.0, current version 13.1.0)
    /Users/ivofilot/vaspbuild/openmpi-1.10.3/lib/libmpi.12.dylib (compatibility version 13.0.0, current version 13.3.0)
    /opt/local/lib/libgcc/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1226.10.1)
    /opt/local/lib/libgcc/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
    /opt/local/lib/libgcc/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
</code> </pre> Let's place the VASP executables in the vaspbuild folder as well  cp -v bin/vasp_* /Users/<username>/vaspbuild/  In order to run and test VASP, create a separate folder wherein you place the INCAR, POTCAR, POSCAR and KPOINTS files. From within that folder you run VASP (in this example using four threads) like so  mpirun -np 4 /Users/<username>/vaspbuild/vasp_std 
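The otool check above is easy to automate. The following small Python helper is a hypothetical illustration (not part of the original tutorial): it parses the text output of otool -L and flags any dynamic-library dependency that does not come from one of the expected locations:

```python
# Hypothetical helper: scan `otool -L <binary>` output and report any
# dynamic-library dependencies outside the expected locations.
EXPECTED_PREFIXES = (
    "/Users/",           # our self-built OpenMPI under ~/vaspbuild
    "/opt/local/lib/",   # MacPorts gcc runtime (libgfortran, libgcc_s, ...)
    "/usr/lib/",         # system libraries
)

def unexpected_deps(otool_output):
    """Return the dependency paths that fall outside EXPECTED_PREFIXES."""
    deps = []
    for line in otool_output.splitlines()[1:]:   # first line is the binary name
        line = line.strip()
        if not line:
            continue
        path = line.split()[0]                   # drop the version annotation
        if not path.startswith(EXPECTED_PREFIXES):
            deps.append(path)
    return deps

# Sample taken from the otool output shown above (abbreviated)
sample = """bin/vasp_std:
\t/Users/ivofilot/vaspbuild/openmpi-1.10.3/lib/libmpi.12.dylib (compatibility version 13.0.0, current version 13.3.0)
\t/opt/local/lib/libgcc/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
\t/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1226.10.1)"""

print(unexpected_deps(sample))   # an empty list means the build looks clean
```

In a real run you would feed it the captured output of otool -L bin/vasp_std instead of the sample string.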
• created: 2016-06-30 00:00:00
• modified: 2017-06-24 10:15:22
• id: 20
• title: Optimizing VASP performance by tuning the compilation using the GNU compilers
• created: 2014-07-25 00:00:00
• modified: 2017-06-24 10:15:41
• id: 36
• title: Animated gifs of molecular orbitals of CO
• content: # Introduction The electron density of a molecular orbital is basically a three-dimensional scalar field, which can be visualized using both [contour plots](http://www.itl.nist.gov/div898/handbook/eda/section3/contour.htm) and [isosurfaces](https://en.wikipedia.org/wiki/Isosurface). Typically, isosurfaces are employed, as they more clearly show the three-dimensional character of the molecular orbital. However, such an isosurface is nothing more than the [set of points in 3D space that share the same value](http://paulbourke.net/geometry/polygonise/). Thus, in some sense, there is a loss of information, especially if there are interesting features within the three-dimensional object spanned by the isosurface. # Animated gifs A very simple way to resolve this is to use contour plots. As mentioned, the downside of contour plots is that we are no longer able to effectively show the three-dimensional character, as we are simply [projecting planes within the scalar field](http://paulbourke.net/papers/conrec/) and colorizing each point on the plane based on the value of that point in the scalar field. However, we can generate a series of contour plots for a set of relevant planes within the three-dimensional space. Using the [previously demonstrated EDP program](http://www.ivofilot.nl/posts/view/27/Visualising+the+electron+density+of+the+binding+orbitals+of+the+CO+molecule+using+VASP), we can efficiently create these contour plots from the command line. In turn, using [Imagemagick](https://www.imagemagick.org) and a small Python script as shown below, we can construct [animated gifs](https://en.wikipedia.org/wiki/GIF) that loop over the generated set of contour plots. 
<pre> <code class="python">
#!/usr/bin/env python

import glob
from subprocess import call

import numpy

# input file identifier
filestr = "CO_5SIGMA"

# output file name
output = "co_5sigma"

# generate slices and create a contour plot for each plane
for z in numpy.linspace(2.0, 8.0, 50):
    filename = "%s_%f.png" % (filestr, z)
    call(["/home/ivo/Documents/PROGRAMMING/CPP/edp/build/edp",
          "-i", "/scratch/ivo/vasp/CO/PARCHG_5SIGMA",
          "-s", "100",
          "-v", "1,0,0",
          "-w", "0,0,1",
          "-p", "0,%f,0" % z,
          "-o", filename])
    call(["convert", filename, "-resize", "25%", filename.replace(".png", ".gif")])

# combine everything inside a single animated gif; expand the glob pattern in
# Python, because subprocess.call does not pass arguments through a shell
frames = sorted(glob.glob("%s_*.gif" % filestr))
call(["convert", "-delay", "0", "-loop", "0"] + frames + ["%s.gif" % output])
</code> </pre> # Showcase As usual, we use the electron density of the binding molecular orbitals of CO to demonstrate the tool. Below, you can see that the animated gifs loop over a set of contour planes. One of the things that are now easily visible are the regions of near-zero electron density at the center of the 1π and 5σ orbitals, which would normally not be visible when using isosurfaces. <table class="table"> <tr> <td> <b>3σ</b><br> <img src="http://www.ivofilot.nl/images/38/co_3sigma.gif"> </td> <td> <b>4σ</b><br> <img src="http://www.ivofilot.nl/images/39/co_4sigma.gif"> </td> </tr> <tr> <td> <b>1π</b><br> <img src="http://www.ivofilot.nl/images/40/co_1pi.gif"> </td> <td> <b>5σ</b><br> <img src="http://www.ivofilot.nl/images/41/co_5sigma.gif"> </td> </tr> </table> What other molecular orbitals would you like to see? Share your thoughts in the comments!
• created: 2017-12-29 00:00:00
• modified: 2018-01-01 00:08:29
• id: 35
• title: Efficient compression of CHGCAR and LOCPOT files
• content: <div class="alert alert-info" role="alert"><i class="fa fa-github"></i> Looking for the Github repository of den2bin?<br><i class="fa fa-link"></i> <a href="https://github.com/ifilot/den2bin">https://github.com/ifilot/den2bin</a> </div> # Introduction I was recently involved in a research endeavor where we had to generate and analyze a substantial amount of [CHGCAR](https://cms.mpi.univie.ac.at/vasp/vasp/CHGCAR_file.html) and [LOCPOT](https://cms.mpi.univie.ac.at/vasp/vasp/LOCPOT_file.html) files. Each of these files is relatively large (~150MB), and they were consuming a lot of free space on our hard drives. Since we periodically back up our data, you can imagine that things started to pile up and clutter. We looked into [BZIP2](http://www.bzip.org/) and [ZIP](https://en.wikipedia.org/wiki/Zip_(file_format)) to reduce the size, but found out early on that we only obtained [compression ratios](https://en.wikipedia.org/wiki/Data_compression_ratio) of about 3 (i.e. the zip archive is only three times smaller than the original file). In this blogpost, I will show you the steps we took to obtain better compression ratios. Using [lossy compression](https://en.wikipedia.org/wiki/Lossy_compression), we were able to achieve compression ratios of over 30 with decent accuracy of the original data. # From human-readable to binary As mentioned in the intro, we did not gain much by simply storing the CHGCAR or LOCPOT files in an archive. Since these files are human-readable, we can easily get better compression results by storing the values in binary format. 
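The size difference between the two representations is easy to verify. The sketch below is my own illustration (not den2bin's actual code): it takes one density value as it appears in a CHGCAR file, measures its size as formatted text, and compares it with the size of the same number packed as a single-precision (32-bit) float:

```python
import struct

# One density value, written the way it appears in a CHGCAR file
as_text = " 0.46419088767E-03"
value = float(as_text)

# Human-readable representation: one byte per character
text_bytes = len(as_text.encode("ascii"))      # 18 bytes

# The same value packed as a single-precision (32-bit) float
as_binary = struct.pack("<f", value)
bin_bytes = len(as_binary)                     # 4 bytes

print(text_bytes, bin_bytes)                   # 18 4

# The round trip is exact to single precision (~7 significant digits)
restored = struct.unpack("<f", as_binary)[0]
print(abs(restored - value) / value)
```

Applying this to every value in the file gives the factor-of-almost-five reduction mentioned below, before any archive-level compression is applied.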
If we for instance consider the following lines inside the CHGCAR file: <pre> <code class="bash">
112 128 288
0.46419088767E-03 0.22525899949E-02 0.39391537405E-03 -.28562888829E-02 0.43623947025E-02
0.50580398663E-02 0.31440260409E-02 -.60090684201E-04 0.66497361263E-03 0.25302278248E-02
0.52120528002E-02 0.50812471621E-02 -.19041084105E-02 -.10359924871E-02 0.23382124189E-02
</code> </pre> we can see that each line contains 5 floating point values. Each of these floating point values takes about 18 bytes of storage. If we instead store the number as a single-precision float, it only takes up 4 bytes of storage, a reduction of almost a factor of 5. If we do this for all the numbers inside the CHGCAR file and then compress the binary file, we obtain compression ratios of ~10. But can we do better? # Discrete Cosine Transform So far, we have only considered storing the density files in a lossless format. This means that we do not lose any information upon compression and decompression of the data. The opposite is to store data in a lossy fashion, wherein the decompressed data is no longer identical to the original. [JPEG](https://en.wikipedia.org/wiki/JPEG) is an example of such a lossy compression technique. The underlying algorithm behind JPEG is the [Discrete Cosine Transform (DCT)](https://en.wikipedia.org/wiki/Discrete_cosine_transform), which is really well explained in this [Computerphile YouTube movie](https://www.youtube.com/watch?v=Q2aEzeMDHMA). What the DCT does is express the finite sequence of data points as a sum of cosine functions oscillating at different frequencies. For example, if we have a set of 64 data points, then the DCT gives us a set of 64 coefficients corresponding to the cosines at different frequencies. The DCT can be performed in one dimension, in two dimensions (as in images), in three dimensions (as in our case), and in theory even beyond three dimensions. 
The DCT itself is lossless; that is, from the set of coefficients, the original data set can be recovered using an inverse discrete cosine transformation. The way we can employ this algorithm to obtain increased compression is to drop the higher-order frequencies in the DCT, similarly to how higher-order frequencies are dropped in an [MP3](https://en.wikipedia.org/wiki/MP3). Let's exemplify this. Assume that we have a 3D dataset of 2x2x2 grid points (8 grid points in total) with the values <pre> <code class="bash">
d = 0.84
d = 0.39
d = 0.78
d = 0.80
d = 0.91
d = 0.20
d = 0.34
d = 0.77
</code> </pre> The discrete cosine transformation in three dimensions of this dataset gives the following coefficients: <pre> <code class="bash">
c = 0.63
c = 0.13
c = -0.06
c = 0.40
c = 0.11
c = 0.04
c = -0.09
c = -0.24
</code> </pre> Let us now set the last coefficient to zero and perform an inverse DCT. We obtain the following values: <pre> <code class="bash">
d = 0.93
d = 0.31
d = 0.70
d = 0.88
d = 0.83
d = 0.28
d = 0.42
d = 0.68
</code> </pre> Although these values are off, they qualitatively resemble the original data. This example illustrates the trade-off: we are able to increase the compression performance, but we have to pay in terms of accuracy. # The mathematics Given a cubic block of NxNxN grid points, each DCT coefficient is calculated using the formula below $$c\_{i,j,k} = s\_{i,j,k} \sum\_{l=0}^{N-1} \sum\_{m=0}^{N-1} \sum\_{n=0}^{N-1} d\_{l,m,n} \cdot \cos \left( \frac{\pi}{N} \left( l + \frac{1}{2} \right) i \right) \cdot \cos \left( \frac{\pi}{N} \left( m + \frac{1}{2} \right) j \right) \cdot \cos \left( \frac{\pi}{N} \left( n + \frac{1}{2} \right) k \right)$$ wherein $s\_{i,j,k}$ is a scaling constant and $d\_{l,m,n}$ is the value at grid point (l,m,n). Thus, for each coefficient (of which there are $N^{3}$), we have to loop over all $N^{3}$ data points. 
The scaling constant is given by $$s\_{i,j,k} = s\_{i} \cdot s\_{j} \cdot s\_{k},$$ where $$s\_{i} = \begin{cases} \frac{1}{N},& \text{if } i = 0 \\ \frac{2}{N},& \text{otherwise} \end{cases} .$$ This scaling is in principle arbitrary, but we introduce it to ensure that the inverse discrete cosine transformation is given by $$d\_{i,j,k} = \sum\_{l=0}^{N-1} \sum\_{m=0}^{N-1} \sum\_{n=0}^{N-1} c\_{l,m,n} \cdot \cos \left( \frac{\pi}{N} \left( i + \frac{1}{2} \right) l \right) \cdot \cos \left( \frac{\pi}{N} \left( j + \frac{1}{2} \right) m \right) \cdot \cos \left( \frac{\pi}{N} \left( k + \frac{1}{2} \right) n \right) .$$ # Block size and quality The above DCT can be performed for any size $N$, but performing the transformation on a typical data set of 300x300x300 grid points in one go is very expensive. So instead of calculating the DCT for the whole grid, we divide the grid into smaller chunks. This strategy is similar to that employed in JPEG compression, but instead of 2D chunks, we have 3D chunks. These chunks are called blocks, and for CHGCAR and LOCPOT files, blocks of 4x4x4 turn out to give the best results (see below). Each 4x4x4 block thus contains 64 values and the corresponding DCT contains 64 coefficients. We can now drop a number of coefficients corresponding to the higher frequencies of the DCT. To formalize this, let us introduce the concept of a quality value. A quality value of q means that all coefficients whose frequency indices satisfy i + j + k < q are kept, and all other coefficients are dropped. This means that quality values are not transferable between block sizes, as the block size determines the maximum values of i, j and k. # The algorithm and the code The whole procedure can be summarized in the following six steps: 1. Read in the CHGCAR / LOCPOT 2. Divide the large grid into smaller chunks 3. Perform the DCT on each of these chunks 4. 
Drop the higher order coefficients based on the quality parameter 5. Store the remaining coefficients in binary format 6. Compress the final result I have written a small C++ program called [den2bin](https://github.com/ifilot/den2bin) that allows you to perform compression and decompression on CHGCAR and LOCPOT files. The program depends on the following development packages: [boost](http://www.boost.org/), [bz2](http://www.bzip.org/), [glm](https://glm.g-truc.net/) and [tclap](http://tclap.sourceforge.net/), which are typically readily available via your package manager. Also, you need to have CMake and the GNU compiler installed. On Ubuntu or Debian, you can get these as follows: <pre> <code class="bash">
sudo apt-get install build-essential cmake libboost-all-dev libbz2-dev libglm-dev libtclap-dev
</code> </pre> To compile the code, simply clone the repository and compile it using CMake: <pre> <code class="bash">
git clone https://github.com/ifilot/den2bin
cd den2bin
mkdir build
cd build
cmake ../src
make -j5
</code> </pre> The code checks whether your compiler supports [OpenMP](https://en.wikipedia.org/wiki/OpenMP) for shared-memory multiprocessing. If so, the program automatically makes use of all the available cores in your machine. Use the following commands to compress and decompress using the DCT: <pre> <code class="bash">
# to pack (note the "-l")
./den2bin -i *DENSITY_FILENAME* -o *BINARY_FILENAME* -m *MESSAGE_HEADER* -l -b *BLOCKSIZE* -q *QUALITY*

# to unpack (same as for the lossless version)
./den2bin -x -i *BINARY_FILENAME* -o *DENSITY_FILENAME*
</code> </pre> In the [README.md in the Github repository](https://github.com/ifilot/den2bin/blob/master/README.md), detailed information about all the program options and directives is provided. 
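The transform pair and the quality cut-off described above can be prototyped in a few lines of NumPy. The sketch below is my own illustration of the algorithm (not den2bin's actual implementation): it builds the cosine basis, performs the forward and inverse 3D DCT on a 4x4x4 block, and then zeroes all coefficients with i + j + k >= q:

```python
import numpy as np

N = 4                      # block size (the post's sweet spot for density files)
q = 4                      # quality: keep coefficients with i + j + k < q

# Cosine basis: B[i, l] = cos(pi/N * (l + 1/2) * i)
idx = np.arange(N)
B = np.cos(np.pi / N * np.outer(idx, idx + 0.5))

# Scaling factors: s_0 = 1/N, s_i = 2/N for i > 0
s = np.full(N, 2.0 / N)
s[0] = 1.0 / N

def dct3(d):
    """Forward 3D DCT of an NxNxN block, per the formulas above."""
    c = np.einsum('il,jm,kn,lmn->ijk', B, B, B, d)
    return c * s[:, None, None] * s[None, :, None] * s[None, None, :]

def idct3(c):
    """Inverse 3D DCT; the scaling is absorbed in the forward transform."""
    return np.einsum('li,mj,nk,lmn->ijk', B, B, B, c)

rng = np.random.default_rng(0)
d = rng.random((N, N, N))

# The transform itself is lossless ...
c = dct3(d)
assert np.allclose(idct3(c), d)

# ... the compression gain comes from dropping the high-frequency coefficients
I, J, K = np.meshgrid(idx, idx, idx, indexing='ij')
kept = I + J + K < q
c[~kept] = 0.0
approx = idct3(c)

print(int(np.sum(kept)), "of", N**3, "coefficients kept")
print("max abs error:", float(np.max(np.abs(approx - d))))
```

For N = 4 and q = 4, only 20 of the 64 coefficients survive, so only those 20 values per block need to be stored before the final archive-level compression step.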
If you do not want to compile the program yourself, there is also the option to download the binary from the repository [by following these instructions](https://github.com/ifilot/den2bin/blob/master/README.md#obtaining-den2bin-from-the-repository). # Results To get an idea of the capabilities of our tool, I have compressed a CHGCAR file corresponding to the 1PI orbital of CO based on my [previous blog post on visualizing the electron density of the binding orbital of CO](http://www.ivofilot.nl/posts/view/27/Visualising+the+electron+density+of+the+binding+orbitals+of+the+CO+molecule+using+VASP). We are going to compare the compression ratios and data loss (purely qualitatively, by looking at a contour plot) as a function of block size and quality. In this comparison, we have used cubic blocks of 2x2x2, 4x4x4, 8x8x8 and 16x16x16 grid points. The results are visualized in Figure 1. A high resolution PNG version of this image [can be found here](http://www.ivofilot.nl/images/33/1pi.png). From Figure 1, it can be seen that when using a quality of 1 (i.e. only a single DCT coefficient), we obtain a kind of down-scaling of the image, as a single DCT coefficient only contains the average value for the whole data block. Increasing the quality factor decreases the compression ratio but increases the data quality (and hence the fidelity of the contour plot). Similar to JPEG, there is an optimum for the block size. Whereas this optimum is [8x8 for JPEG](https://stackoverflow.com/questions/10780425/why-jpeg-compression-processes-image-by-8x8-blocks), I would say the sweet spot for VASP density files is a block size of 4x4x4 and a quality of 4. Nevertheless, do not take my word for it but test it for yourself. 
<img src="http://www.ivofilot.nl/images/32/1pi.jpg" width="100%" class="img img-responsive"> __Figure 1__: *Compression factor versus block size and quality for the 1PI orbital of CO.* # Closing remarks If you like the program, feel free to [star the repository](https://help.github.com/articles/about-stars/) on Github. Bug reports or other comments to improve the program are always highly appreciated. Please use the [Issues](https://github.com/ifilot/den2bin/issues) functionality of Github for that. I am also interested in hearing your experience with the program. Feel free to share in the [comments](http://www.ivofilot.nl/posts/view/35/Efficient+compression+of+CHGCAR+and+LOCPOT+files#comments) what you have used the program for and at what block size and quality settings you found the compression to be ideal for your line of work. # Bonus: more examples Below are some additional examples to illustrate the correlation between compression efficiency and data set accuracy. <img src="http://www.ivofilot.nl/images/29/chgcar.jpg" width="100%" class="img img-responsive"> __Figure 2__: *Compression factor versus block size and quality for a CHGCAR file.* <img src="http://www.ivofilot.nl/images/28/locpot.jpg" width="100%" class="img img-responsive"> __Figure 3__: *Compression factor versus block size and quality for a LOCPOT file.*
# How to build VASP 4.6.38 using the GNU compiler
This tutorial will explain how to install VASP 4.6.38 from scratch on a Linux system. For VASP 5, [please see this blogpost](http://www.ivofilot.nl/posts/view/28/How+to+build+VASP+5.3.5+using+the+GNU+compiler+on+Linux+Ubuntu).

# Step 0: Preparing

[VASP](http://www.vasp.at/) relies on a couple of libraries which you have to install. You can install these system-wide (assuming you have root privileges) or locally. For the sake of this tutorial, I will assume that you do not have root privileges and install everything locally. If you want to install things system-wide, just change the --prefix tags to something like /opt/software/version-x and perform the final make install or cp step as the root user. Furthermore, I assume you have a basic understanding of make and have basic software such as wget, tar, git and grep installed (most systems have these installed by default).

A couple of things before I start:

* I assume you have a valid VASP license and that you are using VASP according to that license.
* Although it is very hard to break a Linux system as a regular (i.e. non-administrative) user, people keep surprising me by actually doing so. So use this guide **at your own risk**. If you break something, you have to suffer the consequences. This is especially true if you are going to do things as a superuser.
* Last but not least: read this guide really carefully. If you cannot figure things out, read it again. If you run into something, remember that Google is your friend. If you are stuck at some point and you are absolutely 100% sure that you have done everything in your power to resolve the issue yourself, feel free to leave a comment. But no promises!
:-)

This tutorial proceeds in six steps:

* <a href="#step1">Building our own gcc (optional)</a>
* <a href="#step2">Building OpenMPI</a>
* <a href="#step3">Building OpenBLAS</a>
* <a href="#step4">Building ScaLAPACK</a>
* <a href="#step5">Building VASP</a>
* <a href="#step6">Troubleshooting</a>

If you run into problems, please have a look at the troubleshooting section. OK, let's start! Make sure you have a sufficient amount of space available in your /home folder and start by creating build and compile directories:

<pre>
<code class="bash">
mkdir -pv $HOME/vasp/{libraries,compilation,software}
INSTALL=$HOME/vasp
</code>
</pre>

<a name="step1"></a>
# Step 1: Install gcc-4.8.3 (optional)

In my [previous post](http://www.ivofilot.nl/posts/view/20/Optimizing+VASP), I mentioned that gcc-4.8.3 gave the best performance for my type of processor. Therefore, I would like to show here how to compile your own compiler. If you don't think this is necessary or if it sounds overcomplicated to you, feel free to skip this step. Start by grabbing the tarball of gcc-4.8.3.

<pre>
<code class="bash">
mkdir -pv $INSTALL/compilation/gcc
cd $INSTALL/compilation/gcc
wget ftp://ftp.nluug.nl/mirror/languages/gcc/releases/gcc-4.8.3/gcc-4.8.3.tar.bz2
tar -xvjf gcc-4.8.3.tar.bz2
</code>
</pre>

gcc-4.8.3 relies on a couple of other packages, which you can conveniently grab by executing the following commands:

<pre>
<code class="bash">
cd gcc-4.8.3
./contrib/download_prerequisites
</code>
</pre>

gcc needs to be compiled in a separate folder, so create a separate build folder (the name is arbitrary), configure, make and make install. I assume that you have at least a quad-core machine, so we are going to compile with five threads. Feel free to change the -j5 flag to something more appropriate for your machine.
<pre>
<code class="bash">
mkdir -v $INSTALL/compilation/gcc/build
cd $INSTALL/compilation/gcc/build
../gcc-4.8.3/configure --enable-languages=c,c++,fortran \
  --enable-shared --enable-linker-build-id --with-system-zlib \
  --without-included-gettext --enable-threads=posix \
  --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug \
  --enable-libstdcxx-time=yes --enable-gnu-unique-object \
  --enable-plugin --with-tune=generic --enable-checking=release \
  --disable-multilib --prefix=$INSTALL/software/gcc/4.8.3
make -j5
</code>
</pre>

To make sure that gcc-4.8.3 is going to be used for compiling the libraries and the VASP program, **prepend** the bin folder to the PATH like so:

<pre>
<code class="bash">
PATH=$INSTALL/software/gcc/4.8.3/bin:$PATH
</code>
</pre>

jhpowell_at_odu_dot_edu (see the comment below) suggested to also add the lib and lib64 folders of gcc-4.8.3 to the LD_LIBRARY_PATH variable, like so:

<pre>
<code class="bash">
LD_LIBRARY_PATH=$INSTALL/software/gcc/4.8.3/lib:$INSTALL/software/gcc/4.8.3/lib64:$LD_LIBRARY_PATH
</code>
</pre>

To check whether your gcc compiler is correctly added to the PATH, test the following:

<pre>
<code class="bash">
which gcc
which gfortran
</code>
</pre>

It should output something like:

<pre>
<code class="bash">
/home/user/vasp/software/gcc/4.8.3/bin/gcc
/home/user/vasp/software/gcc/4.8.3/bin/gfortran
</code>
</pre>

Finally, it is time to test your freshly built compiler by creating a very small program:

<pre>
<code class="bash">
cd $INSTALL/software
echo "int main(){return 0;}" > test.c
gcc test.c -o test
./test && echo 'Success!' || echo 'Something went wrong'
</code>
</pre>

If you get the 'Success!' statement, feel free to remove the files:

<pre>
<code class="bash">
rm -v test.c test
</code>
</pre>

<a name="step2"></a>
# Step 2: Install OpenMPI

Download the latest tarball of the stable release.
<pre>
<code class="bash">
cd $INSTALL/compilation
wget http://www.open-mpi.org/software/ompi/v1.8/downloads/openmpi-1.8.1.tar.bz2
tar -xvjf openmpi-1.8.1.tar.bz2
cd openmpi-1.8.1
./configure --prefix=$INSTALL/libraries/openmpi/1.8.1
</code>
</pre>

At this point, you probably want to make sure that the proper version of gcc is going to be used, especially if you have spent some effort and time building one yourself. To do so, check for the OPAL_CC_ABSOLUTE and OMPI_FC_ABSOLUTE patterns in config.log like so:

<pre>
<code class="bash">
grep -E "OPAL_CC_ABSOLUTE=|OMPI_FC_ABSOLUTE=" config.log
</code>
</pre>

which should give you something like:

<pre>
<code class="bash">
OMPI_FC_ABSOLUTE='/home/user/vasp/software/gcc/4.8.3/bin/gfortran'
OPAL_CC_ABSOLUTE='/home/user/vasp/software/gcc/4.8.3/bin/gcc'
</code>
</pre>

If everything looks OK, it is time to build and install the package:

<pre>
<code class="bash">
make -j5
make install -j5
</code>
</pre>

Testing OpenMPI is a bit more difficult than testing GCC; moreover, it really isn't that necessary. From personal experience, I have found that if OpenMPI compiles without any errors, it will run just fine.

<a name="step3"></a>
# Step 3: Install OpenBLAS

In my [previous post](http://www.ivofilot.nl/posts/view/20/Optimizing+VASP), I mentioned that OpenBLAS gave better performance than ATLAS, so we are going to install OpenBLAS. Grab the latest version from Github:

<pre>
<code class="bash">
cd $INSTALL/compilation
git clone https://github.com/xianyi/OpenBLAS.git --depth 1
cd OpenBLAS
</code>
</pre>

We should use the non-threaded variant of OpenBLAS, so build the library using:

<pre>
<code class="bash">
make USE_THREAD=0
</code>
</pre>

Finally, install the library:

<pre>
<code class="bash">
make PREFIX=$INSTALL/libraries/openblas/1.13 install
</code>
</pre>

<a name="step4"></a>
# Step 4: Install ScaLAPACK

Grab the package from the repository and extract it.
ScaLAPACK reads the compilation instructions from SLmake.inc, which does not exist yet. Copy SLmake.inc.example to SLmake.inc and refer to our previously built OpenBLAS library. In the listing below, we use sed to do this for us. By adding the bin folder of our OpenMPI installation to the PATH, our own mpicc compiler will be used for building ScaLAPACK. After properly configuring, we can use make to build the library.

<pre>
<code class="bash">
cd $INSTALL/compilation
wget http://www.netlib.org/scalapack/scalapack-2.0.2.tgz
tar -xvzf scalapack-2.0.2.tgz
cd scalapack-2.0.2
PATH=$PATH:$INSTALL/libraries/openmpi/1.8.1/bin
OPENBLASPATH=$INSTALL/libraries/openblas/1.13/lib
cp SLmake.inc.example SLmake.inc
sed -i "s,^BLASLIB.*$,BLASLIB = ${OPENBLASPATH}/libopenblas.a,g" SLmake.inc
sed -i "s,^LAPACKLIB.*$,LAPACKLIB = ${OPENBLASPATH}/libopenblas.a,g" SLmake.inc
make
</code>
</pre>

Now we copy the freshly built libscalapack.a file to our custom installation directory:

<pre>
<code class="bash">
mkdir -pv $INSTALL/libraries/scalapack/2.0.2/lib
cp -v libscalapack.a $INSTALL/libraries/scalapack/2.0.2/lib
</code>
</pre>

<a name="step5"></a>
# Step 5: Building VASP 4.6.38

Finally, we are ready to build VASP. VASP is built in two steps: first we build the VASP library, and then the VASP executable. Start by downloading the two corresponding TAR files from the VASP repository (or get them from your system administrator).

<pre>
<code class="bash">
mkdir -pv $INSTALL/compilation/vasp
# I assume that you have placed your vasp.4.6.tar.gz and vasp.4.lib.tar.gz files here
tar -xvzf vasp.4.6.tar.gz
tar -xvzf vasp.4.lib.tar.gz
cd vasp.4.lib
cp makefile.linux_gfortran Makefile
</code>
</pre>

At this point, make sure that which gfortran refers to the correct version of gfortran; that is, either the one you compiled previously, or your system's one!
If you have compiled gfortran yourself and which gfortran refers to the wrong version, execute the command below:

<pre>
<code class="bash">
PATH=$INSTALL/software/gcc/4.8.3/bin:$PATH
</code>
</pre>

If everything is set up correctly, you are ready to build the VASP library:

<pre>
<code class="bash">
make
</code>
</pre>

Now it is time to build the VASP executable. Start by going to the source folder, copy the example makefile.linux_gfortran to Makefile and edit this file:

<pre>
<code class="bash">
cd ../vasp.4.6
cp makefile.linux_gfortran Makefile
nano Makefile
</code>
</pre>

We have to change a couple of directives in this file. To start, we are going to link to OpenBLAS:

<pre>
<code class="makefile">
# Atlas based libraries
#ATLASHOME= /usr/lib/blas/threaded-atlas
# BLAS= -L/usr/lib/blas/atlas -lblas #!! this line has to be commented out
#BLAS= -L$(ATLASHOME) -lf77blas -latlas

# use specific libraries (default library path points to other libraries)
#BLAS= $(ATLASHOME)/libf77blas.a $(ATLASHOME)/libatlas.a

# use the mkl Intel libraries for p4 (www.intel.com)
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lpthread

# LAPACK, simplest use vasp.4.lib/lapack_double
#LAPACK= ../vasp.4.lib/lapack_double.o

# use atlas optimized part of lapack
#LAPACK= ../vasp.4.lib/lapack_atlas.o -llapack -lblas

# use the mkl Intel lapack
#LAPACK= -lmkl_lapack
#LAPACK= -L/usr/lib/lapack/atlas -llapack #!! this line has to be commented out

# Use our own OpenBLAS
LAPACK=/home/user/vasp/libraries/openblas/1.13/lib/libopenblas.a
</code>
</pre>

Take care to change user to the proper folder name! We are going to use MPI, so set the proper compiler below.
<pre>
<code class="makefile">
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following lines
#-----------------------------------------------------------------------
FC=mpif90
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf charge density reduced in Z direction
# wNGZhalf gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------
CPP = $(CPP_) -DMPI -DHOST=\"LinuxPgi\" \
  -Dkind8 -DNGZhalf -DCACHE_SIZE=2000 -DPGF90 -Davoidalloc -DRPROMU_DGEMV \
  -DscaLAPACK #!! note the addition here

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------
BLACS=/usr/local/BLACS_lam
SCA_= /usr/local/SCALAPACK_lam
SCA= $(SCA_)/scalapack_LINUX.a $(SCA_)/pblas_LINUX.a $(SCA_)/tools_LINUX.a \
  $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI$
SCA=/home/user/vasp/libraries/scalapack/2.0.2/lib/libscalapack.a

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------
LIB = -L../vasp.4.lib -ldmy \
  ../vasp.4.lib/linpack_double.o $(LAPACK) \
  $(SCA) $(BLAS)

# FFT: only option fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o
</code>
</pre>

Now we are ready to build VASP:

<pre>
<code class="bash">
make
</code>
</pre>

<a name="step6"></a>
# Step 6: Troubleshooting

If you get an error message such as:

<pre>
<code class="fortran">
base.f:1.1:
/* Copyright (C) 1991-2014 Free Software Foundation, Inc.
 1
Error: Invalid character in name at (1)
</code>
</pre>

then you have to remove the -C directive from the CPP line in the Makefile as shown below and run make again.

<pre>
<code class="makefile">
#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE 6.X, maybe some Red Hat distributions:
#!! I removed the -C directive in the line below
CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -traditional >$*$(SUFFIX)
</code>
</pre>

So far, I have only seen this error occur on Arch Linux, most likely because it is a very cutting-edge distro. For Debian and Ubuntu, you should be fine; for any other distro, I cannot say.

#### Update (18-sep-2014)

If you encounter the error below while compiling VASP:

<pre>
<code class="bash">
/opt/scalapack/2.0.2-gcc-4.8.3/lib/libscalapack.a(PB_Cztypeset.o): In function `PB_Cztypeset':
PB_Cztypeset.c:(.text+0x233): undefined reference to `zgeru_'
PB_Cztypeset.c:(.text+0x249): undefined reference to `zher_'
PB_Cztypeset.c:(.text+0x275): undefined reference to `zsymm_'
PB_Cztypeset.c:(.text+0x28b): undefined reference to `zsyrk_'
PB_Cztypeset.c:(.text+0x296): undefined reference to `zherk_'
PB_Cztypeset.c:(.text+0x2a1): undefined reference to `zsyr2k_'
/opt/scalapack/2.0.2-gcc-4.8.3/lib/libscalapack.a(zvvdotu.o): In function `zvvdotu_':
zvvdotu.f:(.text+0x2b): undefined reference to `zdotu_'
collect2: error: ld returned 1 exit status
Makefile:264: recipe for target 'vasp' failed
make: *** [vasp] Error 1
</code>
</pre>

then you have to change the order in which the libraries are linked, like so:

<pre>
<code class="makefile">
LIB = -L../vasp.4.lib -ldmy \
  ../vasp.4.lib/linpack_double.o \
  $(SCA) $(BLAS) $(LAPACK)
</code>
</pre>
# How to build VASP 5.3.5 using the GNU compiler on Linux Ubuntu 14.04 LTS

This tutorial will explain how to install VASP 5.3.5 on a Linux Ubuntu system. I used the latest version at hand (14.04 LTS) and started with a fresh install (in a VirtualBox environment). This tutorial is to a large extent based [on the previous work of Peter Larsson](https://www.nsc.liu.se/~pla/blog/2013/11/05/compile-vasp-on-ubuntu/). It also works for Debian 8; for Debian 7, you need to use different packages in step 1, but the same procedure holds. For installing VASP 4.6.38, you can refer to [my previous blogpost](http://www.ivofilot.nl/posts/view/21/How+to+build+VASP+4.6.38+using+the+GNU+compiler). In that post, I got some comments regarding how to install VASP 5; this post is therefore also (more or less) an answer to those questions.

This tutorial proceeds in 4 steps:

* <a href="#step1">Installing the necessary packages</a>
* <a href="#step2">Building OpenBLAS</a>
* <a href="#step3">Building ScaLAPACK</a>
* <a href="#step4">Building VASP</a>

<a name="step1"></a>
# Step 1: Installing the necessary packages

<pre>
<code class="bash">
sudo apt-get install build-essential gfortran libopenmpi-dev libopenmpi1.6 openmpi1.6 libfftw3-double3 libfftw3-single3 libfftw3-dev git
</code>
</pre>

<a name="step2"></a>
# Step 2: Building OpenBLAS

From here on, I will assume that everything is compiled in its own build directory. For convenience, let's declare a variable to hold this path.
<pre>
<code class="bash">
export BUILDPATH=$HOME/vaspbuild
mkdir $BUILDPATH
cd $BUILDPATH
</code>
</pre>

The latest version of OpenBLAS can be readily obtained via the command below:

<pre>
<code class="bash">
git clone https://github.com/xianyi/OpenBLAS.git --depth 1
</code>
</pre>

To compile OpenBLAS, run:

<pre>
<code class="bash">
cd $BUILDPATH/OpenBLAS
make FC=gfortran CC=gcc USE_THREAD=0
</code>
</pre>

This will generate a file called <b>libopenblas.a</b>. We will use this file for the compilation of ScaLAPACK as well as VASP.

<a name="step3"></a>
# Step 3: Building ScaLAPACK

<pre>
<code class="bash">
cd $BUILDPATH
wget http://www.netlib.org/scalapack/scalapack-2.0.2.tgz
tar -xvzf scalapack-2.0.2.tgz
cd $BUILDPATH/scalapack-2.0.2
cp SLmake.inc.example SLmake.inc
</code>
</pre>

We are going to make a few modifications to SLmake.inc in order to properly link everything to our version of OpenBLAS. To do so, let BLASLIB refer to our OpenBLAS library. Because OpenBLAS also contains LAPACK routines, let LAPACKLIB refer to BLASLIB.

<pre>
<code class="bash">
BLASLIB = -L/home/$USER/vaspbuild/OpenBLAS -lopenblas
LAPACKLIB = $(BLASLIB)
LIBS = $(LAPACKLIB) $(BLASLIB)
</code>
</pre>

Then, in order to compile everything, simply type:

<pre>
<code class="bash">
make
</code>
</pre>

This will generate a library file called <b>libscalapack.a</b>. Now we have everything to build VASP.

<a name="step4"></a>
# Step 4: Building VASP

I assume you have extracted the two VASP packages (vasp5.lib.tar.gz and vasp5.tar.gz) in the build folder. Compiling the library part of VASP is easy:

<pre>
<code class="bash">
cd $BUILDPATH/vasp.5.lib
cp Makefile.linux_gfortran Makefile
make
</code>
</pre>

Building the other part is somewhat more difficult and requires a bit of tweaking and fixing some things. Start by copying the template makefile:

<pre>
<code class="bash">
cd $BUILDPATH/vasp.5.3
cp Makefile.linux_gfortran Makefile
</code>
</pre>

Next, we are going to change some directives in the makefile.
Open the Makefile with your favorite text editor. First, remove the <i>-C</i> part from the CPP line, like so:

<pre>
<code class="makefile">
CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -traditional >$*$(SUFFIX)
</code>
</pre>

Next, we are going to comment out the last line of the FPP block:

<pre>
<code class="makefile">
# this release should be fpp clean
# we now recommend fpp as preprocessor
# if this fails go back to cpp
# CPP_=fpp -f_com=no -free -w0 $*.F $*$(SUFFIX)
</code>
</pre>

Scroll down to the FFLAGS directive and change it to:

<pre>
<code class="makefile">
FFLAGS =-ffree-form -ffree-line-length-0 
</code>
</pre>

<b>Note that there has to be a space after the 0!!</b>

Now scroll further down to the part mentioning <b>MPI section</b>. Uncomment the FC and FCL lines and change <i>mpif77</i> to <i>mpif90</i>, like so:

<pre>
<code class="makefile">
FC=mpif90
FCL=$(FC)
</code>
</pre>

Uncomment the whole block for CPP and add <i>-DMPI</i> and <i>-DscaLAPACK</i> to the block, like so:

<pre>
<code class="makefile">
CPP = $(CPP_) -DHOST=\"gfortran\" \
      -DCACHE_SIZE=4096 -DMINLOOP=1 -DNGZhalf \
      -DMPI_BLOCK=8000 -DMPI -DscaLAPACK
</code>
</pre>

Add our version of ScaLAPACK to the SCA variable and create references to our version of BLAS and LAPACK:

<pre>
<code class="makefile">
SCA=/home/$USER/vaspbuild/scalapack-2.0.2/libscalapack.a
BLAS=/home/$USER/vaspbuild/OpenBLAS/libopenblas.a
LAPACK=/home/$USER/vaspbuild/OpenBLAS/libopenblas.a
</code>
</pre>

Uncomment the lines for the LIB and the (second!) FFT3D variables in the <b>libraries for MPI</b> block. Refer to the fftw3 library of the OS.
(last part of the FFT3D line)

<pre>
<code class="makefile">
LIB = -fall-intrinsics \
      -L../vasp.5.lib -ldmy \
      ../vasp.5.lib/linpack_double.o $(LAPACK) \
      $(SCA) $(BLAS)

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
# FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o
# alternatively: fftw.3.1.X is slighly faster and should be used if available
FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o /usr/lib/x86_64-linux-gnu/libfftw3.a
</code>
</pre>

If you would now start to compile VASP by typing make, you would encounter an error like the following:

<pre>
<code class="bash">
us.f90:1403.10:

    USE us
          1
Error: 'setdij' of module 'us', imported at (1), is also the name of the current program unit
</code>
</pre>

The way to resolve this error is by changing the names, which we do by applying a patch to this file. You can download the patch file [here](http://www.ivofilot.nl/downloads/vasp535.patch). Just place it in the same directory as where the VASP sources reside and apply the patch by typing:

<pre>
<code class="bash">
wget http://www.ivofilot.nl/downloads/vasp535.patch
patch -p0 < vasp535.patch
</code>
</pre>

This will change the names automatically. You will receive a message such as:

<pre>
<code class="bash">
patching file us.F
</code>
</pre>

Upon typing make, the compilation will resume. The next error we are going to encounter is:

<pre>
<code class="bash">
fftmpiw.f90:70: Error: Can't open included file 'fftw3.f'
</code>
</pre>

The way to resolve this error is by copying our system's version of fftw3.f to the source folder, like so:

<pre>
<code class="bash">
cp /usr/include/fftw3.f .
</code>
</pre>

(Note that all .f files get removed after a make clean.) Now everything should compile perfectly. Type make again and the vasp executable will be generated.
# Visualising the electron density of the binding orbitals of the CO molecule using VASP

<div class="alert alert-info" role="alert"><i class="fa fa-github"></i> Looking for the Github repository of EDP?<br><i class="fa fa-link"></i> <a href="https://github.com/ifilot/edp">https://github.com/ifilot/edp</a></div>

# 1. Introduction

Let us start by stating what we would like to obtain. We wish to visualise the electron density of the molecular orbitals of CO using VASP. We wish to create a surface cut rather than a 3D image containing an isosurface, as the former more clearly conveys the electron density distribution. In this short tutorial, I will show you how to create the image below:

<img src="http://www.ivofilot.nl/images/19/CO_molecular_orbitals_plot.jpg" class="img img-responsive img-rounded">

<b>Figure 1:</b> <i>The electron density plots of the molecular orbitals of CO. The yellow line in the lower-left image is added to indicate the nodal plane.</i>

The molecular orbitals of CO are formed from the atomic orbitals of C and O and can be identified from the molecular orbital diagram of CO as shown below. Each of these orbitals has specific symmetry characteristics, and these characteristics give rise to the names of the molecular orbitals. The molecular orbital lowest in energy, which is a non-bonding molecular orbital that only contains core electrons, is termed the 1σ orbital. The σ denotes the symmetry characteristics of the orbital: σ-symmetry means that the orbital can be rotated arbitrarily around the bonding axis of C and O. Hence, there are an infinite number of mirror planes parallel to this rotation axis. The four molecular orbitals lowest in energy all have this particular symmetry.
The fifth molecular orbital, however, has π-symmetry and is hence denoted as the 1π molecular orbital. π-symmetric orbitals have a nodal plane on the bonding axis and hence only have a C<sub>2</sub> rotation operation. Consequently, they also have only two mirror planes parallel to this bonding axis. The electron density plots shown above are in agreement with these symmetry characteristics.

<img src="http://www.ivofilot.nl/images/18/COMOdiagram.jpg">

<b>Figure 2:</b> <i>The occupied orbitals of CO, constructed from the valence atomic orbitals of C and O, are in order of low to high energy: 3σ, 4σ, 1π and 5σ.</i>

# 2. VASP calculations

In order to construct the electron density, we first have to perform a series of VASP calculations:

* Perform a structure optimisation
* Perform a single-point calculation and calculate the DOS for the final state (printed in DOSCAR)
* Identify the molecular orbitals using the DOS plot (select the "energy range")
* Perform, for each identified molecular orbital, another calculation where the partial charge corresponding to the energy range is saved (printed in PARCHG)

If you know how to do this, skip to the next section, else continue reading.

## 2.1 POSCAR and INCAR files

Our POSCAR file looks as follows:

<pre>
<code class="text">
CO
  1.0000000000000000
   10.0000000000000000  0.0000000000000000  0.0000000000000000
    0.0000000000000000 10.0000000000000000  0.0000000000000000
    0.0000000000000000  0.0000000000000000 10.0000000000000000
  1 1
Direct
  0.5000000000000000 0.5000000000000000 0.500000
  0.5000000000000000 0.5000000000000000 0.617000
</code>
</pre>

and the INCAR file as:

<pre>
<code class="text">
SYSTEM = CO
LREAL = .FALSE.
LWAVE = .TRUE.
LCHARG = .TRUE.
PREC = normal
ENCUT = 400
ISMEAR = 0
SIGMA = 0.05
EDIFF = 1E-5
EDIFFG = -1E-4
NSW = 100
ISIF = 2
IBRION = 2
ISPIN = 1
POTIM = 0.5
IALGO = 38
NELMIN = 6
NELM = 40
EMIN = -30
EMAX = 15
NEDOS = 4500
NGX = 150
NGY = 150
NGZ = 150
</code>
</pre>

The KPOINTS file specifies the gamma point (1x1x1 <i>k</i>-points) and the POTCAR file corresponds to one with the potentials of C and O (in that order). I am not going to discuss all the directives in detail (the VASP manual is really good at that), but I will explain a couple of them. The INCAR file tells the program to optimise the geometry (IBRION = 2). The WAVECAR and CHGCAR files will be generated and a DOSCAR file will be made with a resolution of 4500 data points on the interval [-30 eV; 15 eV]. The mesh of the CHGCAR is 150x150x150 grid points.

After the optimisation run, change NSW to 0 and set ISTART = 1. Copy the CONTCAR to POSCAR and start the calculation again. (This way, we have the DOS of the final state and not an averaged DOS.) The generated DOSCAR file has 6 header lines and then 4500 lines containing the DOS and the integrated DOS as a function of the energy. To extract that part (ignoring the header lines) and save it as DOS_total, we run:

<pre>
<code class="bash">
sed -n '7,4506p' DOSCAR > DOS_total
</code>
</pre>

This file can be readily visualised using your favourite program (Excel, Origin, GnuPlot, your pick...). I chose Origin and made the following graph:

<img src="http://www.ivofilot.nl/images/16/CO_dos.jpg">

<b>Figure 3:</b> <i>The density of states (DOS) and integrated DOS of CO. Each of the peaks corresponds to a molecular orbital.</i>

In this graph, I have depicted the molecular orbitals 3σ, 4σ, 1π and 5σ.
The corresponding energy intervals (in eV) are roughly:

<table class="table table-striped">
<tr> <th>Orbital</th> <th>Lower bound</th> <th>Upper bound</th> </tr>
<tr> <th>3σ</th> <td>-30</td> <td>-27</td> </tr>
<tr> <th>4σ</th> <td>-15</td> <td>-13</td> </tr>
<tr> <th>1π</th> <td>-13</td> <td>-10</td> </tr>
<tr> <th>5σ</th> <td>-10</td> <td>-5</td> </tr>
</table>

For each of the energy intervals, we are going to run VASP again with the following example INCAR file. This INCAR file is for the energy interval [-30; -27.5], but you can choose any interval you like. The LPARD directive instructs the program to generate a PARCHG file of the partial charge corresponding to the wave functions between -30 and -27.5 eV.

<pre>
<code class="text">
SYSTEM = CO
LREAL = .FALSE.
LWAVE = .TRUE.
LCHARG = .TRUE.
PREC = medium
ENCUT = 400
ISMEAR = 0
SIGMA = 0.05
EDIFF = 1E-5
EDIFFG = -1E-4
NSW = 100
ISIF = 2
IBRION = 2
ISPIN = 1
POTIM = 0.5
IALGO = 38
NELMIN = 6
NELM = 40
EMIN = -30
EMAX = 15
NEDOS = 4500
NGX = 150
NGY = 150
NGZ = 150
LPARD = .TRUE.
EINT = -30 -27.5
</code>
</pre>

The resulting PARCHG files are stored separately and will be used for the visualisation in the next section.

# 3. Visualising electron density

To visualise the electron density, we are going to use the EDP program, which is freely available via [its Github repository](https://github.com/ifilot/edp). You can [compile the program yourself](https://github.com/ifilot/edp/blob/master/README.md#compilation) or [obtain it from the repository (if you are running Debian)](https://github.com/ifilot/edp/blob/master/README.md#obtaining-edp-from-the-repository).

## 3.1 Visualising electron density with EDP

In mathematical terms, the electron density is a scalar field: a scalar value that depends on the position in the unit cell. Although the electron density is a smooth function, it is stored at particular grid points in the CHGCAR and PARCHG files. The resolution of that grid can be set by specifying NGX, NGY and NGZ in the INCAR file.
The purpose of EDP is to plot the electron density on a so-called cutting plane; the end result is therefore a heat map (as shown above). In order to specify the cutting plane, the user is required to provide two vectors and one point. In our example, CO is placed along the z-axis at the centre of a cubic unit cell of 10x10x10 angstrom. The centre of the unit cell is at position (5.0, 5.0, 5.0). If we want to project the electron density on an (<i>xz</i>)-plane cutting through the point (5.0, 5.0, 5.0), then the command is:

<pre>
<code class="bash">
./bin/edp -i path/to/CHGCAR -v 1,0,0 -w 0,0,1 -p 5,5,5 -s 100 -o test.png
</code>
</pre>

The tags -v and -w specify the vectors and -p specifies the point. The -s tag sets the resolution of the final image; a value of 100 means that the final image has 100 px per angstrom. Finally, the -i and -o tags designate the input file (a CHGCAR or PARCHG file) and the output file (a .png image).

Maybe you are wondering how values are obtained at positions that do not lie exactly on the grid points specified in the CHGCAR file. Such values are computed using a trilinear interpolation scheme. A detailed description of that algorithm is given [here](http://en.wikipedia.org/wiki/Trilinear_interpolation).

## 3.2 Plotting the orbitals

Plotting the four binding orbitals has now become a simple matter of running the program for each of the PARCHG files corresponding to the molecular orbitals. To exemplify:

<pre>
<code class="bash">
./bin/edp -i path/to/PARCHG_3s -v 1,0,0 -w 0,0,1 -p 5,5,5 -s 100 -o 3s.png
./bin/edp -i path/to/PARCHG_4s -v 1,0,0 -w 0,0,1 -p 5,5,5 -s 100 -o 4s.png
./bin/edp -i path/to/PARCHG_1p -v 1,0,0 -w 0,0,1 -p 5,5,5 -s 100 -o 1p.png
./bin/edp -i path/to/PARCHG_5s -v 1,0,0 -w 0,0,1 -p 5,5,5 -s 100 -o 5s.png
</code>
</pre>

Finally, you can collect all images and create a nice compilation in your favourite photo-editing program (such as what I did in Figure 1).
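For the curious, trilinear interpolation is straightforward to sketch. The snippet below is a minimal illustration in plain NumPy (the `trilinear` helper is my own, not EDP's actual implementation): the value at a fractional grid position is a weighted average of the eight surrounding grid points.

```python
import numpy as np

def trilinear(grid, p):
    """Trilinearly interpolate a 3D scalar field at fractional index p."""
    x, y, z = p
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0
    value = 0.0
    # sum over the 8 corners of the enclosing grid cell, each weighted
    # by the product of the (1-d) or d factors along the three axes
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = (dx if i else 1 - dx) * \
                    (dy if j else 1 - dy) * \
                    (dz if k else 1 - dz)
                value += w * grid[x0 + i, y0 + j, z0 + k]
    return value

# for a field that is linear in x, y and z the interpolation is exact
xs, ys, zs = np.meshgrid(np.arange(4), np.arange(4), np.arange(4),
                         indexing="ij")
grid = 1.0 * xs + 2.0 * ys + 3.0 * zs
assert abs(trilinear(grid, (1.5, 0.25, 2.75)) - 10.25) < 1e-12
```

Because the weights along each axis sum to one, the interpolated field is continuous across cell boundaries, which is what makes the heat maps above look smooth even though the density is stored on a discrete 150x150x150 grid.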
## 3.3 Concluding remarks

Obviously, you can use the above procedure to visualise any kind of molecule. Especially simple molecules such as N<sub>2</sub> or H<sub>2</sub> are easy to visualise. Less obvious is that you can also use this procedure to visualise the molecular orbitals of a substrate adsorbed on a catalytic surface, or the electron density difference between CO in its molecular state and C and O in their atomic states. It all depends on the way the CHGCAR and PARCHG files are created.

I hope this tutorial was informative. If you have used this tutorial and found it useful for your work, please drop a line or send me an e-mail. Especially if you have made some nice pictures. :-)

# 4. Comments

[Bart Zijlstra](https://www.linkedin.com/pub/bart-zijlstra/74/682/29b/de) has made a very nice analysis of the results of the electron density visualisation routine as a function of the number of k-points and the smearing value. The result can be seen below. Increasing the number of k-points shows a more detailed result, yet there is no significant change in the characteristic features.

<img src="http://www.ivofilot.nl/images/20/elec_dens_analysis.jpg" class="img img-responsive img-rounded" />

# How to compile VASP 5.4.1 for Linux Debian using the GNU compiler

This tutorial will explain how to install VASP 5.4.1 on a Linux Debian system. The developers of VASP have completely remade the compilation procedure. Personally, I believe they have done a marvelous job; the new compilation system will make compiling VASP significantly easier.

<i class="fa fa-info-circle"></i> For installing VASP 4.6.38, you can refer to [my previous blogpost](http://www.ivofilot.nl/posts/view/21/How+to+build+VASP+4.6.38+using+the+GNU+compiler).
<i class="fa fa-info-circle"></i> If you are interested in compiling VASP 5.3.5 using the old procedure, have a look at [this tutorial](http://www.ivofilot.nl/posts/view/28/How+to+build+VASP+5.3.5+using+the+GNU+compiler+on+Linux+Ubuntu+14.04+LTS).

# Step 1: Acquiring the necessary packages

<pre>
<code class="bash">
sudo apt-get install build-essential libopenmpi-dev libfftw3-dev libblas-dev liblapack-dev libscalapack-mpi-dev libblacs-mpi-dev
</code>
</pre>

# Step 2: Extracting the VASP source tarball

Extract the VASP sources and go to the root directory of the VASP package.

<pre>
<code class="bash">
tar -xvzf vasp.5.4.1.tar.gz
cd vasp.5.4.1
</code>
</pre>

# Step 3: Setting up the build procedure

You need to copy the makefile corresponding to your build architecture. Here, I assume we are going to use the GNU compiler, so you need to copy the makefile for gfortran:

<pre>
<code class="bash">
cp -v arch/makefile.include.linux_gfortran makefile.include
</code>
</pre>

# Step 4: Configuring the makefile

You need to tweak some directives in makefile.include to point the compiler to the right libraries and header files.
For my Linux Debian system, I have used the following directives:

<pre>
<code class="makefile">
# Precompiler options
CPP_OPTIONS= -DMPI -DHOST=\"IFC91_ompi\" -DIFC \
             -DCACHE_SIZE=4000 -Davoidalloc \
             -DMPI_BLOCK=8000 -DscaLAPACK -Duse_collective \
             -DnoAugXCmeta -Duse_bse_te \
             -Duse_shmem -Dtbdyn

CPP        = gcc -E -P -C $*$(FUFFIX) >$*$(SUFFIX) $(CPP_OPTIONS)

FC         = mpif90
FCL        = mpif90

FREE       = -ffree-form -ffree-line-length-none

FFLAGS     =
OFLAG      = -O2
OFLAG_IN   = $(OFLAG)
DEBUG      = -O0

LIBDIR     = /usr/lib/x86_64-linux-gnu
BLAS       = -L$(LIBDIR) -lblas
LAPACK     = -L$(LIBDIR) -llapack
BLACS      = -L$(LIBDIR) -lblacs-openmpi -lblacsCinit-openmpi
SCALAPACK  = -L$(LIBDIR) -lscalapack-openmpi $(BLACS)

OBJECTS    = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o \
             /usr/lib/x86_64-linux-gnu/libfftw3.a
INCS       = -I/usr/include

LLIBS      = $(SCALAPACK) $(LAPACK) $(BLAS)

OBJECTS_O1 += fft3dfurth.o fftw3d.o fftmpi.o fftmpiw.o chi.o
OBJECTS_O2 += fft3dlib.o

# For what used to be vasp.5.lib
CPP_LIB    = $(CPP)
FC_LIB     = $(FC)
CC_LIB     = gcc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB   = $(FREE)
OBJECTS_LIB= linpack_double.o getshmem.o

# Normally no need to change this
SRCDIR     = ../../src
BINDIR     = ../../bin
</code>
</pre>

# Step 5: Compiling

You are now ready to compile VASP!

<pre>
<code class="bash">
make std
</code>
</pre>

# Troubleshooting

When compiling, I encountered the same error as for VASP 5.3.5, relating to us.f90:

<pre>
<code class="bash">
us.f90:1403.10:

      USE us
          1
Error: 'setdij' of module 'us', imported at (1), is also the name of the current program unit
</code>
</pre>

The way to resolve this error is by changing the conflicting names. We are going to do that by applying a patch to this file. You can download the patch file [here](http://www.ivofilot.nl/downloads/vasp535.patch). The patch was created for VASP 5.3.5, but works fine for VASP 5.4.1.
Just place it in the same directory as where the VASP sources reside (src) and apply the patch by typing:

<pre>
<code class="bash">
wget http://www.ivofilot.nl/downloads/vasp535.patch
patch -p0 < vasp535.patch
</code>
</pre>

This will change the names automatically. You will receive a message such as

<pre>
<code class="bash">
patching file us.F
</code>
</pre>

Upon typing make, the compilation will resume.
# How to compile VASP 5.4.1 on a MacBook running OS X El Capitan
This tutorial will explain how to install [VASP](https://www.vasp.at/) 5.4.1 on a MacBook. As a small warning, note that this procedure will take quite some time, so make sure you do not need your MacBook for a couple of hours.

# Step 0: Directories and settings

We are going to compile the source in the /Users/<username>/Downloads folder and install everything in /Users/<username>/vaspbuild. Note that you need to replace <username> with your current login (i.e. the name of your user account). Feel free to change any of these paths to suit your needs. Furthermore, in this tutorial, I will embed the [ScaLAPACK](http://www.netlib.org/scalapack/), [OpenBLAS](http://www.openblas.net/) and [FFTW](http://www.fftw.org/) libraries in the executable.

# Step 1: Installing the tool-chain

To install VASP, we are going to use [MacPorts](https://www.macports.org) for our tool-chain. First, install MacPorts. On their website, they provide a [set of instructions](https://www.macports.org/install.php). Thereafter, run the following command to install GCC 4.9 with gfortran

<pre>
<code class="bash">
sudo port install gcc49 +gfortran
</code>
</pre>

and install cmake as well, as we are going to use it for building ScaLAPACK.

<pre>
<code class="bash">
sudo port install cmake
</code>
</pre>

# Step 2: OpenMPI

Next, we are going to install OpenMPI. Go to [their website](https://www.open-mpi.org/software/ompi) and pick the latest version. At the time of this writing, this is version 1.10.3 and can be downloaded using [this link](https://www.open-mpi.org/software/ompi/v1.10/downloads/openmpi-1.10.3.tar.bz2).
Extract the archive

<pre>
<code class="bash">
tar -xvjf openmpi-1.10.3.tar.bz2
</code>
</pre>

go to the folder and configure it

<pre>
<code class="bash">
cd openmpi-1.10.3
./configure --prefix=/Users/<username>/vaspbuild/openmpi-1.10.3
</code>
</pre>

and compile and install it

<pre>
<code class="bash">
make -j5 && make -j5 install
</code>
</pre>

To use this version of OpenMPI for the compilation of VASP and ScaLAPACK, execute the following two commands

<pre>
<code class="bash">
export PATH=/Users/<username>/vaspbuild/openmpi-1.10.3/bin:$PATH
export LD_LIBRARY_PATH=/Users/<username>/vaspbuild/openmpi-1.10.3/lib:$LD_LIBRARY_PATH
</code>
</pre>

# Step 3: OpenBLAS

Go to the website of [OpenBLAS](http://www.openblas.net/) and download the latest version. At the time of this writing, this is version 0.2.18. Unzip the archive

<pre>
<code class="bash">
unzip OpenBLAS-0.2.18.zip
</code>
</pre>

and go to the folder

<pre>
<code class="bash">
cd OpenBLAS-0.2.18
</code>
</pre>

and compile the library with

<pre>
<code class="bash">
make USE_THREAD=0
</code>
</pre>

and install it

<pre>
<code class="bash">
make PREFIX=/Users/<username>/vaspbuild/openblas-0.2.18 install
</code>
</pre>

More information about compiling and installing OpenBLAS can be found [here](https://github.com/xianyi/OpenBLAS/wiki/User-Manual).

# Step 4: ScaLAPACK

<div class="alert alert-info alert-hover">Note: Make sure you have adjusted your $PATH and $LD_LIBRARY_PATH to use the version of OpenMPI we just built.</div>

Download the latest version of ScaLAPACK from [their website](http://www.netlib.org/scalapack/) and extract the tarball.
<pre>
<code class="bash">
tar -xvzf scalapack.tgz
</code>
</pre>

Create a separate build folder

<pre>
<code class="bash">
mkdir spbuild && cd spbuild
</code>
</pre>

and execute cmake

<pre>
<code class="bash">
cmake ../scalapack-2.0.2
</code>
</pre>

We need to make a few adjustments to CMakeCache.txt. On line 18, replace

<pre>
<code class="cmake">
BLAS_Accelerate_LIBRARY:FILEPATH=/System/Library/Frameworks/Accelerate.framework
</code>
</pre>

with

<pre>
<code class="cmake">
BLAS_Accelerate_LIBRARY:FILEPATH=/Users/<username>/vaspbuild/openblas-0.2.18/lib/libopenblas.dylib
</code>
</pre>

and likewise, around line 253, we have to replace

<pre>
<code class="cmake">
LAPACK_Accelerate_LIBRARY:FILEPATH=/System/Library/Frameworks/Accelerate.framework
</code>
</pre>

with

<pre>
<code class="cmake">
LAPACK_Accelerate_LIBRARY:FILEPATH=/Users/<username>/vaspbuild/openblas-0.2.18/lib/libopenblas.dylib
</code>
</pre>

Now we are ready to compile

<pre>
<code class="bash">
make -j5
</code>
</pre>

We manually install the library by copying it to an installation folder

<pre>
<code class="bash">
mkdir -pv /Users/<username>/vaspbuild/scalapack-2.0.2/lib
cp -v lib/libscalapack.a /Users/<username>/vaspbuild/scalapack-2.0.2/lib
</code>
</pre>

# Step 5: FFTW

The last library we need to compile is FFTW. The procedure is fairly similar to how we compiled OpenMPI. As usual, go to [the website of FFTW](http://www.fftw.org/download.html) and download the latest version. Extract the archive

<pre>
<code class="bash">
tar -xvzf fftw-3.3.4.tar.gz
</code>
</pre>

and go to the folder.
Therein, configure, build and install the library

<pre>
<code class="bash">
./configure --prefix=/Users/<username>/vaspbuild/fftw-3.3.4
make -j5 && make -j5 install
</code>
</pre>

# Step 6: VASP

<div class="alert alert-info alert-hover">Note: Make sure you have adjusted your $PATH and $LD_LIBRARY_PATH to use the version of OpenMPI we just built.</div>

Extract the VASP tarball, go to the source folder and copy the makefile.include for gfortran

<pre>
<code class="bash">
cp arch/makefile.include.linux_gfortran makefile.include
</code>
</pre>

We need to adjust part of the makefile.include so that it looks as follows:

<pre>
<code class="makefile">
BLAS       =
LAPACK     = /Users/<username>/vaspbuild/openblas-0.2.18/lib/libopenblas.a
BLACS      =
SCALAPACK  = /Users/<username>/vaspbuild/scalapack-2.0.2/lib/libscalapack.a

OBJECTS    = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o \
             /Users/<username>/vaspbuild/fftw-3.3.4/lib/libfftw3.a

INCS       = -I/Users/<username>/vaspbuild/fftw-3.3.4/include
</code>
</pre>

The first two references ensure that our recently built versions of OpenBLAS and ScaLAPACK are used to build VASP; the last few lines grab the header and library files from FFTW. To compile the standard (non-gamma-point) version, run

<pre>
<code class="bash">
make std
</code>
</pre>

For the gamma-point version, you need to run

<pre>
<code class="bash">
make gam
</code>
</pre>

The binary files are written to the bin folder. To check that our VASP executables depend only on OpenMPI, run otool -L bin/vasp_std and/or otool -L bin/vasp_gam.
For me, the result looks like this:

<pre>
<code class="bash">
My-MacBook-Air:vasp.5.4.1 ivofilot$ otool -L bin/vasp_std
bin/vasp_std:
    /Users/ivofilot/vaspbuild/openmpi-1.10.3/lib/libmpi_usempif08.11.dylib (compatibility version 13.0.0, current version 13.1.0)
    /Users/ivofilot/vaspbuild/openmpi-1.10.3/lib/libmpi_mpifh.12.dylib (compatibility version 13.0.0, current version 13.1.0)
    /Users/ivofilot/vaspbuild/openmpi-1.10.3/lib/libmpi.12.dylib (compatibility version 13.0.0, current version 13.3.0)
    /opt/local/lib/libgcc/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1226.10.1)
    /opt/local/lib/libgcc/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
    /opt/local/lib/libgcc/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
</code>
</pre>

Let's place the VASP executables in the vaspbuild folder as well

<pre>
<code class="bash">
cp -v bin/vasp_* /Users/<username>/vaspbuild/
</code>
</pre>

In order to run and test VASP, create a separate folder wherein you place the INCAR, POTCAR, POSCAR and KPOINTS files. From within that folder, you run VASP (in this example using four MPI processes) like so

<pre>
<code class="bash">
mpirun -np 4 /Users/<username>/vaspbuild/vasp_std
</code>
</pre>

# How to build VASP 4.6.38 using the Intel compiler

# Introduction

In [my previous post](http://www.ivofilot.nl/posts/view/21/How+to+build+VASP+4.6.38+using+the+GNU+compiler), I explained how to build VASP using the GNU compilers. Here, I will show you how you can build VASP using the Intel compilers. In principle, the Intel compilers should give better performance for VASP on Intel processors.

# Step 0: Install the Intel Compilers

Install the Intel Compiler Suite in /opt/intel (this is the default setting). Please refer to the installation instructions given by Intel.
# Step 1: Load the Intel Compilers

<pre>
<code class="bash">
source /opt/intel/composer_xe_2013_sp1/bin/compilervars.sh intel64
</code>
</pre>

# Step 2: Compile the Intel FFTW wrappers

<pre>
<code class="bash">
cd /opt/intel/composer_xe_2013_sp1/mkl/interfaces/fftw3xf
make libintel64
</code>
</pre>

# Step 3: Compile the VASP libraries

<pre>
<code class="bash">
tar -xvzf vasp.4.lib.tar.gz
cd vasp.4.lib
cp makefile.linux_ifc_P4 Makefile
</code>
</pre>

and change the file so that FC links to ifort

<pre>
<code class="makefile">
.SUFFIXES: .inc .f .F
#-----------------------------------------------------------------------
# Makefile for Portland Group F90/HPF compiler
# the makefile was tested only under Linux on Intel platforms
# however it might work on other platforms as well
#
# this release of vasp.4.lib contains lapack v2.0
# this can be compiled with pgf90 compiler if the option -O1 is used
#
# Mind: one user reported that he had to copy preclib.F diolib.F
# dlexlib.F and drdatab.F to the directory vasp.4.4, compile the files
# there and link them directly into vasp
# for no obvious reason these files could not be linked from the library
#
#-----------------------------------------------------------------------
# C-preprocessor
CPP     = gcc -E -P -C $*.F >$*.f
FC      = ifort
CFLAGS  = -O
FFLAGS  = -O0 -FI
FREE    = -FR

DOBJ = preclib.o timing_.o derrf_.o dclock_.o diolib.o dlexlib.o drdatab.o
</code>
</pre>

and type

<pre>
<code class="bash">
make
</code>
</pre>

# Step 4: Compile VASP

<pre>
<code class="bash">
source /opt/intel/impi/4.1.1.036/bin64/mpivars.sh
tar -xvzf vasp.4.6.tar.gz
cd vasp.4.6
cp makefile.linux_ifc_P4 Makefile
</code>
</pre>

Alter the Makefile on the following points and finally run make

<pre>
<code class="bash">
make
</code>
</pre>

When you get this error

<pre>
<code class="bash">
mpif77 -fc=ifort -FR -names lowercase -assume byterecl -O3 -xAVX -c fftmpiw.f90
fftmpiw.f90(55): error #5102: Cannot open include file 'fftw3.f'
      include 'fftw3.f'
--------------^
fftmpiw.f90(89): error #5102: Cannot open include file 'fftw3.f'
      include 'fftw3.f'
--------------^
fftmpiw.f90(110): error #5102: Cannot open include file 'fftw3.f'
      include 'fftw3.f'
--------------^
fftmpiw.f90(220): error #5102: Cannot open include file 'fftw3.f'
      include 'fftw3.f'
--------------^
compilation aborted for fftmpiw.f90 (code 1)
make: *** [fftmpiw.o] Error 1
</code>
</pre>

copy the header file

<pre>
<code class="bash">
cp /opt/intel/mkl/include/fftw/fftw3.f .
</code>
</pre>

and run make again

<pre>
<code class="bash">
make
</code>
</pre>

# Step 5: Testing our compilation and comparing against the GNU compilation

We have used three different models to check the computational efficiency of the Intel compilations versus that of the GNU compilation:

* A single CO molecule in a large box (gamma-point calculation)
* A bulk Cu system
* A Fe<sub>5</sub>C<sub>2</sub> (Hägg carbide) system

The timings are as follows:

<table class="table">
<thead>
<th>System</th>
<th>gcc-4.8.4 VASP 4.6.38</th>
<th>Intel VASP 4.6.38</th>
</thead>
<tr>
<td>CO</td>
<td>240.483</td>
<td>240.683</td>
</tr>
<tr>
<td>Cu</td>
<td>54.623</td>
<td>34.994</td>
</tr>
<tr>
<td>Fe<sub>5</sub>C<sub>2</sub></td>
<td>907.617</td>
<td>656.705</td>
</tr>
</table>

Interestingly, the Intel compiler does not give a significantly different result for the gamma-point calculation (CO molecule). However, the speed improvement for the other calculations is **roughly 30%**!
# Efficient compression of CHGCAR and LOCPOT files

<div class="alert alert-info" role="alert"><i class="fa fa-github"></i> Looking for the Github repository of den2bin?<br><i class="fa fa-link"></i> <a href="https://github.com/ifilot/den2bin">https://github.com/ifilot/den2bin</a></div>

# Introduction

I was recently involved in a research endeavor where we had to generate and analyze a substantial number of [CHGCAR](https://cms.mpi.univie.ac.at/vasp/vasp/CHGCAR_file.html) and [LOCPOT](https://cms.mpi.univie.ac.at/vasp/vasp/LOCPOT_file.html) files. Each of these is relatively big (~150MB) and they were consuming a lot of free space on our hard drives. Since we employ periodic back-ups of our data, you can imagine that things started to pile up and clutter. We looked into [BZIP2](http://www.bzip.org/) and [ZIP](https://en.wikipedia.org/wiki/Zip_(file_format)) to reduce the size, but found out early on that we only obtained [compression ratios](https://en.wikipedia.org/wiki/Data_compression_ratio) of about 3 (i.e. the zip archive is only three times smaller than the original file). In this blogpost, I will show you the steps we took to get improved compression ratios. Using [lossy compression](https://en.wikipedia.org/wiki/Lossy_compression), we were able to get compression ratios of over 30 with decent accuracy with respect to the original data.

# From human-readable to binary

As mentioned in the intro, we did not gain much by simply storing the CHGCAR or LOCPOT files in an archive. Since these files are human-readable, we can easily get better compression results by storing the values in binary format.
If we for instance consider the following lines inside the CHGCAR file:

<pre>
<code class="bash">
  112  128  288
 0.46419088767E-03 0.22525899949E-02 0.39391537405E-03 -.28562888829E-02 0.43623947025E-02
 0.50580398663E-02 0.31440260409E-02 -.60090684201E-04 0.66497361263E-03 0.25302278248E-02
 0.52120528002E-02 0.50812471621E-02 -.19041084105E-02 -.10359924871E-02 0.23382124189E-02
</code>
</pre>

we can see that each line contains 5 floating-point values. Each of these values takes about 18 bytes of storage as text. If we store such a number as a (4-byte) float instead, it takes up only 4 bytes, a reduction of almost a factor of 5. In turn, if we do this for all the numbers inside the CHGCAR file and then compress the binary file, we obtain compression ratios of ~10. But can we do better?

# Discrete Cosine Transform

So far, we have only considered storing the density files in a lossless format. This means that we do not lose any information upon compression and decompression of the data. The alternative is to store the data in a lossy fashion, wherein the decompressed data is no longer identical to the original. [JPEG](https://en.wikipedia.org/wiki/JPEG) is an example of such a lossy compression technique. The underlying algorithm behind JPEG is the [Discrete Cosine Transform (DCT)](https://en.wikipedia.org/wiki/Discrete_cosine_transform), which is really well explained in this [Computerphile YouTube video](https://www.youtube.com/watch?v=Q2aEzeMDHMA). What the DCT does is express a finite sequence of data points as a sum of cosine functions oscillating at different frequencies. For example, if we have a set of 64 data points, then the DCT gives us a set of 64 coefficients corresponding to cosines at different frequencies. The DCT can be performed in one dimension, in two dimensions in the case of images, in three dimensions in our case, and in theory even beyond.
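The text-to-binary step described earlier can be sketched with Python's struct module. This is only an illustration of the idea (the actual den2bin file format is more elaborate): packing the values as 4-byte IEEE-754 floats already yields roughly the factor ~5 mentioned above.

```python
import struct

# three CHGCAR-style values; as formatted text, each takes ~18 bytes
values = [0.46419088767e-03, 0.22525899949e-02, -0.28562888829e-02]
text = " ".join("%.11E" % v for v in values)

# pack as IEEE-754 single-precision floats: 4 bytes per value
binary = struct.pack("%df" % len(values), *values)

ratio = len(text.encode()) / len(binary)  # roughly the factor 5 from above
```

Note that single precision itself is already slightly lossy (a float carries ~7 significant digits, versus the 11 printed in the CHGCAR), which is typically more than accurate enough for these density files.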
The DCT itself is lossless; that is, from the set of coefficients the original data set can be recovered using an inverse discrete cosine transformation. The way we can employ this algorithm to obtain increased compression is to drop the higher-order frequencies in the DCT, similarly to how high frequencies are dropped in an [MP3](https://en.wikipedia.org/wiki/MP3). Let's exemplify this. Assume that we have a 3D dataset of 2x2x2 grid points (8 grid points in total) with the values

<pre>
<code class="bash">
d = 0.84
d = 0.39
d = 0.78
d = 0.80
d = 0.91
d = 0.20
d = 0.34
d = 0.77
</code>
</pre>

The discrete cosine transformation in three dimensions of this dataset gives the following coefficients:

<pre>
<code class="bash">
c = 0.63
c = 0.13
c = -0.06
c = 0.40
c = 0.11
c = 0.04
c = -0.09
c = -0.24
</code>
</pre>

Let us now set the last coefficient to zero and perform an inverse DCT. We obtain the following values:

<pre>
<code class="bash">
d = 0.93
d = 0.31
d = 0.70
d = 0.88
d = 0.83
d = 0.28
d = 0.42
d = 0.68
</code>
</pre>

Although these values are off, they qualitatively resemble the original data. This example illustrates the trade-off: we are able to increase the compression performance, but we have to pay in terms of accuracy.

# The mathematics

Given a cubic block of NxNxN grid points, each DCT coefficient is calculated using the formula below

$$c\_{i,j,k} = s\_{i,j,k} \sum\_{l=0}^{N-1} \sum\_{m=0}^{N-1} \sum\_{n=0}^{N-1} d\_{l,m,n} \cdot \cos \left( \frac{\pi}{N} (i + \frac{1}{2}) l \right) \cdot \cos \left( \frac{\pi}{N} (j + \frac{1}{2}) m \right) \cdot \cos \left( \frac{\pi}{N} (k + \frac{1}{2}) n \right)$$

wherein $s\_{i,j,k}$ is a scaling constant and $d\_{l,m,n}$ is the value at grid point (l,m,n). Thus, for each coefficient (of which there are $N^{3}$), we have to loop over all the $N^{3}$ data points.
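The 2x2x2 example above can be reproduced with scipy. Note that this is a sketch: scipy's `dctn` with `norm='ortho'` uses a different scaling convention than the formulas in this post, so the coefficient values differ by constant factors, but the effect of dropping the highest-frequency coefficient is the same.

```python
import numpy as np
from scipy.fft import dctn, idctn

# the 2x2x2 data block from the example above
d = np.array([0.84, 0.39, 0.78, 0.80,
              0.91, 0.20, 0.34, 0.77]).reshape(2, 2, 2)

# forward 3D DCT (orthonormal scaling)
c = dctn(d, norm='ortho')

# drop the highest-frequency coefficient and transform back
c[1, 1, 1] = 0.0
d_approx = idctn(c, norm='ortho')

# each reconstructed value now deviates from the original by ~0.08,
# comparable to the deviations in the example above
```

Without the truncation, `idctn(dctn(d))` recovers the data exactly, which is the losslessness of the DCT referred to above.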
The scaling constant is given by

$$s\_{i,j,k} = s\_{i} \cdot s\_{j} \cdot s\_{k}$$

where

$$s\_{i} = \begin{cases} \frac{1}{N},& \text{if } i = 0 \\ \frac{2}{N},& \text{otherwise} \end{cases} .$$

This scaling is in principle arbitrary, but we introduce it to ensure that the inverse discrete cosine transformation is given by

$$d\_{i,j,k} = \sum\_{l=0}^{N-1} \sum\_{m=0}^{N-1} \sum\_{n=0}^{N-1} c\_{l,m,n} \cdot \cos \left( \frac{\pi}{N} (l + \frac{1}{2}) i \right) \cdot \cos \left( \frac{\pi}{N} (m + \frac{1}{2}) j \right) \cdot \cos \left( \frac{\pi}{N} (n + \frac{1}{2}) k \right) .$$

# Block size and quality

The above DCT transformation can be done for any size $N$, but it turns out that performing the transformation on typical data sets of 300x300x300 grid points is very expensive. So instead of calculating the DCT for the whole grid, we divide the grid into smaller chunks. This strategy is similar to that employed in JPEG compression, but instead of 2D chunks, we have 3D chunks. These chunks are called blocks, and for CHGCAR and LOCPOT files, blocks of 4x4x4 turn out to give the best results (see below for the results). Each (4x4x4) block thus contains 64 values and the corresponding DCT contains 64 coefficients. We can now drop a number of coefficients corresponding to the higher frequencies of the DCT. To formalize this, let us introduce the concept of a quality value. A quality value of q means that all coefficients corresponding to those triplets of cosine frequencies for which i+j+k <= q holds are kept, and all other coefficients are dropped. This means that quality values are not transferable between block sizes, as the block size determines the maximum values of i, j and k.

# The algorithm and the code

The whole procedure can be summarized in the following six steps:

1. Read in the CHGCAR / LOCPOT
2. Divide the large grid into smaller chunks
3. Perform the DCT on each of these chunks
4.
Drop the higher order coefficients based on the quality parameter
5. Store the remaining coefficients in binary format
6. Compress the final result

I have written a small C++ program called [den2bin](https://github.com/ifilot/den2bin) that allows you to perform compression and decompression of CHGCAR and LOCPOT files. The program depends on the following development packages: [boost](http://www.boost.org/), [bz2](http://www.bzip.org/), [glm](https://glm.g-truc.net/) and [tclap](http://tclap.sourceforge.net/), which are typically readily available from your package manager. Also, you need to have CMake and the GNU compiler installed. On Ubuntu or Debian, you can get these as follows:

<pre>
<code class="bash">
sudo apt-get install build-essential cmake libboost-all-dev libbz2-dev libglm-dev libtclap-dev
</code>
</pre>

To compile the code, simply clone the repository and compile it using CMake:

<pre>
<code class="bash">
git clone https://github.com/ifilot/den2bin
cd den2bin
mkdir build
cd build
cmake ../src
make -j5
</code>
</pre>

The code checks whether your compiler supports [OpenMP](https://en.wikipedia.org/wiki/OpenMP) for shared-memory multiprocessing. If so, the program automatically makes use of all the available cores in your machine. Use the following commands to compress and decompress using the DCT:

<pre>
<code class="bash">
# to pack (note the "-l")
./den2bin -i *DENSITY_FILENAME* -o *BINARY_FILENAME* -m *MESSAGE_HEADER* -l -b *BLOCKSIZE* -q *QUALITY*

# to unpack (same as for the lossless version)
./den2bin -x -i *BINARY_FILENAME* -o *DENSITY_FILENAME*
</code>
</pre>

In the [README.md in the Github repository](https://github.com/ifilot/den2bin/blob/master/README.md), detailed information about all the program options and directives is provided.
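Steps 2-4 of the procedure above can be sketched as follows. This is a toy version, not den2bin's actual implementation: scipy's orthonormal DCT scaling differs from the formulas above, the grid is assumed to divide evenly into blocks, and the `i + j + k <= q` cutoff is taken literally from the text above.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_blocks(grid, bs=4, q=4):
    """Lossy-compress a 3D grid: per bs^3 block, take the DCT and keep
    only coefficients with i + j + k <= q. Returns the reconstructed
    (approximate) grid, i.e. what a decompressor would produce."""
    i, j, k = np.indices((bs, bs, bs))
    mask = (i + j + k) <= q          # which coefficients survive
    out = np.empty_like(grid)
    nx, ny, nz = grid.shape          # assumed divisible by bs
    for x in range(0, nx, bs):
        for y in range(0, ny, bs):
            for z in range(0, nz, bs):
                block = grid[x:x+bs, y:y+bs, z:z+bs]
                c = dctn(block, norm='ortho') * mask   # drop high frequencies
                out[x:x+bs, y:y+bs, z:z+bs] = idctn(c, norm='ortho')
    return out
```

In a real compressor, only the `mask.sum()` surviving coefficients per block would be written to disk (step 5) and the resulting stream compressed (step 6); here we immediately invert the DCT to inspect the reconstruction error.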
If you do not like to compile the program yourself, there is also the option to download the binary from the repository [by following these instructions](https://github.com/ifilot/den2bin/blob/master/README.md#obtaining-den2bin-from-the-repository).

# Results

To get an idea of the capabilities of our tool, I have compressed a CHGCAR file corresponding to the 1π orbital of CO, based on my [previous blog post on visualizing the electron density of the binding orbitals of CO](http://www.ivofilot.nl/posts/view/27/Visualising+the+electron+density+of+the+binding+orbitals+of+the+CO+molecule+using+VASP). We are going to compare the compression ratios and the data loss (purely qualitatively, by looking at a contour plot) as a function of block size and quality. In this comparison, we have used cubic blocks of 2x2x2, 4x4x4, 8x8x8 and 16x16x16 grid points. The results are visualized in Figure 1. A high-resolution PNG version of this image [can be found here](http://www.ivofilot.nl/images/33/1pi.png). From Figure 1, it can be seen that when using a quality of 1 (i.e. only a single DCT coefficient), we obtain a kind of down-scaling of the image, as a single DCT coefficient only contains the average value of the whole data block. Increasing the quality factor decreases the compression ratio but increases the data quality (and hence the fidelity of the contour plot). Similar to JPEG, there is an optimum for the block size. Whereas this optimum is [8x8 for JPEG](https://stackoverflow.com/questions/10780425/why-jpeg-compression-processes-image-by-8x8-blocks), I would say the sweet spot for VASP density files is a block size of 4x4x4 and a quality of 4. Nevertheless, do not take my word for it but test it for yourself.
<img src="http://www.ivofilot.nl/images/32/1pi.jpg" width="100%" class="img img-responsive">

__Figure 1__: *Compression factor versus block size and quality for the 1π orbital of CO.*

# Closing remarks

If you like the program, feel free to [star the repository](https://help.github.com/articles/about-stars/) on Github. Bug reports or other comments to improve the program are always highly appreciated. Please use the [Issues](https://github.com/ifilot/den2bin/issues) functionality of Github for that. I am also interested in hearing about your experience with the program. Feel free to share in the [comments](http://www.ivofilot.nl/posts/view/35/Efficient+compression+of+CHGCAR+and+LOCPOT+files#comments) what you have used the program for and at which block size and quality settings you found the compression to be ideal for your line of work.

# Bonus: more examples

Below are some additional examples to illustrate the correlation between compression efficiency and data set accuracy.

<img src="http://www.ivofilot.nl/images/29/chgcar.jpg" width="100%" class="img img-responsive">

__Figure 2__: *Compression factor versus block size and quality for a CHGCAR file.*

<img src="http://www.ivofilot.nl/images/28/locpot.jpg" width="100%" class="img img-responsive">

__Figure 3__: *Compression factor versus block size and quality for a LOCPOT file.*

# Animated gifs of molecular orbitals of CO

# Introduction

The electron density of a molecular orbital is basically a three-dimensional scalar field which can be visualized using both [contour plots](http://www.itl.nist.gov/div898/handbook/eda/section3/contour.htm) as well as [isosurfaces](https://en.wikipedia.org/wiki/Isosurface). Typically, we see that isosurfaces are being employed, as they more clearly show the three-dimensional character of the molecular orbital.
However, such an isosurface is nothing more than the [set of points in 3D space which have the same value](http://paulbourke.net/geometry/polygonise/). Thus, in some sense, there is a loss of information, especially if there are interesting features within the three-dimensional object spanned by the isosurface.

# Animated gifs

A very simple way to resolve this is to use contour plots. As mentioned, the downside of contour plots is that we are no longer able to effectively show the three-dimensional character, as we are simply [projecting planes within the scalar field](http://paulbourke.net/papers/conrec/) and colorizing each point on the plane based on the value of that point in the scalar field. However, we can generate a series of contour plots for a set of relevant planes within the three-dimensional space. Using the [previously demonstrated EDP program](http://www.ivofilot.nl/posts/view/27/Visualising+the+electron+density+of+the+binding+orbitals+of+the+CO+molecule+using+VASP), we are able to efficiently create these contour plots from the command line. In turn, using [Imagemagick](https://www.imagemagick.org) and a small Python script as shown below, we are able to construct [animated gifs](https://en.wikipedia.org/wiki/GIF) that loop over the generated set of contour plots.
<pre>
<code class="python">
#!/usr/bin/env python

from subprocess import call
import numpy

# input file identifier
filestr = "CO_5SIGMA"

# output file name
output = "co_5sigma"

# generate slices and create contour plot
for z in numpy.linspace(2.0, 8.0, 50):
    filename = "%s_%f.png" % (filestr, z)
    call(["/home/ivo/Documents/PROGRAMMING/CPP/edp/build/edp",
          "-i", "/scratch/ivo/vasp/CO/PARCHG_5SIGMA",
          "-s", "100",
          "-v", "1,0,0",
          "-w", "0,0,1",
          "-p", "0,%f,0" % z,
          "-o", filename])
    call(["convert", filename, "-resize", "25%", filename.replace(".png", ".gif")])

# combine everything inside a single animated gif
call(["convert", "-delay", "0", "-loop", "0", "%s*.gif" % filestr, "%s.gif" % output])
</code>
</pre>

# Showcase

As usual, we use the electron density of the binding molecular orbitals of CO to demonstrate the tool. Below, you can see that the animated gifs loop over a set of contour planes. One of the things that are now easily visible are the regions of near-zero electron density at the center of the 1π and 5σ orbitals, which would normally not be visible when using isosurfaces.

<table class="table">
<tr>
<td>
<b>3σ</b><br>
<img src="http://www.ivofilot.nl/images/38/co_3sigma.gif">
</td>
<td>
<b>4σ</b><br>
<img src="http://www.ivofilot.nl/images/39/co_4sigma.gif">
</td>
</tr>
<tr>
<td>
<b>1π</b><br>
<img src="http://www.ivofilot.nl/images/40/co_1pi.gif">
</td>
<td>
<b>5σ</b><br>
<img src="http://www.ivofilot.nl/images/41/co_5sigma.gif">
</td>
</tr>
</table>

What other molecular orbitals would you like to see? Share your thoughts in the comments!
# Introduction to Electronic Structure Calculations: The variational principle

## Introduction

This post is the first part in an upcoming series explaining how to perform electronic structure calculations. The focus of this series is how to set up an electronic structure calculation from scratch. We are not going to use any of the available quantum chemical codes, but write our very own code. This way, we will obtain a better understanding of how quantum chemical calculations are performed. The way I am going to present the topic is by first providing the underlying theory and then further explaining the theory through a set of examples.

The outline of this series is as follows: We start off by explaining some basics such as the (linear) variational principle, the Hartree product and the Slater determinant. From there on, we derive the Hartree-Fock equations and show how to set up a simple calculation for the H<sub>2</sub> molecule. Next, we explain about basis sets, Gaussian integrals and the Hartree-Fock self-consistent field procedure. Finally, we talk about atomic and molecular orbitals, the construction of MO diagrams and the visualization of orbitals.

For this series, I am going to assume that you have a basic understanding of quantum mechanics. You know what a wave function is and its statistical interpretation. Furthermore, I assume that you have passed introductory courses in calculus and linear algebra.

## The variational principle

Given a normalized wave function $\lvert\phi\rangle$ that satisfies the appropriate boundary conditions, the expectation value of the Hamiltonian is an upper bound to the exact ground state energy. In other words, the lowest energy we can find gives us the best wave function.
The above integral can be divided into two separate integrals of the following forms $$I_1 = - \left( \frac{4 \pi N^{2}}{2} \right) \int_0^\infty r^2 \exp \left( -\alpha r^2 \right) \left[ r^{-2} \frac{\textrm{d}}{\textrm{d} r} \left( r^{2} \frac{\textrm{d}}{\textrm{d} r} \right) \exp \left( -\alpha r^2 \right) \right] \textrm{d}r$$ and $$I_2 = - 4 \pi N^{2} \int_0^\infty r^2 \exp \left( -\alpha r^2 \right) \frac{1}{r} \exp \left( -\alpha r^2 \right) \textrm{d}r .$$ Since the wave function has no angular dependencies, the azimuthal ($\theta$) and polar ($\varphi$) parts have already been integrated out in the above equations; together they give the factor $4\pi$. To solve the first integral ($I_{1}$), we need to apply the Laplacian. This gives $$I_1 = - \left( \frac{4 \pi N^{2}}{2} \right) \int_0^\infty r^2 \exp \left( -\alpha r^2 \right) \left[ 4 \alpha^{2} r^{2} - 6 \alpha \right] \exp \left( -\alpha r^2 \right) \textrm{d}r .$$ Slightly rearranging this equation gives us $$I_1 = - 2 \pi N^{2} \int_0^\infty \left( 4 \alpha^{2}r^{4} - 6 \alpha r^{2} \right) \exp \left( -2 \alpha r^2 \right) \textrm{d}r .$$ This integral can be solved by performing <a href="https://en.wikipedia.org/wiki/Integration_by_parts">integration by parts</a>, but here we will just use the following general formulas: $$\int_{0}^{\infty} \textrm{d}r \, r^{2m} \exp \left(-\alpha r^{2} \right) = \frac{(2m)!\pi^{1/2}}{2^{2m+1}m!\alpha^{m+1/2}}$$ and $$\int_{0}^{\infty} \textrm{d}r \, r^{2m+1} \exp \left(-\alpha r^{2} \right) = \frac{m!}{2\alpha^{m+1}} .$$ Applying these formulas for $I_1$ yields $$I_{1} = -2 \pi N^{2} \left[ 4 \alpha^{2} \frac{4! \pi^{1/2}}{2^5 2! (2 \alpha)^{5/2}} - 6 \alpha \frac{2!\pi^{1/2}}{2^3 1! (2\alpha)^{3/2}} \right] = N^{2} \frac{3 \pi^{3/2}} {4 \sqrt{2\alpha}}$$ and for $I_{2}$: $$I_{2} = -4 \pi N^{2} \int_{0}^{\infty} r \exp \left( -2 \alpha r^{2} \right) \textrm{d}r = -4 \pi N^{2} \frac{1}{4 \alpha} = -\frac{\pi}{\alpha} N^{2} .$$ Summing these two parts gives $$\langle \phi \lvert \hat{H} \rvert \phi \rangle = I_1 + I_2 = N^2 \left( \frac{3 \pi ^{3/2}}{4 \sqrt{2 \alpha}} - \frac{\pi}{\alpha} \right) .$$ Using the same procedure, we can also evaluate the integral $\langle \phi \rvert \phi \rangle$: $$\langle \phi \rvert \phi \rangle = 4 \pi N^{2} \int_{0}^{\infty} r^2 \exp \left( -2 \alpha r^2 \right) \textrm{d}r = N^{2} \frac{\pi^{3/2}}{2 \sqrt{2}\alpha^{3/2}} .$$ In order to evaluate the expectation value of the energy $E$ for the trial wave function $\lvert \phi \rangle$, we need to calculate $$E = \frac{\langle \phi \lvert \hat{H} \rvert \phi \rangle }{\langle \phi \rvert \phi \rangle} = \frac{3 \alpha}{2} - \frac{2 \sqrt{2} \alpha ^{1/2}}{\pi ^{1/2}} .$$ The reason we need to divide by $\langle \phi \rvert \phi \rangle$ is that $\lvert \phi \rangle$ is not yet normalized. If $\lvert \phi \rangle$ were normalized, the integral $\langle \phi \rvert \phi \rangle$ would be equal to one and would thus drop out. The best value for $\alpha$ can then be found by differentiating $E$ with respect to $\alpha$ and finding the value of $\alpha$ for which the derivative vanishes: $$\frac{\textrm{d}E}{\textrm{d}\alpha} = \frac{3}{2} - \frac{\sqrt{2}}{\pi ^{1/2} \alpha^{1/2}} = 0 ,$$ which gives $$\alpha = \frac{8}{9 \pi}.$$ This gives us the following upper bound for the energy $E$: $$E = \frac{3}{2} \cdot \frac{8}{9 \pi} - \frac{2 \sqrt{2} \left( \frac{8}{9 \pi} \right)^{1/2}}{\pi ^{1/2}} = -\frac{4}{3 \pi} \approx -0.4244 .$$ The exact energy of the 1s orbital in hydrogen is -0.5 HT, with the corresponding wave function $\lvert \phi \rangle = \frac{1}{\sqrt{\pi}} \exp \left( - r \right)$.

## The linear variational principle

Although the exact answer to the energy of the 1s orbital in hydrogen is known, let us assume that we do *not* know it and would like to obtain it. One way is to pick various trial wave functions with one independent variable and optimize that variable. The lowest energy we find then gives us the best approximation of the ground state wave function. This approach would be quite tedious (and how would you know what trial wave function to choose in the first place anyway?).
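Such a single-parameter optimization is straightforward to check numerically. As a minimal sketch (the grid bounds and resolution are arbitrary choices), we can scan the $E(\alpha)$ expression from example 1 over a grid and pick out the minimum:

```python
import numpy as np

def energy(alpha):
    """E(alpha) for the Gaussian trial function N * exp(-alpha * r^2), in HT."""
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

# scan alpha over a fine grid and locate the minimum
alphas = np.linspace(0.01, 2.0, 200001)
energies = energy(alphas)
i = np.argmin(energies)
print(alphas[i], energies[i])  # approximately 0.28294 (= 8/(9*pi)) and -0.42441 (= -4/(3*pi))
```

The minimum lands at $\alpha \approx 8/(9\pi)$ with $E \approx -0.4244$ HT, in agreement with the analytical result above.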
It would be nice if we could have just a single trial wave function with a lot of 'variational freedom'. Here, I propose to use a trial wave function that is a linear combination of fixed wave functions, like so: $$\lvert \phi \rangle = \sum\_{i=1}^N c_{i} \lvert \psi\_{i} \rangle$$ If we can assume that the wave functions $\lvert \psi\_i \rangle$ are orthonormal in the sense that $\langle \psi\_i \lvert \psi\_j \rangle = \delta\_{ij}$, then the energy is $$E = \sum\_{ij} c\_{i} c\_{j} \langle \psi\_i \lvert \hat{H} \rvert \psi\_j \rangle = \sum\_{ij} c\_{i}c\_{j} H\_{ij}$$ and we are tasked with finding the best set of coefficients $c_{i}$ that minimizes $E$. It is exactly these coefficients (and the accompanying wave functions) that provide us with the necessary variational freedom. (Note that $H\_{ij} = \langle \psi\_i \lvert \hat{H} \rvert \psi\_j \rangle$.) The procedure to find this set uses [Lagrange's method of undetermined multipliers](https://en.wikipedia.org/wiki/Lagrange_multiplier). Herein $$L \left( c\_1, c\_2, \cdots, E \right) = \langle \phi \lvert \hat{H} \rvert \phi \rangle - E \left(\langle \phi \lvert \phi \rangle - 1 \right) = \sum\_{ij} c\_{i} c\_{j} H\_{ij} - E \left( \sum\_{i} c\_{i}^{2} - 1 \right)$$ and we need to find that particular set of $c\_k$ where $$\frac{\partial L}{\partial c\_{k}} = 0$$ for each $k$. For each $k$, this gives the following equation $$\sum\_{j} c\_j H\_{kj} + \sum\_i c\_i H\_{ik} - 2 E c\_{k} = 0 .$$ Since $\langle \psi\_a \lvert \hat{H} \rvert \psi\_b \rangle = \langle \psi\_b \lvert \hat{H} \rvert \psi\_a \rangle$, i.e. $H\_{ab} = H\_{ba}$, the two sums are identical and we can simplify the above equation to $$\sum\_{j} H\_{kj} c\_j - E c\_{k} = 0.$$ The above expression is just the matrix expression $$H\vec{c} = E\vec{c} .$$ This last equation probably looks very familiar to you and relates to the [matrix eigenvalue problem](http://mathworld.wolfram.com/Eigenvalue.html). It is a very typical problem in linear algebra. To solve the matrix eigenvalue problem, one needs to diagonalize the $H$ matrix.
This gives a set of eigenvalues and corresponding eigenvectors. The eigenvector corresponding to the lowest eigenvalue is the best set of $c\_k$ we are interested in. <div class="bs-callout bs-callout-primary"> <h4>Example 2: The 2x2 linear variational problem for finding the energy of the 1s orbital of hydrogen</h4> To illustrate the above procedure, we are going to find the best solution for the 1s orbital in the hydrogen atom using a trial wave function that is a linear combination of two orthonormal wave functions. The two orthonormal wave functions are $$\psi_{1} = \frac{1}{\pi^{3/4}} \exp \left( -r^2 / 2 \right)$$ and $$\psi_{2} = \frac{2\pi^{1/4}r - 4/\pi^{1/4}}{\sqrt{2\pi(3\pi-8)}} \exp \left( -r^2 / 2 \right).$$ Because of the orthonormality condition, the following integrals hold: $$\langle \psi_{1} \lvert \psi_{2} \rangle = 0$$ and $$\langle \psi_{1} \lvert \psi_{1} \rangle = \langle \psi_{2} \lvert \psi_{2} \rangle = 1.$$ Applying the Hamiltonian to these wave functions, we obtain the following 2x2 H-matrix: $$H = \begin{bmatrix} \langle \psi_{1} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{1} \lvert \hat{H} \rvert \psi_{2} \rangle \\ \langle \psi_{2} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{2} \lvert \hat{H} \rvert \psi_{2} \rangle \end{bmatrix} = \begin{bmatrix} -0.378379 & -0.0185959 \\ -0.0185959 & 1.09531 \end{bmatrix}$$ Note that the H-matrix is symmetric. This is because we can exchange $\lvert \psi_{1} \rangle$ and $\lvert \psi_{2} \rangle$ and obtain the same result. Furthermore, note that although $\langle \psi_{1} \lvert \psi_{2} \rangle = 0$, the matrix element $\langle \psi_{1} \lvert \hat{H} \rvert \psi_{2} \rangle$ is in general not zero.<br><br> In order to find the best energy and approximation to the ground state wave function, we need to diagonalize the H-matrix.
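Such a small symmetric matrix can also be diagonalized programmatically. A minimal numpy sketch, using the H-matrix values computed above:

```python
import numpy as np

# H-matrix in the orthonormal basis of psi_1 and psi_2 (values from above)
H = np.array([[-0.378379, -0.0185959],
              [-0.0185959,  1.09531]])

# for a symmetric matrix, eigh returns the eigenvalues in ascending order
E, C = np.linalg.eigh(H)
print(E)        # lowest eigenvalue approximately -0.3786
print(C[:, 0])  # coefficients of the lowest-energy state (up to an overall sign)
```

Note that `numpy.linalg.eigh` sorts the eigenvalues in ascending order, so the first column of the eigenvector matrix immediately gives the ground state coefficients.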
This can be easily done by hand, or using the <a href="https://www.wolframalpha.com/input/?i=Diagonalize%5B%7B%7B-0.378379%2C+-0.0185959%7D%2C+%7B-0.0185959%2C+1.09531%7D%7D%5D">very handy Wolfram Alpha website</a>. The result we obtain is $$D = \begin{bmatrix} 1.09554 & 0 \\ 0 & -0.378614 \end{bmatrix}$$ for the diagonal (eigenvalue) matrix with corresponding eigenvector matrix $$X = \begin{bmatrix} -0.0126156 & -0.99992 \\ 0.99992 & -0.0126156 \end{bmatrix}.$$ The first column of $X$ contains the coefficients corresponding to the first eigenvalue (energy) in the $D$-matrix, and the second column of $X$ those for the second eigenvalue. Since the second eigenvalue is the lower one ($-0.378614 < 1.09554$), the second column of $X$ contains the coefficients we are looking for. Thus, the best approximation to the ground state wave function is: $$\lvert \phi \rangle = c_{1} \lvert \psi_{1} \rangle + c_{2} \lvert \psi_{2} \rangle = -0.99992 \lvert \psi_{1} \rangle - 0.0126156 \lvert \psi_{2} \rangle.$$ The corresponding energy for this wave function is -0.378614 HT. This result is actually worse than what we obtained using the procedure of example 1 (which was -0.4244 HT). The reason is that the variational freedom of a linear combination of only two wave functions is actually less than that of a trial function in which we can vary a coefficient inside the exponent. To get better results, we would need to use a larger set of wave functions in the linear combination. </div> In example 2 it was shown how we can obtain the best energy given a trial wave function which is a linear combination of a set of wave functions. Let us call the wave functions in the linear combination the basis functions. In order for the procedure of example 2 to work, the basis functions need to be orthonormal to each other. Finding a set of orthonormal functions by hand is somewhat tedious (try it for yourself if you do not believe me!).
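Numerically, however, orthonormalization is painless. As a small sketch (the radial grid cutoff and resolution are arbitrary choices), we can Gram-Schmidt-orthonormalize two Gaussian-type radial functions using a simple quadrature for the inner products:

```python
import numpy as np

# radial grid; cutoff and resolution are arbitrary choices
r = np.linspace(0.0, 12.0, 200001)
w = 4.0 * np.pi * r**2 * (r[1] - r[0])  # 4*pi*r^2 dr volume weight

def inner(f, g):
    """Inner product <f|g> as a simple Riemann sum over the radial grid."""
    return np.sum(w * f * g)

f1 = np.exp(-r**2 / 2.0)
f2 = r * np.exp(-r**2 / 2.0)

# Gram-Schmidt: normalize f1, project it out of f2, normalize the remainder
p1 = f1 / np.sqrt(inner(f1, f1))
p2 = f2 - inner(p1, f2) * p1
p2 /= np.sqrt(inner(p2, p2))

print(inner(p1, p2), inner(p2, p2))  # approximately 0 and 1
```

The resulting `p1` and `p2` are orthonormal with respect to the discrete inner product, which is exactly the property the procedure of example 2 requires.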
It would therefore be very handy if we could apply the above methodology (albeit somewhat changed) using a set of non-orthonormal basis functions. I already showed you how you can use Lagrange's method of undetermined multipliers to derive the matrix equation $H\vec{c} = E\vec{c}$. When the basis functions are no longer orthonormal, this matrix equation changes slightly and becomes $$H\vec{c} = ES\vec{c}.$$ Herein, $S$ is termed the overlap matrix, where each element is $$S\_{ij} = \langle \psi\_{i} \lvert \psi\_{j} \rangle.$$ The introduction of the overlap matrix $S$ makes the matrix diagonalization a tiny bit more complicated. There are several procedures for obtaining the eigenvalues and eigenvectors. The procedure we are going to use follows the canonical orthogonalization scheme. It works as follows:

1. Construct the overlap matrix $S$.
2. Diagonalize the overlap matrix $S$ to obtain its eigenvalue matrix $s$ and the corresponding eigenvector matrix $U$.
3. Construct a transformation matrix $X = U s^{-1/2}$.
4. Transform the Hamiltonian matrix to the canonically orthonormalized basis set: $H' = X^{\dagger} H X$.
5. Diagonalize the transformed Hamiltonian matrix to obtain $D$ and $C'$ (where $C'$ contains the transformed coefficients).
6. Obtain the coefficients in the original basis by calculating $C = XC'$.

The above probably looks a bit complicated, but really, it is not. Let me elaborate by showing you another example.
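The six steps above can be sketched in a few lines of numpy. As an illustration, the snippet below uses a 2x2 overlap and Hamiltonian matrix for a non-orthonormal two-function basis (the same numerical values that are derived by hand in the next example):

```python
import numpy as np

# overlap and Hamiltonian matrices of a non-orthonormal two-function basis
S = np.array([[5.56833, 6.2832],
              [6.2832,  8.35249]])
H = np.array([[-2.10694, -2.42674],
              [-2.42674, -1.4109]])

s, U = np.linalg.eigh(S)          # step 2: diagonalize S
X = U @ np.diag(s ** -0.5)        # step 3: transformation matrix X = U s^(-1/2)
Hp = X.T @ H @ X                  # step 4: transform H to the orthonormalized basis
E, Cp = np.linalg.eigh(Hp)        # step 5: diagonalize H'
C = X @ Cp                        # step 6: coefficients in the original basis
print(E[0])                       # lowest energy, approximately -0.3786
```

The key property of $X$ is that $X^{\dagger} S X = I$, so the generalized problem $H\vec{c} = ES\vec{c}$ reduces to an ordinary eigenvalue problem in the transformed basis.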
<div class="bs-callout bs-callout-primary"> <h4>Example 3: Finding the energy of the 1s orbital of hydrogen using a non-orthonormal set of basis functions</h4> The two functions we are going to use are $$\psi_{1} = \exp \left( -r^{2} / 2 \right)$$ and $$\psi_{2} = r \exp \left( -r^{2} / 2 \right).$$ (<b>Step 1</b>) Their corresponding overlap matrix is $$S = \begin{bmatrix} \langle \psi_{1} \lvert \psi_{1} \rangle & \langle \psi_{1} \lvert \psi_{2} \rangle \\ \langle \psi_{2} \lvert \psi_{1} \rangle & \langle \psi_{2} \lvert \psi_{2} \rangle \end{bmatrix} = \begin{bmatrix} 5.56833 & 6.2832 \\ 6.2832 & 8.35249 \end{bmatrix}.$$ From this matrix we can clearly see that these functions are neither orthogonal to each other nor normalized.<br><br> (<b>Step 2</b>) Diagonalizing the overlap matrix yields the following diagonal and eigenvector matrices $$s = \begin{bmatrix} 13.396 & 0 \\ 0 & 0.524846 \end{bmatrix}$$ and $$U = \begin{bmatrix} 0.625975 & -0.779843 \\ 0.779843 & 0.625975 \end{bmatrix}.$$ (<b>Step 3</b>) We can obtain the transformation matrix by calculating $$X = U s^{-1/2} = \begin{bmatrix} 0.625975 & -0.779843 \\ 0.779843 & 0.625975 \end{bmatrix} \begin{bmatrix} 1/\sqrt{13.396} & 0 \\ 0 & 1/\sqrt{0.524846} \end{bmatrix} = \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix}.$$ The Hamiltonian matrix is $$H = \begin{bmatrix} \langle \psi_{1} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{1} \lvert \hat{H} \rvert \psi_{2} \rangle \\ \langle \psi_{2} \lvert \hat{H} \rvert \psi_{1} \rangle & \langle \psi_{2} \lvert \hat{H} \rvert \psi_{2} \rangle \end{bmatrix} = \begin{bmatrix} -2.10694 & -2.42674 \\ -2.42674 & -1.4109 \end{bmatrix}.$$ Because our wave function $\lvert \psi_{1} \rangle$ in this example differs only by a normalization constant from the wave function $\lvert \psi_{1} \rangle$ in example 2, the value of the Hamiltonian element $H_{00}$ should only differ by a factor equal to the square of the normalization constant.
In other words, if we divide $H_{00}$ by the value of $S_{00}$ (which represents that difference), we obtain the value of $H_{00}$ in example 2. Note that the same does not hold for $H_{11}$: $\lvert \psi_{2} \rangle$ in example 2 is an orthogonalized linear combination of both basis functions, not simply a normalized version of $r \exp \left( -r^{2}/2 \right)$.<br><br> (<b>Step 4</b>) Applying the transformation matrix to the Hamiltonian matrix gives $$H' = X^{\dagger} H X = \begin{bmatrix} 0.171029 & 0.213069 \\ -1.07644 & 0.864054 \end{bmatrix} \begin{bmatrix} -2.10694 & -2.42674 \\ -2.42674 & -1.4109 \end{bmatrix} \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix}$$ $$H' = X^{\dagger} H X = \begin{bmatrix} -0.302548 & 0.32611 \\ 0.32611 & 1.01951 \end{bmatrix}.$$ (<b>Step 5</b>) Diagonalizing the transformed Hamiltonian matrix gives $$D = \begin{bmatrix} 1.09557 & 0 \\ 0 & -0.378614 \end{bmatrix}$$ with corresponding eigenvector matrix $$C' = \begin{bmatrix} 0.227151 & -0.973859 \\ 0.973859 & 0.227151 \end{bmatrix}.$$ (<b>Step 6</b>) Finally, in order to get the coefficients in the original basis, we need to calculate $$C = XC' = \begin{bmatrix} 0.171029 & -1.07644 \\ 0.213069 & 0.864054 \end{bmatrix} \begin{bmatrix} 0.227151 & -0.973859 \\ 0.973859 & 0.227151 \end{bmatrix} = \begin{bmatrix} -1.00945 & -0.411073 \\ 0.889866 & -0.0112284 \end{bmatrix}.$$ The lowest eigenvalue found in the diagonal matrix $D$ is -0.378614 HT. The corresponding wave function can be found by taking the coefficients of the corresponding column vector, thus giving $$\lvert \phi \rangle = -0.411073 \cdot \exp \left(-r^2 / 2 \right) - 0.0112284 \cdot r \exp \left(-r^2 / 2 \right).$$ This is effectively the same answer as found in example 2, as can be seen from the radial distribution plot below! This is of course not very surprising: the variation in both approximations was obtained by taking different polynomials in front of the exponent.
Both attempts at finding the best wave function (example 2 as well as this example) use a zeroth and first order polynomial in front of the exponent. If we compare the radial distribution function of the wave function obtained here with the one obtained in example 2, we can see that these functions overlap.<br><br> <img src="http://www.ivofilot.nl/images/24/bp_qc1_fig1.png"><br><br> This does not necessarily imply that the underlying contributions of the basis functions are the same. If you examine the coefficients and the corresponding basis functions, you will notice a particular difference. The contribution of each basis function in each set is visualized in the graph below. We can clearly see that the contributions differ.<br><br> <img src="http://www.ivofilot.nl/images/25/bp_qc1_fig2.png"><br><br> If we now sum up the contributions of the two basis functions for each linear combination, we see that the two wave functions of example 2 and this example are in principle the same.<br><br> <img src="http://www.ivofilot.nl/images/26/bp_qc1_fig3.png"><br><br> Thus, we obtain the same result for each wave function. </div> Although we now have the tools to find the best approximation of the ground state energy and its corresponding wave function using any set of basis functions, the results so far have not been very satisfactory. Using only a single trial wave function, as for instance in example 1, gave better results. In order to improve on the previous results, we need to expand our set of basis functions. A larger basis set should in principle give a better result. <div class="bs-callout bs-callout-primary"> <h4>Example 4: The energy of the 1s orbital of hydrogen using a basis set including polynomials up to 4th order</h4> For our basis set, we are going to use the following set of basis functions: $$\lvert \psi_{i} \rangle = r^{i} \exp \left( -r^2/2 \right)$$ for $i = 0,1,2,3,4$.
The overlap matrix $S$ then looks like: $$S = \begin{bmatrix} 5.56833 & 6.28319 & 8.35249 & 12.5664 & 20.8812 \\ 6.28319 & 8.35249 & 12.5664 & 20.8812 & 37.6991 \\ 8.35249 & 12.5664 & 20.8812 & 37.6991 & 73.0843 \\ 12.5664 & 20.8812 & 37.6991 & 73.0843 & 150.796 \\ 20.8812 & 37.6991 & 73.0843 & 150.796 & 328.879 \end{bmatrix}$$ and the Hamiltonian matrix $H$ is $$H = \begin{bmatrix} -2.10694 & -2.42674 & -4.19506 & -8.35249 & -17.7867 \\ -2.42674 & -1.4109 & -2.06931 & -5.25794 & -14.598 \\ -4.19506 & -2.06931 & -1.08169 & -2.03167 & -8.98742 \\ -8.35249 & -5.25794 & -2.03167 & 1.45319 & 2.31392 \\ -17.7867 & -14.598 & -8.98742 & 2.31392 & 22.7788 \end{bmatrix}$$ Using the same procedure as in example 3, we obtain for the lowest eigenvalue of the transformed Hamiltonian matrix an energy of <b>-0.487773 HT</b>. This result is a significant improvement and comes very close to the exact energy of -0.5 HT!<br><br> Further expanding the basis set to 10 basis functions gives a result of -0.498481 HT. </div>

## Summary

In this blog post I have shown that the variational principle can be employed to find the best approximation to the ground state energy. By introducing variational freedom into a trial wave function and then minimizing its corresponding energy, the best approximation to the ground state energy can be found. In the linear variational method, the variational freedom is provided by the specific set of basis functions used; the method then yields the coefficients in the linear combination of those basis functions that minimize the energy. Here, I have shown that a sufficiently large set of basis functions composed of a Gaussian part (the exponent) multiplied by a polynomial of increasing order is able to approach the true ground state energy of the hydrogen atom.
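As a closing check, the calculation of example 4 can be reproduced in a short script. The sketch below (numpy only; the analytic Laplacian of $r^{j} \exp(-r^2/2)$ is worked out in the comment, and all radial integrals reduce to $\int_0^\infty r^{n} \exp(-r^2) \, \textrm{d}r = \Gamma((n+1)/2)/2$) builds $S$ and $H$ and then applies the canonical orthogonalization of example 3:

```python
import numpy as np
from math import gamma, pi

def rint(n):
    """Integral of r^n * exp(-r^2) over [0, inf), i.e. Gamma((n+1)/2) / 2."""
    return 0.5 * gamma((n + 1) / 2)

nb = 5  # basis functions psi_i = r^i * exp(-r^2/2) for i = 0 .. nb-1
S = np.zeros((nb, nb))
H = np.zeros((nb, nb))
for i in range(nb):
    for j in range(nb):
        # overlap: 4*pi * int r^(i+j+2) exp(-r^2) dr
        S[i, j] = 4 * pi * rint(i + j + 2)
        # kinetic part -1/2 <psi_i|nabla^2|psi_j>, using
        # nabla^2 (r^j e^{-r^2/2}) = [j(j+1) r^{j-2} - (2j+3) r^j + r^{j+2}] e^{-r^2/2}
        kin = -0.5 * 4 * pi * (j * (j + 1) * rint(i + j)
                               - (2 * j + 3) * rint(i + j + 2)
                               + rint(i + j + 4))
        # potential part: <psi_i| -1/r |psi_j>
        pot = -4 * pi * rint(i + j + 1)
        H[i, j] = kin + pot

# canonical orthogonalization, then diagonalization
s, U = np.linalg.eigh(S)
X = U @ np.diag(s ** -0.5)
E, Cp = np.linalg.eigh(X.T @ H @ X)
print(E[0])  # lowest eigenvalue, approximately -0.48777
```

Setting `nb = 10` should reproduce the -0.498481 HT quoted above, and increasing it further approaches the exact -0.5 HT.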
### Acknowledgements Finally, I would like to acknowledge <a href="https://www.linkedin.com/pub/bart-zijlstra/74/682/29b/de">Bart Zijlstra</a> for thoroughly proofreading this post and pointing out the mistakes.