Tag: Materials Science

Experimental truths

In statistics there exists a well-known aphorism:

All models are wrong but some are useful.

— George Edward Pelham Box, 1919-2013


"George E. P. Box" by DavidMCEddy


From the point of view of the definition of the word “model” this is true in an absolute sense: a model implicitly means that approximations are made, and as such discrepancies with “the real system” exist. As a result, this real system is considered the only “not wrong” description of itself. In the exact sciences, the real system is often nature itself. This may lead some scientists to believe that experimental results, and by extension conclusions based on them, are true by default. When confronted with theoretical results that disagree with experimental conclusions, the quick response is to blame the theoretical model, since it was not real nature that was worked with, but only a model.

Quite often this is true, and it leads to the formulation of new and better models of reality: this is how, for example, Newton’s laws of motion evolved into special relativity and further into general relativity. However, equally often (in materials science at least) something else may be going on: the scientist may have forgotten that the experimentalist is also using a model to create his/her experimental results.

Broadly speaking, experimental results can be categorized as either direct or indirect. Direct results are what you could call “WYSIWYG” results: what you measure is the quantity you are interested in, e.g. the contact angle of a liquid, obtained by measuring the angle between a drop of the liquid and the substrate surface, or scanning tunneling and atomic force microscopy pictures of a surface. Indirect results, on the other hand, require some post-processing of a direct result to obtain the quantity of interest. This post-processing step involves a model which links the direct result to the property of interest. Take, for example, the atomic structure of a material. Here the direct result is the measured X-ray diffraction (XRD) spectrum, while the model and its assumptions are nowadays neatly hidden in well-performing software. This software tries to fit known crystal models to the XRD spectrum provided, yielding lattice parameters and atomic positions. This means, however, that the obtained result is the best fit that can be obtained, which is not necessarily the actual atomic structure.
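To make the hidden model concrete, here is a minimal sketch of the kind of fitting a refinement package performs, stripped to its bare bones: recovering the lattice parameter of a cubic crystal from powder XRD peak positions via Bragg’s law. The peak list, wavelength and noise level are all illustrative assumptions of mine; real refinement software fits full spectra with far more sophisticated models.

```python
import numpy as np

# Hypothetical example: recover the lattice parameter of a cubic crystal
# from powder XRD peak positions via Bragg's law, 2*d*sin(theta) = lam.
# The "observed" peak angles are synthetic, generated for a = 4.05 Angstrom
# (roughly fcc aluminium) -- they are not measured data.

lam = 1.5406  # Cu K-alpha wavelength in Angstrom

# (h, k, l) indices of the first few allowed fcc reflections
hkl = np.array([[1, 1, 1], [2, 0, 0], [2, 2, 0], [3, 1, 1]])

def two_theta(a, hkl, lam):
    """Bragg angles 2*theta (degrees) for cubic lattice parameter a."""
    d = a / np.sqrt((hkl ** 2).sum(axis=1))  # interplanar spacings
    return 2 * np.degrees(np.arcsin(lam / (2 * d)))

# synthetic "measured" peak positions, with a little noise added
rng = np.random.default_rng(0)
observed = two_theta(4.05, hkl, lam) + rng.normal(0, 0.01, len(hkl))

# Brute-force least-squares fit of a over a grid: the "model" hidden
# inside refinement software, reduced to its simplest possible form.
a_grid = np.linspace(3.8, 4.3, 5001)
residuals = [np.sum((two_theta(a, hkl, lam) - observed) ** 2) for a in a_grid]
a_fit = a_grid[int(np.argmin(residuals))]

print(f"best-fit lattice parameter: {a_fit:.3f} Angstrom")
```

The fit returns the lattice parameter that best reproduces the peaks, which, as argued above, is not guaranteed to be the true structure.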

Graphical Abstract for paper: Fine-tuning the theoretically predicted structure of MIL-47(V) with the aid of powder X-ray diffraction.


Another important aspect to remember with regard to experimental results is that different samples are truly different systems. For example, a material grown as a single crystal or synthesized as a powder may give subtly different XRD spectra. In a recent paper with Thomas Bogaerts, we investigated how well different models of the MIL-47(V) Metal Organic Framework (MOF) fitted experimental XRD spectra of this material. We found that depending on which experimental spectrum (single crystal or powder XRD) we fitted to, a different model was preferred, showing nature to have multiple truths for the same system. The structural difference between these models is minute, since the models entail different spin configurations on the same topology. However, the effort required for the more extended fitting procedure performed by Thomas was well worth it, since it provided a new (indirect) method for determining the spin configuration in these rather complex structures, giving access to slightly less wrong models in the future.



Fine-tuning the theoretically predicted structure of MIL-47(V) with the aid of powder X-ray diffraction

Authors: Thomas Bogaerts, Louis Vanduyfhuys, Danny E. P. Vanpoucke, Jelle Wieme,
Michel Waroquier, Pascal van der Voort and Veronique van Speybroeck
Journal: Cryst. Eng. Comm. 17(45), 8612-8622 (2015)
doi: 10.1039/c5ce01388g
IF(2015): 3.849
export: bibtex
pdf: <Cryst.Eng.Comm.> 
Graphical Abstract: Which model represents the experimental XRD-spectra best? Ferromagnetic or anti-ferromagnetic chains? With or without offset?


The structural characterization of complex crystalline materials such as metal organic frameworks can prove a very difficult challenge, both for experimentalists and for theoreticians. From theory, the flat potential energy surface of these highly flexible structures often leads to different geometries that are energetically very close to each other. In this work, a distinction between various computationally determined structures is made by comparing experimental and theoretically derived X-ray diffractograms, the latter produced from the material's geometry. The presented approach makes it possible to choose the most appropriate geometry of a MIL-47(V) MOF and even to distinguish between different electronic configurations that induce small structural changes. Moreover, the techniques presented here are used to verify the applicability of a newly developed force field for this material. The discussed methodology is of significant importance for modelling studies where accurate geometries are crucial, such as studies of mechanical properties and the adsorption of guest molecules.

Phonons: shake those atoms

In physics, a phonon is a collective excitation in a periodic, elastic arrangement of atoms or molecules in condensed matter, like solids and some liquids. Often designated a quasi-particle, it represents an excited state in the quantum mechanical quantization of the modes of vibrations of elastic structures of interacting particles.

— source: wikipedia

Or, for simplicity: sound waves; the ordered shaking of atoms or molecules. When you hit a metal bell with a (small) hammer, or make a wineglass sing by rubbing its edge, you experience the vibrations of the object as sound. All objects in nature (from atoms to stars) can be made to vibrate, and they do this at one or more specific frequencies: their eigenfrequencies or normal frequencies.

Single molecules, too, start to vibrate if they are hit (for example by another molecule bumping into them) or receive extra energy in some other way. These vibrations can take many forms (elongating and shortening of bonds, rotation of parts of the molecule with respect to other parts, flip-flopping of loose ends, and so forth) and give a unique signature to the molecule, since each of these vibrations (so-called eigen-modes) corresponds to a certain energy given to the molecule. As a result, if you know all the eigen-modes of a molecule, you also know which frequencies of infrared light it should absorb. This is very useful, since in experiments we do not “see” molecules (if we see them at all) as nice ball-and-stick objects.
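As an illustration of what finding eigen-modes amounts to in practice, the sketch below performs a normal-mode analysis on the simplest possible “molecule”: two masses in one dimension coupled by a harmonic spring. The masses and force constant are arbitrary toy values of mine; real codes do essentially the same, but with the full 3N-dimensional Hessian from a quantum mechanical calculation.

```python
import numpy as np

# Minimal normal-mode analysis sketch: a 1D "diatomic molecule",
# two masses coupled by one harmonic spring. All numbers are
# illustrative, not taken from any real molecule.

m1, m2 = 1.0, 1.0   # atomic masses (arbitrary units)
k = 4.0             # bond force constant

# Hessian of the potential E = 1/2 * k * (x2 - x1)^2
H = np.array([[ k, -k],
              [-k,  k]])

# Mass-weight the Hessian: H_mw[i, j] = H[i, j] / sqrt(m_i * m_j)
M = np.array([m1, m2])
H_mw = H / np.sqrt(np.outer(M, M))

# Eigenvalues of the mass-weighted Hessian are the squared angular
# frequencies of the eigen-modes.
eigvals, modes = np.linalg.eigh(H_mw)
freqs = np.sqrt(np.clip(eigvals, 0, None))

print(freqs)  # one zero mode (free translation) and one bond vibration
```

The zero-frequency mode is the free translation of the whole “molecule”; the nonzero one is the bond stretch whose frequency sets the infrared absorption.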

From the computational point of view, this is not the only reason why the vibrational frequencies of a system (i.e. the above eigen-modes) are calculated in molecular modeling. They also tell you whether a system is in its ground state (which is what one is looking for most of the time) or not. Although this tool is in widespread use in molecular modeling, it is seldom used in ab initio solid state physics because of the associated computational cost. In addition, because of the finite size of the unit cell, the reciprocal space in which phonons live also has a finite size, in contrast to the single point for a molecule…making life complex. 😎
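The difference with a molecule can be made explicit with the textbook example of a 1D monatomic chain with nearest-neighbour springs: instead of a discrete list of frequencies, one obtains a dispersion relation ω(q) over the finite Brillouin zone. Spring constant, mass and lattice spacing below are illustrative values, not material parameters.

```python
import numpy as np

# Phonon dispersion of a 1D monatomic chain with nearest-neighbour
# springs: omega(q) = 2 * sqrt(k/m) * |sin(q*a/2)|. Unlike a molecule,
# the frequencies depend on the wave vector q, which lives in the
# finite Brillouin zone [-pi/a, pi/a]. Parameters are illustrative.

k, m, a = 1.0, 1.0, 1.0
q = np.linspace(-np.pi / a, np.pi / a, 201)       # sample the Brillouin zone
omega = 2 * np.sqrt(k / m) * np.abs(np.sin(q * a / 2))

# omega -> 0 at the zone centre (the acoustic branch) and is maximal
# at the zone boundary q = pi/a.
print(omega[100], omega[-1])  # q = 0 and q = pi/a
```

For a molecule one would evaluate only a single “point”; for a solid the whole q-dependence must be sampled, which is part of why phonon calculations are so much more expensive.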


Spring School Computational Tools: Day 5 – CP2K

Today was the fifth and last day of our spring school on computational tools for materials science. However, this was no reason to sit back and relax. After having been introduced to VASP (day 2) and ABINIT (day 3) for solids, and to Gaussian (day 4) for molecules, we turned to today's code, CP2K, which allows you to study both, with a focus on dynamics and solvation.


If ensembles were coffee…

The introduction to the Swiss army knife called CP2K was provided by Dr. Andy Van Yperen-De Deyne. He explained to us the nature of the CP2K code (periodic, tools for solvated molecules, and a focus on large/huge systems) and its limitations. In contrast to the codes of the previous days, CP2K uses a dual basis set: plane waves where properties are described most easily and accurately with plane waves, and Gaussians where that is the case for Gaussians. By means of some typical types of calculations, Andy explained the basic setup of the input and output files, and warned of the explosive nature of too long a time step in molecular dynamics simulations. The possible ensembles for molecular dynamics (MD) were explained as different ways to store hot coffee. Following our daily routine, this session was followed by a hands-on session.

In the afternoon, the advanced session was presented by a triumvirate: Thierry De Meyer, who discussed QM/MM simulations in detail; Dr. Andy Van Yperen-De Deyne, who discussed vibrational fingerprinting; and Lennart Joos, who, as the last presenter of the week, showed how different codes can be combined within a single project, each used where it is at its strongest, allowing him to unchain his zeolites.


CP2K, all lecturers: Andy Van Yperen-De Deyne (top left), Thierry De Meyer (top right), Lennart Joos (bottom left). All spring school participants hard at work during the hands-on session, even at this last day (bottom right).

The spring school ended with a final hands-on session on CP2K, during which the CMM team was present for the last stretch, answering questions, giving final pointers on how to perform simulations, and discussing which code would be most appropriate for each project. At 17h, after my closing remarks, the curtain fell on this spring school on computational tools for materials science. It has been a busy week, and Kurt and I are grateful for the help we got from everyone involved in this spring school, both local and external guests. Tired but happy, I look back…and also a little bit forward, hoping and already partially planning a next edition…maybe in two years we will return.


Our external lecturers from the VASP group (Martijn Marsman, top left) and the ABINIT group (Xavier Gonze, top right, Matteo Giantomassi, bottom left, Gian-Marco Rignanese, bottom right)

Spring School Computational Tools: Day 4 – Gaussian

After having focused on solids during the previous two days of our spring school, using either VASP (Tuesday) or ABINIT (Wednesday), today the focus shifts to molecules, and we turn our attention to the Gaussian code.

Dietmar Hertsen introduced us to the Gaussian code, and immediately showed us why this code was included in our spring school set: it is the most popular code (according to Google). He also explained that this code is so popular (among chemists) because it can do a lot of the chemistry chemists are interested in, and because of the simplicity of its input files for small molecules. After pointing out the empty-line quirks of Gaussian, it was time to introduce some of the editors that can be used with Gaussian. During the remainder of his lecture, Dietmar showed us how simple (typical) Gaussian calculations are run, pointing out interesting aspects of the workflow, and the fun of watching vibrations in molden. He ended his lecture by giving us some tips and tricks for the investigation of transition states and the study of chemical reactions, as a mental preparation for the first hands-on session, which followed after the coffee break.


Lecturers for the Gaussian code: Dietmar Hertsen (left) introducing the basics of the code, while Patrick Bultinck (right) discusses more advanced wave function techniques in more detail.

In the afternoon it was time to take out the big guns. Prof. Patrick Bultinck, of the Ghent Quantum Chemistry Group, was so kind as to provide the advanced session. In this session we were reminded, after two days of using the density as the central property, that wave functions are the only way to obtain perfect results. Unfortunately, practical limitations hamper their application to the systems of interest from a physical point of view. Patrick, being a quantum chemist to the bone, at several points stepped away from his slides and showed on the blackboard how several approximations to full configuration interaction (full-CI) can be obtained. He also made us aware of the caveats of such approaches, such as size consistency and basis(-type) dependence for truncated CI, and noted that although CASSCF is a powerful method (albeit not for the fainthearted), it is somewhat of a black art that should be used with caution. As such, CASSCF was not included in the advanced hands-on sessions guided by Dr. Sofie Van Damme (but who knows what may happen in a future edition of this spring school).

Spring School Computational Tools: Day 3 – Abinit

Today is the third day of our spring school. After the introduction to VASP yesterday, we now turn our attention to another quantum mechanical solid state code: ABINIT. This is a Belgian ab initio code (mainly) developed at the Université Catholique de Louvain (UCL) in Wallonia.

Since no one at CMM has practical experience with this code, and several of us are interested in learning more about it, we were very pleased that the group of Prof. Xavier Gonze was willing to support this day in its entirety. During the morning introductory session, Prof. Xavier Gonze introduced us to the world of the ABINIT code, for which the initial ideas stem from 1997. Since then, the program has grown to about 800,000 lines of Fortran code(!), and a team of 50 people worldwide currently contributes to its development. In recent years, a set of Python scripts (abipy) has also been developed, providing a more user-friendly interface for users of the code. The main goal of this interface is to shift attention back to the physics, instead of trying to figure out which keywords do the trick. We also learned that the ABINIT code is strongly inspired by the ‘free software’ model, and as such Prof. Xavier Gonze prefers to refer to the copyright of ABINIT as copyleft. This open source mentality also seems to be a strength of the code, leading to its large number of developers/contributors, which in turn leads to the implementation of a wide variety of basis sets, functionals and methodologies.

After the general introduction, Dr. Matteo Giantomassi introduced the abipy Python package. This package was developed especially for automating the post-processing of ABINIT results and for automatically generating input files; in short, to make interaction with ABINIT easier. Matteo, however, also warned that this approach, which makes the use of ABINIT much more black-box, might confuse beginners, since a lot of magic is going on under the hood of the abipy scripts. However, his presence, and that of the rest of the ABINIT delegation, made sure confusion was kept to a minimum during the hands-on sessions, for which we are very grateful.

After the lunch break, Prof. Gian-Marco Rignanese and Prof. Xavier Gonze held a duo seminar on more advanced topics, covering Density Functional Perturbation Theory and, building on this, spectroscopy and phonon calculations beyond the frozen-phonon approach. As for these last aspects, I am really interested in seeing how they cope with my Metal Organic Frameworks.

Spring School Computational Tools: Day 2 – VASP

On this second day of our spring school, the first ab initio solid state code is introduced: VASP, the Vienna Ab initio Simulation Package.

Having worked with this code for almost a full decade, I am considered by some to be an expert, and as such I had the dubious task of providing the first contact with this code for our participants. Since all basic aspects and methods had already been introduced on the first day, I mainly focused on presenting the required input files and parameters, and on showing how these should be tweaked for some standard types of solid state calculations. Following this one-hour introduction, in which I apparently had not yet scared our participants too much, all participants turned up for the first hands-on session, where they got to play with the VASP program.

In the afternoon, we were delighted to welcome our first invited speaker, straight from the VASP headquarters: Dr. Martijn Marsman. He introduced us to advanced features of VASP going beyond standard DFT. He showed the power (and limitations) of hybrid functionals and introduced the quasi-particle approach of GW. We even went beyond GW with the Bethe-Salpeter equation (which includes electron-hole interactions). Unfortunately, these much more accurate approaches are also much more expensive than standard DFT, but work is being done on a cubic-scaling RPA implementation, which will provide a major step forward in the field of solid state science. Following this session, a second hands-on session took place, in which exercises linked to these more advanced topics were provided and eagerly tried by many of the more advanced participants.

Spring School Computational Tools: Day 1

Today our one-week spring school on computational tools for materials science kicked off. During this week, Kurt Lejaeghere and I host this spring school, which we have been busily organizing over the last few months, and which is intended to introduce materials scientists to the use of four major ab initio codes (VASP, ABINIT, Gaussian and CP2K). During this first day, all participants are immersed in the theoretical background of molecular modeling and solid state physics.


Prof. Karen Hemelsoet presented a general introduction to molecular modeling, showing us which computational techniques are useful for treating problems of varying scales, both in space and time. With the focus on the modeling of molecules, she told us everything there is to know about the potential energy surface (PES) and how to investigate it using different computational methods. She discussed the differences between localized (i.e. Gaussian) and plane wave basis sets, and taught us how to accurately sample the PES using both molecular dynamics and normal mode analysis. As a final topic, she introduced us to the world of computational spectroscopy, showing how infrared spectra can be simulated, and the limitations of this type of simulation.

With the somewhat mysterious presentations of Prof. Stefaan Cottenier, we moved from the realm of molecules to that of solids. In his first session, he introduced density functional theory, a method ideally suited to treating extended systems at the quantum mechanical level, and showed that as much information is present in the electron density of a system as in its wave function. In his second session, we plunged fully into the world of solids and were guided, step by step, towards a full understanding of the technical details generally found in the methods section of (ab initio) computational materials science work. Throughout this session, NaCl was used as an ever-present example, and we learned that our simple high-school picture of bonding in kitchen salt is a lie-to-children. In reality, Cl doesn’t gain an extra electron by stealing it away from Na; instead, the Na 3s electron simply lives too far away from the Na nucleus it belongs to.

De-activating an active atom.

It could be that I’ve perhaps found out a little bit about the structure
of atoms. You must not tell anyone anything about it. . .
–Niels Bohr (1885 – 1965),
in a letter to his brother (1912)

Getting word that a paper has been accepted for publication is exciting, but it can also be a little bit sad, since it indicates the end of a project. A little over a month ago we got this great news regarding our paper for the Journal of Chemical Information and Modeling. It was the culmination of a side project Goedele Roos and I had been working on, in an on-and-off fashion, over the last two years.

When we started the project, each of us had his/her own goal in mind. In my case, it was my interest in showing that my Hirshfeld-I code could handle systems which are huge from the quantum mechanical calculation point of view. Goedele, on the other hand, was interested in seeing how well Hirshfeld-I charges behaved with increasing size of a molecular fraction. This is of interest for multiscale modeling approaches, for which Martin Karplus, Michael Levitt, and Arieh Warshel got the Nobel prize in chemistry in 2013. In such an approach, a large system, for example a solvated biomolecule containing tens of thousands of atoms, is split into several regions. The smallest, central region, containing the part of the molecule one is interested in, is studied quantum mechanically and generally contains a few dozen up to a few hundred atoms. The second shell is much larger, is described by force-field approaches (i.e. Newtonian mechanics), and can contain tens of thousands of atoms. Even further from the quantum mechanically treated core, a third region is described by continuum models.
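As a toy sketch of such a partitioning (entirely my own illustration; real QM/MM setups follow chemical groups and bonds, not bare distances), one can assign atoms to the three regions by their distance to the site of interest:

```python
import numpy as np

# Hypothetical example: partition a large system into QM / MM /
# continuum shells by distance to a chosen centre. Coordinates and
# cutoff radii are invented for illustration only.
rng = np.random.default_rng(1)
coords = rng.uniform(-30.0, 30.0, size=(5000, 3))  # fake positions (Angstrom)
centre = np.zeros(3)

r = np.linalg.norm(coords - centre, axis=1)
region = np.where(r < 8.0, "QM", np.where(r < 20.0, "MM", "continuum"))

for name in ("QM", "MM", "continuum"):
    print(name, int((region == name).sum()))
```

The point of the sketch is the shape of the hierarchy: a small quantum core, a much larger Newtonian shell, and a continuum for everything beyond.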

What about the behavior of the charges? In a quantum mechanical approach, even though we still speak of electrons as if referring to classical objects, we cannot point to a specific point in space and say: “There it is.” We only have a probability distribution in space indicating where the electron may be. As such, it also becomes hard to pinpoint an atom, and to measure/calculate its charge in an absolute sense. However, because such concepts are so much more intuitive, many chemists and physicists have developed methods, with varying success, to split the electron probability distribution into atoms again. When applying such a scheme to the probability distributions of fractions of a large biomolecule, we would like the atoms at the center not to change too much when the fraction is made larger (i.e. made to contain more atoms). This would indicate that from some point onward you have included all atoms that interact with the central atoms. I think you can already see the parallel with the multiscale modeling approach mentioned above, where that point would indicate the boundary between the quantum mechanical and the Newtonian shell.
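The convergence test itself is trivial to express in code. The sketch below (with invented charge values, not the Hirshfeld-I charges from the paper) just monitors how much the charge of one central atom shifts between consecutive cluster sizes:

```python
# Toy illustration of the convergence test described above: watch how
# the charge of one central atom changes as the surrounding cluster
# grows. The charge values are invented for illustration only -- they
# are NOT the Hirshfeld-I charges from the paper.

def max_shift(charges):
    """Largest change of the charge between consecutive cluster sizes."""
    return max(abs(b - a) for a, b in zip(charges, charges[1:]))

# hypothetical charge of one central atom for clusters of increasing size
converging = [-0.32, -0.36, -0.38, -0.385, -0.386]   # settles down
wandering  = [-0.90, -0.75, -0.95, -0.70, -1.00]     # keeps drifting

print(max_shift(converging[-3:]))  # small: environment is large enough
print(max_shift(wandering[-3:]))   # large: still feels far-away atoms
```

When the shift over the last few cluster sizes drops below some tolerance, the environment can be considered large enough; when it keeps drifting, as for the active site discussed below, no such boundary can be drawn.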


Convergence of Hirshfeld-I charges for clusters of varying size of a biomolecule. The black curves show the charge convergence of an active S atom, while the red curves indicate a deactivated S atom.

Although we expected merely to be studying this convergence behavior for the particular partitioning scheme I had implemented, we dug up an unexpected treasure. Of the set of central atoms we were interested in, all except one showed the nice (and boring) convergence behavior. The exception (a sulfur atom) showed a clear lack of convergence; it didn’t even show any tendency toward convergence, even for our system containing almost 1000 atoms. However, unlike the other atoms we were checking, this S atom had a special role in the biomolecule: it was an active site, i.e. the atom where chemical reactions of the biomolecule with whatever other molecule or atom are expected to occur.

Because this S atom had a formal charge of -1, we bound an H atom to it and investigated this new set of fractions. In this case, the S atom, with the H atom bound to it, was no longer an active site. Lo and behold, the S atom now showed perfect convergence, like all other atoms of the central cluster. This shows us that an active site is more than an atom sitting at the right place at the right time. It is an atom which is reaching out to the world, interacting with other atoms over a very long range, drawing them in (>10 ångström = 1 nm is very far on the atomic scale; imagine being able to touch someone who is standing >20 m away from you). Unfortunately, this is rather bad news for multiscale modeling, since it means that if you want to describe such an active site accurately, you will need an extremely large central quantum mechanical region. When the active site is deactivated, on the other hand, a radius of ~0.5 nm around the deactivated site is already sufficient.

Similar to Bohr, I have the feeling that “It could be that I’ve perhaps found out a little bit about the structure of atoms.”, and it makes me happy.

Convergence of Atomic Charges with the Size of the Enzymatic Environment

Authors: Danny E. P. Vanpoucke, Julianna Oláh, Frank De Proft, Veronique Van Speybroeck, and Goedele Roos
Journal: J. Chem. Inf. Model. 55(3), 564-571 (2015)
doi: 10.1021/ci5006417
IF(2015): 3.657
export: bibtex
pdf: <J.Chem.Inf.Model.> 
Graphical Abstract: The influence of the cluster size and water presence on the atomic charge of active and inactive sites in Biomolecules.


Atomic charges are a key concept for gaining more insight into the electronic structure and chemical reactivity of a system. The Hirshfeld-I partitioning scheme, applied to the model protein human 2-cysteine peroxiredoxin thioredoxin peroxidase B, is used to investigate how large a protein fragment needs to be in order to achieve convergence of the atomic charges of both neutral and negatively charged residues. Convergence in atomic charges is rapidly reached for neutral residues, but not for negatively charged ones. This study pinpoints difficulties on the road towards accurate modeling of negatively charged residues of large biomolecular systems in a multiscale approach.