
Spring School Computational Tools: Day 2 – VASP

On this second day of our spring school, the first ab initio solid state code is introduced: VASP, the Vienna Ab initio Simulation Package.

Having worked with this code for almost a full decade, I am considered by some to be an expert, and as such I had the dubious task of providing our participants with their first contact with this code. Since all basic aspects and methods had already been introduced on the first day, I mainly focused on presenting the required input files and parameters, and on showing how these should be tweaked for some standard types of solid state calculations. Following this one-hour introduction, in which I apparently had not yet scared our participants too much, all of them turned up for the first hands-on session, where they got to play with the VASP program.

In the afternoon, we were delighted to welcome our first invited speaker, straight from the VASP headquarters: Dr. Martijn Marsman. He introduced us to advanced features of VASP going beyond standard DFT. He showed the power (and limitations) of hybrid functionals and introduced the quasi-particle approach of GW. We even went beyond GW with the Bethe-Salpeter equation (which includes electron-hole interactions). Unfortunately, these much more accurate approaches are also much more expensive than standard DFT, but work is being done on a cubic-scaling RPA implementation, which will provide a major step forward in the field of solid state science. Following this session, a second hands-on session took place, where exercises linked to these more advanced topics were provided and eagerly tried by many of the more advanced participants.

Spring School Computational Tools: Day 1

Today our one-week spring school on computational tools for materials science kicked off. During this week, Kurt Lejaeghere and I host this spring school, which we have been busily organizing over the last few months, and which is intended to introduce materials scientists to the use of four major ab initio codes (VASP, ABINIT, Gaussian and CP2K). During this first day, all participants are immersed in the theoretical background of molecular modeling and solid state physics.


Prof. Karen Hemelsoet presented a general introduction to molecular modeling, showing us which computational techniques are useful to treat problems of varying scales, both in space and time. With the focus on the modeling of molecules, she told us everything there is to know about the potential energy surface (PES) and how to investigate it using different computational methods. She discussed the differences between localized (e.g. Gaussian) and plane wave basis sets and taught us how to accurately sample the PES using both molecular dynamics and normal mode analysis. As a final topic she introduced us to the world of computational spectroscopy, showing how infrared spectra can be simulated, and the limitations of this type of simulation.

With the somewhat mysterious presentations of Prof. Stefaan Cottenier, we moved from the realm of molecules to that of solids. In his first session, he introduced density functional theory, a method ideally suited to treat extended systems at the quantum mechanical level, and showed that as much information is present in the electron density of a system as in its wave function. In his second session, we fully plunged into the world of solids, and were guided, step by step, toward a full understanding of the technical details generally found in the methods section of (ab initio) computational materials science work. Throughout this session, NaCl was used as an ever-present example, and we learned that our simple high-school picture of bonding in kitchen salt is a lie-to-children. In reality, Cl does not gain an extra electron by stealing it away from Na; instead, it is rather the Na 3s electron which lives too far away from the Na nucleus to which it belongs.

De-activating an active atom.

It could be that I’ve perhaps found out a little bit about the structure
of atoms. You must not tell anyone anything about it…
–Niels Bohr (1885 – 1965),
in a letter to his brother (1912)

Getting the news that a paper has been accepted for publication is exciting, but it can also be a little bit sad, since it indicates the end of a project. A little over a month ago we got this great news regarding our paper for the Journal of Chemical Information and Modeling. It was the culmination of a side project Goedele Roos and I had been working on, in an on-and-off fashion, over the last two years.

When we started the project, each of us had his/her own goal in mind. In my case, it was my interest in showing that my Hirshfeld-I code could handle systems which are huge from the quantum mechanical point of view. Goedele, on the other hand, was interested to see how well Hirshfeld-I charges behaved with increasing size of a molecular fraction. This is of interest for multiscale modeling approaches, for which Martin Karplus, Michael Levitt, and Arieh Warshel got the Nobel Prize in Chemistry in 2013. In such an approach, a large system, for example a solvated biomolecule containing tens of thousands of atoms, is split into several regions. The smallest, central region, containing the part of the molecule one is interested in, is studied quantum mechanically and generally contains a few dozen up to a few hundred atoms. The second shell is much larger, is described by force-field approaches (i.e. Newtonian mechanics), and can contain tens of thousands of atoms. Even further from the quantum mechanically treated core, a third region is described by continuum models.

What about the behavior of the charges? In a quantum mechanical approach, even though we still speak of electrons as if referring to classical objects, we cannot point to a specific point in space and say: “There it is”. We only have a probability distribution in space indicating where the electron may be. As such, it also becomes hard to pinpoint an atom, and to measure/calculate its charge in an absolute sense. However, because such concepts are so much more intuitive, many chemists and physicists have developed methods, with varying success, to split the electron probability distribution into atoms again. When applying such a scheme to the probability distributions of fractions of a large biomolecule, we would like the atoms at the center not to change too much when the fraction is made larger (i.e. contains more atoms). This would indicate that from some point onward you have included all atoms that interact with the central atoms. I think you can already see the parallel with the multiscale modeling approach mentioned above, where that point would indicate the boundary between the quantum mechanical and the Newtonian shell.
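
To give a flavor of what such a partitioning scheme looks like: in the Hirshfeld family of methods (of which Hirshfeld-I is the iterative, self-consistent variant), the molecular electron density at each point is divided among the atoms according to the weight of an atomic reference density, and the atomic charge then follows by integration:

$$ w_A(\mathbf{r}) = \frac{\rho^0_A(\mathbf{r})}{\sum_B \rho^0_B(\mathbf{r})}, \qquad q_A = Z_A - \int w_A(\mathbf{r})\,\rho(\mathbf{r})\,\mathrm{d}\mathbf{r} $$

In Hirshfeld-I, the atomic reference densities ρ⁰_A are themselves updated iteratively until the atomic populations they produce no longer change.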


Convergence of Hirshfeld-I charges for clusters of varying size of a biomolecule. The black curves show the charge convergence of an active S atom, while the red curves indicate a deactivated S atom.

Although we expected to merely be studying this convergence behavior for the particular partitioning scheme I had implemented, we dug up an unexpected treasure. Of the set of central atoms we were interested in, all except one showed the nice (and boring) convergence behavior. The exception (a sulfur atom) showed a clear lack of convergence; it did not even show any tendency toward convergence, even for our system containing almost 1000 atoms. However, unlike the other atoms we were checking, this S atom had a special role in the biomolecule: it was an active site, i.e. the atom where chemical reactions of the biomolecule with whatever other molecule/atom are expected to occur.

Because this S atom had a formal charge of -1, we bound an H atom to it, and investigated this new set of fractions. In this case, the S atom, with the H atom bound to it, was no longer an active site. Lo and behold, the S atom showed perfect convergence, like all other atoms of the central cluster. This shows us that an active site is more than an atom sitting at the right place at the right time. It is an atom which is reaching out to the world, interacting with other atoms over a very long range, drawing them in (>10 ångström = 1 nm is very far on the atomic scale; imagine it like being able to touch someone who is standing >20 m away from you). Unfortunately, this is rather bad news for multiscale modeling, since it means that if you want to describe such an active site accurately, you will need an extremely large central quantum mechanical region. When the active site is deactivated, on the other hand, a radius of ~0.5 nm around the deactivated site is already sufficient.

Similar to Bohr, I have the feeling that “It could be that I’ve perhaps found out a little bit about the structure of atoms.”, and it makes me happy.

Start to Fortran


Code-statistics for the hive3 code (feb.2015)

If you are used to programming in C/C++, Java or Pascal, you probably do this using an Integrated Development Environment (IDE) such as Dev-Cpp/Pascal, Netbeans, Eclipse, … There are dozens of free IDEs for each of these languages. When starting to use fortran, you are in for a bit of a surprise. There are some commercial IDEs that can handle fortran (MS Visual Studio, or the Lahey IDE). Free fortran IDEs are rather scarce and quite often are the result of the extension of a C++ focused IDE. This, however, does not make them less useful. Code::Blocks is such an IDE. It supports several programming and scripting languages, including C and fortran, making it also suited for mixed-language development. In addition, this IDE has been developed for Windows, Linux and Mac OS X, making it highly portable. Furthermore, installing this IDE combined with, for example, the gcc compiler can be done quickly and without much hassle, as is explained in this excellent tutorial. In 5 steps everything is installed and you are up and running:

  1. Get a gfortran compiler at https://gcc.gnu.org/wiki/GFortran

    Go for binaries and get the installer if you are using Windows. This will provide you with the latest build. Be careful if you are doing this while upgrading from gfortran 4.8 to 4.9 or 4.10. The latter two are known to contain a recently fixed compiler bug related to the automatic finalization of objects. A solution to this problem is given in this post.

    UPDATE 03/02/2017: As the gcc page has changed significantly since this post was written, I suggest following the procedure described here for the installation of a 64bit version of the compiler.

  2. Get the Code::Blocks IDE at http://www.codeblocks.org/ or http://cbfortran.sourceforge.net/ (preferred)

    Since version 13.12 the Fortranproject plugin is included in the Code::Blocks installation.

  3. Setup gfortran

    Run the installer obtained in step 1, i.e. keep clicking OK until all is finished.

  4. Setup Code::Blocks for fortran
    1. Run the installer or unzip the zip-file obtained in step 2.
    2. Run Code::Blocks and set your freshly installed GNU fortran compiler as the default.
    3. Associate file types with Code::Blocks. If you are not using other IDEs, this may be a good idea.
    4. Go to settings, select “Compiler and Debugger”, click on “Toolchain executables” and set the correct paths.
    5. Code::Blocks is now configured.
  5. Your first new fortran program
    1. Go to “File” → “New” → “Project”.
    2. Select “Fortran Application”.
    3. Follow the Wizard: provide a project folder and title.
    4. Make sure the compiler is set to “GNU Fortran Compiler”, and click Finish.
    5. A new project is now created, containing a main file named “main.f90”.
    6. Click “Build” to build this program, and then “Run”.
    7. Congratulations, your first Fortran program is a fact. (A minimal sketch of what this program looks like is shown below.)
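
The wizard-generated main.f90 is essentially a hello-world program. The exact content depends on the plugin version, but a minimal sketch looks like this:

program main
  implicit none  ! good practice: require all variables to be declared

  write(*,*) 'Hello, Fortran world!'
end program main

If you want to double-check the compiler installation outside the IDE, gfortran --version in a terminal should print the version you just installed, and gfortran main.f90 -o main compiles the same file by hand.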


Of course, any real project will contain many files, and when you start to create fortran 2003/2008 code, you may want to use “.f2003” or “.f03” instead of “.f90”. The Code::Blocks IDE is well suited for these tasks, and we will return to them later. Playing with this IDE is the only way to learn about all its options. Two really nice plugins are “Format Fortran Indent” and “Code statistics”. The first one can be used to auto-indent your Fortran code, making it easier to find those nasty missing “end” statements. The code statistics tool runs through your entire project and tells you how many lines of code you have, and how many lines contain comments.

The Lost Art Of Reviewing: Hints for Reviewers and Authors

The last three months have been largely dedicated to the review of publications: on the one hand, some of my own work was going through the review process, while on the other hand, I myself had to review several publications for various journals. During this time I got to see the review reports of fellow reviewers, both for my own work and for the work I had to review. Because the peer-review experience is an integral part of modern science, here are some hints for both authors and reviewers:

For reviewers:

  • Do you have time?

    When you get your first request to review a paper for a peer-reviewed journal, this is an exciting experience. It implies you are being recognized by the community as a scientist with some merit. However, as time goes by, you will see the number of these requests increase, while your available time decreases (this is a law of nature). As such, don’t be too eager to press that accept button. If you do not have time to do it this week, chances are slim you will have time next week or the week after that. Only accept when you have time to do it NOW. This ensures that you can provide a quality report on the paper under review (cf. the point below). No one will be angry if you say no once in a while. Some journals also ask if you can suggest alternate reviewers in such a case. As a group leader (or more senior scientist) this is a good opportunity to introduce more junior scientists to the review process.

    (Peer-reviewed journals are not to be mistaken for predatory journals, which present all kinds of schemes in which you pay heavily to get your publication published.)

  • Are you familiar with the subject? (material/topic, theoretical method, experimental technique, …)

    You have always been selected specifically for your qualities, which in some cases means your name came up in a Google search combining relevant keywords (not only authors but also reviewers are victims of the current publish-or-perish mentality). Don’t be afraid to decline if a paper is outside your scope of interest/understanding. In my own case, I quite often get requests to review experimental papers, which I will generally decline unless the abstract catches my interest. In such a case, it is best to let the editor know via a private note that although you will provide a detailed report, you expect an actual specialist (in my case an experimentalist) to be present among the other reviewers, who can judge the specialized experimental aspects of the work you are reviewing.

  • Review a paper without checking out the authors.

    In some fields it is normal for the review process to be double blind (the authors do not know the reviewers, and the reviewers do not know the authors); in others this is not the case. However, to be able to review a paper on its merits, try to ignore who the authors are. This should reduce bias (favorable or unfavorable), because that is the idea of science and of writing papers: it should be about the work/science, not the people who did the science.

  • Provide a useful review (positive or negative)

    Single-sentence reviews stating how good/bad a paper is only show you barely looked at it (this may be due to time constraints or the paper being outside your scope of expertise: cf. above). Although it may be nice for the authors to hear you found their work great and that it should be published immediately, it leaves a bit of a hollow sense. In case of a rejection, on the other hand, it will frustrate the authors, since they do not learn anything from such a report. So how can they ever improve the work?

  • Nobody is perfect, and neither are our papers.

    No matter how good a paper is, one can always make remarks, ranging from typographical/grammatical issues (remember, most authors are not native English speakers) to conceptual issues and aspects which may be unclear. Never be afraid to add these to your report.


For authors:

  • Do not submit a draft version of your paper.

    Although this is quite an obvious statement, there appear to be authors who just send in their draft to a high-ranking journal to get a review report, and then use this to clean up the draft and send it elsewhere. When you submit a paper you should always have the intention of having it accepted and published, and not just use the review process to point out the holes in your current work in progress.

  • Take time to create Figures and Tables.

    Some people like to make figures and tables, others don’t. If you are one of the latter, whatever you do, avoid making sloppy figures or tables (e.g. incomplete captions, missing or meaningless legends, unlabeled axes, artifacts left behind by your graphics software; if those keep appearing, switch to other graphics software). Tables and Figures are capitalized because they are a neat and easy-to-use means of transferring information from the author to the reader. In the end, it is often better not to have a figure/table than to have a bad one.

  • We are homo sapiens pan narrans

    Although as a species we are called homo sapiens (wise man), in essence we are rather pan narrans (storytelling chimpanzee). We tell stories, and have always told stories, to transfer knowledge from one generation to another. Fairy tales teach us that a dark forest is a dangerous place, while proverbs express a truth based on common sense and practical experience.

    As such, a good publication is also more than just a cold enumeration of a set of data. You can order your data to form a story-line. This can be significantly different from the order in which you did your work. You could imagine how you could have obtained the same results in a perfect world (with 20/20 hindsight that is a lot easier) and present the steps in an order that is logical in that (imaginary) perfect world. This will make it much easier for your reader to get through the entire paper. Note that it is often easier to have a story-line for a short piece of work than for a very long paper. However, in the latter case the story-line is even more important, since it will make it easier for your reader to recollect specific aspects of your work and track them down again without the need to go through the entire paper.

  • Supporting/Supplementary Information (SI) is not a place to hide your work

    Some journals allow authors to provide SI with their work. In my opinion, this should be data without which the paper can still be published. Here you can put figures/tables which present data from the publication in a different format/relation. You can also place similar data in the SI: e.g. you have a dozen samples, and you show the spectrum of one sample as a prototype in the paper, while the spectra of the other samples are placed in the SI. What you should not do is put part of the work in the SI to save space in the paper. Something I have also seen happen is so-called combined experimental-theoretical papers where the theoretical part is 95% located in the SI, and only the conclusions of the theoretical part are put in the paper itself. Neither should you do the reverse. In the end, you should ask yourself the question: would this paper be published/publishable under the same standards without the information placed in the SI? If the answer is yes, then you have placed the right information in the SI.

  • Sales: 2 papers for the price of one

    Since many, if not all, funding organisations and promotion committees use the number of publications as a first measure of the merit of a scientist, this leads to the very unhealthy idea that more publications means better science. Where the big names of 50 years ago could actually manage to have their first publication as a second-year postdoc, current-day researchers (in science and engineering) will generally not even get their PhD without at least a handful of publications. The economic notion of ever increasing profits (which is a great idea, as we know since the economic crisis of 2008) unfortunately also transpires in science, where the number of publications is the measure of profit. This sometimes drives scientists to consider publishing “Least Publishable Units”. Although it is true that it is easier to have a story-line for a short piece of work, you also lose the bigger picture. If you consider splitting your work into separate pieces, consider carefully why you do this. Should you do this? Fear that a paper will be too long is a poor excuse, since you can structure your work. Is there actually anything to gain scientifically, except one additional publication? Funding agencies claim to want only excellent work; so remind them that excellent work is not measured in simple accounting numbers.


Disclaimer: These hints reflect my personal opinion/preferences and as such might differ from the opinion/preference of your supervisor/colleagues/…, but I hope they can provide you with an initial guide in your own relation to the peer-review process.

Fortran: Tales of the Living Dead?

Fortran, just like COBOL (not to be confused with cobold), is a programming language which is most widely known for its predicted demise. It has been around for over half a century, making it the oldest high level programming language. Due to this age, it is perceived by many to be a language that should have gone the way of the dinosaur a long time ago; however, it persists, despite futile attempts to retire it. One of the main concerns is that the language is not up to date, and that there are so many nice high level languages which allow you to do things much more easily and “as fast”. Before we look into that, it is important to know that there are roughly two types of programming languages:

  1. Compiled languages (e.g. Fortran, C/C++, Pascal)
  2. Interpreted and scripting languages (e.g. Java, PHP, Python, Perl)

The former languages result in binary code that is executed directly on the machine of the user. These programs are generally speaking fast and efficient. Their drawback, however, is that they are not very transferable (different hardware, e.g. 32 vs. 64 bit, tends to be a problem). In contrast, interpreted languages are ‘compiled/interpreted’ at run-time by additional software installed on the machine of the user (e.g. the JVM: Java Virtual Machine), making these scripts rather slow and inefficient, since they are reinterpreted on each run. Their advantage, however, is their transferability and ease of use. Note that Java is a bit of a borderline case; it is not entirely a full programming language like C and Fortran, since it requires a JVM to run, but it is also not a pure scripting language like Python or bash, where the language is mainly used to glue other programs together.


Fortran has been around since the age of the dinosaurs. (via Onionesque Reality)

Now let us return to Fortran. As seen above, it is a compiled language, making it pretty fast. It was designed for scientific number-crunching purposes (FORTRAN comes from FORmula TRANslation) and as such it is used in many numerical high performance libraries (e.g. (Sca)LAPACK and BLAS). The fact that it appeared in 1957 does not mean nothing has happened since. Over the years the language evolved from a procedural language, in FORTRAN II, to one also supporting modern Object Oriented Programming (OOP) techniques, in Fortran 2003. It is true that new techniques were introduced later than was the case in other languages (e.g. the OOP concept), and many existing scientific codes contain quite some “old school” FORTRAN 77. This gives the impression that the language is rather limited compared to a modern language like C++.

So why is it still in use? Due to its age, many (numerical) libraries written in Fortran exist, and due to its performance in an HPC environment. This last point is also a cause of much debate between Fortran and C adepts. Which one is faster? This depends on many things. Over the years, compilers aiming at speed have been developed for both languages. In addition to the compiler, the programming skills of the scientist writing the code are also important. As a result, comparative tests end up showing very little difference in performance [link1, link2]. In the end, for scientific programming, I think the most important aspect to consider is the fact that most scientists are not as good at programming as they would like to think (author included), and as such, the difference between C(++) and Fortran speeds for new projects will mainly be due to this lack of skills.

However, if you have no previous programming experience, I think Fortran may be easier and safer to learn: you cannot play with pointers as is possible in C(++) and Pascal, which is a good thing, and you are required to declare your variables, another good coding practice. (Okay, you can use implicit typing in theory, but more or less everybody will advise against this, since it is bad coding practice.) It is also easier to write more or less bug-free code than it is in C(++) (remember defining a global constant PI and ending up with the integer value 3 instead of 3.1415…). Also, its long-standing procedural setup keeps things a bit more simple, without the need to dive into the nitty-gritty details of OOP, where you should know that you are handling pointers (this may be news for people used to Java, and explains some odd unexpected behavior) and to get to grips with concepts like inheritance and polymorphism, which, in my opinion, are rather complex in C++.
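
As a small illustration of the two good practices just mentioned (a minimal sketch, not taken from any particular code):

program constants
  implicit none  ! disable implicit typing: every variable must be declared

  ! a double precision kind and a properly typed constant for pi;
  ! accidentally declaring PI as an integer would give you the classic
  ! value-3 bug alluded to above
  integer, parameter :: dp = selected_real_kind(15)
  real(dp), parameter :: PI = 3.141592653589793_dp

  write(*,*) 'pi = ', PI
end program constants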

In addition, Fortran allows you to grow while retaining ‘old’ code. You can start out with a simple procedural design (Fortran 95) and move toward Object Oriented Programming (Fortran 2003) easily. My own Fortran code is a mixture of Fortran 95 and Fortran 2003. (Note for those who think code written using OOP is much slower than procedural programming: you should set the relevant compiler flags, like -ipo.)
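
For example (hypothetical command lines; -ipo is the Intel compiler’s interprocedural-optimization flag, -flto being the closest gfortran counterpart):

ifort -O2 -ipo mycode.f90 -o mycode      # Intel: interprocedural optimization
gfortran -O2 -flto mycode.f90 -o mycode  # GNU: link-time optimization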

In conclusion, we end up with a programming language which is fast, if not the fastest, and contains most modern features (like OOP). Unlike some more recent languages, it has a more limited user base, since it is not extensively used for commercial purposes, leading to slower development of the compilers (though these are moving along nicely, and will probably still be moving along nicely when most of the new languages have been forgotten again). Tracking the popularity of programming languages is a nice pastime, which will generally show you C/C++ among the most popular languages, while languages like Pascal and Fortran dangle somewhere around 20th-40th position, and remain there over long timescales.

The fact that Fortran is considered rather obscure by proponents of newer scripting languages like Python can lead to slightly funny comments like: “Why don’t you use Python instead of such an old and deprecated language? It is so much easier to use, and with the NumPy and SciPy libraries you can also do number-crunching.” First of all, Python is a scripting language (which in my mind unfortunately puts it at about the same level as HTML and CSS 🙂 ), but more interestingly, those libraries are merely wrappers around C wrappers around FORTRAN 77 libraries like MINPACK. So people suggesting Python over Fortran 95/2003 code are actually suggesting FORTRAN 77 over more recent Fortran standards. Such comments just put a smile on my face. 😀 With all this in mind, I hope to show in this blog that modern Fortran can tackle all challenges of modern scientific programming.

39th ICACC: Day 3-5

During the last three days of the conference, the virtual materials design session took place. This session was specifically focused on computational materials design. Because of this focus, the attendance was rather low: mainly computational materials scientists. Apparently, this type of specialized focus on computational work is the best way not to reach the general experimental public in the same field. As a computational scientist, the only way to circumvent this is by applying for a presentation in a relevant experimental session. This requires you to make a less technical presentation, but that is not a bad thing, since it forces you to think about your results and understand them in more simple terms.

An example of such a presentation was given by Dr. Ong, who discussed his high-throughput ab initio setup for designing solid state electrolytes. He showed that a material (Li10GeP2S12) which was thought to be a 1D conductor is actually a 3D conductor; however, the Li conductivity in the directions perpendicular to the 1D direction is ~100 times smaller, explaining why it was not noticed before. He also presented newly predicted materials for Li transport, which led to the standard experimental remark that such computational predictions mean very little, since they do not take temperature into account, and as such these structures may not be stable. Although such remarks are “in theory” true, and are a nice example of the lack of understanding of computational work outside the computational community, in this case the material the experimental researcher was referring to had recently been synthesized and found to be stable.

On Thursday morning, I had the opportunity to give my contributed presentation. In contrast to my invited presentation, this one was solely focused on doped cerium dioxide. Using a three-step approach, we investigated the different contributions of the dopants to the modification of the mechanical properties of CeO2. In the first step, we looked at group IV dopants, since Ce has an oxidation state of +IV in CeO2. Here we showed that the character of the valence electrons (p or d) plays an important role with regard to stability and mechanical properties. In the second step, dopants with an oxidation state different from +IV were considered, without the presence of oxygen vacancies. In this case, the same trends and behavior are observed as in the first step. In the third and last step, oxygen vacancies were also included. We showed that oxygen vacancies have a stabilizing influence on the doped system. Furthermore, the oxygen vacancies make CeO2 mechanically softer, i.e. they reduce the bulk modulus.

In the afternoon, Prof. Frederico Rosei, of the NanoFemto lab in Canada, gave an entertaining lecture on “Mentorship for young scientists: Developing scientific survival skills”. It presented an interesting forum in which to find out that, as scientists, we all seem to struggle with the same things. We want to do what we like (research), and invest a lot in this; unfortunately, external forces (the struggle for funding/job security) complicate life. Frederico centered his lecture around three goals a young scientist should always try to be aware of:

  • Know yourself
  • Plan ahead
  • Find a mentor

Although these are lofty goals, they tend to be quite non-trivial in the current-day scientific environment. Finding a mentor, i.e. a senior scientist with time on his/her hands who is not involved in your projects, is a bit like looking for a unicorn. Unlike the unicorn, they do exist, but there are very, very few of them (how many professors do you know with spare time?). Planning ahead, and following your own plans, is nice in theory; unfortunately, a young scientist’s life (i.e. that of everyone below tenured professorship) tends to be ruled by funding in a kind of life-or-death setup. I am not saying this is not the case for tenured professors; however, it is not their own life and death. For all other scientists: no funding = no job = end of scientific career. As such, the pressure to publish is high (yes, funding agencies only count your papers, not how good/bad they are, even though the official statement says otherwise), and this will have a detrimental influence on the quality of science and of what is being published (if it isn’t already the case). I truly wish the world could be as Prof. Rosei envisions it. Back to more happy subjects.

Friday was the last day of the conference. In the morning I again attended the virtual materials design session, but as with all other sessions, several presentations were canceled; apparently snow was wreaking havoc at the New York airports, preventing several presenters from making it. Luckily, Eva Zarkadoula made it to the conference to present her very nice modeling work: “Molecular Dynamic Simulations of Synergistic Effects in Ion Track Formation”. Using classical molecular dynamics simulations, she showed how incoming high energy radiation traces a path through a material, allowing one to use this material as a detector. In contrast to what I would have imagined, perfectly crystalline material shows very little damage after the radiation has passed through. Even though initially a clear trace is visible, the system appears to relax back to a more or less perfect crystalline solid, making it a rather poor sensor material. However, if defects are present in the material, the track made by the radiation remains clearly visible. An extremely nice bonus to this work is the fact that direct comparison to experiments is possible.

The conference ended at noon, leaving some time to have a walk on the beach, find some souvenirs, and have a last dinner with colleagues from the conference.


39th ICACC: Day 2

Today I had to get to work myself, unlike the first day of the ICACC, where I was only a spectator. At 8h30 I kicked off the GYIF as the first speaker in a session called “Theoretical Modeling and Applications”, with my presentation: Computational Materials Science: Where Theory meets Experiment. Intended for a general audience of (expectedly mainly) experimental materials scientists, this presentation aimed to show the audience that computational materials science has in the last decades developed to the level where results relevant for real-life applications in materials science can be obtained. For this I used three examples of my own work: (1) Pt nanowires on Ge(001), where I used simulated STM to build an accurate atomistic model of these wires, (2) doped cerium oxides, where the influence of doping on the lattice parameter and thermal expansion coefficients was studied for the purpose of matching them to those of other materials, and (3) Metal-Organic Frameworks, where I showed that the spin configuration of the MIL-47(V) MOF is linked to the transition pressure inducing breathing.

Later in the morning, I had the pleasure of chairing the session “Additive Manufacturing” together with Valerie Wiesner from NASA. During this session, Mahref Vali and Lisa Rueschoff presented their most recent work on 3D printing of ceramic materials, a technique which will allow the printing of ceramic components in the future. The third speaker of this session, Rumi Kitazawa, delighted us with an inspiring talk on the “Engineering applications of Menger sponges”. A Menger sponge is a fractal related to the Cantor set; as such, a fully developed Menger sponge cannot be built with any real material, but using 3D printing it is possible to build a structure with Menger-sponge-like features (i.e. with holes down to a certain size). By comparing experimental stress tests on such 3D-printed systems with calculations of the strain energy in such a structure, Rumi was able to show that these Menger materials have a peculiar, albeit very organised, strain pattern along the main diagonal of the material. The combination of large and small pores present in these Menger sponge materials may make this behavior relevant for MOFs (and other porous materials), where the large pores reflect inter-grain pores, while the small pores are the pores of the MOF itself. So this is definitely a topic to remember.

In addition to presentations, conferences also contain social events. Today, there were two: at noon there was a luncheon organized by the GYIF, where the young investigators could mix with people from industry and senior group leaders. In the evening there was the first poster session and booth stands, where companies try to sell their services and lab equipment. Somehow, as a theoretician, I am always a bit at a loss at such events. To draw in more people, there was also a shot-glass contest. No, the goal was not to drink as much as possible, but to build a protective structure around a shot-glass using only 15 drinking straws (no tape, wire, paper, staples, … allowed). To find the most protective structure, the shot-glass-and-straw constructions were dropped from various heights. Twenty-four teams started at a drop height of 3 feet (~1 m), where the first shot-glass already didn’t survive the drop. Every round, the drop height was increased by 3 feet. For the drop of 20 feet (~6 m) only two teams remained, including ours. Although our glass survived its first bounce, the second bounce unfortunately broke it (darn). Then it was the turn of our remaining competitor, whose shot-glass exploded into shards on first impact. Officially the result was a draw, although it is clear our construct had the upper hand 😎 .


Our mixed Theoretical-Experimental “international-multi-university” research team. Left to right: me (UGhent, Belgium), Bert Conings (UHasselt, Belgium), and Chenxin Jin (Dalhousie University, Canada)
On the right hand side, you can see our construct of straws around the shot glass.

39th ICACC: Day 1

Today the 39th International Conference and Exposition on Advanced Ceramics and Composites (ICACC) started in Daytona Beach, Florida. Here, scientists from all over the world will be discussing their latest work and findings in the field of ceramic materials during the coming week. Although my project on ceramic materials was completed over 2 years ago, culminating in a second PhD, I was invited to present my work here. As with many conferences of this kind, I am afraid that, as a computational materials scientist, I belong to a minority, strongly outnumbered by the experimental (materials) scientists present. This is an aspect that I will need to consider when preparing my presentations.

The first morning session consisted of four plenary lectures (general overview presentations in which celebrated group leaders present the overall picture of the work done in their group and their hopes/views on the future). We started off with an interesting lecture on “Thermal Barrier Coatings for Gas Turbines” by Prof. David Clarke, where we learned that, since a major part of the worldwide energy production is gas based, improving gas-turbine efficiency by only 1% would produce more energy than all renewable production currently in play. This efficiency improvement can be obtained by operating the gas turbines at higher temperatures. Unfortunately, the metal fans of such a turbine start to degrade if temperatures are too high. By coating them with materials that have a low thermal conductivity, it is possible to operate at the required high temperatures, while the metal fans experience an acceptable operating temperature which is a few hundred degrees lower. Next, Prof. Sanjay Mathur from the University of Cologne presented his group’s work on the development of precursor libraries (these ideas are similar to those behind computational high-throughput projects). In precursor chemistry, where changing functional groups or doping leads to changes in the surface morphology, such libraries would present an interesting tool for designing new materials for energy and health applications. As this apparently is a very hot topic, the fire alarm of the conference center went into overdrive and everyone needed to be evacuated. When the conference resumed half an hour later, Prof. Mathur reassured us he had not intended to fire things up like this. His presentation was followed by that of Prof. Cato Laurencin, who showed us how ceramic materials could be used in a new field he wishes to launch: “regenerative engineering”. Here, combinations of micro- and nanostructured ceramics are used as matrices to grow and differentiate stem cells intended to heal fractured bones and cartilage. The final presentation, by Prof. Kazushige Ohno, discussed next generation filters for diesel particulates, which should provide us with a cleaner future.

In the afternoon the parallel symposia started, where I followed the 4th Global Young Investigator Forum (GYIF). Here, Prof. Ricardo Castro presented an interesting method for experimentally obtaining the surface energy of nanoparticles. His quest originated from the simple observation that existing phase diagrams for bulk materials no longer hold when one is working with nanoparticles. In such systems, the energy contributions due to the surface of the particle become comparable to those of the inner bulk. Interestingly, one of the example systems Prof. Castro looked into was Mn3+ doped CeO2. In his work he found that the Mn was mainly located at the surface of the CeO2 particles, something I also expected from my own work on aliovalent doped CeO2, based on the defect formation energies of Cu and Co doping. Further presentations discussed organometal trihalide perovskite solar cells. Although these solar cells are still rather unstable, they do show promise with regard to their efficiency. (The origin of this efficiency is unfortunately not really understood. Maybe other perovskite MOFs are more stable?)

CA: Coders Anonymous

It has been 2 weeks since I last wrote some code; today I started again.

Two weeks ago I started the, for me, daunting task of upgrading my IDE and compiler to their most recent versions. The upgrade itself went smoothly, since it basically consisted of uninstalling the old versions and installing the new ones. The big finale, recompiling my fortran codebase, went just a little bit less smoothly. It crashed straight into a compiler bug, nicely introduced in version 4.9 of the gcc fortran compiler, and carefully nurtured up to the version 4.10 I had just installed. The bug announces itself as follows:

error: internal compiler error: in gfc_conv_descriptor_data_get, at fortran/trans-array.c:145
end module TPeriodicTableModule

Clear sounding as it is, it required some further investigation to find out what the problem actually was, and if and how it could be resolved. The problem appeared to be a rather simple one: the compiler seems to be unable to generate the finalization code for some object-based constructions involving both fixed-size and allocatable arrays, the strong suit of the fortran language. A minimal example allowing you to bump into this compiler bug goes as follows:


Bug 59765   
module bug59765
  type TSubObject
    integer, dimension(:), allocatable :: c
  end type TSubObject
  type TObject
    type(TSubObject), dimension(1) :: u
  end type TObject
contains

  subroutine add(s)
    class(TObject), intent(inout) :: s
  end subroutine add

end module bug59765


The issue arises when the compiler tries to set up the deallocation of the allocatable arrays of the TSubObject elements of the array u. Apparently, the combination of the static array u and the allocatable arrays c in the elements of u results in confusion. It was suggested that the compiler wants to perform the deallocation procedure as an array operation (one of the neat tricks fortran has up its sleeve):

deallocate(s%u(:)%c)

Instead, it should just use a normal do-loop and run over all elements of u.
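
In other words, the compiler should generate something along these lines (a sketch of the element-wise clean-up, assuming an integer loop counter i):

do i = 1, size(s%u)
  if (allocated(s%u(i)%c)) deallocate(s%u(i)%c)
end do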

One of the main ironies of this story is that this bug is strongly connected to object oriented programming, a rather new concept in the world of fortran. Although introduced in fortran 2003, more than 10 years ago, compiler support for these features has only reached basic maturity in recent years. The problem we are facing here is one in the destructor of an object: the smart compiler wants to make our life easy and implicitly create a good destructor for us. As with most smart solutions of modern day life, such things have a tendency to fail when you least expect it.

However, this bug (and the fact that it persists in more recent versions of the compiler) forces us to employ a good coding practice: write a destructor yourself. Where C++ has implemented keywords for both constructor and destructor, the fortran programmer, as yet, only has a keyword for a destructor: final. This finalization concept was introduced in the fortran 2003 standard as part of the introduction of the Object Oriented Programming paradigm. A final procedure/function also works slightly differently from what you may be used to in, for example, C++: it is not directly callable by the programmer as an object-bound function/procedure. A final procedure/function is only called automatically when an object is destroyed. So for those of us who also implement ‘free()‘ procedures to clean up objects at runtime, this means some extra work may be needed (I haven’t checked this in detail).

So how is our example cured of bug 59765? Through the introduction of our own destructor:

module fixbug59765
  type TSubObject
    integer, dimension(:), allocatable :: c
  contains
    final :: destroy_TSubObject
  end type TSubObject
  type TObject
    type(TSubObject), dimension(1) :: u
  end type TObject
contains

  subroutine add(s)
    class(TObject), intent(inout) :: s
  end subroutine add

  subroutine destroy_TSubObject(this)
    type(TSubObject) :: this ! note: the argument needs to be a type, not a class

    if (allocated(this%c)) deallocate(this%c)
  end subroutine destroy_TSubObject

end module fixbug59765

In my own code, both the TSubObject and TObject classes got their own final procedure, due to the slightly higher complexity of the objects involved. The resulting code compiled without further complaints, and what is more, it also still compiled with the recent Intel ifort compiler. Unfortunately, final procedures are only included in the gcc compiler since version 4.9, making code containing them incompatible with gcc version 4.8 and earlier.