Tag: computational materials science

Parallel Python?

As part of my machine learning research at AMIBM, I recently ran into the following challenge: “Is it possible to do parallel computation using Python?” It sent me on a rather long and arduous journey, with the final answer being something like: “very reluctantly”.

Python was designed with one specific goal in mind: make it easy to implement small test programs to see if an idea is worth pursuing. This gave rise to a scripting language with a lot of flexibility, but also with significant limitations, most of which the “intended” user would never encounter. However, as a consequence of its success, many are now using it far beyond this original scope (yours truly as well 🙂 ).

Python offers various libraries to parallelize your scripts, most of them wrappers adding minor additional functionality. However, digging down to the bottom, one generally ends up at one of the following two libraries: the threading module and the multiprocessing module.

Of course, as with many things Python, there is a huge number of tutorials available, many of great quality.

import threading

Programmers experienced in a programming language such as C/C++, Pascal, or Fortran may be familiar with the concept of multi-threading. With multi-threading, a CPU allows a program to distribute its work over multiple program threads, which can be performed in parallel by the different cores of the CPU (or while a core is idle, e.g., because a thread is waiting for data to be fetched). One of the most famous APIs for writing multi-threaded applications is OpenMP. In the past I used it to parallelize my Hirshfeld-I implementation and the phonon-module of HIVE.

For Python, there is no implementation of the OpenMP API; instead, there is the threading module. This provides access to the creation of multiple threads, each able to perform its own tasks while sharing data-objects. Unfortunately, Python also has the Global Interpreter Lock (GIL for short), which allows only a single thread to access the interpreter at a time. This effectively reduces thread-based parallelization to a complex way of running code serially.
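You can see the GIL in action with a minimal sketch like the one below: a CPU-bound toy function (the function and the timings are purely illustrative) is run twice serially and twice in threads. On a standard CPython interpreter the threaded version is not faster, exactly as described above.

```python
import threading
import time

def cpu_task(n: int) -> int:
    # purely CPU-bound work: the GIL prevents two of these running at once
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 2_000_000

# serial: run the task twice, one after the other
t0 = time.perf_counter()
cpu_task(N)
cpu_task(N)
serial = time.perf_counter() - t0

# "parallel": the same two tasks in two threads
t0 = time.perf_counter()
threads = [threading.Thread(target=cpu_task, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - t0

print(f"serial:   {serial:.2f} s")
print(f"threaded: {threaded:.2f} s")  # roughly the same as serial, courtesy of the GIL
```

For I/O-bound tasks (waiting on files, sockets, …) threads do help, since the GIL is released while waiting; it is only CPU-bound work that gets serialized.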

For more information on “multi-threading” in Python, you can look into this tutorial.

import multiprocessing

In addition to the threading module, there is also the multiprocessing module. This module side-steps the GIL by creating multiple processes, each having its own interpreter. This, however, comes at a cost. Firstly, there is a significant computational cost to starting the different processes. Secondly, objects are not shared between processes, so additional work is needed to collect and share data.

Using the “Pool” class, things are somewhat simplified, as can be seen in the code fragment below. With the Pool class one creates a set of worker processes available to your program. Then, through the apply_async function, it is possible to run tasks in parallel. (Note that you need to use the “async” version of the function, as otherwise you end up running things serially… again.)

import multiprocessing as mp

def doOneRun(id: int):  # trivial function to run in parallel
    return id**3

num_workers = 10  # number of processes
NRuns = 1000      # number of runs of the function doOneRun

pool = mp.Pool(processes=num_workers)  # create a pool of processes
drones = [pool.apply_async(doOneRun, args=(nr,)) for nr in range(NRuns)]  # and run things in parallel; note args must be a tuple

for drone in drones:  # and collect the data
    Results.collectData(drone.get())  # Results.collectData is a function you write to recombine the separate results into a single result and is not given here.

pool.close()  # close the pool...no new tasks can be run on any of the processes
pool.join()   # wait for all worker processes to finish
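For completeness, the same pattern can be written a bit more compactly with Pool as a context manager and pool.map, which handles the close/join bookkeeping for you. This is a minimal sketch, not the code used in my research; the worker-count of 4 is arbitrary.

```python
import multiprocessing as mp

def doOneRun(id: int) -> int:  # same trivial function as above
    return id**3

if __name__ == "__main__":
    # the context manager calls pool.close() and pool.join() for us
    with mp.Pool(processes=4) as pool:
        results = pool.map(doOneRun, range(1000))  # blocks until all results are in
    print(results[:5])  # → [0, 1, 8, 27, 64]
```

Note the `if __name__ == "__main__"` guard: on platforms that spawn rather than fork new processes (e.g., Windows), it is mandatory, as each worker re-imports the main module.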

 

How many cores does my computer have?

If you are used to HPC applications, you always want to get as much out of your machine as possible. With regard to parallelization this often means making sure no CPU cycle is left unused. In the example above we manually selected the number of processes to spawn. However, would it not be nice if the program itself could just set this value to be equal to the number of physical cores accessible?

Python has a large number of functions claiming to do just that. A few of them are given below.

  • multiprocessing.cpu_count(): returns the number of logical cores it can find. So if you have a modern machine with hyper-threading technology, this will return a multiple of the number of physical cores (and you will be over-subscribing your CPU).
  • os.cpu_count(): same as multiprocessing.cpu_count().
  • psutil.cpu_count(logical=False): this implementation gives the same default behavior; however, the logical parameter allows the function to return the correct number of physical cores, but only for a single CPU. Indeed, a single CPU: HPC architectures which contain multiple CPUs per node will again return an incorrect number, as the implementation makes use of a Python “set”, and as such doesn’t increment for the same-index core on a different CPU.

In conclusion, there seems to be no simple way to obtain the correct number of physical cores using Python, and one is forced to provide this number manually. (If you do have knowledge of such a function which works in both Windows and Unix environments, and on both desktop and HPC architectures, feel free to let me know in the comments.)

All in all, it is technically possible to run code in parallel using Python, but you have to deal with a lot of Python quirks, such as the GIL.

Permanent link to this article: https://dannyvanpoucke.be/parallel-python-en/

Casting Keynotes: The Virtual Lab

Last Tuesday, I had the pleasure of competing in the Casting Keynotes competition of the TEDx UHasselt chapter. An evening filled with interesting talks on subjects ranging from the FAIR principles of open data (by Liebet Peeters) to the duty not to stay silent in the face of “bad ideas” and leading a life of purpose. An interesting presentation was the one by Ann Bessemans on visual prosody to improve reading skills in young children as well as the reading experience, more specifically the transfer of non-literal content, for non-native speakers. There was also time for some humor, with the dangerous life of Tim Biesmans, who suffers from peanut allergies. For him, death lurks around every corner, even in a first date’s kiss. During my talk, I traced the evolution of computational research as the third paradigm of scientific discovery, showing you can find computational research in every field, and why it is evolving at breakneck speed.

During the event, both the public and a jury voted on the best presentation, which would then have to present at the TEDx UHasselt in 2020.

And the Winner is …drum roll… Danny Vanpoucke!

So this story will continue during the 2020 TEDx event at UHasselt, and I hope to see you there 🙂

Casting Keynotes

top: Full action shots of my presentation. Moore’s Law as driving force behind computational research, and pondering the meaning of Artificial Intelligence. Bottom: Yes, I won 🙂

 

Permanent link to this article: https://dannyvanpoucke.be/casting-keynotes-the-virtual-lab/

Universiteit Van Vlaanderen

A bit over a month ago, I told you about my adventure at the film studio of “de Universiteit Van Vlaanderen“. Today is the day the movie is officially released. You can find it at the website of de Universiteit Van Vlaanderen: Video. The video is in Dutch, as this is a science-communication platform aimed at the local population, presenting the expertise available at our local universities.

 

In addition to this video, I was asked by Knack magazine to write a piece on the topic presented. As computational research is my central business I wrote a piece on the subject introducing the general public to the topic. The piece can be read here (in Dutch).

And of course, before I forget, this weekend there was also the half-yearly daylight-saving exercise with our clocks.

 

Permanent link to this article: https://dannyvanpoucke.be/universiteit-van-vlaanderen/

Can Europium Atoms form Luminescent Centres in Diamond: A combined Theoretical-Experimental Study

Authors: Danny E. P. Vanpoucke, Shannon S. Nicley, Jorne Raymakers, Wouter Maes, and Ken Haenen
Journal: Diam. Relat. Mater. 94, 233-241 (2019)
doi: 10.1016/j.diamond.2019.02.024
IF(2019): 2.650
export: bibtex
pdf: <DiamRelatMater>

 

Graphical Abstract: Spin polarization around the various Eu-defect models in diamond. Blue and red represent the up and down spin channels respectively.

Abstract

The incorporation of Eu into the diamond lattice is investigated in a combined theoretical-experimental study. The large size of the Eu ion induces a strain on the host lattice, which is minimal for the Eu-vacancy complex. The oxidation state of Eu is calculated to be 3+ for all defect models considered. In contrast, the total charge of the defect-complexes is shown to be negative: -1.5 to -2.3 electron. Hybrid-functional electronic-band-structures show the luminescence of the Eu defect to be strongly dependent on the local defect geometry. The 4-coordinated Eu substitutional dopant is the most promising candidate to present the typical Eu3+ luminescence, while the 6-coordinated Eu-vacancy complex is expected not to present any luminescent behaviour. Preliminary experimental results on the treatment of diamond films with Eu-containing precursor indicate the possible incorporation of Eu into diamond films treated by drop-casting. Changes in the PL spectrum, with the main luminescent peak shifting from approximately 614 nm to 611 nm after the growth plasma exposure, and the appearance of a shoulder peak at 625 nm indicate the potential incorporation. Drop-casting treatment with an electronegative polymer material was shown not to be necessary to observe the Eu signature following the plasma exposure, and increased the background luminescence.

Permanent link to this article: https://dannyvanpoucke.be/paper_eudopingdrmspecial2018-en/

Universiteit Van Vlaanderen: Will we be able to design new materials using our smartphone in the future?

Yesterday, I had the pleasure of giving a lecture for the Universiteit van Vlaanderen, a science-communication platform where Flemish academics are asked to answer “a question related to their research“. The question is meant to be highly clickable and very much simplified. The lecture, on the other hand, is aimed at a general lay public.

I built my lecture around the topic of materials simulations at the atomic scale. This task ended up being rather challenging, as my computational research has very little direct overlap with the everyday life of the average person. I deal with supercomputers (which these days tend to be benchmarked in terms of smartphone power) and the quantum mechanical simulation of materials at the atomic scale, two topics which may ring a bell… but only as abstract topics people may have heard of.

Therefore, I crafted a story taking people on a fast ride down the rabbit hole of my work. Starting from the almost divine power of the computational materials scientist over his theoretical sample, over the reality of nano-scale materials in our day-to-day lives, past the relative size of atoms, and through the game-like nature of simulations and the salvation of computational research by grace of Moore’s Law… to the conclusion that in 25 years, we may be designing the next generation of CPU materials on our smartphone instead of a TIER-1 supercomputer. …did I say we went down the rabbit hole?

The television experience itself was very exhilarating for me. Although my actual lecture took only 15 minutes, the entire event took almost a full day. Starting with preparations and a trial run in the afternoon (for me and my 4 colleagues), followed by make-up (to make me look pretty on television 🙂 … or just to reduce my reflectance). In the evening we had a group dinner, meeting the people who would be in charge of the technical aspects and the entertainment of the public. And then it was 19h30. Tensions started to grow. The public entered the studio, and the show was ready to start. Before each lecture, there was a short interview to test sound and light, and introduce us to the public. As the middle presenter, I had the comfortable position of not being the first, so I could get an idea of how things went for my colleagues, and not being the last, which can be really destructive for your nerves.

At 21h00, I was up…

and down the rabbit hole we went. 

 

 


Full periodic table, with all elements presented with their relative size (if known) created for the Universiteit van Vlaanderen lecture.

 

Permanent link to this article: https://dannyvanpoucke.be/universiteit-van-vlaanderen-en/

New year’s resolution

A new year, a new beginning.

For most people this is a time of making promises, starting new habits or stopping old ones. In general, I forgo making such promises, as I know they turn out to be idle within a mere few weeks, without external stimulus or any real driving force.

In spite of this, I do have a new year's resolution for this year: I am going to study machine learning and use it for any suitable application I can get my hands on (which will mainly be materials science, but one never knows). I already have a few projects in mind, which should help me stay focused and on track. With some luck, you will be reading about them here on this blog. With some more luck, they may even end up being part of an actual scientific publication.

But first things first: learn the basics (beyond hearsay about how excellent and world-improving AI is/will be). What are the different types of machine learning available? Is it all black box, or do you actually have some control over things? Is it a kind of magic? What’s up with all these frameworks (isn’t there anyone left who can program?), and why the devil do they all seem to be written in a scripting language (Python) instead of a proper programming language? A lot of questions I hope to see answered. A lot of things to learn. Let’s start by building some foundations… the old-fashioned way: by studying a book, with real paper pages!

Happy New Year, and best wishes to you all!

Permanent link to this article: https://dannyvanpoucke.be/new-years-resolution-en/

Synthesis, characterization and thermodynamic stability of nanostructured ε-iron carbonitride powder prepared by a solid-state mechanochemical route

Authors: Seyyed Amin Rounaghi, Danny E. P. Vanpoucke, Elaheh Esmaeili, Sergio Scudino, and Jürgen Eckert
Journal: J. Alloys Compd. 778, 327-336 (2019)
doi: 10.1016/j.jallcom.2018.11.007
IF(2019): 4.650
export: bibtex
pdf: <JAlloysCompd>

Abstract

Nanostructured epsilon iron carbonitride (ε-Fe3CxN1-x, x ∼ 0.05) powder with high purity (>97 wt%) was synthesized through a simple mechanochemical reaction between metallic iron and melamine. Various characterization techniques were employed to investigate the chemical and physical characteristics of the milling intermediates and the final products. The thermodynamic stability of the different phases in the Fe-C-N ternary system, including nitrogen and carbon doped structures, were studied through density functional theory (DFT) calculations. A Boltzmann-distribution model was developed to qualitatively assess the stability and the proportion of the different milling products vs. milling energy. The theoretical and experimental results revealed that the milling products mainly comprise the ε-Fe3CxN1-x phase with a mean crystallite size of around 15 nm and a trace of amorphous carbon material. The thermal stability and magnetic properties of the milling products were thoroughly investigated. The synthesized ε-Fe3CxN1-x exhibited thermal stabilities up to 473 K and 673 K in air and argon atmospheres, respectively, and soft magnetic properties with a saturation magnetization of around 125 emu/g.

Permanent link to this article: https://dannyvanpoucke.be/paper_epsilonfenc-en/

Predicting Partial Atomic Charges in Siliceous Zeolites

Authors: Jarod J. Wolffis, Danny E. P. Vanpoucke, Amit Sharma, Keith V. Lawler, and Paul M. Forster
Journal: Microporous Mesoporous Mater. 277, 184-196 (2019)
doi: 10.1016/j.micromeso.2018.10.028
IF(2019): 4.551
export: bibtex
pdf: <MicroporousMesoporousMater>

 

Graphical Abstract: Partial charges in zeolites for force fields.

Abstract

Partial atomic charge, which determines the magnitude of the Coulombic non-bonding interaction, represents a critical parameter in molecular mechanics simulations. Partial charges may also be used as a measure of physical properties of the system, i.e. covalency, acidic/catalytic sites, etc. A range of methods, both empirical and ab initio, exist for calculating partial charges in a given solid, and several of them are compared here for siliceous (pure silica) zeolites. The relationships between structure and the predicted partial charge are examined. The predicted partial charges from different methods are also compared with related experimental observations, showing that a few of the methods offer some guidance towards identifying the T-sites most likely to undergo substitution or for proton localization in acidic framework forms. Finally, we show that assigning unique calculated charges to crystallographically unique framework atoms makes an appreciable difference in predicting N2 and O2 adsorption with common dispersion-repulsion parameterizations.

Permanent link to this article: https://dannyvanpoucke.be/paper_hizeolites_2018-en/

Book chapter: Computational Chemistry Experiment Possibilities

Authors: Bartłomiej M. Szyja and Danny Vanpoucke
Book: Zeolites and Metal-Organic Frameworks, (2018)
Chapter: Ch. 9, p. 235-264
Title: Computational Chemistry Experiment Possibilities
ISBN: 978-94-629-8556-8
export: bibtex
pdf: <Amsterdam University Press>
<Open Access>

 

Zeolites and Metal-Organic Frameworks (the hard-copy)

Abstract

Thanks to a rapid increase in the computational power of modern CPUs, computational methods have become a standard tool for the investigation of physico-chemical phenomena in many areas of chemistry and technology. The area of porous frameworks, such as zeolites, metal-organic frameworks (MOFs) and covalent-organic frameworks (COFs), is no different. Computer simulations make it possible not only to verify the results of experiments, but even to predict previously nonexistent materials that will present the desired experimental properties. Furthermore, computational research of materials provides the tools necessary to obtain fundamental insight into details that are often not accessible to physical experiments.

The methodology used in these simulations is quite specific because of the special character of the materials themselves. However, within the field of porous frameworks, density functional theory (DFT) and force fields (FF) are the main actors. These methods form the basis of most computational studies, since they allow the evaluation of the potential energy surface (PES) of the system.

Related:

Newsflash: here

Permanent link to this article: https://dannyvanpoucke.be/chaptermofs_2018-en/

Building bridges towards experiments.

Quantum Holy Grail: The Ground-State

Quantum mechanical calculations provide a powerful tool to investigate the world around us. Unfortunately it is also a computationally very expensive tool to use, which puts a boundary on what is possible in terms of computational materials research. For example, when investigating a solid at the quantum mechanical level, you are limited in the number of atoms that you can consider. Even with a powerful supercomputer at hand, a hundred to a thousand atoms are currently accessible for “routine” investigations. The computational cost also limits the number of configurations/combinations you can calculate.

However, in the end, and often with some blood, sweat, and tears, these calculations do provide you with the ground-state structure and energy of your system. From this point forward you can continue characterizing its properties; life is beautiful and happy times are just beyond the horizon. At this horizon your experimental colleague awaits you. And he/she tells you:

Sorry, I don’t find that structure in my sample.

After recovering from the initial shock, you soon realize that in (materials science) experiments one seldom encounters a sample in “the ground-state”. Experiments are performed at temperatures above 0 K and pressures above 0 Pa (even in vacuum :p ). Furthermore, synthesis methods often involve elevated temperatures, increased pressure, mechanical forces, chemical reactions,… which give rise to meta-stable configurations. In such an environment, your nicely deduced ground-state may be an exception to the rule. It is only one point within the phase-space of the possible.

So how can you deal with this? You somehow need to sample the phase-space available to the experiment.

Sampling Phase-Space for Ball-milling synthesis.

For a few years now, I have had a very fruitful collaboration with Prof. Rounaghi. His interest goes toward the cheap fabrication of metal nitrides. Our first collaboration focused on AlN, while later work included Ti, V and Cr nitrides. Although this initial work had a strong focus on simple corroboration through the energies calculated at the quantum mechanical level, the collaboration also allowed me to look at my data in a different way. I wanted to “simulate” the reactions of ball-milling experiments more closely.

Due to the size-limitations of quantum mechanical calculations I played with the following idea:

  • Assume there exists a general master reaction which describes what happens during ball-milling.

X Al + Y Melamine → x1 Al + x2 Melamine + x3 AlN + …

where all the xi represent the fractions of the reaction products present.

  • With the boundary condition that the number of particles needs to be conserved, you end up with a large set of (x1,x2,x3,…) configurations which each have a certain energy. This energy is calculated using the quantum mechanical energies of each product. The configuration with the lowest energy is the ground state configuration. However, investigating the entire accessible phase-space showed that the energies of the other possible configurations are generally not that much higher.
  • What if we use the energy available due to ball-milling in the same fashion as we use kBT, and sample the phase-space using Boltzmann statistics?
  • The resulting Boltzmann distribution of the configurations available in the phase-space can then be used to calculate the mass/atomic fraction of each of the products and allow us to represent an experimental sample as a collection of small units with slightly different configurations, weighted according to their Boltzmann distribution.
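The Boltzmann-weighting step above can be sketched in a few lines. The configuration energies and the effective “ball-milling energy” below are purely illustrative numbers, not the DFT values from the actual study; in practice each configuration is a vector (x1, x2, x3, …) rather than a single scalar.

```python
import math

# hypothetical configuration energies (eV) relative to the ground state;
# in the actual study these follow from the quantum mechanical energies
# of the reaction products in each configuration (x1, x2, x3, ...)
energies = [0.00, 0.05, 0.08, 0.12, 0.30, 0.45]

kT = 0.10  # effective "ball-milling energy", used in place of k_B*T (eV)

# Boltzmann weight of each configuration, normalized to unit sum
weights = [math.exp(-E / kT) for E in energies]
Z = sum(weights)                      # partition function
fractions = [w / Z for w in weights]  # contribution of each configuration to the sample

for E, f in zip(energies, fractions):
    print(f"E = {E:.2f} eV  ->  fraction {f:.3f}")
```

Note how the ground state (E = 0.00) carries the largest weight but by no means all of it: at this kT, configurations slightly above it still contribute appreciably, which is exactly the point made above.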

This setup allowed me to see the evolution in end-products as function of the initial ratio in case of AlN, and in our current project to indicate the preferred Iron-nitride present.

Grid-sampling vs Monte-Carlo-sampling

Whereas the AlN system was relatively easy to investigate (the phase space was only 3-dimensional), the recent iron-based system ended up being 4-dimensional when considering only host materials, and 10-dimensional when including defects. For a small 3-4D phase-space, it is possible to create an equally spaced grid and get converged results using a few million to a billion grid-points. For a 10D phase-space this is no longer possible. As you can no longer (easily) keep all data-points in storage during your calculation (imagine 1 billion points, each requiring you to store 11 double-precision floats, or about 82 GB in total), you need a method that does not rely on large arrays of data. For our Boltzmann statistics this gives us a bit of a pickle, as we need the global minimum of our phase space. A grid is too coarse to find it, while a simple Monte Carlo just keeps hopping around.

Using Metropolis’s improvement of the Monte Carlo approach was an interesting exercise, as it clearly shows the beauty and simplicity of the approach. This becomes even more awesome the moment you imagine the resources available in those days. I noted 82 GB being a lot, but I do have access to machines with those resources; it's just not available on my laptop. In those days, the MANIAC supercomputer had less than 100 kilobytes of memory.

Although I theoretically no longer need the minimum-energy configuration, having access to that information is rather useful. Therefore, I first search the phase-space for this minimum. This is rather tricky using Metropolis Monte Carlo (of course better techniques exist, but I wanted to be a bit lazy), and I found that in the limit of T→0 the algorithm will move toward the minimum. This, however, may require nearly 100 million steps, of which >99.9% are rejected. As it only takes about 20 seconds on a modern laptop… this isn’t a big issue.
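A minimal sketch of this T→0 minimum search is given below. The 1D toy energy landscape stands in for the high-dimensional composition phase space of the real problem, and the step size, temperature, and step count are illustrative choices, not the ones used in the actual study.

```python
import math
import random

def energy(x: float) -> float:
    # toy 1D energy landscape standing in for the composition phase space
    return (x - 0.3)**2 + 0.1 * math.cos(20 * x)

random.seed(42)
x = random.random()  # random starting point in [0, 1]
E = energy(x)
kT = 1e-4            # very low "temperature": almost only downhill moves accepted
best_x, best_E = x, E

for _ in range(100_000):
    # small random trial step, clamped to the [0, 1] domain
    x_new = min(1.0, max(0.0, x + random.uniform(-0.05, 0.05)))
    E_new = energy(x_new)
    # Metropolis criterion: accept downhill always, uphill with probability exp(-dE/kT)
    if E_new <= E or random.random() < math.exp(-(E_new - E) / kT):
        x, E = x_new, E_new
        if E < best_E:
            best_x, best_E = x, E

print(f"minimum near x = {best_x:.3f}, E = {best_E:.4f}")
```

With kT this small nearly every uphill move is rejected, which is exactly the >99.9% rejection rate mentioned above; raising kT turns the same loop into the phase-space sampler used in the next step.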

Finding a minimum using Metropolis Monte Carlo.


Next, a similar Metropolis Monte Carlo algorithm can be used to sample the entire phase space. Using 10⁹ sample points was already sufficient to have a nicely converged sampling of the phase space for the problem at hand. Running the calculation for 20 different “ball-milling” energies took less than 2 hours, which is insignificant when compared to the resources required to calculate the quantum mechanical ground-state energies (several years). The figure below shows the distribution of the mass fraction of one of the reaction products as well as the distribution of the energies of the sampled configurations.

Metropolis Monte Carlo distribution of mass fraction and configuration energies for 3 sets of sample points.


This clearly shows us how unique and small the quantum mechanical ground-state configuration and its contribution are compared to the remainder of the phase space. So of course the ground state is not found in the experimental sample, but that doesn’t mean the calculations are wrong either. Both are right; they just look at reality from a different perspective. The gap between the two can luckily be bridged, if one looks at both sides of the story.

 

Permanent link to this article: https://dannyvanpoucke.be/experimental-bridges-en/