Aug 12

Dangerous travel physics

Tossing coins into a fountain brings luck; tossing them off a building causes death and destruction?

 

We have probably all done it at one point when traveling: thrown a coin into a wishing well or a fountain. There are numerous wishing wells with legends describing how the deity living in the well will bring good fortune in return for this gift. The myths and legends often originate from Celtic, German or Nordic traditions.

In the case of the Trevi Fountain, the belief is that if you throw a coin over your left shoulder using your right hand, you will return to Rome…someday. As this fountain and its legend are iconic parts of Western movie history, many, many coins get tossed into it (more than 1 million € worth each year, which is collected and donated to charity).

In addition to these holiday legends, there are also more recent “coin myths”: death by falling penny. These myths are always linked to tall buildings, and claim that a penny dropped from the top of such a building could kill someone standing below if it hit them.

Traveling with Newton

In both kinds of coin legends, the trajectory of the coin can be predicted quite well using Newton’s Laws. The coins’ speed is low compared to the speed of light, and the coins themselves are sufficiently large to keep the world of quantum mechanics hidden from sight.

Newton’s second Law states that the velocity of an object changes if there is a force acting on it. Here on Earth, gravity is a major player (especially in physics exercises). In the case of a coin tossed into a fountain, gravity will cause the coin to follow a roughly parabolic path before it disappears into the water. The speed at which the coin hits the water will be comparable to the speed with which it was thrown…at least if there isn’t too much of a difference in height between the surface of the water and the hand of the person throwing the coin.

But what if this difference is large, such as in the case of a penny being dropped from a tall building? In that case, the initial velocity is zero, and the penny is accelerated toward the ground by gravity. Using the equations of motion for uniformly accelerated motion, we can easily calculate the speed at which the coin hits the ground:

x = x0 + v0*t + ½*g*t²

v = v0 + g*t

If we drop a penny from the 3rd floor of the Eiffel Tower (x0 = 276.13 m, x = 0 m, v0 = 0 m/s, g = -9.81 m/s²), then the first equation teaches us that after 7.5 seconds the penny will hit the ground, with a final speed (second equation) of -73.6 m/s (or -265 km/h)*. With such a velocity, the penny will definitely leave an impression. More interestingly, we get the exact same result for a pea (cooked or frozen), a bowling ball, a piano or an anvil…but also for a feather. At this point, your intuition must be screaming at you that you are missing something important.
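As a quick sanity check of those numbers, here is a minimal Python sketch of the drag-free calculation, using the same height and g as above (the script itself is mine, not part of the original post):

```python
import math

# Drag-free drop from the 3rd floor of the Eiffel Tower (values from the text).
x0 = 276.13   # initial height in m
g = 9.81      # gravitational acceleration in m/s^2

# x = x0 - 1/2*g*t^2 = 0  ->  t = sqrt(2*x0/g)
t_impact = math.sqrt(2.0 * x0 / g)
v_impact = g * t_impact   # magnitude of the impact speed, v = g*t

print(f"time of impact : {t_impact:.1f} s")                      # ~7.5 s
print(f"impact speed   : {v_impact:.1f} m/s "
      f"({v_impact * 3.6:.0f} km/h)")                            # ~73.6 m/s (~265 km/h)
```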

All models are wrong…but they can be very useful

The power of models in physics originates from keeping only the most important and relevant aspects. Such approximations provide a simplified picture and allow us to understand the driving forces behind nature itself. However, in this context, models in physics are approximations of reality, and thus by definition wrong, in the sense that they do not provide an “exact” representation of reality. This is also true for Newton’s Laws, and for our application above. With these simple rules, it is possible to describe the motion of the planets as well as a coin tossed into the Trevi Fountain.

So what’s the difference between the coin tossed into a fountain and planetary motion on the one hand, and our assorted objects being dropped from the Eiffel Tower on the other hand?

Friction as it presents itself in aerodynamic drag!

Aerodynamic drag gives rise to a force in the direction opposite to the movement, and it is defined as:

FD = ½ * ρ * v² * CD * A

This force depends on the density ρ of the medium (hence water gives larger drag than air), the velocity v of the object, its cross-sectional area A perpendicular to the direction of motion, and the drag coefficient CD, which depends on the shape of the object.

If we take a look at the planets and the coin tosses, we notice that, due to the absence of air between the planets, no aerodynamic drag needs to be considered for planetary motion. In the case of a coin being tossed into the Trevi Fountain, there is aerodynamic drag; however, both the speed and the distance traversed are very small. As such, the effect of aerodynamic drag will be rather small, if not negligible. In the case of objects being dropped from a tall building, the aerodynamic drag is not negligible, and it is the factors CD and A which make sure the anvil arrives at ground level before the feather.

Because this force also depends on the velocity, you can no longer make direct use of the two equations above to calculate the time of impact and the velocity at each point of the path. You will need a numerical approach for this (which is also the reason this is not (regularly) taught in introductory physics classes in high school). However, using Excel, you can get a long way in creating a numerical solution for this problem. [Excel example]

Knowing that the density of air is about 1.2 kg/m³, that CD for a thin cylinder (think coin) is 1.17, and that a penny has a radius of 9.5 mm and a mass of 2.5 g, we find the terminal velocity of the penny to be 11.1 m/s (40 km/h). The penny will land on the ground after about 25.6 seconds. This is quite a bit slower than what we found before, and also quite a bit safer. The penny reaches its terminal velocity after having fallen about 60 m, which means that dropping a penny from taller buildings (the Atomium [102 m], the Eiffel Tower [276.13 m to the 3rd floor, 324 m to the top], the Empire State Building [381 m] or even the Burj Khalifa [829.8 m]) has no impact on the velocity it will have when hitting the ground: 40 km/h.
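If you prefer a script over a spreadsheet, a minimal Python sketch of such a numerical solution (a simple forward-Euler integration, not the Excel sheet linked above) could look like this; with the penny parameters listed above it should roughly reproduce the ~11 m/s terminal velocity and the ~25.6 s fall time:

```python
import math

# Penny dropped from 276.13 m, now including aerodynamic drag (parameters from the text).
rho = 1.2            # density of air in kg/m^3
CD = 1.17            # drag coefficient of a thin cylinder (coin)
r = 9.5e-3           # radius of a penny in m
A = math.pi * r**2   # frontal area in m^2
m = 2.5e-3           # mass of a penny in kg
g = 9.81             # m/s^2

# Terminal velocity: gravity balances drag -> m*g = 1/2*rho*CD*A*v^2
v_term = math.sqrt(2.0 * m * g / (rho * CD * A))
print(f"terminal velocity: {v_term:.1f} m/s ({v_term*3.6:.0f} km/h)")   # ~11.1 m/s

# Forward-Euler integration of the fall with drag (valid while the coin moves downward).
x, v, t, dt = 276.13, 0.0, 0.0, 1e-3
while x > 0.0:
    a = -g + 0.5 * rho * CD * A * v**2 / m   # drag opposes the downward motion
    v += a * dt
    x += v * dt
    t += dt
print(f"impact after {t:.1f} s at {abs(v):.1f} m/s ({abs(v)*3.6:.0f} km/h)")
```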

This is a collision you will most probably survive, but which will definitely leave a small bruise on impact.

 

*The minus sign indicates the coin is falling downward.

Jul 27

Book chapter: Computational Chemistry Experiment Possibilities

Authors: Bartłomiej M. Szyja and Danny Vanpoucke
Book: Zeolites and Metal-Organic Frameworks, (2018)
Chapter Ch 9, p 235-264
Title Computational Chemistry Experiment Possibilities
ISBN: 978-94-629-8556-8
export: bibtex
pdf: <Amsterdam University Press>

 

Zeolites and Metal-Organic Frameworks (the hard-copy)

Abstract

Thanks to a rapid increase in the computational power of modern CPUs, computational methods have become a standard tool for the investigation of physico-chemical phenomena in many areas of chemistry and technology. The area of porous frameworks, such as zeolites, metal-organic frameworks (MOFs) and covalent-organic frameworks (COFs), is no different. Computer simulations make it possible not only to verify the results of experiments, but even to predict previously nonexistent materials that will present the desired experimental properties. Furthermore, computational research of materials provides the tools necessary to obtain fundamental insight into details that are often not accessible to physical experiments.

The methodology used in these simulations is quite specific because of the special character of the materials themselves. However, within the field of porous frameworks, density functional theory (DFT) and force fields (FF) are the main actors. These methods form the basis of most computational studies, since they allow the evaluation of the potential energy surface (PES) of the system.

Related:

Newsflash: here

Jul 17

Building bridges towards experiments.

Quantum Holy Grail: The Ground-State

Quantum mechanical calculations provide a powerful tool to investigate the world around us. Unfortunately, they are also computationally very expensive, which puts a boundary on what is possible in terms of computational materials research. For example, when investigating a solid at the quantum mechanical level, you are limited in the number of atoms you can consider. Even with a powerful supercomputer at hand, only a hundred to a thousand atoms are currently accessible for “routine” investigations. The computational cost also limits the number of configurations/combinations you can calculate.

However, in the end (and often with some blood, sweat and tears) these calculations do provide you with the ground-state structure and energy of your system. From this point forward you can continue characterizing its properties; life is beautiful and happy times are just beyond the horizon. At this horizon, your experimental colleague awaits you. And he/she tells you:

Sorry, I don’t find that structure in my sample.

After recovering from the initial shock, you soon realize that in (materials science) experiments one seldom encounters a sample in “the ground state”. Experiments are performed at temperatures above 0 K and pressures above 0 Pa (even in vacuum :p ). Furthermore, synthesis methods often involve elevated temperatures, increased pressure, mechanical forces, chemical reactions,… which give rise to metastable configurations. In such an environment, your nicely deduced ground state may be an exception to the rule. It is only one point within the phase-space of the possible.

So how can you deal with this? You somehow need to sample the phase-space available to the experiment.

Sampling Phase-Space for Ball-milling synthesis.

For a few years now, I have had a very fruitful collaboration with Prof. Rounaghi. His interest lies in the cheap fabrication of metal nitrides. Our first collaboration focused on AlN, while later work included Ti, V and Cr nitrides. Although this initial work had a strong focus on simple corroboration through the energies calculated at the quantum mechanical level, the collaboration also allowed me to look at my data in a different way: I wanted to “simulate” the reactions of the ball-milling experiments more closely.

Due to the size-limitations of quantum mechanical calculations I played with the following idea:

  • Assume there exists a general master reaction which describes what happens during ball-milling.

X Al + Y Melamine → x1 Al + x2 Melamine + x3 AlN + …

where all the xi represent the fractions of the reaction products present.

  • With the boundary condition that the number of particles needs to be conserved, you end up with a large set of (x1,x2,x3,…) configurations which each have a certain energy. This energy is calculated using the quantum mechanical energies of each product. The configuration with the lowest energy is the ground state configuration. However, investigating the entire accessible phase-space showed that the energies of the other possible configurations are generally not that much higher.
  • What if we used the energy available due to ball-milling in the same fashion as we use kBT, and sampled the phase-space using Boltzmann statistics?
  • The resulting Boltzmann distribution of the configurations available in the phase-space can then be used to calculate the mass/atomic fraction of each of the products and allow us to represent an experimental sample as a collection of small units with slightly different configurations, weighted according to their Boltzmann distribution.

This setup allowed me to see the evolution of the end-products as a function of the initial ratio in the case of AlN, and, in our current project, to indicate the preferred iron nitride present.
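As a rough illustration of the recipe above, the Python sketch below Boltzmann-weights a toy three-component phase-space. The energy model and the “ball-milling” energy are invented for the example; they are not the actual AlN or iron-nitride data:

```python
import numpy as np

# Toy version of the Boltzmann-weighted phase-space sampling described above.
E_ballmill = 0.5   # "ball-milling" energy used in place of k_B*T (eV, hypothetical)

# Enumerate configurations (x1, x2, x3) with x1 + x2 + x3 = 1 on a grid.
grid = np.linspace(0.0, 1.0, 101)
configs = np.array([(x1, x2, 1.0 - x1 - x2)
                    for x1 in grid for x2 in grid if x1 + x2 <= 1.0])

# Hypothetical energy model: a single well, standing in for the DFT product energies.
def energy(c):
    x1, x2, x3 = c.T
    return (x1 - 0.2)**2 + (x2 - 0.1)**2 + (x3 - 0.7)**2

E = energy(configs)
w = np.exp(-(E - E.min()) / E_ballmill)   # Boltzmann weights
w /= w.sum()

# Weighted (expected) fraction of product 3, compared to its ground-state value.
print("ground-state fraction of product 3:", configs[E.argmin(), 2])
print("Boltzmann-averaged fraction       :", float(w @ configs[:, 2]))
```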

Grid-sampling vs Monte-Carlo-sampling

Whereas the AlN system was relatively easy to investigate (the phase space was only 3-dimensional), the recent iron-based system ended up being 4-dimensional when considering only the host materials, and 10-dimensional when including defects. For a small 3-4D phase-space, it is possible to create an equally spaced grid and get converged results using a few million to a billion grid-points. For a 10D phase-space this is no longer possible. As you can no longer (easily) keep all data-points in storage during your calculation (imagine 1 billion points, each requiring you to store 11 double-precision floats, or about 82 GB in total), you need a method that does not rely on large arrays of data. For our Boltzmann statistics this gives us a bit of a pickle, as we need the global minimum of our phase space. A grid is too coarse to find it, while a simple Monte Carlo just keeps hopping around.
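A quick back-of-the-envelope check of that storage estimate (assuming 8-byte double-precision floats; the split into 10 coordinates plus 1 energy is my own guess):

```python
n_points = 1_000_000_000          # one billion grid points
doubles_per_point = 11            # e.g. 10 phase-space coordinates + 1 energy (assumed split)
bytes_total = n_points * doubles_per_point * 8
print(f"{bytes_total / 1024**3:.0f} GiB")   # ~82, matching the figure quoted above
```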

Using Metropolis’s improvement of the Monte Carlo approach was an interesting exercise, as it clearly shows the beauty and simplicity of the approach. This becomes even more awesome the moment you imagine the resources available in those days. I noted that 82 GB is a lot, but I do have access to machines with those resources; it’s just not available on my laptop. In those days, the MANIAC supercomputer had less than 100 kilobytes of memory.

Although I theoretically no longer need the minimum-energy configuration, having access to that information is rather useful. Therefore, I first search the phase-space for this minimum. This is rather tricky using Metropolis Monte Carlo (of course better techniques exist, but I wanted to be a bit lazy), and I found that in the limit of T→0 the algorithm will move toward the minimum. This, however, may require nearly 100 million steps, of which >99.9% are rejected. As this only takes about 20 seconds on a modern laptop…it isn’t a big issue.
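A minimal sketch of what such a T→0 Metropolis search could look like; the energy landscape, step size and cooling schedule below are placeholders for illustration, not the actual nitride system:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder energy landscape over fractions that sum to 1 (not the real system).
def energy(x):
    return np.sum((x - np.array([0.2, 0.1, 0.7]))**2)

def random_neighbour(x, step=0.02):
    """Perturb the composition and renormalise so the fractions still sum to 1."""
    y = np.clip(x + rng.normal(0.0, step, size=x.size), 1e-9, None)
    return y / y.sum()

x = np.full(3, 1.0 / 3.0)            # start from an even mixture
E = energy(x)
T = 1.0                               # effective "temperature"
for _ in range(200_000):
    T *= 0.99995                      # slowly cool towards T -> 0
    x_new = random_neighbour(x)
    E_new = energy(x_new)
    # Metropolis criterion: always accept downhill moves, sometimes accept uphill ones.
    if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
        x, E = x_new, E_new

print("approximate minimum-energy configuration:", np.round(x, 3))
print("energy:", E)
```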

Finding a minimum using Metropolis Monte Carlo.

Next, a similar Metropolis Monte Carlo algorithm can be used to sample the entire phase space. Using 10⁹ sample points was already sufficient to obtain a nicely converged sampling of the phase space for the problem at hand. Running the calculation for 20 different “ball-milling” energies took less than 2 hours, which is insignificant compared to the resources required to calculate the quantum mechanical ground-state energies (several years). The figure below shows the distribution of the mass fraction of one of the reaction products as well as the distribution of the energies of the sampled configurations.

Metropolis Monte Carlo distribution of mass fraction and configuration energies for 3 sets of sample points.

This clearly shows us how unique and small the quantum mechanical ground-state configuration and its contribution are compared to the remainder of the phase space. So of course the ground state is not found in the experimental sample, but that doesn’t mean the calculations are wrong either. Both are right; they just look at reality from a different perspective. Luckily, the gap between the two can be bridged, if one looks at both sides of the story.

 

Jun 07

Science Figured out

Diamond and CPUs, still separated for now, but how much longer will this remain the case?
Top left: thin-film N-doped diamond on Si (courtesy of Sankaran Kamatchi). Top right: a very old Pentium 1 CPU from 1993 (100 MHz), with µm-scale architecture. Bottom left: a more recent Intel Core CPU (3 GHz) from 2006, with nm-scale architecture. Bottom right: a piece of single-crystal diamond, a possible alternative to silicon, with 20× higher thermal conductivity and 7× higher charge-carrier mobility.

Can you pitch your research in 3 minutes? That is the concept behind “wetenschap uitgedokterd / science figured out”, a challenge I accepted after the fun I had at the science battle. If I can explain my work to an audience of 6-to-12-year-olds, explaining it to adults should be possible as well. However, 3 minutes is very short (although some may consider this long in the current bite-size world), especially if you have to explain something far removed from day-to-day life and cannot assume any scientific background.

Where to start? Capture the imagination: “Imagine a world where you are a god.”

Link back to the real world: “All modern-day high-tech toys are more and more influenced by atomic-scale details.” Over the last decade, I have seen the nano-scale progress slowly but steadily into the realm of real-life materials research. This almost invisible trend will have a huge impact on materials science in the coming decade, because we will see empirical laws breaking down more and more, and it will become harder and harder to fit material trends using a classical mindset, something which has worked marvelously for materials science during the last few centuries. Modern and future materials design (be it solar cells, batteries, CPUs or even medicine) will have to rely on quantum mechanical intuition and hence quantum mechanical simulations. (Although there is still much denial in that regard.)

Is there a problem to be solved? Yes indeed: “We do not have quantum mechanical intuition by nature, and manipulating atoms is extremely hard in practice and for practical purposes.” Although popular science magazines every so often boast pictures of atomic-scale manipulation of atoms and the quantum regime, such work is far from easy or common, inside or outside the university lab. It is amazing how hard these things tend to get (ask your local experimental materials research PhD student), and the required blood, sweat and tears are generally not represented in the glory parade of a scientific publication.

Can you solve this? Euhm…yes…at least to some extent. “Computational materials research can provide the quantum mechanical intuition we human beings lack, and gives us access to atomic-scale manipulation of a material.” Although computational materials science is seen by experimentalists as theory, and by theoreticians as experiment, it is neither and both. Computational materials science combines the rigor and control of theory with access to the real-life systems of experiments. Unfortunately, it also suffers the limitations of both: the system is still idealized (though to a much lesser extent than in theoretical work) and control is not absolute (you have to follow where the algorithms take you, just as an experimentalist has to follow where the reaction takes him/her). But if these strengths and weaknesses are balanced wisely (which requires quite a few years of experience), an expert can gain fundamental insights into experiments.

Animation representing the buildup of a diamond surface in computational work.

As a computational materials scientist, you build a real-life system, atom by atom, such that you know exactly where everything is located, and then calculate its properties based on, for example, the rules of quantum mechanics. In this sense you have absolute control, as in theory. This comes at a cost (conservation of misery 🙂 ): where in experiments nature itself makes sure the structure is the “correct” one, in computational work you have to find it yourself. So you generally end up calculating many possible structural combinations of your atoms to first find out which one is most likely to represent nature.

So what am I actually doing? “I am using atomic-scale quantum mechanical computations to investigate the materials my experimental colleagues are studying, going from oxides to defects in diamond.” I know this is vague, but unfortunately, the actual work is technical. Much effort goes into getting the calculations to run in the direction you want them to proceed (this is the experimental side of computational materials science). The actual goal varies from project to project. Sometimes we want to find out which material is most stable, and which material is most likely to diffuse into the other, while at other times we want to understand the electronic structure, to test whether a defect is really luminescent and so trace the source of the experimentally observed luminescence. Or, if you want to make it more complex, to find out which elements would make diamond grow faster.

Starting from this, I succeeded in creating a 3-minute pitch of my research for Science Figured out. The pitch can be seen here (in Dutch, with English subtitles that can be switched on through the cogwheel in the bottom right corner).

Some external links:

 

May 22

VSC User Day 2018

Today, I am attending the 4th VSC User Day at the “Paleis der Academiën” in Brussels. Flemish researchers for whom the lifeblood of their research flows through the chips of a supercomputer have gathered here to discuss their experiences and present their research.

Some History

About 10 years ago, at the end of 2007 and the beginning of 2008, the 5 Flemish universities founded the Flemish Supercomputer Center (VSC), a virtual organisation with one central goal: combine their strengths and know-how with regard to High Performance Computing (HPC) centers to make sure they stay competitive with comparable HPC centers elsewhere.

By installing a super-fast network between the various university compute centers, nowadays every Flemish researcher has access to state-of-the-art computer infrastructure, independent of his or her physical location. A researcher at the University of Hasselt, like myself, can easily run calculations on the supercomputers installed at the universities of Ghent or Leuven. In October 2012 the existing university supercomputers, so-called Tier-2 supercomputers, were joined by the first Flemish Tier-1 supercomputer, housed at the brand new data centre of Ghent University. This machine is significantly larger than the existing Tier-2 machines, and allowed Belgium to become the 25th member of the PRACE network, a European network which provides computational researchers access to the best and largest computer facilities in Europe.

The fast development of computational research in Flanders and the explosive growth in the number of computational researchers, combined with the first shared Flemish supercomputer (in contrast to the university Tier-2 supercomputers, which some still consider private property rather than part of the VSC), show the impact of the virtual organisation that is the VSC. As a result, on January 16th 2014, the first VSC User Day was organised, bringing together HPC users from all 5 universities and industry to share their experiences and discuss possible improvements and changes. Since then, the first Tier-1 supercomputer has been decommissioned and replaced by a brand new Tier-1 machine, this time located at KU Leuven. Furthermore, the Flemish government has set aside 30M€ for supercomputing in Flanders, making sure that Flemish computational research stays competitive in the future as well. The future of computational research in Flanders looks bright.

Today is User Day 2018

During the 4th VSC User Day, researchers from all 5 Flemish universities will be presenting the work they are performing on the supercomputers of the VSC network. The range of topics is very broad: from first-principles materials modelling to chip design, climate modelling and space weather. In addition, there will also be several workshops, introducing new users to the VSC and teaching advanced users the finer details of GPU code, code optimization and parallelization. This latter aspect is hugely important when using supercomputers in an academic context. Much of the software used is developed or modified by the researchers themselves. And even though this software can show impressive behavior, it does not speed up automatically if you give it access to more CPUs. This is a very non-trivial task the researcher has to take care of, by carefully optimizing and parallelizing his or her code.

To support the researchers in their work, the VSC came up with ingenious poster prizes. The three best posters will share 2018 node-days of calculation time (about 155 years of calculations on a normal, simple computer).

Wish me luck!

 

Single-slide presentation of my poster @VSC User Day 2018.

Mar 27

Fairy tale science or a science fairy tale?

Once upon a time…

Once upon a time, a long time ago (21 days ago to be precise), there was a conference in the tranquil town of Hasselt. Every year, for 23 years in a row, researchers gathered there for three full days to present and adore their most colorful and largest diamonds. For three full days, there was just that little bit more of a sparkle in their eyes. They divulged where new diamonds could be found, and how they could be used. Three days during which they could speak without any restriction, without hesitation, about the sixth element which bonds them all. Because all knew the language. They honored the magic of the NV-center and the arcane incantations leading to the highest doping. All, masters of their common mystic craft.

At the end of the third day, with sadness in their hearts, they said their goodbyes and went back, in small groups, to their own ivory towers, far far away. With them, however, they took a small sparkle of hope and expectation, because in twelve full moons they would reconvene, bringing with them new and grander tales and even more sparkling diamonds than had ever been seen before.

For most outsiders, the average conference presentation is as clear as an arcane conjuration of a mythological beast. As scientists, we are often trapped by the assumption that our unique expertise is common knowledge for our audience, a side effect of our enthusiasm for our own work.

Clear vs. accurate

In a world where science is facing constant pressure due to the financing model employed, in addition to the rise of “fake news” and “alternative facts”, it is important for young researchers to be able to tell their story clearly and accurately.

However, clear and accurate often have the bad habit of counteracting one another, and as such, maintaining a good balance between the two takes a lot more effort than one might expect. Focusing on only one aspect (accuracy or clarity) tends to be disastrous. Conference presentations and scientific publications tend to focus on accuracy, making them not clear at all to the non-initiate. Public presentations and newspaper articles, on the other hand, focus mainly on clarity, with fake-news accidents waiting to happen. For example, one could recently read that 7% of the DNA of the astronaut Scott Kelly had changed during a space flight, instead of a change in gene expression. Although both things may look similar, they are very different. The latter is a rather natural response of the (human) body to any stress situation. The former, however, would remove Scott from the human race entirely. Even the average gorilla would be more closely related to you and me than Scott Kelly, as gorillas differ from us by less than 5% of their DNA. So keeping a good balance between clarity and accuracy is important, albeit not that easy. Time pressure plays an important role here.

Two extremes?

Wetenschapsbattle Trophy: Each of the contestants of the wetenschapsbattle received a specially designed and created hat from the children of the school judging the contest. Mine has diamonds and computers. 🙂

In the week following the diamond conference in Hasselt, I also participated in a science battle: a contest in which researchers have to explain their research to an audience of 6-to-12-year-olds in a time-span of 15 minutes. These kids are judge, jury and executioner of the contest, so to speak. It’s a natural reflex to place these two events at the opposite ends of a scale. And it is certainly true for some aspects; the entire room volunteering spontaneously when asked for help is something which happens somewhat less often at a scientific conference. However, clarity and accuracy should be equally central to both.

So, how do you explain your complex research story to a crowd of 6-to-12-year-olds? I discovered the answer during a masterclass by The Floor is Yours. Actually, you do it more or less the same way you should tell it to an audience of adults, or even your own colleagues. As a researcher you are a specialist in a very narrow field, which means that no one will lose out when the focus is shifted a bit more toward clarity. The main problem you encounter here, however, is time. This is both the time required to tell your story (forget “elevator pitches”; those are good if you are a used-car salesman, they are not for science) and the time required to prepare your story (it took me a few weeks to build and then polish my story for the children).

Most of this time is spent answering the questions “What am I actually doing?” and “Why am I doing this specifically?”. The quest for metaphors which are both clear and accurate takes quite some time. During this task you tend to suffer, as a scientist, from the combination of your need for accuracy and your deep background knowledge. These are the same inhibitors a scientist encounters when involved in a public discussion on his/her own field of expertise.

Of course you also do not want to be pedantic:

Q: What do you do?

A: I am a Computational Materials Researcher.

Q: Compu-what??

A: 1) Computational = using a computer

2) Materials = everything you see around you, the stuff everything is made of

3) Researcher = Me

However, as a scientist, you may want to use such imaginary discussions during your preparation. Starting from these pedantic dialogues, you trace a path along the answers which interest you most: the topics which touch your scientific personality. This way, you take a step back from your direct research and get a broader picture. Also, by talking in themes, you present your research from a broader perspective, which is more easily accessible to your audience: “What are atoms?”, “How do you make diamond?”, “What is a computer simulation?”

At the end, after much blood, sweat and tears, your story tells something about your world as a whole. Depending on your audience, you can include more or fewer detailed aspects of your actual day-to-day research, but at its heart, it remains a story.

Because, whether we like it or not, in essence we are all “Pan narrans”, storytelling apes.

Jan 19

Newsflash: Book-chapter on MOFs and Zeolites en route to bookstores near you.

It is almost a year ago that I wrote a book chapter, together with Bartek Szyja, on MOFs and zeolites. Come March 2018, the book will be available through Amsterdam University Press. It is interesting to note that in a 13-chapter book, ours was the only chapter dealing with the computational study and simulation of these materials…so there is a lot more that can be done by those who are interested and have the patience to perform these delicate and often difficult, but extremely rewarding, studies. From my time as a MOF researcher I have learned two important things:

  1. Any kind of interesting/extreme/silly physics you can imagine will be present in some MOFs. In this regard, the MOF/COF field is still in its infancy, as most experimental work focuses on simple applications such as catalysis and gas storage, for which other materials may be better suited. These porous materials may be theoretically interesting for direct industrial application, but the synthesis cost will generally be a bottleneck. Instead, look toward fundamental physics applications: low-dimensional magnetism, low-dimensional conduction, spin filters, multiferroics, electron-phonon interactions, interactions between spin and mechanical properties,… MOFs are a true playground for the theoretician.
  2. MOFs are very hard to simulate correctly, so be wary of all (published) results that come computationally cheap and easy. Although the unit cell of any MOF is huge compared to standard solid-state materials, the electron interactions are also quite long-range, so the first Brillouin zone needs very accurate sampling (something often neglected). Also, spin configurations can have a huge influence, especially in systems with a rather flat potential energy surface.

In the book chapter, we discuss some basic techniques used in the computational study of MOFs, COFs, and zeolites, which will be of interest to researchers starting in the field. We discuss molecular dynamics and Monte Carlo, as well as density functional theory with all its benefits and limitations.

Jan 09

A Spectre and Meltdown victim: VASP

Over the last weekend, two serious cyber-security issues were hot news: Meltdown and Spectre [more links, and links] (not to be mistaken for the title of a Bond movie). As a result, academic HPC centers also went into overdrive installing patches as fast as possible. The news of the two security issues went hand in hand with quite a few belittling comments toward the chip designers, ignoring the fact that no one (including those complaining now) discovered the problem for over a decade. Of course there was also the usual scare-mongering (cyber-criminals will hack our devices by next Monday, because hacks using these bugs will immediately become their default tools, etc.) typical since the beginning of the 21st century…but now it is time to return to reality.

One of the big users of scientific HPC installations is the VASP program (an example), aimed at the quantum mechanical simulation of materials, and a program central to my own work. Due to the serendipitous coincidence of an annoyingly hard-to-converge job, I had the opportunity to see the impact of the Meltdown and Spectre patches on the performance of VASP: a 16% performance loss (within the range of the expected 10-50% performance loss for high-performance applications [1][2][3]).

The case:

  • a large HSE06 calculation of a 71-atom defective ZnO supercell.
  • 14 irreducible k-points (no reduction of the Hartree-Fock k-points)
  • 14 nodes of 24 cores, with KPAR=14, and NPAR=1 (I know NPAR=24 is the recommended option)

The calculation took several runs of about 10 electronic steps (each about 5-6 h of wall-time, or about 2.54 years of CPU-time per run). The relative average time is shown below (error bars are the standard deviation of the times within a single run). As the final step takes about 50% longer, it is treated separately. As you can see, the variation in time between different electronic steps is rather small (even running on a different cluster only changes the time by a few %). The Meltdown/Spectre patch, however, has a significant impact.

Impact of Meltdown/Spectre patch on VASP performance for a 336 core MPI job.
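As a rough reconstruction of the quoted ~2.54 CPU-years per run (the 6.6 h average wall-time per electronic step is my own guess, chosen to fold in the slower final step):

```python
cores = 14 * 24            # 14 nodes of 24 cores = 336 cores
steps_per_run = 10         # electronic steps per run
hours_per_step = 6.6       # assumed average wall-time per step (upper end of 5-6 h)
cpu_hours = cores * steps_per_run * hours_per_step
print(f"{cpu_hours / (24 * 365):.2f} CPU-years per run")   # ~2.5
```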

 

The HPC team is currently looking into possible workarounds that could (partially) alleviate the problem. VASP itself is not very I/O intensive, and a first check by the HPC team points toward MPI (the parallelisation framework required for multi-node jobs) being ‘a’ if not ‘the’ culprit. This means that an impact on other multi-node programs is also to be expected. On the bright side, finding a workaround for MPI would benefit all of them as well.

So far, the tests I performed with the HPC team have not shown any improvements (recompiling VASP didn’t help, nor did an MPI-related fix). Let’s keep our fingers crossed and hope the future brings insight and a solution.

 

Jan 01

Review of 2017

Happy New Year

2017 has come and gone. 2018 eagerly awaits getting acquainted. But first we look back one last time, trying to turn this into an old tradition. What have I done during the last year that has some academic merit?

Publications: +4

Completed refereeing tasks: +8

  • The Journal of Physical Chemistry (2x)
  • Journal of Physics: Condensed Matter (3x)
  • Diamond and Related Materials (3x)

Conferences & workshops: +5 (Attended) 

  • Int. Conference on Diamond and Carbon Materials (DCM) 2017, Gothenburg, Sweden, September 3rd-7th, 2017 [oral presentation]
  • Summerschool: “Upscaling techniques for mathematical models involving multiple scales”, Hasselt, Belgium, June 26th-29th, 2017 [poster presentation]
  • VSC-user day, Brussels, Belgium, June 2nd, 2017 [poster presentation]
  • E-MRS 2017 Spring Meeting, Strasbourg, France, May 22nd-26th, 2017 [1 oral + 2 poster presentations]
  • SBDD XXII, Hasselt University, Belgium, March 8th-10th, 2017 [poster presentation]

PhD-students: +1

  • Mohammadreza Hosseini (Oct.–…, PhD student in physical chemistry, Tarbiat Modares University, Tehran, Iran)

Bachelor-students: +2

Current size of HIVE:

  • 48.5K lines of program (code: 70 %)
  • ~70 files
  • 45 (command line) options

Hive-STM program:

And now, upward and onward, a new year, a fresh start.

Nov 12

Slow science: the case of Pt induced nanowires on Ge(001)

Free-standing Pt-induced nanowire on Ge(001).

Simulated STM image of the Pt-induced nanowires on the Ge(001) surface. Green discs indicate the atomic positions of the bulk-Ge atoms; red: Pt atoms embedded in the top surface layers; yellow: Ge atoms forming the nanowire observed by STM.

Ten years ago, I was happily modeling Pt nanowires on Ge(001) during my first Ph.D. at the University of Twente. As a member of the Computational Materials Science group, I was also lucky to have good and open contact with the experimental research group of Prof. Zandvliet, who was growing these nanowires. In this environment, I learned that there is a big difference between what is easy in experiment and what is easy in computational research. It also taught me to find a common ground which is “easy” for both (scanning tunneling microscopy (STM) images in this specific case).

During this 4-year project, I quickly came to the conclusion that the nanowires could not be formed by Pt atoms, but that they needed to be Ge atoms instead. Although the simulated STM images were very convincing, it was really hard to overcome the experimental intuition…and experiments which seemed to contradict this picture (doi: 10.1016/j.susc.2006.07.055). As a result, I spent a lot of time learning about the practical aspects of the experiments (an STM tip is a complicated thing) and trying to extract every possible piece of information, published and unpublished. Especially the latter provided important support. The “ugly” (= not good for publishing) experimental pictures tended to be real treasures from my computational point of view. Of course, much time was spent tweaking the computational model to get a perfect match with experiments (e.g. the 4×1 periodicity), and trying to reproduce experiments seemingly supporting the “Ge-nanowire” model (e.g. simulation of CO adsorption and identification of the path along the wire the molecule follows).

In contrast to my optimism at the end of my first year (I believed all the modeling could be finished before my second year ended), the modeling work ended up being a very complex exercise, taking 4 years of research. Now I am happy that I was wrong, as the final result ended up being very robust and became “The model for Pt-induced nanowires on Ge(001)”.

While writing a review article on this field five years after my Ph.D., I was amazed (and happy) to see my model still stood. Even more, there had been complex experimental studies (doi: 10.1103/PhysRevB.85.245438) which seemed to support the model I proposed. However, these experiments were still making an indirect comparison. A direct comparison supporting the Ge nature of the nanowires was still missing…until recently.

In a recent paper in Phys. Rev. B (doi: 10.1103/PhysRevB.96.155415), a Japanese-Turkish collaboration succeeded in identifying the nanowire atoms as Ge atoms. They did this using an atomic force microscope (AFM) and a sample of Pt-induced nanowires in which some of the nanowire atoms were replaced by Sn atoms. The experiment is rather simple in idea (the execution, however, requires rather advanced skills): compare the forces experienced by the AFM tip when measuring the Sn atoms, the chain atoms and the surface atoms. The Sn atoms are easily recognized, while the surface is known to consist of Ge atoms. If the relative force on the chain atoms is the same as that on the surface atoms, then the chain consists of Ge atoms, while if the force is different, the chain consists of Pt atoms.

*small drum-roll*

And they found the result to be the same.

Yes, nearly 10 years after my first publication on the subject, there finally is experimental proof that the Pt-induced nanowires on Ge(001) consist of Ge atoms. Seeing this paper made me one happy computational scientist. For me it shows the power of computational research, and provides an argument for why one should not be shy to push calculations to their limit. The computational cost may be high, but at least one is performing relevant work. And of course, never forget: the most seemingly easy-looking experiments are usually not easy at all, so as a computational materials scientist you should not take them for granted, but let those experimentalists know how much you appreciate their work and effort.
