Tag: science communication

Start to science-communicate

Today and tomorrow, there is a 2-day summer school on science communication at the University of Antwerp: Let’s Talk Science! During this summer school, there are a large number of workshops to participate in and lectures to attend, dealing with all aspects of science communication.

Wetenschapsbattle trophy: a hat made by the children for the contestants of the Wetenschapsbattle (Science Battle). Mine has diamonds and computers. 🙂

I was invited to represent Hasselt University (and the science communication done by its members) during the plenary panel session opening the summer school. The goal of this plenary session was to share our experiences and thoughts on science communication. The contributions varied from hands-on examples to more abstract presentations of what to keep in mind, including useful tips.

The central aim of my presentation was to identify the boundary between science communication and scientific communication. Or, more precisely, to show that this border may be more artificial than we are aware of. By showing that everyone is unique in his/her expertise and discipline, I provided the link between conference presentations and presentations for the general public. I traveled through my history of science communication, starting in the middle: with the Science Battle. An event, which I wrote about before, where you are asked to explain your work in 15 minutes to an audience of 6- to 12-year-olds. Then I worked my way back via my blog and contributions to “Ik heb een vraag” (“I have a question”; for example: if you drop a penny from the Eiffel Tower, will it kill someone on the ground?) to the early beginning of my research: simulating STM images. In the latter case, although I was talking to experts in their field (experimental growth and characterization), their total lack of experience in modelling and quantum mechanical simulations transformed my colleagues into “general public”. This is an important thing to realize, not only for science communication, but also for scientific communication. As a consequence, most of the tips and tricks applicable to science communication are also applicable to scientific communication.

For example: tell a coherent story. As noted by one of my favorite authors – Terry Pratchett – the human species might better have been called “Pan narrans”, the storytelling ape. We tell stories, and we remember through stories. This is also a means to make your scien(ce/tific) communication more powerful. I told the story of my passion during science explained and my lecture for de Universiteit van Vlaanderen.

A final point I touched on is the question of “Why?”. Why should you do science communication? Some may note that it is our duty as scientists, since we are paid with taxpayer money. But personally I believe this is not a good incentive. Science communication should originate from your own passion. It should be because you want to, instead of because you have to. If you want to, it is much easier to show your passion, show your interest, and also take the time to do it.

This brought me back to my central theme: science communication can be simple and small. E.g. projecting simulated STM images on the walls of the medieval castle in Ghent (Gravensteen) during a previous edition of the Ghent Light Festival.

Simulated STM of nanowires projected on the Gravensteen (Ghent) during the 2012 Light Festival). Courtesy of Glenn Pollefeyt


VSC-users day 2019

It is becoming an interesting yearly occurrence: the VSC user day. During this 5th edition, HPC users of the various Flemish universities gathered at the Belgian Royal Academy of Science (KVAB) to present their state-of-the-art work using the Flemish Tier-1 and Tier-2 supercomputers, in a poster-presentation session. This year, I presented my work on vibrational spectra in solids and periodic systems. In contrast to those of molecules, vibrational spectra of solids are rarely investigated at the quantum mechanical level due to their high computational cost. I showed that imaginary modes are not necessarily the result of structural instabilities, and I presented a method for identifying the vibrational spectrum of a defect.

Poster for the VSC user day 2019.

In addition, international speakers discussed recent (r)evolutions in High Performance Computing, and during workshops the participants were introduced to new topics such as GPU-computing, parallelization, and the VSC Cloud and data platform. The possibilities of GPUs were presented by Ehsan of the VSC, showing extreme speedups of 10x to 100x, strongly depending on the application and the graphics card used. It is interesting to see that simple CUDA pragmas can be used to obtain such effects… maybe I should have a go at them for the Hirshfeld and phonon parts of my HIVE code… if they can deal with quadruple precision and very large arrays. During the presentation of Joost Vandevondele (ETH Zürich) we learned what the future holds with regard to next-generation HPC machines. As increasing speed becomes harder and harder to obtain, people are again looking into dedicated hardware systems, a situation akin to the founding days of HPC. Whether this is a situation we should applaud remains to be seen, as it means we are moving back to codes written for specific machines. This decrease in portability will probably be alleviated by high-level scripting languages (such as Python), which at the same time eat away a significant part of the initial gain. (Think of the framework approach to modern programming, which leads to trivial applications requiring HPC resources just to start.)

In addition, this year the HPC team of the Tier-1 machine was present for a panel discussion on the future of the infrastructure. The machine nearly doubled in size, which is great news. Let us hope that, in addition to financing hardware, a significant budget is also set aside for a serious extension of the dedicated HPC support team. Running a Tier-1 machine is not something one does as a side-project; it requires the constant vigilance of a dedicated team to deal with software updates, the resulting compatibility issues, conflicting scripts, and plain hardware and software running haywire because they can.

With this hope, I look toward the future. A future where computational research is steadily, and ever more quickly, becoming commonplace in the fabric of academic endeavors.

Universiteit Van Vlaanderen

A bit over 1 month ago, I told you about my adventure at the film studio of “de Universiteit Van Vlaanderen“. Today is the day the movie is officially released. You can find it at the website of de Universiteit Van Vlaanderen: Video. The video is in Dutch as this is a science-communication platform aimed at the local population, presenting the expertise available at our local universities.

 

In addition to this video, I was asked by Knack magazine to write a piece on the topic presented. As computational research is my central business, I wrote a piece introducing the general public to the subject. It can be read here (in Dutch).

And of course, before I forget, this weekend there was also the half-yearly daylight saving exercise with our clocks.[and in Dutch]

 

SBDD XXIV: Diamond workshop

The participants of SBDD XXIV (2019). (Courtesy of Jorne Raymakers, SBDD XXIV secretary)

 

Last week the 24th edition of the Hasselt diamond workshop took place (this year chaired by Christoph Becher). It is already the fourth time since 2016 that I have attended this conference, and each year it is a joy to meet up with the familiar faces of the diamond research field. The program was packed, as usual. And this year the NV center was again predominantly present as the all-purpose quantum defect in diamond. I keep being amazed at how much it is used (despite its rather low efficiency), and at how many open questions remain with regard to its incorporation during growth. With a little luck, you may read more about this in the future, as it is one of a few dozen ideas and questions I want to investigate.

A very interesting talk was given by Yamaguchi Takahide, who is combining hexagonal BN and H-terminated diamond for high-performance electronic devices. In such a device the h-BN leads to the formation of a 2D hole gas at the interface (i.e., surface transfer doping), making it interesting for low-dimensional applications. (And it of course hints at the opportunities available with other 2D materials.) The most interesting, as well as the most mind-boggling, fact in my opinion was that there is no clear picture of the atomic structure of the interface. But that is probably just me. For experiments, nature tends to make sure everything is alright, while we lowly computational materials artificers need to know where each and every atom belongs. I’ll have to make some time to find out.

A second extremely interesting presentation was given by Anke Krueger (who will chair the 25th edition of SBDD next year), showing off her group’s skill at creating fluorine-terminated diamond… without getting themselves killed. The surface termination of diamond with fluorine comes with many different hazards, ranging from mere poisoning to fire and explosions. The take-home message: “kids, don’t try this at home”. Despite all this risky business, a surface coverage of up to 85% was achieved, providing a new surface termination for diamond with a much stronger trapping of negative charges near the surface, ideal for forming negatively charged NV centers.

On the last day, Rozita Rouzbahani presented our collaboration on the growth of B-doped diamond. She studied the impact of growth conditions on the B concentration and the growth speed of B-doped diamond surfaces. My computational results corroborate hers and reveal the atomic-scale mechanism behind the increased doping concentration at higher growth speeds. I am looking forward to the submission of this nice piece of research.

And now, we wait another year for the next edition of SBDD, the celebratory 25th edition with a focus on diamond surfaces.

Universiteit Van Vlaanderen: Will we be able to design new materials using our smartphone in the future?

Yesterday, I had the pleasure of giving a lecture for the Universiteit van Vlaanderen, a science communication platform where Flemish academics are asked to answer “a question related to their research“. The question is meant to be highly clickable and very much simplified. The lecture itself, on the other hand, is aimed at a general lay audience.

I built my lecture around the topic of materials simulations at the atomic scale. This ended up being rather challenging, as my computational research has very little direct overlap with the everyday life of the average person. I deal with supercomputers (which these days tend to be benchmarked in terms of smartphone power) and the quantum mechanical simulation of materials at the atomic scale: two topics which may ring a bell, but only as abstractions people may have heard of.

Therefore, I crafted a story taking people on a fast ride down the rabbit hole of my work. Starting from the almost divine power of the computational materials scientist over his theoretical sample, over the reality of nano-scale materials in our day-to-day lives, past the relative size of atoms, and through the game-like nature of simulations and the salvation of computational research by grace of Moore’s Law… to the conclusion that in 25 years we may be designing the next generation of CPU materials on our smartphone instead of a Tier-1 supercomputer. …did I say we went down the rabbit hole?

The television experience itself was very exhilarating for me. Although my actual lecture took only 15 minutes, the entire event took almost a full day. It started with preparations and a trial run in the afternoon (for me and my 4 colleagues), followed by make-up (to make me look pretty on television 🙂 … or just to reduce my reflectance). In the evening we had a group dinner, meeting the people in charge of the technical aspects and of entertaining the public. And then it was 19h30. Tension started to grow. The public entered the studio, and the show was ready to start. Before each lecture, there was a short interview to test sound and light, and to introduce us to the public. As the middle presenter, I had the comfortable position of not being the first, so I could get an idea of how things went for my colleagues, and of not being the last, which can be really hard on your nerves.

At 21h00, I was up…

and down the rabbit hole we went. 

 

 

Full periodic table, with all elements presented with their relative size (if known), created for the Universiteit van Vlaanderen lecture.

 

Roots of Science

Today VLIR (the Flemish Inter-university Council) and the Young Academy held a conference on the future of fundamental research in Flanders: Roots of Science. We live in a world where we rely more and more on science to resolve our problems (think climate change, disease control, energy generation, …). In our bizarre world of alternative facts and fake news, science can be utterly ignored in one sentence and proposed as a magical solution in the next.

Although I am happy with the faith some have in the possibilities of science, it is important to remember that it is not magic. This has a very important consequence:

Things do not happen simply because you want them to happen.

 

Many important breakthroughs in science are what one would call serendipity (e.g., the discovery of penicillin by Fleming, or the development of the WWW at CERN in 1991 as a side-effect of researchers wanting to share their data, …). In Flanders, the Royal Flemish Academy and the Young Academy have written a Standpoint (an evidence-based advisory text) discussing the need for more researcher-driven research, in contrast to agenda-driven research, as they believe this is a conditio sine qua non for a healthy scientific future.

Where government-driven research focuses on resolving questions from society, researcher-driven research allows the researcher to follow his or her personal interest. This is not done with the primary aim of short-term return on investment, but with the aim of providing the fundamental knowledge and expertise which may someday be needed for the former. In researcher-driven research, the journey is the goal, as this is where scientific progress is made: by finding solutions for problems not imagined before.

Do we have to pay for this with taxpayer money? I think we do. No one imagined optical drives (CD, DVD, Blu-ray) becoming a billion-euro industry while the laser was being developed in a lab. Who would have thought the transistor would play such an important role in our everyday life? And what about the first computer? Thomas Watson, president of IBM, allegedly said in 1943: “I think there is a world market for maybe 5 computers.” And yet, now many of us have more than 5 computers at home (including tablets, smartphones, …)! The researchers working on these “inventions” did not do so with your Blu-ray player or smartphone in mind. These high-impact applications are “merely” side-products of their fundamental scientific research. No one at the time could predict this, so why should we be able to do so today? In this sense, you should see the funding of fundamental research as a long-term investment. Tax money is being invested in our future, and in the future of our children and grandchildren. Although we do not know what the outcome will be, we know from the past that it will have an impact on our lives.

It’s difficult to make predictions, especially about the future.

Let us therefore support more researcher-driven research.

 

In addition to the Standpoint, there is also a very nice video explaining the situation (with subtitles in English or Dutch, use the cogwheel to select your preference).

Newsflash: Materials of the Future

This summer, I had the pleasure of being interviewed by Kim Verhaeghe, a journalist at EOS magazine, on the topic of “materials of the future“: materials which are currently being investigated in the lab and which, in the near or distant future, may have an enormous impact on our lives. While brushing up on my materials (since materials with important length scales beyond 1 nm are generally outside my world of accessibility), I discovered that to cover this field you would need at least an entire book just to list the “materials of the future”. Many materials deserve the title because of their potential, and depending on your background, different materials may draw your attention first.

In the resulting article, Kim Verhaeghe succeeded in presenting a nice selection, and I am very happy I could contribute to the story. Introducing “the computational materials scientist” making use of supercomputers such as BrENIAC, but also new materials such as Metal-Organic Frameworks (MOF) and shedding some light on “old” materials such as diamond, graphene and carbon nanotubes.

Dangerous travel physics

Tossing coins into a fountain brings luck; tossing them off a building causes death and destruction?

 

We have probably all done it at one point when traveling: thrown a coin into a wishing well or a fountain. There are numerous wishing wells with legends describing how the deity living in the well will bring good fortune in return for this gift. The myths and legends often originate from Celtic, German or Nordic traditions.

In the case of the Trevi fountain, there is the belief that if you throw a coin over your left shoulder using your right hand, you will return to Rome… someday. As this fountain and legend are iconic parts of Western movie history, many, many coins get tossed into it (more than €1 million worth each year, which is collected and donated to charity).

In addition to these holiday legends, there also exist more recent “coin myths”: death by falling penny. These myths are always linked to tall buildings, and claim that a penny dropped from the top of such a building could kill someone if it hit them.

Traveling with Newton

In both kinds of coin legends, the trajectory of the coin can be predicted quite well using Newton’s Laws. Their speed is low compared to the speed of light, and the coins are sufficiently large to keep the world of quantum mechanics hidden from sight.

Newton’s second law states that the speed of an object changes if there is a force acting on it. Here on Earth, gravity is a major player (especially in physics exercises). In the case of a coin tossed into a fountain, gravity will cause the coin to follow a roughly parabolic path before it disappears into the water. The speed at which the coin hits the water will be comparable to the speed with which it was thrown… at least if there isn’t too much of a difference in height between the surface of the water and the hand of the person throwing the coin.

But what if this difference is large, such as for a penny dropped from a tall building? In that case, the initial velocity is zero, and the penny is accelerated toward the ground by gravity. Using the equations of motion for a uniformly accelerated system, we can easily calculate the speed at which the coin hits the ground:

x = x0 + v0·t + ½·g·t²

v = v0 + g·t

If we drop a penny from the 3rd floor of the Eiffel Tower (x0 = 276.13 m, x = 0 m, v0 = 0 m/s, g = -9.81 m/s²), then the first equation teaches us that after 7.5 seconds the penny will hit the ground, with a final speed (second equation) of -73.6 m/s (or -265 km/h)*. With such a velocity, the penny will definitely leave an impression. More interestingly, we get the exact same result for a pea (cooked or frozen), a bowling ball, a piano or an anvil… but also for a feather. At this point, your intuition must be screaming at you that you are missing something important.
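For those who want to verify these numbers, here is a minimal Python sketch (my own illustration, using only the values quoted above):

```python
import math

g = 9.81      # gravitational acceleration (m/s^2)
x0 = 276.13   # height of the 3rd floor of the Eiffel Tower (m)

# Free fall without drag: x = x0 - (1/2)*g*t^2 = 0  ->  t = sqrt(2*x0/g)
t_impact = math.sqrt(2 * x0 / g)
v_impact = g * t_impact  # speed at impact (downward)

print(f"impact after {t_impact:.1f} s at {v_impact:.1f} m/s ({v_impact * 3.6:.0f} km/h)")
# -> impact after 7.5 s at 73.6 m/s (265 km/h)
```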

All models are wrong…but they can be very useful

The power of models in physics originates from keeping only the most important and relevant aspects. Such approximations provide a simplified picture and allow us to understand the driving forces behind nature itself. In this context, models in physics are approximations of reality, and thus by definition wrong, in the sense that they do not provide an “exact” representation of reality. This is also true for Newton’s laws and for our application above. Still, with these simple rules it is possible to describe the motion of the planets as well as that of a coin tossed into the Trevi fountain.

So what’s the difference between the coin tossed into a fountain and planetary motion on the one hand, and our assorted objects being dropped from the Eiffel Tower on the other hand?

Friction as it presents itself in aerodynamic drag!

Aerodynamic drag gives rise to a force in the direction opposite to the movement, and it is defined as:

FD = ½·ρ·v²·CD·A

This force depends on the density ρ of the medium (hence water gives a larger drag than air), the velocity v of the object, its cross-sectional area A in the direction of movement, and the drag coefficient CD, which depends on the shape of the object.

If we take a look at the planets and the coin tosses, we notice that, due to the absence of air between the planets, no aerodynamic drag needs to be considered for planetary motion. For a coin tossed into the Trevi fountain there is aerodynamic drag, but both the speed and the distance traversed are small, so the effect of drag will be rather small, if not negligible. For objects dropped from a tall building, however, the aerodynamic drag will not be negligible, and it is the factors CD and A which make sure the anvil arrives at ground level before the feather.

Because this force also depends on the velocity, you can no longer directly use the first two equations to calculate the time of impact and the velocity at each point of the path. You will need a numerical approach for this (which is also the reason this is not (regularly) taught in introductory physics classes in high school). However, using Excel, you can get a long way toward a numerical solution of this problem. [Excel example]

Knowing that the density of air is about 1.2 kg/m³, that CD for a thin cylinder (think of a coin) is 1.17, and that a penny has a radius of 9.5 mm and a mass of 2.5 g, we find the terminal velocity of the penny to be 11.1 m/s (40 km/h). The penny lands on the ground after about 25.6 seconds. This is quite a bit slower than what we found before, and also quite a bit safer. The penny reaches its terminal velocity after falling about 60 m, which means that dropping a penny from taller buildings (the Atomium [102 m], the Eiffel Tower [276.13 m at the 3rd floor, 324 m at the top], the Empire State Building [381 m] or even the Burj Khalifa [829.8 m]) will have no effect on the velocity with which it hits the ground: 40 km/h.

This is a collision you will most probably survive, but which will definitely leave a small bruise on impact.
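As an alternative to the Excel sheet, the numerical approach can be sketched in a few lines of Python. A simple Euler integration of dv/dt = g - k·v² (with k = ½·ρ·CD·A/m, and downward taken as positive) recovers both the terminal velocity and the fall time quoted above; the step size dt is an arbitrary small choice:

```python
import math

g, rho = 9.81, 1.2               # gravity (m/s^2), density of air (kg/m^3)
cd, r, m = 1.17, 9.5e-3, 2.5e-3  # drag coefficient, penny radius (m), mass (kg)
A = math.pi * r**2               # cross-sectional area of the falling penny
k = 0.5 * rho * cd * A / m       # drag deceleration per unit v^2

v_term = math.sqrt(g / k)        # terminal velocity: drag exactly balances gravity

# Euler integration of the fall from the 3rd floor of the Eiffel Tower
t, v, x, dt = 0.0, 0.0, 276.13, 1e-3
while x > 0:
    v += (g - k * v**2) * dt  # acceleration reduced by drag
    x -= v * dt               # downward motion
    t += dt

print(f"terminal velocity {v_term:.1f} m/s; impact after {t:.1f} s at {v:.1f} m/s")
```

The same loop is what the Excel sheet does row by row, one row per time step.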

 

*The minus sign indicates the coin is falling downward.

Book chapter: Computational Chemistry Experiment Possibilities

Authors: Bartłomiej M. Szyja and Danny Vanpoucke
Book: Zeolites and Metal-Organic Frameworks (2018)
Chapter: Ch. 9, p. 235-264
Title: Computational Chemistry Experiment Possibilities
ISBN: 978-94-629-8556-8
export: bibtex
pdf: <Amsterdam University Press>
<Open Access>

 

Zeolites and Metal-Organic Frameworks (the hard-copy)

Abstract

Thanks to the rapid increase in the computational power of modern CPUs, computational methods have become a standard tool for the investigation of physico-chemical phenomena in many areas of chemistry and technology. The area of porous frameworks, such as zeolites, metal-organic frameworks (MOFs) and covalent-organic frameworks (COFs), is no different. Computer simulations make it possible not only to verify the results of experiments, but even to predict previously nonexistent materials that will exhibit the desired experimental properties. Furthermore, computational research of materials provides the tools necessary to obtain fundamental insight into details that are often not accessible to physical experiments.

The methodology used in these simulations is quite specific because of the special character of the materials themselves. However, within the field of porous frameworks, density functional theory (DFT) and force fields (FF) are the main actors. These methods form the basis of most computational studies, since they allow the evaluation of the potential energy surface (PES) of the system.

Related:

Newsflash: here

Building bridges towards experiments.

Quantum Holy Grail: The Ground-State

Quantum mechanical calculations provide a powerful tool to investigate the world around us. Unfortunately it is also a computationally very expensive tool to use, which puts a boundary on what is possible in terms of computational materials research. For example, when investigating a solid at the quantum mechanical level, you are limited in the number of atoms that you can consider. Even with a powerful supercomputer at hand, a hundred to a thousand atoms are currently accessible for “routine” investigations. The computational cost also limits the number of configurations/combinations you can calculate.

However, in the end—and often with some blood, sweat and tears—these calculations do provide you with the ground-state structure and energy of your system. From this point forward you can continue characterizing its properties; life is beautiful and happy times are just beyond the horizon. At this horizon your experimental colleague awaits you. And he/she tells you:

Sorry, I don’t find that structure in my sample.

After recovering from the initial shock, you soon realize that in (materials science) experiments one seldom encounters a sample in “the ground-state”. Experiments are performed at temperatures above 0K and pressures above 0 Pa (even in vacuum :p ). Furthermore, synthesis methods often involve elevated temperatures, increased pressure, mechanical forces, chemical reactions,… which give rise to meta-stable configurations. In such an environment, your nicely deduced ground-state may be an exception to the rule. It is only one point within the phase-space of the possible.

So how can you deal with this? You somehow need to sample the phase-space available to the experiment.

Sampling Phase-Space for Ball-milling synthesis.

For a few years now, I have had a very fruitful collaboration with Prof. Rounaghi. His interest lies in the cheap fabrication of metal nitrides. Our first collaboration focused on AlN, while later work included Ti, V and Cr nitrides. Although this initial work had a strong focus on simple corroboration through energies calculated at the quantum mechanical level, the collaboration also allowed me to look at my data in a different way. I wanted to “simulate” the reactions of the ball-milling experiments more closely.

Due to the size-limitations of quantum mechanical calculations I played with the following idea:

  • Assume there exists a general master reaction which describes what happens during ball-milling.

X Al + Y Melamine → x1 Al + x2 Melamine + x3 AlN + …

where all the xi represent the fractions of the reaction products present.

  • With the boundary condition that the number of particles needs to be conserved, you end up with a large set of (x1,x2,x3,…) configurations which each have a certain energy. This energy is calculated using the quantum mechanical energies of each product. The configuration with the lowest energy is the ground state configuration. However, investigating the entire accessible phase-space showed that the energies of the other possible configurations are generally not that much higher.
  • What if we use the energy available due to ball-milling in the same fashion as we use kBT, and sample the phase-space using Boltzmann statistics?
  • The resulting Boltzmann distribution of the configurations available in the phase-space can then be used to calculate the mass/atomic fraction of each of the products and allow us to represent an experimental sample as a collection of small units with slightly different configurations, weighted according to their Boltzmann distribution.
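As a toy illustration of the scheme above, consider only the Al/AlN part of the master reaction, with a single hypothetical reaction energy standing in for the DFT values (the numbers here are assumptions, chosen purely for the sketch):

```python
import math

# Hypothetical energies per formula unit (eV); the real values come from
# quantum mechanical (DFT) calculations of each reaction product.
E_Al, E_AlN = 0.0, -0.4
X = 10    # total amount of Al, conserved over the reaction
kT = 0.2  # effective "ball-milling" energy (eV); an assumed value

# All configurations (x1 unreacted Al, x3 AlN formed) with x1 + x3 = X
configs = [(x1, X - x1) for x1 in range(X + 1)]
energies = [x1 * E_Al + x3 * E_AlN for x1, x3 in configs]
E_min = min(energies)  # energy of the ground-state configuration

# Boltzmann weights relative to the ground state
weights = [math.exp(-(e - E_min) / kT) for e in energies]
Z = sum(weights)

# Boltzmann-weighted average atomic fraction of AlN over the phase space
frac_AlN = sum(w * x3 / X for w, (_, x3) in zip(weights, configs)) / Z
print(f"ground state {configs[energies.index(E_min)]}, <AlN fraction> = {frac_AlN:.3f}")
```

With the full set of products and the DFT energies, the same weighting yields the product fractions as a function of the milling energy.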

This setup allowed me to see the evolution in end-products as function of the initial ratio in case of AlN, and in our current project to indicate the preferred Iron-nitride present.

Grid-sampling vs Monte-Carlo-sampling

Whereas the AlN system was relatively easy to investigate—the phase space was only 3-dimensional—the recent iron-based system ended up being 4-dimensional when considering only host materials, and 10-dimensional when including defects. For a small 3-4D phase-space, it is possible to create an equally spaced grid and get converged results using a few million to a billion grid points. For a 10D phase-space this is no longer possible. As you can no longer (easily) keep all data points in storage during your calculation (imagine 1 billion points, each requiring you to store 11 double-precision floats, or about 82 GB in total), you need a method that does not rely on large arrays of data. For our Boltzmann statistics this presents a bit of a pickle, as we need the global minimum of our phase space. A grid is too coarse to find it, while a simple Monte Carlo just keeps hopping around.

Using Metropolis’s improvement of the Monte Carlo approach was an interesting exercise, as it clearly shows the beauty and simplicity of the approach. It becomes even more awesome the moment you imagine the resources available in those days. I noted 82 GB being a lot, but I do have access to machines with those resources; it’s just not available on my laptop. In those days, the MANIAC supercomputers had less than 100 kilobytes of memory.

Although I theoretically no longer need the minimum-energy configuration, having access to that information is rather useful. Therefore, I first search the phase space for this minimum. This is rather tricky using Metropolis Monte Carlo (of course better techniques exist, but I wanted to be a bit lazy), but I found that in the limit of T→0 the algorithm moves toward the minimum. This may, however, require nearly 100 million steps, of which >99.9% are rejected. As this only takes about 20 seconds on a modern laptop… it isn’t a big issue.

Finding a minimum using Metropolis Monte Carlo.

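The T→0 trick can be illustrated on a toy energy surface (a simple quadratic with a known minimum; the cooling schedule, step size and starting point are all arbitrary choices made for this sketch):

```python
import math
import random

random.seed(42)

def energy(x, y):
    """Toy energy surface with a single minimum at (1, -2)."""
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

x, y = 5.0, 5.0  # start far away from the minimum
e = energy(x, y)
T = 1.0
for _ in range(200_000):
    T = max(T * 0.99995, 1e-6)  # slowly cool toward the T -> 0 limit
    xn, yn = x + random.gauss(0, 0.1), y + random.gauss(0, 0.1)
    en = energy(xn, yn)
    # Metropolis criterion: accept downhill moves always,
    # uphill moves with probability exp(-dE/T)
    if en < e or random.random() < math.exp(-(en - e) / T):
        x, y, e = xn, yn, en

print(f"minimum found near ({x:.2f}, {y:.2f}), E = {e:.4f}")
```

At low T nearly every proposed move is rejected, which is exactly the high rejection rate mentioned above.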

Next, a similar Metropolis Monte Carlo algorithm can be used to sample the entire phase space. Using 10⁹ sample points was already sufficient to obtain a nicely converged sampling of the phase space for the problem at hand. Running the calculation for 20 different “ball-milling” energies took less than 2 hours, which is insignificant compared to the resources required to calculate the quantum mechanical ground-state energies (several years). The figure below shows the distribution of the mass fraction of one of the reaction products, as well as the distribution of the energies of the sampled configurations.

Metropolis Monte Carlo distribution of mass fraction and configuration energies for 3 sets of sample points.

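The sampling step itself can be illustrated with a discrete toy system of four configurations with hypothetical energies (again stand-ins for the DFT values); the Metropolis chain reproduces the exact Boltzmann occupations:

```python
import math
import random

random.seed(1)

# Hypothetical configuration energies (eV); the real values come from DFT.
E = [0.00, 0.05, 0.10, 0.30]
kT = 0.10  # effective "ball-milling" energy (eV)

# Exact Boltzmann occupations, for comparison
w = [math.exp(-e / kT) for e in E]
exact = [wi / sum(w) for wi in w]

# Metropolis Monte Carlo over the discrete configurations
counts = [0] * len(E)
state = 0
for _ in range(400_000):
    prop = random.randrange(len(E))  # propose a random configuration
    # accept with probability min(1, exp(-dE/kT))
    if random.random() < math.exp(-(E[prop] - E[state]) / kT):
        state = prop
    counts[state] += 1

sampled = [c / sum(counts) for c in counts]
for s, x in zip(sampled, exact):
    print(f"sampled {s:.3f}  exact {x:.3f}")
```

Histogramming an observable (such as a mass fraction) over the visited configurations gives distributions of the kind shown in the figure.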

This clearly shows how unique and small the quantum mechanical ground-state configuration and its contribution are compared to the remainder of the phase space. So, of course, the ground state is not found in the experimental sample, but that doesn’t mean the calculations are wrong either. Both are right; they just look at reality from a different perspective. The gap between the two can luckily be bridged, if one looks at both sides of the story.