Tag Archive: theory and experiment

Oct 31

Daylight saving and solar time

For many people around the world, last weekend was highlighted by a half-yearly recurring ritual: switching to or from daylight saving time. In Belgium, this goes hand in hand with another half-yearly ritual: the discussion about the possible benefits of abolishing daylight saving time. Throughout the last century, daylight saving time has been introduced on several occasions. The most recent introduction in Belgium and the Netherlands was in 1977. At that time it was intended as a measure for conserving energy, in response to the oil crises of the 1970s. (In Belgium, this makes it painfully topical given the current state of our energy supplies: the impending doom of energy shortages and the accompanying disconnection plans, which will leave entire regions without power in case of shortages.)

The basic idea behind daylight saving time is to align the daylight hours with our working hours. This is a vision quite different from that of, for example, ancient Rome, where the daily routine was adjusted to the time between sunrise and sunset. This period was by definition set to be 12 hours, making one hour in summer significantly longer than one hour in winter. As children of our time, with our modern vision of time, it is very hard to imagine living like this without being overwhelmed by images of impending doom and absolute chaos. In this day and age, we want to know exactly, to the second, how much time we are spending on everything (which seems to be social media, mostly 😉 ). But also for more important aspects of life, a more accurate picture of time is needed. Think for example of your GPS, which will put you off your mark by hundreds of meters if your uncertainty in time is a mere 0.000001 seconds (light travels about 300 m in one microsecond). Similarly, a police radar will not be able to measure the speed of your car with that same uncertainty on its timing.

Turning back to the Roman vision of time, have you ever wondered why “the day” is longer during summer than during winter? Or whether this difference is the same everywhere on earth? Or whether the variation in day length is the same during the entire year?

Our place on earth

To answer these questions, we need a good model of the world around us. And as is usual in science, the more accurate the model, the more detailed the answer.

Let us start very simple. We know the earth is spherical and rotates about its axis once every 24 hours. The side receiving sunlight we call day, while the shaded side is called night. If we assume the earth rotates at a constant speed, then any point on its surface will move around the earth's rotational axis at a constant angular speed. Such a point will spend 50% of its time on the light side, and 50% on the dark side. Here we have also silently assumed that the rotational axis of the earth is “straight up” with regard to the sun.

In reality, this is not the case. The earth's rotational axis is tilted by about 23° from the axis perpendicular to the orbital plane. If we now consider a fixed point on the earth's surface, we note that a point at the equator still spends 50% of its time in the light and 50% of its time in the dark. In contrast, when the axis points away from the sun, a point on the northern hemisphere will spend less than 50% of its time on the daylight side, while a point on the southern hemisphere spends more than 50% of its time there. You will also note that the latitude plays an important role. The further north you go, the smaller the daylight section of the latitude circle becomes, until it vanishes at the polar circle. On the southern hemisphere, on the other hand, a point below the polar circle spends all its time on the daylight side. So if the earth's axis were fixed with regard to the sun, as shown in the picture, we would have regions on earth living in eternal night (north pole) or eternal day (south pole). Luckily this is not the case. The orientation of the earth's axis is fixed with regard to the fixed stars, so seen from the sun it makes a full circle during one orbit.* When the earth's axis points away from the sun, it is winter on the northern hemisphere, while during summer it points toward the sun. In between, at the equinoxes, the earth's axis lies in the plane perpendicular to the sun-earth direction, and day and night have exactly the same length: 12 hours.

So, now that we know the length of our daytime varies with the latitude and the time of the year, we can move one step further.

How does the length of a day vary, during the year?

The length of the day varies over the year, with the longest and shortest days marked by the summer and winter solstices. The periodic nature of this variation may give you the inclination to consider it a sine wave, a sine-in-the-wild so to speak. So let us compare a sine wave fitted to actual day-length data for Brussels. As you can see, the fit performs quite well, but there is a clear discrepancy. So we can, and should, do better than this.

Instead of looking at the length of each day, let us look at the difference in length between sequential days.** If we calculate this difference for the fitted sine wave, we again get a sine wave, as we are taking a finite-difference version of the derivative. The actual data, in contrast, shows not a sine wave but a broadened sine wave with a flat maximum and minimum. You may think this is an error, or an artifact of our averaging, but in reality this trend even depends on the latitude, becoming more extreme the closer you get to the poles.
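For those who want to try this at home, here is a minimal Python sketch of both steps: the sine fit and the day-to-day difference. The day-length array below is synthetic placeholder data; in practice you would fill it with the values extracted from tabulated sunrise/sunset times for Brussels.

```python
import numpy as np
from scipy.optimize import curve_fit

days = np.arange(365)
# Placeholder day lengths (hours): replace with the measured values for
# Brussels, extracted from tabulated sunrise/sunset times.
day_length = 12.0 + 4.2 * np.sin(2 * np.pi * (days - 80) / 365.25)

def sine_model(t, amplitude, phase, offset):
    """A sine wave with a one-year period."""
    return offset + amplitude * np.sin(2 * np.pi * (t - phase) / 365.25)

params, _ = curve_fit(sine_model, days, day_length, p0=[4.0, 80.0, 12.0])
fit = sine_model(days, *params)

# Day-to-day difference (a finite-difference derivative), in minutes.
# For the fit this is again a sine; for real data it is visibly flattened
# around the extremes.
diff_data = np.diff(day_length) * 60.0
diff_fit = np.diff(fit) * 60.0
```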

This additional information provides us with the extra hint that, in addition to the axial tilt of the earth's axis, we also need to consider the latitude of our position. What we need to calculate is the fraction of our latitude circle (for Brussels at 50.85° N) that is illuminated by the sun on each day of the year. With some perseverance and our high school trigonometry, we can derive an analytic solution, which can then be evaluated in, for example, Excel.

Some calculations

The figure above shows a 3D sketch of the situation on the left, and a 2D representation of the latitude circle on the right. α is related to the latitude (it is measured from the earth's axis, i.e. it is the colatitude), and β is the angle between the earth's axis and the ‘shadow-plane’ (the plane separating the day and night sides of the earth). As such, β will be maximal at the solstices (±23°26’12.6″) and exactly zero at the equinoxes, when the earth's axis lies entirely in the shadow-plane. The length of the day is then given by the illuminated fraction of the latitude circle: 24h × (360° − 2γ)/360°. γ can be calculated as cos(γ) = adjacent side/hypotenuse in the right-hand part of the figure above. If we denote the earth's radius as R, then the hypotenuse is given by Rsin(α). The adjacent side, on the other hand, is equal to R’sin(β), where R’ = B/cos(β), and B is the perpendicular distance between the center of the earth and the plane of the latitude circle, or B = Rcos(α).
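Written out with the symbols of the figure (treating α as the colatitude, consistent with B = Rcos(α) above), the pieces condense to a single line:

```latex
\cos\gamma
  = \frac{\text{adjacent side}}{\text{hypotenuse}}
  = \frac{R'\sin\beta}{R\sin\alpha}
  = \frac{(R\cos\alpha/\cos\beta)\,\sin\beta}{R\sin\alpha}
  = \cot\alpha\,\tan\beta
```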

Combining all these results, we find that the number of daylight hours is:

24h × {360° − 2·arccos[cotg(α)·tg(β)]}/360°
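As a cross-check of the formula, here is a minimal Python sketch. The post itself evaluates the formula in Excel; the sinusoidal approximation for β as a function of the day of the year is my assumption here, not taken from the post.

```python
import numpy as np

AXIAL_TILT = 23.436  # degrees; the maximal value of beta, at the solstices

def day_length_hours(latitude_deg, day_of_year):
    """Daylight hours from the illuminated fraction of the latitude circle.

    beta (the angle between the earth's axis and the shadow-plane) is
    approximated by a sine over the year, with day 80 as the March equinox.
    alpha is the colatitude, so cotg(alpha) = tan(latitude).
    """
    beta = np.radians(AXIAL_TILT) * np.sin(2 * np.pi * (day_of_year - 80) / 365.25)
    cos_gamma = np.tan(np.radians(latitude_deg)) * np.tan(beta)
    # Beyond the polar circles |cos(gamma)| exceeds 1: clip to eternal day/night.
    gamma = np.degrees(np.arccos(np.clip(cos_gamma, -1.0, 1.0)))
    return 24.0 * (360.0 - 2.0 * gamma) / 360.0

# Brussels (50.85 N): roughly 16.3 h around the June solstice,
# 7.7 h around the December solstice, and 12 h at the equinoxes.
print(day_length_hours(50.85, 172), day_length_hours(50.85, 355))
```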

 

How accurate is this model?

All the analytic work is done; the actual calculation with numbers is a computer's job, so we put Excel to work. For Brussels we see that our model curve follows the data very nicely and smoothly (no fitting is performed beyond setting the phase of the model curve to align with the data). The broadening is reproduced perfectly, as are the maximum and minimum variation in day length (note that this is not a fitting parameter, in contrast to the fit with the sine wave). If you want to play with this model yourself, you can download the Excel sheet here. While we are at it, I also drew some curves for different latitudes. Note that beyond the polar circles this model cannot work, as we enter regions with periods of eternal day/night.

 

After all these calculations, be honest:

You are happy you only need to change the clock twice a year, aren't you? 🙂

 

 

* OK, in reality the earth's axis isn't truly fixed: it precesses slowly (with a period of roughly 26,000 years), and its tilt oscillates with a period of about 41,000 years. For the sake of argument we will ignore this.

** Unfortunately, the available data for sunrises and sunsets have an accuracy of only 1 minute. By averaging over a period of 7 years, we can reduce the noise from ±1 minute to a more reasonable value, giving a better picture of the general trend.


Jul 17

Building bridges towards experiments.

Quantum Holy Grail: The Ground-State

Quantum mechanical calculations provide a powerful tool to investigate the world around us. Unfortunately, this tool is also computationally very expensive, which limits what is possible in terms of computational materials research. For example, when investigating a solid at the quantum mechanical level, you are limited in the number of atoms you can consider. Even with a powerful supercomputer at hand, only a hundred to a thousand atoms are currently accessible for “routine” investigations. The computational cost also limits the number of configurations/combinations you can calculate.

However, in the end, and often with some blood, sweat and tears, these calculations do provide you the ground-state structure and energy of your system. From this point forward you can continue characterizing its properties; life is beautiful and happy times are just beyond the horizon. At this horizon your experimental colleague awaits you. And he/she tells you:

Sorry, I don’t find that structure in my sample.

After recovering from the initial shock, you soon realize that in (materials science) experiments one seldom encounters a sample in “the ground state”. Experiments are performed at temperatures above 0 K and pressures above 0 Pa (even in vacuum :p ). Furthermore, synthesis methods often involve elevated temperatures, increased pressure, mechanical forces, chemical reactions, ... which give rise to metastable configurations. In such an environment, your nicely deduced ground state may be the exception rather than the rule. It is only one point within the phase space of the possible.

So how can you deal with this? You somehow need to sample the phase-space available to the experiment.

Sampling Phase-Space for Ball-milling synthesis.

For a few years now, I have had a very fruitful collaboration with Prof. Rounaghi. His interest lies in the cheap fabrication of metal nitrides. Our first collaboration focused on AlN, while later work included Ti, V and Cr nitrides. Although this initial work had a strong focus on simple corroboration through the energies calculated at the quantum mechanical level, the collaboration also allowed me to look at my data in a different way. I wanted to “simulate” the reactions of ball-milling experiments more closely.

Due to the size limitations of quantum mechanical calculations, I played with the following idea:

  • Assume there exists a general master reaction which describes what happens during ball-milling.

X Al + Y Melamine → x1 Al + x2 Melamine + x3 AlN + …

where all the xi represent the fractions of the reaction products present.

  • With the boundary condition that the number of particles must be conserved, you end up with a large set of (x1, x2, x3, …) configurations, each with a certain energy. This energy is calculated using the quantum mechanical energies of each product. The configuration with the lowest energy is the ground-state configuration. However, investigating the entire accessible phase space showed that the energies of the other possible configurations are generally not that much higher.
  • What if we used the energy available due to ball-milling in the same fashion as we use kBT, and sampled the phase space using Boltzmann statistics?
  • The resulting Boltzmann distribution of the configurations available in the phase space can then be used to calculate the mass/atomic fraction of each of the products, and allows us to represent an experimental sample as a collection of small units with slightly different configurations, weighted according to their Boltzmann distribution (a minimal sketch of this weighting is given below).
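To make the idea concrete, below is a minimal sketch of this Boltzmann weighting for a toy one-dimensional version of the master reaction. The energy surface and the "milling energies" are placeholders, not the actual quantum mechanical values used in the study.

```python
import numpy as np

# Toy 1D version of the master reaction, tracked by a single reaction
# extent xi in [0, 1]; the real phase spaces are 3- to 10-dimensional.
xi = np.linspace(0.0, 1.0, 1001)
# Placeholder energy (eV) of each configuration; in the actual study this
# is built from the quantum mechanical energies of the products.
energy = -1.5 * xi + 0.8 * xi**2

def boltzmann_average(xi, energy, kT_mill):
    """Boltzmann-weighted average of xi, with the ball-milling energy
    kT_mill playing the role of kB*T."""
    w = np.exp(-(energy - energy.min()) / kT_mill)  # shift for stability
    w /= w.sum()
    return np.sum(w * xi)

for kT_mill in (0.01, 0.1, 0.5):  # "milling energies" in eV
    print(kT_mill, boltzmann_average(xi, energy, kT_mill))
```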

This setup allowed me to see the evolution in end products as a function of the initial reactant ratio in the case of AlN, and, in our current project, to indicate the preferred iron nitride present.

Grid-sampling vs Monte-Carlo-sampling

Whereas the AlN system was relatively easy to investigate (the phase space was only 3-dimensional), the recent iron-based system ended up being 4-dimensional when considering only host materials, and 10-dimensional when including defects. For a small 3-4D phase space, it is possible to create an equally spaced grid and get converged results using a few million to a billion grid points. For a 10D phase space this is no longer possible. As you can no longer (easily) keep all data points in storage during your calculation (imagine 1 billion points, each requiring 11 double-precision floats: 1e9 × 11 × 8 bytes ≈ 88 GB, or about 82 GiB), you need a method that does not rely on large arrays of data. For our Boltzmann statistics this gives us a bit of a pickle, as we need the global minimum of our phase space. A grid is too coarse to find it, while a simple Monte Carlo just keeps hopping around.

Using Metropolis's improvement of the Monte Carlo approach was an interesting exercise, as it clearly shows the beauty and simplicity of the approach. This becomes even more awesome the moment you imagine the resources available in those days. I noted 82 GiB being a lot; I do have access to machines with those resources, it is just not available on my laptop. In those days, the MANIAC computer had less than 100 kilobytes of memory.

Although in theory I no longer need the minimum-energy configuration, having access to that information is rather useful. Therefore, I first search the phase space for this minimum. This is rather tricky using Metropolis Monte Carlo (of course better techniques exist, but I wanted to be a bit lazy), but in the limit of T→0 the algorithm will move toward the minimum. This may require nearly 100 million steps, of which >99.9% are rejected. As it only takes about 20 seconds on a modern laptop, this isn't a big issue.

Finding a minimum using Metropolis Monte Carlo.
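A minimal sketch of such a minimum search, on a toy two-dimensional energy landscape (the landscape, step size and temperature are illustrative placeholders, not the settings of the actual study):

```python
import numpy as np

rng = np.random.default_rng(42)

def energy(x):
    """Placeholder energy landscape standing in for the configuration
    energy of the reaction products."""
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.cos(20.0 * x))

def metropolis_minimize(n_steps, kT, step=0.05, dim=2):
    """Plain Metropolis walk: in the kT -> 0 limit nearly every uphill
    move is rejected and the walker drifts toward a (local) minimum."""
    x = rng.random(dim)
    e = energy(x)
    best_x, best_e = x.copy(), e
    for _ in range(n_steps):
        trial = x + rng.uniform(-step, step, dim)
        e_trial = energy(trial)
        # Always accept downhill; accept uphill with probability exp(-dE/kT).
        if e_trial <= e or rng.random() < np.exp(-(e_trial - e) / kT):
            x, e = trial, e_trial
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x, best_e

print(metropolis_minimize(100_000, kT=1e-4))
```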

Next, a similar Metropolis Monte Carlo algorithm can be used to sample the entire phase space. Using 10⁹ sample points was already sufficient to obtain a nicely converged sampling of the phase space for the problem at hand. Running the calculation for 20 different “ball-milling” energies took less than 2 hours, which is insignificant compared to the resources required to calculate the quantum mechanical ground-state energies (several years). The figure below shows the distribution of the mass fraction of one of the reaction products, as well as the distribution of the energies of the sampled configurations.

Metropolis Monte Carlo distribution of mass fraction and configuration energies for 3 sets of sample points.

This clearly shows how unique and small the quantum mechanical ground-state configuration and its contribution are compared to the remainder of the phase space. So of course the ground state is not found in the experimental sample, but that doesn't mean the calculations are wrong either. Both are right; they just look at reality from a different perspective. The gap between the two can luckily be bridged, if one looks at both sides of the story.

 

Jun 07

Science Figured out


Diamond and CPU’s, now still separated, but how much longer will this remain the case?
Top left: Thin film N-doped diamond on Si (courtesy of Sankaran Kamatchi). Top right: Very old Pentium 1 CPU from 1993 (100MHz), with µm architecture. Bottom left: more recent intel core CPU (3GHz) of 2006 with nm scale architecture. Bottom right: Piece of single crystal diamond. A possible alternative for silicon, with 20x higher thermal conductivity, and 7x higher mobility of charge carriers.

Can you pitch your research in 3 minutes? That is the concept behind “wetenschap uitgedokterd/science figured out“, a challenge I accepted after the fun I had at the science-battle. If I can explain my work to an audience of 6- to 12-year-olds, explaining it to adults should be possible as well. However, 3 minutes is very short (although some may consider this long in the current bite-size world), especially if you have to explain something far from day-to-day life and cannot assume any scientific background.

Where to start? Capture the imagination: “Imagine a world where you are a god.”

Link back to the real world: “All modern-day high-tech toys are more and more influenced by atomic-scale details.” Over the last decade, I have seen the nano-scale slowly but steadily progress into the realm of real-life materials research. This almost invisible trend will have a huge impact on materials science in the coming decade, because more and more we will see empirical laws break down, and it will become harder and harder to fit trends of materials using a classical mindset, something that has worked marvelously for materials science during the last few centuries. Modern and future materials design (be it solar cells, batteries, CPUs or even medicine) will have to rely on quantum mechanical intuition and hence quantum mechanical simulations. (Although there is still much denial in that regard.)

Is there a problem to be solved? Yes indeed: “We do not have quantum mechanical intuition by nature, and manipulating atoms is extremely hard in practice and for practical purposes.” Although popular science magazines every so often boast pictures of atomic-scale manipulation of atoms and the quantum regime, this is far from easy and common inside and outside the university lab. It is amazing how hard these things tend to get (ask your local experimental materials research PhD), and the required blood, sweat and tears are generally not represented in the glory parade of a scientific publication.

Can you solve this? Euhm… yes… at least to some extent. “Computational materials research can provide the quantum mechanical intuition we human beings lack, and gives us access to atomic-scale manipulation of a material.” Although computational materials science is seen by experimentalists as theory, and by theoreticians as experiment, it is neither and both. Computational materials science combines the rigor and control of theory with the access to real-life systems of experiments. Unfortunately, it also suffers the limitations of both: the system is still idealized (though to a much lesser extent than in theoretical work) and control is not absolute (you have to follow where the algorithms take you, just as an experimentalist has to follow where the reaction takes him/her). But if these strengths and weaknesses are balanced wisely (which requires quite a few years of experience), an expert can gain fundamental insights into experiments.

Animation representing the buildup of a diamond surface in computational work.

As a computational materials scientist, you build a real-life system atom by atom, such that you know exactly where everything is located, and then calculate its properties based on, for example, the rules of quantum mechanics. In this sense you have the absolute control of theory. This comes at a cost (conservation of misery 🙂 ): where in experiments nature itself makes sure the structure is the “correct one”, in computational work you have to find it yourself. So you generally end up calculating many possible structural combinations of your atoms, to first find out which one most probably represents nature.

So what am I actually doing? “I am using atomic-scale quantum mechanical computations to investigate the materials my experimental colleagues are studying, going from oxides to defects in diamond.” I know this is vague, but unfortunately the actual work is technical. Much effort goes into getting the calculations to run in the direction you want them to proceed (this is the experimental side of computational materials science). The actual goal varies from project to project. Sometimes we want to find out which material is most stable, and which material is most likely to diffuse into the other; at other times we want to understand the electronic structure, to test whether a defect is really luminescent and thus trace the source of the experimentally observed luminescence. Or, to make it more complex, to find out which elements would make diamond grow faster.

Starting from this, I succeeded in creating a 3-minute pitch of my research for Science Figured out. The pitch can be seen here (in Dutch, with English subtitles that can be switched on through the cogwheel in the bottom right corner).


 

May 22

VSC User Day 2018

Today, I am attending the 4th VSC User Day at the “Paleis der Academiën” in Brussels. Flemish researchers for whom the lifeblood of their research flows through the chips of a supercomputer have gathered here to discuss their experiences and present their research.

Some History

About 10 years ago, at the end of 2007 and the beginning of 2008, the 5 Flemish universities founded the Flemish Supercomputer Center (VSC), a virtual organisation with one central goal: to combine their strengths and know-how with regard to High Performance Computing (HPC) centers, so as to stay competitive with comparable HPC centers elsewhere.

By installing a super-fast network between the various university compute centers, every Flemish researcher nowadays has access to state-of-the-art computer infrastructure, independent of his or her physical location. A researcher at the University of Hasselt, like myself, can easily run calculations on the supercomputers installed at the universities of Ghent or Leuven. In October 2012, the existing university supercomputers, so-called Tier-2 supercomputers, were joined by the first Flemish Tier-1 supercomputer, housed at the brand-new data centre of Ghent University. This machine was significantly larger than the existing Tier-2 machines, and allowed Belgium to become the 25th member of the PRACE network, a European network which provides computational researchers access to the best and largest computer facilities in Europe. The fast development of computational research in Flanders and the explosive growth of the number of computational researchers, combined with the first shared Flemish supercomputer (in contrast to the university Tier-2 supercomputers, which some still consider private property rather than part of the VSC), show the impact of the virtual organisation that is the VSC. As a result, on January 16th 2014, the first VSC User Day was organised, bringing together HPC users from all 5 universities and industry to share their experiences and discuss possible improvements and changes. Since then, the first Tier-1 supercomputer has been decommissioned and replaced by a brand-new Tier-1 machine, this time located at KU Leuven. Furthermore, the Flemish government has set aside 30M€ for supercomputing in Flanders, making sure that Flemish computational research also stays competitive in the future. The future of computational research in Flanders looks bright.

Today is User Day 2018

During the 4th VSC User Day, researchers from all 5 Flemish universities will present the work they are performing on the supercomputers of the VSC network. The range of topics is very broad: from first-principles materials modelling to chip design, climate modelling and space weather. In addition, there will be several workshops, introducing new users to the VSC and teaching advanced users the finer details of GPU code, code optimization, and parallelization. This latter aspect is hugely important when using supercomputers in an academic context. Much of the software used is developed or modified by the researchers themselves. And even though this software can show impressive behavior, it does not speed up automatically if you give it access to more CPUs. That is a very non-trivial task the researcher has to take care of, by carefully optimizing and parallelizing his or her code.

To support the researchers in their work, the VSC has come up with an ingenious poster prize. The three best posters will share 2018 node-days of calculation time (about 155 years of calculations on a normal simple computer).

Wish me luck!

 

Single-slide presentation of my poster @VSC User Day 2018.

Nov 12

Slow science: the case of Pt induced nanowires on Ge(001)

Free-standing Pt-induced nanowire on Ge(001).

Simulated STM image of the Pt-induced nanowires on the Ge(001) surface. Green discs indicate the atomic positions of the bulk-Ge atoms; red: Pt atoms embedded in the top surface layers; yellow: Ge atoms forming the nanowire observed by STM.

Ten years ago, I was happily modeling Pt nanowires on Ge(001) during my first Ph.D., at the University of Twente. As a member of the Computational Materials Science group, I was also lucky to have good and open contact with the experimental research group of Prof. Zandvliet, who was growing these nanowires. In this environment, I learned that there is a big difference between what is easy in experiment and what is easy in computational research. It also taught me to find a common ground which is “easy” for both (scanning tunneling microscopy (STM) images, in this specific case).

During this 4-year project, I quickly came to the conclusion that the nanowires could not be formed of Pt atoms, but had to be Ge atoms instead. Although the simulated STM images were very convincing, it was really hard to overcome the experimental intuition… and experiments which seemed to contradict this picture (doi: 10.1016/j.susc.2006.07.055). As a result, I spent a lot of time learning about the practical aspects of the experiments (an STM tip is a complicated thing) and trying to extract every possible piece of information, published and unpublished. Especially the latter provided important support. The “ugly” (= not good for publishing) experimental pictures tended to be real treasures from my computational point of view. Of course, much time was also spent tweaking the computational model to get a perfect match with experiment (e.g. the 4×1 periodicity), and trying to reproduce experiments seemingly supporting the “Ge-nanowire” model (e.g. the simulation of CO adsorption and the identification of the path the molecule follows along the wire).

In contrast to my optimism at the end of my first year (I believed all modeling could be finished before my second year ended), the modeling work ended up being a very complex exercise, taking 4 years of research. Now I am happy that I was wrong, as the final result ended up being very robust and became “the model for Pt-induced nanowires on Ge(001)”.

When writing a review article on this field five years after my Ph.D., I was amazed (and happy) to see my model still stood. Even more, there had been complex experimental studies (doi: 10.1103/PhysRevB.85.245438) which seemed to further support the model I proposed. However, these experiments were still making an indirect comparison. A direct comparison supporting the Ge nature of the nanowires was still missing… until recently.

In a recent paper in Phys. Rev. B (doi: 10.1103/PhysRevB.96.155415), a Japanese-Turkish collaboration succeeded in identifying the nanowire atoms as Ge atoms. They did this using an atomic force microscope (AFM) and a sample of Pt-induced nanowires in which some of the nanowire atoms were replaced by Sn atoms. The experiment is rather simple in idea (the execution, however, requires rather advanced skills): compare the forces experienced by the AFM when measuring the Sn atoms, the chain atoms, and the surface atoms. The Sn atoms are easily recognized, while the surface is known to consist of Ge atoms. If the relative force on the chain atoms is the same as that on the surface atoms, the chain consists of Ge atoms; if the force is different, the chain consists of Pt atoms.

*small drum-roll*

And they found the forces to be the same.

Yes, after nearly 10 years since my first publication on the subject, there finally is experimental proof that the Pt-induced nanowires on Ge(001) consist of Ge atoms. Seeing this paper made me one happy computational scientist. For me, it shows the power of computational research, and provides an argument why one should not be shy to push calculations to their limit. The computational cost may be high, but at least one is performing relevant work. And of course, never forget: the most easy-looking experiments are usually not easy at all, so as a computational materials scientist you should not take them for granted, but let those experimentalists know how much you appreciate their work and effort.

Sep 23

A combined experimental and theoretical investigation of the Al-Melamine reactive milling system: a mechanistic study towards AlN-based ceramics

Authors: Seyyed Amin Rounaghi, Danny E.P. Vanpoucke, Hossein Eshghi, Sergio Scudino, Elaheh Esmaeili, Steffen Oswald and Jürgen Eckert
Journal: J. Alloys Compd. 729, 240-248 (2017)
doi: 10.1016/j.jallcom.2017.09.168
IF(2017): 3.779
export: bibtex
pdf: <J.Alloys Compd.>

 

Graphical Abstract: Evolution of the end products as a function of Al and N content during ball-milling synthesis of AlN.

Abstract

A versatile ball-milling process was employed for the synthesis of hexagonal aluminum nitride (h-AlN) through the reaction of metallic aluminum with melamine. A combined experimental and theoretical study was carried out to evaluate the synthesized products. Milling intermediates and products were fully characterized via various techniques, including XRD, FTIR, XPS, Raman and TEM. Moreover, a Boltzmann distribution model was proposed to investigate the effect of milling energy and reactant ratios on the thermodynamic stability and the proportions of the different milling products. According to the results, the reaction mechanism and milling products were significantly influenced by the reactant ratio. The optimized condition for AlN synthesis was found to be at an Al/M molar ratio of 6, where the final products consisted of nanostructured AlN with an average crystallite size of 11 nm and non-crystalline heterogeneous carbon.

Apr 28

Mechanochemical synthesis of nanostructured metal nitrides, carbonitrides and carbon nitride: A combined theoretical and experimental study

Authors: Seyyed Amin Rounaghi, Danny E.P. Vanpoucke, Hossein Eshghi, Sergio Scudino, Elaheh Esmaeili, Steffen Oswald and Jürgen Eckert
Journal: Phys. Chem. Chem. Phys. 19, 12414-12424 (2017)
doi: 10.1039/C7CP00998D
IF(2017): 3.906
export: bibtex
pdf: <Phys.Chem.Chem.Phys.>

Abstract

Nowadays, the development of highly efficient routes for the low-cost synthesis of nitrides is growing rapidly. Mechanochemical synthesis is one such promising technique, conventionally employed for the synthesis of nitrides by long-term milling of metallic elements under pressurized N₂ or NH₃ atmosphere (A. Calka and J. I. Nikolov, Nanostruct. Mater., 1995, 6, 409-412). In the present study, we describe a versatile, room-temperature and low-cost mechanochemical process for the synthesis of nanostructured metal nitrides (MNs), carbonitrides (MCNs) and carbon nitride (CNₓ). Based on this technique, melamine as a solid nitrogen-containing organic compound (SNCOC) is ball-milled with four different metal powders (Al, Ti, Cr and V) to produce nanostructured AlN, TiCₓN₁₋ₓ, CrCₓN₁₋ₓ, and VCₓN₁₋ₓ (x ≈ 0.05). Both theoretical and experimental techniques are implemented to determine the reaction intermediates, products, by-products and, finally, the mechanism underlying this synthetic route. According to the results, melamine is polymerized in the presence of the metallic elements at intermediate stages of the milling process, leading to the formation of a carbon nitride network. The CNₓ phase subsequently reacts with the metallic precursors to form MN, MCN or even MCN-CNₓ nano-composites, depending on the defect formation energy and thermodynamic stability of the corresponding metal nitride, carbide and C/N co-doped structures.

Apr 12

Call for Abstracts: Condensed Matter Science in Porous Frameworks: On Zeolites, Metal- and Covalent-Organic Frameworks

Flyer for the Colloquium on Porous Frameworks at CMD26.

Together with Ionut Tranca (TU Eindhoven, The Netherlands) and Bartłomiej Szyja (Wrocław University of Technology, Poland), I am organizing the colloquium “Condensed Matter Science in Porous Frameworks: On Zeolites, Metal- and Covalent-Organic Frameworks”, which will take place during the 26th biennial Conference & Exhibition CMD26 – Condensed Matter in Groningen (September 4th – 9th, 2016). During our colloquium, we hope to bring together experimental and theoretical researchers working in the field of porous frameworks, providing them the opportunity to present and discuss their latest work and discoveries.

Zeolites, Metal-Organic Frameworks, and Covalent-Organic Frameworks form an interesting class of hybrid materials. They are situated at the boundary between research fields, with properties akin to both molecules and solids. In addition, their porosity puts them at the boundary between surfaces and bulk materials, while their modular nature provides a rich playground for materials design.

We invite you to submit your abstract for oral or poster contributions to our colloquium. Poster contributions participate in a Best Poster Prize competition.

The deadline for abstract submission is April 30th, 2016.

Update: the deadline for abstract submission has been extended to May 14th, 2016.

 

CMD26 – Condensed Matter in Groningen is an international conference, organized by the Condensed Matter Division of the European Physical Society, covering all aspects of condensed matter physics, including soft condensed matter, biophysics, materials science, quantum physics and quantum simulators, low temperature physics, quantum fluids, strongly correlated materials, semiconductor physics, magnetism, surface and interface physics, electronic, optical and structural properties of materials. The scientific programme will consist of a series of plenary and semi-plenary talks and Mini-colloquia. Within each Mini-colloquium, there will be invited lectures, oral contributions and posters.

 

Feel free to distribute this call for abstracts and our flyer and we hope to see you in Groningen!

Feb 01

Computational Materials Science: Where Theory Meets Experiments

Authors: Danny E. P. Vanpoucke
Journal: Developments in Strategic Ceramic Materials:
Ceramic Engineering and Science Proceedings 36(8), 323-334 (2016)
(ICACC 2015 conference proceeding)
Editors: Waltraud M. Kriven, Jingyang Wang, Dongming Zhu, Thomas Fischer, Soshu Kirihara
ISBN: 978-1-119-21173-0
webpage: Wiley-VCH
export: bibtex
pdf: <preprint> 

Abstract

In contemporary materials research, we are able to create and manipulate materials at ever smaller scales: the growth of wires with nanoscale dimensions and the deposition of layers with a thickness of only a few atoms are just two examples that have become common practice. At this small scale, quantum mechanical effects become important, and this is where computational materials research comes into play. Using clever approximations, it is possible to simulate systems with a scale relevant for experiments. The resulting theoretical models provide fundamental insights in the underlying physics and chemistry, essential for advancing modern materials research. As a result, the use of computational experiments is rapidly becoming an important tool in materials research both for predictive modeling of new materials and for gaining fundamental insights in the behavior of existing materials. Computer and lab experiments have complementary limitations and strengths; only by combining them can the deepest fundamental secrets of a material be revealed.

In this paper, we discuss the application of computational materials science for nanowires on semiconductor surfaces, ceramic materials and flexible metal-organic frameworks, and how direct comparison can advance insight in the structure and properties of these materials.