Can Europium Atoms form Luminescent Centres in Diamond: A combined Theoretical-Experimental Study

Authors: Danny E. P. Vanpoucke, Shannon S. Nicley, Jorne Raymakers, Wouter Maes, and Ken Haenen
Journal: Diam. Relat. Mater. 94, 233-241 (2019)
doi: 10.1016/j.diamond.2019.02.024
IF(2017): 2.232
export: bibtex
pdf: <DiamRelatMater>

 

Graphical Abstract: Spin polarization around the various Eu-defect models in diamond. Blue and red represent the up and down spin channels respectively.

Abstract

The incorporation of Eu into the diamond lattice is investigated in a combined theoretical-experimental study. The large size of the Eu ion induces a strain on the host lattice, which is minimal for the Eu-vacancy complex. The oxidation state of Eu is calculated to be 3+ for all defect models considered. In contrast, the total charge of the defect complexes is shown to be negative: -1.5 to -2.3 electrons. Hybrid-functional electronic band structures show the luminescence of the Eu defect to be strongly dependent on the local defect geometry. The 4-coordinated Eu substitutional dopant is the most promising candidate to present the typical Eu3+ luminescence, while the 6-coordinated Eu-vacancy complex is expected not to present any luminescent behaviour. Preliminary experimental results on the treatment of diamond films with an Eu-containing precursor indicate the possible incorporation of Eu into diamond films treated by drop-casting. Changes in the PL spectrum, with the main luminescent peak shifting from approximately 614 nm to 611 nm after the growth plasma exposure, and the appearance of a shoulder peak at 625 nm, indicate the potential incorporation. Drop-casting treatment with an electronegative polymer material was shown not to be necessary to observe the Eu signature following the plasma exposure, and increased the background luminescence.

SBDD XXIV: Diamond workshop

The participants to SBDD XXIV of 2019.  (courtesy of Jorne Raymakers, SBDD XXIV secretary) 

 

Last week the 24th edition of the Hasselt diamond workshop took place (this year chaired by Christoph Becher). It’s already the fourth time since 2016 that I have attended this conference, and each year it is a joy to meet up with the familiar faces of the diamond research field. The program was packed, as usual. And this year the NV center was again predominantly present as the all-purpose quantum defect in diamond. I keep being amazed at how much it is used (although it has a rather low efficiency), and also at how many open questions remain with regard to its incorporation during growth. With a little luck, you may read more about this in the future, as it is one of a few dozen ideas and questions I want to investigate.

A very interesting talk was given by Yamaguchi Takahide, who is combining hexagonal BN and H-terminated diamond for high-performance electronic devices. In such a device the h-BN leads to the formation of a 2D hole gas at the interface (i.e., surface transfer doping), making it interesting for low-dimensional applications. (And it of course hints at the opportunities available with other 2D materials.) The most interesting, as well as the most mind-boggling, fact in my opinion was that there is no clear picture of the atomic structure of the interface. But that is probably just me. For experiments, nature tends to make sure everything is alright, while we lowly computational materials artificers need to know where each and every atom belongs. I’ll have to make some time to find out.

A second extremely interesting presentation was given by Anke Krueger (who will chair the 25th edition of SBDD next year), showing off her group’s skill at creating fluorine-terminated diamond…without getting themselves killed. The surface termination of diamond with fluorine comes with many different hazards, ranging from mere poisoning to fire and explosions. The take-home message: “kids, don’t try this at home”. Despite all this risky business, a surface coverage of up to 85% was achieved, providing a new surface termination for diamond with a much stronger trapping of negative charges near the surface, ideal for forming negatively charged NV centers.

On the last day, Rozita Rouzbahani presented our collaboration on the growth of B-doped diamond. She studied the impact of growth conditions on the B concentration and growth speed of B-doped diamond surfaces. My computational results corroborate her results and present the atomic-scale mechanism behind the increased doping concentration at increased growth speed. I am looking forward to the submission of this nice piece of research.

And now, we wait another year for the next edition of SBDD, the celebratory 25th edition with a focus on diamond surfaces.

Universiteit Van Vlaanderen: Will we be able to design new materials using our smartphone in the future?

Yesterday, I had the pleasure of giving a lecture for the Universiteit van Vlaanderen, a science communication platform where Flemish academics are asked to answer “a question related to their research”. This question is meant to be highly clickable and very much simplified. The lecture, on the other hand, is aimed at a general lay public.

I built my lecture around the topic of materials simulations at the atomic scale. This task ended up being rather challenging, as my computational research has very little direct overlap with the everyday life of the average person. I deal with supercomputers (which these days tend to be benchmarked in terms of smartphone power) and the quantum mechanical simulation of materials at the atomic scale: two topics which may ring a bell…but only as abstract things people may have heard of.

Therefore, I crafted a story taking people on a fast ride down the rabbit hole of my work. Starting from the almost divine power of the computational materials scientist over his theoretical sample, over the reality of nano-scale materials in our day-to-day lives, past the relative size of atoms, and through the game-like nature of simulations and the salvation of computational research by grace of Moore’s Law…to the conclusion that in 25 years, we may be designing the next generation of CPU materials on our smartphone instead of a TIER-1 supercomputer. …did I say we went down the rabbit hole?

The television experience itself was very exhilarating for me. Although my actual lecture took only 15 minutes, the entire event took almost a full day. It started with preparations and a trial run in the afternoon (for me and my 4 colleagues), followed by make-up (to make me look pretty on television 🙂 … or just to reduce my reflectance). In the evening we had a group dinner, meeting the people in charge of the technical aspects and the entertainment of the public. And then it was 19h30. Tension started to grow. The public entered the studio, and the show was ready to start. Before each lecture, there was a short interview to test sound and light, and to introduce us to the public. As the middle presenter, I had the comfortable position of not being the first, so I could get an idea of how things went for my colleagues, and of not being the last, which can be really hard on your nerves.

At 21h00, I was up…

and down the rabbit hole we went. 

 

 


Full periodic table, with all elements presented with their relative size (if known) created for the Universiteit van Vlaanderen lecture.

 

Roots of Science

Today VLIR (Flemish Inter-university Council) and the Young Academy had a conference on the future of fundamental research in Flanders: Roots of Science. We live in a world where we rely on science more and more to resolve our problems (think climate change, disease control, energy generation, …). In our bizarre world of alternative facts and fake news, science can be utterly ignored in one sentence and proposed as a magical solution in the next.

Although I am happy with the faith some have in the possibilities of science, it is important to remember that it is not magic. This has a very important consequence:

Things do not happen simply because you want them to happen.

 

Many important breakthroughs in science are what one would call serendipity (e.g., the discovery of penicillin by Fleming, or the development of the WWW at CERN in 1991 as a side-effect of researchers wanting to share their data, …). In Flanders, the Royal Flemish Academy and the Young Academy have written a Standpoint (an evidence-based advisory text) discussing the need for more researcher-driven research in contrast to agenda-driven research, as they believe this is a conditio sine qua non for a healthy scientific future.

Where government-driven research focuses on resolving questions from society, researcher-driven research allows the researcher to follow his or her personal interest. This is not done with the primary aim of a short-term return on investment, but with the aim of providing the fundamental knowledge and expertise which may some day be needed for the former. In researcher-driven research, the journey is the goal, as this is where scientific progress is made: by finding solutions for problems not imagined before.

Do we have to pay for this with our taxpayers’ money? I think we do. No one imagined optical drives (CD, DVD, Blu-ray) becoming a billion-euro industry while the laser was being developed in a lab. Who would have thought the transistor would play such an important role in our everyday life? And what about the first computer? Thomas Watson, President of IBM, allegedly said in 1943: “I think there is a world market for maybe 5 computers.” And yet, now many of us have more than 5 computers at home (including tablets, smartphones,…)! The researchers working on these “inventions” did not do this with your Blu-ray player or smartphone in mind. These high-impact applications are “merely” side-products of their fundamental scientific research. No one at the time could predict this, so why should we be able to today? In this sense, you should see funding of fundamental research as a long-term investment. Tax money is being invested in our future, and the future of our children and grandchildren. Although we do not know what the outcome will be, we know from the past that it will have an impact on our lives.

It’s difficult to make predictions, especially about the future.

Let us therefore support more researcher-driven research.

 

In addition to the Standpoint, there is also a very nice video explaining the situation (with subtitles in English or Dutch, use the cogwheel to select your preference).

New year’s resolution

A new year, a new beginning.

For most people this is a time of making promises, starting new habits or stopping old ones. In general, I forgo making such promises, as I know they turn out idle in a mere few weeks without external stimulus or any real driving force.

In spite of this, I do have a new year’s resolution for this year: I am going to study machine learning and use it for any suitable application I can get my hands on (which will mainly be materials science, but one never knows). I already have a few projects in mind, which should help me stay focused and on track. With some luck, you will be reading about them here on this blog. With some more luck, they may even end up being part of an actual scientific publication.

But first things first: learn the basics (beyond hearsay about how excellent and world-improving AI is or will be). What are the different types of machine learning available? Is it all black box, or do you actually have some control over things? Is it a kind of magic? What’s up with all these frameworks (isn’t there anyone left who can program?), and why the devil do they all seem to be written in a scripting language (Python) instead of a proper programming language? A lot of questions I hope to see answered. A lot of things to learn. Let’s start by building some foundations…the old-fashioned way: by studying from a book, with real paper pages!

Happy New Year, and best wishes to you all!

Defensive Programming and Debugging

Over the last few months, I finally was able to remove something which had been lingering on my to-do list for a very long time: studying debugging in Fortran. Although I have been programming in Fortran for over a decade, and getting quite good at it, especially in the more exotic aspects such as OO programming, I never got around to learning how to use decent debugging tools. The fact that I am using Fortran was the main contributing factor. Unlike in other languages, everything you want to do in Fortran beyond number-crunching in procedural code has very little documentation (e.g., easy DLLs for objects), is not natively supported (e.g., try to find a good IDE for Fortran which also supports modern aspects like OO; only very few attempt this), or you are simply the first to try it (e.g., Fortran programs for Android :o, definitely on my to-do list). In a long bygone past I did some debugging in Delphi (for my STM program), as the debugger was nicely integrated in the IDE. For Fortran, however, I started programming without an IDE and as such did my initial debugging with well-placed write statements. And I am a bit ashamed to say I’m still doing it this way, because it can be rather efficient for a large code spread over dozens of files with hundreds of procedures.

However, I am trying to repent for my sins. A central point in this penance was enrolling in the online MOOC “Defensive Programming and Debugging“. Five weeks of intense study followed, in which I was forced to use command-line gdb and valgrind. During these five weeks I also sharpened my skills at identifying possible sources of bugs (and found some unintentional bugs in the course…but that is just me). After five weeks of hard study and taking tests, I successfully finished the course, earning my certificate as a defensive programmer and debugger. (In contrast to my sometimes offensive programming and debugging skills before 😉 .)

Merry Christmas & Happy New Year

Having fun with my xmgrace-fortran library and fractal code!

Synthesis, characterization and thermodynamic stability of nanostructured ε-iron carbonitride powder prepared by a solid-state mechanochemical route

Authors: Seyyed Amin Rounaghi, Danny E. P. Vanpoucke, Elaheh Esmaeili, Sergio Scudino, and Jürgen Eckert
Journal: J. Alloys Compd. 778, 327-336 (2019)
doi: 10.1016/j.jallcom.2018.11.007
IF(2017): 3.779
export: bibtex
pdf: <JAlloysCompd>

Abstract

Nanostructured epsilon iron carbonitride (ε-Fe3CxN1-x, x ∼ 0.05) powder with high purity (>97 wt%) was synthesized through a simple mechanochemical reaction between metallic iron and melamine. Various characterization techniques were employed to investigate the chemical and physical characteristics of the milling intermediates and the final products. The thermodynamic stability of the different phases in the Fe-C-N ternary system, including nitrogen- and carbon-doped structures, was studied through density functional theory (DFT) calculations. A Boltzmann-distribution model was developed to qualitatively assess the stability and the proportion of the different milling products vs. milling energy. The theoretical and experimental results revealed that the milling products mainly comprise the ε-Fe3CxN1-x phase with a mean crystallite size of around 15 nm and a trace of amorphous carbon material. The thermal stability and magnetic properties of the milling products were thoroughly investigated. The synthesized ε-Fe3CxN1-x exhibited thermal stabilities up to 473 K and 673 K in air and argon atmospheres, respectively, and soft magnetic properties with a saturation magnetization of around 125 emu/g.

Predicting Partial Atomic Charges in Siliceous Zeolites

Authors: Jarod J. Wolffis, Danny E. P. Vanpoucke, Amit Sharma, Keith V. Lawler, and Paul M. Forster
Journal: Microporous Mesoporous Mater. 277, 184-196 (2019)
doi: 10.1016/j.micromeso.2018.10.028
IF(2017): 3.649
export: bibtex
pdf: <MicroporousMesoporousMater>

 

Graphical Abstract: Partial charges in zeolites for force fields.

Abstract

Partial atomic charge, which determines the magnitude of the Coulombic non-bonding interaction, represents a critical parameter in molecular mechanics simulations. Partial charges may also be used as a measure of physical properties of the system, e.g., covalency, acidic/catalytic sites, etc. A range of methods, both empirical and ab initio, exist for calculating partial charges in a given solid, and several of them are compared here for siliceous (pure silica) zeolites. The relationships between structure and the predicted partial charge are examined. The predicted partial charges from different methods are also compared with related experimental observations, showing that a few of the methods offer some guidance towards identifying the T-sites most likely to undergo substitution or proton localization in acidic framework forms. Finally, we show that assigning unique calculated charges to crystallographically unique framework atoms makes an appreciable difference in simulations predicting N2 and O2 adsorption with common dispersion-repulsion parameterizations.

Daylight saving and solar time

For many people around the world, last weekend was highlighted by a half-yearly recurring ritual: switching to/from daylight saving time. In Belgium, this goes hand in hand with another half-yearly ritual: the discussion about the possible benefits of abolishing daylight saving time. Throughout the last century, daylight saving time has been introduced on several occasions. The most recent introduction in Belgium and the Netherlands was in 1977. At that time it was intended as a measure for conserving energy, in response to the oil crises of the ’70s. (In Belgium, this makes it painfully modern given the current state of our energy supplies: the impending doom of energy shortages and the accompanying disconnection plans which will put entire regions without power in case of shortages.)

The basic idea behind daylight saving time is to align the daylight hours with our working hours. A vision quite different from that of, for example, ancient Rome, where the daily routine was adjusted to the time between sunrise and sunset. This period was by definition set to be 12 hours, making 1h in summer significantly longer than 1h in winter. As children of our time, with our modern vision on time, it is very hard to imagine living like this without being overwhelmed by images of impending doom and absolute chaos. In this day and age, we want to know exactly, to the second, how much time we are spending on everything (which seems to be social media, mostly 😉 ). But also for more important aspects of life, a more accurate picture of time is needed. Think for example of your GPS, which will put you off your mark by hundreds of meters if your uncertainty in time is a mere 0.000001 seconds. Nor will police radar be able to measure the speed of your car with that same uncertainty on its timing.
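That GPS claim is easy to verify with a back-of-the-envelope calculation: a GPS receiver converts signal travel time into distance, so a clock error maps directly onto a position error via the speed of light.

```python
c = 299_792_458       # speed of light in m/s
timing_error = 1e-6   # 0.000001 s, the clock uncertainty quoted above

# Travel-time error times signal speed gives the position error:
position_error = c * timing_error
print(f"{position_error:.0f} m")  # ≈ 300 m: indeed hundreds of meters
```
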

Turning back to the Roman vision of time, have you ever wondered why “the day” is longer during summer than during winter? Or whether this difference is the same everywhere on earth? Or whether the variation in day length is the same during the entire year?

Our place on earth

To answer these questions, we need a good model of the world around us. And as is usual in science, the more accurate the model, the more detailed the answer.

Let us start very simple. We know the earth is spherical and rotates around its axis in 24h. The side receiving sunlight we call day, while the shaded side is called night. If we assume the earth rotates at a constant speed, then any point on its surface will move around the earth’s rotational axis at a constant angular speed. This point will spend 50% of its time on the light side, and 50% on the dark side. Here we have also silently assumed that the rotational axis of the earth is “straight up” with regard to the sun.

In reality, this is actually not the case. The earth’s rotational axis is tilted by about 23° from an axis perpendicular to the orbital plane. If we now consider a fixed point on the earth’s surface, we’ll note that such a point at the equator still spends 50% of its time in the light and 50% of its time in the dark. In contrast, a point on the northern hemisphere will spend less than 50% of its time on the daylight side, while a point on the southern hemisphere spends more than 50% of its time on the daylight side. You will also note that the latitude plays an important role. The further north you go, the smaller the daylight section of the latitude circle becomes, until it vanishes at the polar circle. On the southern hemisphere, on the other hand, if you move below the polar circle, a point spends all its time on the daylight side. So if the earth’s axis were fixed with regard to the sun, as shown in the picture, we would have regions on earth living in eternal night (north pole) or eternal day (south pole). Luckily this is not the case. If we look at the evolution of the earth’s axis, we see that it is fixed with regard to the fixed stars, but relative to the sun it makes a full circle during one orbit.* When the earth’s axis points away from the sun, it is winter on the northern hemisphere, while during summer it points towards the sun. In between, at the equinox, the earth’s axis lies entirely in the plane separating day and night, and day and night have exactly the same length: 12h.

So, now that we know the length of our daytime varies with the latitude and the time of the year, we can move one step further.

How does the length of a day vary, during the year?

The length of the day varies over the year, with the longest and shortest days marked by the summer and winter solstices. The periodic nature of this variation may give you the inclination to consider it a sine wave, a sine-in-the-wild so to speak. So let us compare a sine wave fitted to actual day-length data for Brussels. As you can see, the fit performs quite well, but there is a clear discrepancy. So we can, and should, do better than this.

Instead of looking at the length of each day, let us have a look at the difference in length between consecutive days.** If we calculate this difference for the fitted sine wave, we again get a sine wave, as we are taking a finite-difference version of the derivative. In contrast, the actual data shows not a sine wave, but a broadened sine wave with flat maxima and minima. You may think this is an error, or an artifact of our averaging, but in reality this trend even depends on the latitude, becoming more extreme the closer you get to the poles.

This additional information provides us with the extra hint that, in addition to the axial tilt of the earth’s axis, we also need to consider the latitude of our position. What we need to calculate is the fraction of our latitude circle (e.g., for Brussels this lies at 50.85°) that is illuminated by the sun on each day of the year. With some perseverance and our high-school trigonometric equations, we can derive an analytic solution, which can then be evaluated in, for example, Excel.

Some calculations

The figure above shows a 3D sketch of the situation on the left, and a 2D representation of the latitude circle on the right. α is related to the latitude (for a latitude φ, α = 90° − φ), and β is the angle between the earth’s axis and the ‘shadow-plane’ (the plane separating the day and night sides of the earth). As such, β is maximal during the solstices (±23°26’12.6″) and exactly zero at the equinoxes, when the earth’s axis lies entirely in the shadow-plane. The length of the day is then the illuminated fraction of the latitude circle: 24h×(360°−2γ)/360°. γ can be calculated from cos(γ) = adjacent side/hypotenuse in the right-hand part of the figure above. If we denote the earth’s radius as R, then the hypotenuse is given by Rsin(α). The adjacent side, on the other hand, is found to be equal to R’sin(β), where R’ = B/cos(β), and B is the perpendicular distance between the center of the earth and the plane of the latitude circle, or B = Rcos(α).

Combining all these results, we find that the number of daylight hours is:

24h × {360° − 2·arccos[cotg(α)·tg(β)]}/360°
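As a sanity check, this formula fits in a few lines of code. The sketch below assumes the standard solar-declination approximation for β over the year (an assumption not spelled out in the text) and uses α = 90° − latitude; the clamp handles latitudes beyond the polar circles, where the arccos argument leaves [−1, 1] and the formula itself breaks down:

```python
import math

TILT = 23.44  # earth's axial tilt in degrees (approximate)

def beta(day_of_year):
    """Angle between the earth's axis and the shadow-plane, in degrees.
    Approximated by the standard solar-declination formula."""
    return -TILT * math.cos(2.0 * math.pi * (day_of_year + 10) / 365.0)

def day_length(latitude, day_of_year):
    """Daylight hours from 24h x (360 - 2*gamma)/360,
    with cos(gamma) = cotg(alpha)*tg(beta) and alpha = 90 - latitude."""
    a = math.radians(90.0 - latitude)
    b = math.radians(beta(day_of_year))
    x = math.tan(b) / math.tan(a)        # cotg(alpha)*tg(beta)
    x = max(-1.0, min(1.0, x))           # clamp: eternal day/night at the poles
    gamma = math.degrees(math.acos(x))
    return 24.0 * (360.0 - 2.0 * gamma) / 360.0

print(round(day_length(50.85, 172), 1))  # longest day in Brussels: ~16.3 h
```

At the equator the formula gives 12h year-round, and beyond the polar circles the clamp pins the result to 0h or 24h, in line with the eternal night/day discussed above.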

 

How accurate is this model?

All our work is done; the actual calculation with numbers is a computer’s job, so we put Excel to work. For Brussels we see that our model curve follows the data very nicely and smoothly (no fitting is performed beyond setting the phase of the model curve to align with the data). The broadening is reproduced perfectly, as are the maximum and minimum variation in daytime (note that this is not a fitting parameter, in contrast to the sine-wave fit). If you want to play with this model yourself, you can download the Excel sheet here. While we are at it, I also drew some curves for different latitudes. Note that beyond the polar circles this model cannot work, as we enter regions with periods of eternal day/night.

 

After all these calculations, be honest:

You are happy you only need to change the clock twice a year, aren’t you? 🙂

 

 

* OK, in reality the earth’s axis isn’t really fixed: it slowly precesses (with a period of about 26,000 years), and its tilt oscillates slightly with a period of about 41,000 years. For the sake of argument we will ignore this.

** Unfortunately, the available data for sunrises and sunsets has an accuracy of only 1 minute. By taking averages over a period of 7 years, we are able to reduce the noise from ±1 minute to a more reasonable value, allowing us to get a better picture of the general trend.
