
Computational Materials Science: Where Theory Meets Experiments

Authors: Danny E. P. Vanpoucke
Journal: Developments in Strategic Ceramic Materials:
Ceramic Engineering and Science Proceedings 36(8), 323-334 (2016)
(ICACC 2015 conference proceeding)
Editors: Waltraud M. Kriven, Jingyang Wang, Dongming Zhu, Thomas Fischer, Soshu Kirihara
ISBN: 978-1-119-21173-0
webpage: Wiley-VCH

Abstract

In contemporary materials research, we are able to create and manipulate materials at ever smaller scales: the growth of wires with nanoscale dimensions and the deposition of layers with a thickness of only a few atoms are just two examples that have become common practice. At this small scale, quantum mechanical effects become important, and this is where computational materials research comes into play. Using clever approximations, it is possible to simulate systems at a scale relevant to experiments. The resulting theoretical models provide fundamental insights into the underlying physics and chemistry, essential for advancing modern materials research. As a result, computational experiments are rapidly becoming an important tool in materials research, both for the predictive modeling of new materials and for gaining fundamental insights into the behavior of existing materials. Computer and lab experiments have complementary limitations and strengths; only by combining them can the deepest fundamental secrets of a material be revealed.

In this paper, we discuss the application of computational materials science to nanowires on semiconductor surfaces, ceramic materials, and flexible metal-organic frameworks, and how direct comparison with experiment can advance insight into the structure and properties of these materials.

Practical Machine-Learning for the Materials Scientist


Individual model realizations may not perform that well, but the average model realization always performs very well.

Machine-Learning is up and trending. You can’t open a paper, magazine, or website without someone trying to convince you their new AI-improved app or service will radically change your life. It will make your company’s production more efficient and cheaper, make customers flock to your shop, and possibly cure cancer on the side. In science, too, a lot of impressive claims are being made, generally promising to make the research of interest faster, better, and more efficient. There is, however, a bit of fine print which is never explicitly mentioned: you need a LOT of data. This data is used to teach your Machine-Learning algorithm whatever it is intended to learn.

In some cases you get lucky and this data is already available, while in others you still need to create it yourself. In the case of computational materials science, this often means performing millions upon millions of calculations to create a data set on which to train the Machine-Learning algorithm.[1] The resulting Machine-Learning model may be a thousand times faster in a direct comparison, but only if you ignore the compute-time deficit you start from.

In materials science, this is not only a problem for those performing first principles modeling, but also for experimental researchers. When designing a new material, you generally do not have the resources to generate thousands or millions of samples while varying the parameters involved. Quite often you are happy if you can create even a few dozen samples. So, can this research still benefit from Machine-Learning if only very small data sets are available?

In my recent work on materials design using Machine-Learning combined with small data sets, I discuss the limitations of small data sets in the context of Machine-Learning and present a natural approach for obtaining the best possible model.[2] [3]

The Good, the Bad and the Average.

(a) Simplified representation of modeling small data sets. (b) Dependence of the distribution of model coefficients on data set size. (c) Evolution of the model coefficients with data set size. (d) Correlation between model coefficient value and model quality.

In Machine-Learning, a data set is generally split into two parts: one part to train the model, and a second part to test the quality of the model. One of the underlying assumptions of this approach is that each subset of the data set provides an accurate representation of the “true” data/model. As a result, taking a different subset to train your model should give rise to “the same model” (ignoring small numerical fluctuations). Although this is generally true for large (and huge) data sets, for small data sets this is seldom the case (cf. figure (a) on the side). There, the individual data points considered have a significant impact on the final model, and different subsets give rise to very different models. Luckily, the coefficients of these models still present a peaked distribution (cf. figure (b)).
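The effect is easy to reproduce with a minimal numpy sketch, using synthetic data as a stand-in for a real materials data set (the linear model, sample size, and noise level below are illustrative assumptions, not values from the paper): refitting the same small data set over many random train/test splits yields a whole distribution of coefficients rather than a single model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic small data set: a "true" model y = 2*x + 1 plus noise.
# (Illustrative stand-in for a real materials data set.)
n = 20
x = rng.uniform(-1.0, 1.0, n)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, n)

coeffs = []
for _ in range(500):
    # A different random 70/30 train/test split each time.
    train = rng.permutation(n)[: int(0.7 * n)]
    # Fit a straight line to the training subset only.
    slope, intercept = np.polyfit(x[train], y[train], 1)
    coeffs.append((slope, intercept))

coeffs = np.array(coeffs)
# The coefficients scatter around the true values (2, 1): a peaked
# distribution whose width reflects how small the data set is.
print("mean:", coeffs.mean(axis=0))
print("std: ", coeffs.std(axis=0))
```

Each individual fit is a perfectly reasonable model of its own training subset; it is the spread of the printed standard deviation that shrinks as the data set grows.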

On the downside, however, if one isn’t careful to preprocess the data set correctly, these distributions will not converge upon increasing the data set size, giving rise to erratic model behaviour.[2]

Not only do the model coefficients give rise to a distribution; the same is true for the model quality. Using the same data set, but making a different split between training and test data, can give rise to large differences in quality between the model instances. Interestingly, the model quality presents a strong correlation with the model coefficients, with the best-quality model instances being closest to the “true” model instance. This suggests a simple approach: just take many train-test splittings and select the best model. There are quite a few problems with such an approach, which are discussed in the manuscript [2]. The most important one is that the quality measure on a very small data set is itself very volatile. Another is the question of how many such splittings should be considered: should it be an exhaustive search, or are any 10 random splits good enough (obviously not)? These problems are alleviated by the nice observation that “the average” model shows not the average quality or the average model coefficients, but instead presents the quality of the best model (as well as the best model coefficients) (cf. figures (c) and (d)).

This behaviour is caused by the fact that the best model instances have model coefficients which are also the average of the coefficient distributions. This observation holds for simple and complex model classes, making it widely applicable. Furthermore, for model classes for which it is possible to define a single average model instance, it gives access to a very efficient predictive model, as it only requires storing the model coefficients of a single instance, and predictions only require a single evaluation. For models where this is not the case, one can still make use of an ensemble average to benefit from the superior model quality, but at a higher computational cost.
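The averaging idea can be sketched in the same toy setting (again synthetic data and a simple linear model class, chosen purely for illustration): average the coefficients obtained from many train/test splits into a single model instance, and compare it with a typical individual instance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic small data set around a "true" model y = 2*x + 1.
n = 15
x = rng.uniform(-1.0, 1.0, n)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, n)

models = []
for _ in range(200):
    # Each split trains on 10 points and holds out 5.
    train = rng.permutation(n)[:10]
    models.append(np.polyfit(x[train], y[train], 1))
models = np.array(models)  # columns: slope, intercept

# A single "average model": coefficient-wise mean over all instances.
avg_model = models.mean(axis=0)

# Distance to the true coefficients (2, 1) of the average model versus
# that of a typical individual instance.
true_coeffs = np.array([2.0, 1.0])
dist_avg = np.linalg.norm(avg_model - true_coeffs)
dist_typical = np.linalg.norm(models - true_coeffs, axis=1).mean()
print(dist_avg <= dist_typical)  # True
```

By the triangle inequality, the coefficient-wise average can never be farther from the true coefficients than the typical individual instance is on average, which loosely illustrates, in this simplified linear picture, why the average model performs so well.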

References and footnotes

[1] For example, take “ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost“, one of the most downloaded papers of the journal Chemical Science. The data set the authors generated to train their neural network required them to optimize 58,000 molecules using DFT calculations. Furthermore, for these molecules a total of about 17,200,000 single-point energies were calculated (again at the DFT level). I leave it to the reader to estimate the amount of computation time this requires.

[2] “Small Data Materials Design with Machine Learning: When the Average Model Knows Best“, Danny E. P. Vanpoucke, Onno S. J. van Knippenberg, Ko Hermans, Katrien V. Bernaerts, and Siamak Mehrkanoon, J. Appl. Phys. 128, 054901 (2020)

[3] “When the average model knows best“, Savannah Mandel, AIP SciLight 7 August (2020)

Universiteit Van Vlaanderen: Will we be able to design new materials using our smartphone in the future?

Yesterday, I had the pleasure of giving a lecture for the Universiteit van Vlaanderen, a science communication platform where Flemish academics are asked to answer “a question related to their research“. The question is meant to be highly clickable and very much simplified; the lecture itself, on the other hand, is aimed at a general lay public.

I built my lecture around the topic of materials simulations at the atomic scale. This ended up being rather challenging, as my computational research has very little direct overlap with the everyday life of the average person. I deal with supercomputers (which these days tend to be benchmarked in terms of smartphone power) and the quantum mechanical simulation of materials at the atomic scale, two topics which may ring a bell…but only as abstract things people may have heard of.

Therefore, I crafted a story taking people on a fast ride down the rabbit hole of my work: starting from the almost divine power of the computational materials scientist over his theoretical sample, over the reality of nano-scale materials in our day-to-day lives, past the relative size of atoms, through the game-like nature of simulations and the salvation of computational research by grace of Moore’s Law…to the conclusion that in 25 years we may be designing the next generation of CPU materials on our smartphone instead of a TIER-1 supercomputer. …did I say we went down the rabbit hole?

The television experience itself was very exhilarating for me. Although my actual lecture took only 15 minutes, the entire event took almost a full day. It started with preparations and a trial run in the afternoon (for me and my 4 colleagues), followed by make-up (to make me look pretty on television 🙂 … or just to reduce my reflectance). In the evening we had a group dinner, meeting the people in charge of the technical aspects and the entertainment of the public. And then it was 19h30. Tensions started to grow. The public entered the studio, and the show was ready to start. Before each lecture, there was a short interview to test sound and light, and to introduce us to the public. As the middle presenter, I had the comfortable position of not being the first, so I could get an idea of how things went for my colleagues, and of not being the last, which can be really hard on your nerves.

At 21h00, I was up…

and down the rabbit hole we went. 



Full periodic table, with all elements presented with their relative size (if known) created for the Universiteit van Vlaanderen lecture.


Newsflash: Materials of the Future

This summer, I had the pleasure of being interviewed by Kim Verhaeghe, a journalist of the EOS magazine, on the topic of “materials of the future“: materials which are currently being investigated in the lab and which, in the near or distant future, may have an enormous impact on our lives. While brushing up on my materials knowledge (materials with important length scales beyond 1 nm are generally outside my field of accessibility), I discovered that to cover this field you would need at least an entire book just to list the “materials of the future”. Many materials deserve the title because of their potential, and depending on your background, different materials may draw your attention first.

In the resulting article, Kim Verhaeghe succeeded in presenting a nice selection, and I am very happy I could contribute to the story. Introducing “the computational materials scientist” making use of supercomputers such as BrENIAC, but also new materials such as Metal-Organic Frameworks (MOF) and shedding some light on “old” materials such as diamond, graphene and carbon nanotubes.

Science Figured out

Diamond and CPU's, now still separated, but how much longer will this remain the case? Top left: Thin film N-doped diamond on Si (courtesy of Sankaran Kamatchi). Top right: Very old Pentium 1 CPU from 1993 (100MHz), with µm architecture. Bottom left: more recent intel core CPU (3GHz) of 2006 with nm scale architecture. Bottom right: Piece of single crystal diamond. A possible alternative for silicon, with 20x higher thermal conductivity, and 7x higher mobility of charge carriers.

Can you pitch your research in 3 minutes? That is the concept behind “wetenschap uitgedokterd/science figured out“. A challenge I accepted after the fun I had at the science-battle. If I can explain my work to a public of 6-to-12-year-olds, explaining it to adults should be possible as well. However, 3 minutes is very short (although some may consider this long in the current bitesize world), especially if you have to explain something far from day-to-day life and cannot assume any scientific background.

Where to start? Capture the imagination: “Imagine a world where you are a god.”

Link back to the real world: “All modern-day high-tech toys are more and more influenced by atomic-scale details.” Over the last decade, I have seen the nano-scale progress slowly but steadily into the realm of real-life materials research. This almost invisible trend will have a huge impact on materials science in the coming decade: more and more we will see empirical laws breaking down, and it will become harder and harder to fit trends in materials using a classical mindset, something which has worked marvelously for materials science during the last few centuries. Modern and future materials design (be it solar cells, batteries, CPUs or even medicine) will have to rely on quantum mechanical intuition and hence quantum mechanical simulations. (Although there is still much denial in that regard.)

Is there a problem to be solved? Yes indeed: “We do not have quantum mechanical intuition by nature, and manipulating atoms is extremely hard in practice.” Although popular science magazines every so often boast pictures of atomic-scale manipulation and the quantum regime, such work is far from easy or common, inside or outside the university lab. It is amazing how hard these things tend to be (ask your local experimental materials research PhD), and the required blood, sweat and tears are generally not represented in the glory-parade of a scientific publication.

Can you solve this? Euhm…yes…at least to some extent. “Computational materials research can provide the quantum mechanical intuition we human beings lack, and gives us access to atomic-scale manipulation of a material.” Although computational materials science is seen by experimentalists as theory, and by theoreticians as experiment, it is neither and both. It combines the rigor and control of theory with the access to real-life systems of experiments. Unfortunately, it also suffers the limitations of both: the system is still idealized (though to a much lesser extent than in theoretical work) and control is not absolute (you have to follow where the algorithms take you, just as an experimentalist has to follow where the reaction takes him/her). But if these strengths and weaknesses are balanced wisely (which requires quite a few years of experience), an expert can gain fundamental insights into experiments.

Animation representing the buildup of a diamond surface in computational work.


As a computational materials scientist, you build a real-life system atom by atom, such that you know exactly where everything is located, and then calculate its properties based on, for example, the rules of quantum mechanics. In this sense you have the absolute control of theory. This comes at a cost (conservation of misery 🙂 ): where in experiments nature itself makes sure the structure is the “correct” one, in computational work you have to find it yourself. So you generally end up calculating many possible structural combinations of your atoms to first find out which one is most likely to represent nature.

So what am I actually doing? “I am using atomic-scale quantum mechanical computations to investigate the materials my experimental colleagues are studying, going from oxides to defects in diamond.” I know this is vague, but unfortunately the actual work is technical. Much effort goes into getting the calculations to run in the direction you want them to proceed (this is the experimental side of computational materials science). The actual goal varies from project to project. Sometimes we want to find out which material is most stable, and which material is most likely to diffuse into the other; at other times we want to understand the electronic structure, to test whether a defect is really luminescent and thus trace the source of the experimentally observed luminescence. Or, to make it more complex, even find out which elements would make diamond grow faster.

Starting from this, I succeeded in creating a 3-minute pitch of my research for Science Figured out. The pitch can be seen here (in Dutch, with English subtitles that can be switched on through the cogwheel in the bottom right corner).


Fairy tale science or a science fairy tale?

Once upon a time…

Once upon a time, a long time ago—21 days ago to be precise—there was a conference in the tranquil town of Hasselt. Every year, for 23 years in a row, researchers have gathered there for three full days, to present and adore their most colorful and largest diamonds. For three full days, there was just that little bit more of a sparkle in their eyes. They divulged where new diamonds could be found, and how they could be used. Three days in which they could speak without any restriction, without hesitation, about the sixth element which bonds them all. Because all knew the language. They honored the magic of the NV-center and the arcane incantations leading to the highest doping. All masters of their common mystic craft.

At the end of the third day, with sadness in their hearts, they said their good-byes and went back, in small groups, to their own ivory towers, far far away. With them, however, they took a small sparkle of hope and expectation, because in twelve full moons they would reconvene, bringing with them new and grander tales and even more sparkling diamonds than had ever been seen before.

For most outsiders, the average conference presentation is as clear as an arcane conjuration of a mythological beast. As scientists, we are often trapped by the assumption that our unique expertise is common knowledge for our public, a side-effect of our enthusiasm for our own work.

Clear vs. accurate

In a world where science faces constant pressure due to its financing model—in addition to the rise of “fake news” and “alternative facts”—it is important for young researchers to be able to tell their story clearly and accurately.

However, clear and accurate have the bad habit of counteracting one another, and maintaining a good balance between the two takes a lot more effort than one might expect. Focusing on either aspect alone (accuracy or clarity) tends to be disastrous. Conference presentations and scientific publications tend to focus on accuracy, making them not clear at all for the non-initiate. Public presentations and newspaper articles, on the other hand, focus mainly on clarity, with fake-news accidents waiting to happen. For example, one could recently read that 7% of the DNA of the astronaut Scott Kelly had changed during a space flight, instead of a change in gene expression. Although the two may look similar, they are very different. The latter is a rather natural response of the (human) body to any stress situation. The former, however, would remove Scott from the human race entirely: even the average gorilla would be more closely related to you and me than to Scott Kelly, as gorilla DNA differs from ours by less than 5%. So keeping a good balance between clarity and accuracy is important, albeit not that easy. Time pressure plays an important role here.

Two extremes?

Wetenschapsbattle Trophy: Each of the contestants of the wetenschapsbattle received a specially designed and created hat from the children of the school judging the contest. Mine has diamonds and computers. 🙂

In the week following the diamond conference in Hasselt, I also participated in a science-battle: a contest in which researchers have to explain their research to a public of 6-to-12-year-olds in a time-span of 15 minutes. These kids are judge, jury and executioner of the contest, so to speak. It is a natural reflex to place these two events at opposite ends of a scale, and that is certainly true for some aspects; the entire room spontaneously volunteering when asked for help is something which happens somewhat less often at a scientific conference. However, clarity and accuracy should be equally central to both.

So, how do you explain your complex research story to a crowd of 6-to-12-year-olds? I discovered the answer during a masterclass by The Floor is Yours: actually, more or less the same way you should tell it to an audience of adults, or even your own colleagues. As a researcher you are a specialist in a very narrow field, which means that no one will lose out when the focus is shifted a bit more toward clarity. The main problem you encounter here, however, is time. This is both the time required to tell your story (forget “elevator pitches”; those are good if you are a used-car salesman, they are not for science) and the time required to prepare it (it took me a few weeks to build and then polish my story for the children).

Most of this time is spent answering the questions: “What am I actually doing?” and “Why am I doing this specifically?“. The quest for metaphors which are both clear and accurate takes quite some time. During this task you tend to suffer, as a scientist, from the combination of your need for accuracy and your deep background knowledge. These are the same inhibitors a scientist encounters when involved in a public discussion on his/her own field of expertise.

Of course you also do not want to be pedantic:

Q: What do you do?

A: I am a Computational Materials Researcher.

Q: Compu-what??

A: 1) Computational = using a computer

2) Materials = everything you see around you, the stuff everything is made of

3) Researcher = Me

However, as a scientist, you may want to use such imaginary discussions during your preparation. Starting from these pedantic dialogues, you trace a path along the answers which interest you most: the topics which touch your scientific personality. This way, you take a step back from your direct research and get a broader picture. Also, by talking in themes, you present your research from a broader perspective, which is more easily accessible to your audience: “What are atoms?“, “How do you make diamond?“, “What is a computer simulation?“

At the end—after much blood, sweat and tears—your story tells something about your world as a whole. Depending on your audience you can include more or fewer detailed aspects of your actual day-to-day research, but at its heart, it remains a story.

Because, whether we like it or not, in essence we are all “Pan narrans“, storytelling apes.

Slow science: the case of Pt induced nanowires on Ge(001)

Free-standing Pt-induced nanowire on Ge(001).

Simulated STM image of the Pt-induced nanowires on the Ge(001) surface. Green discs indicate the atomic positions of the bulk-Ge atoms; red: Pt atoms embedded in the top surface layers; yellow: Ge atoms forming the nanowire observed by STM.

Ten years ago, I was happily modeling Pt nanowires on Ge(001) during my first Ph.D. at the University of Twente. As a member of the Computational Materials Science group, I was also lucky to have good and open contact with the experimental research group of Prof. Zandvliet, who was growing these nanowires. In this environment, I learned there is a big difference between what is easy in experiment and what is easy in computational research. It also taught me to find a common ground which is “easy” for both (scanning tunneling microscopy (STM) images in this specific case).

During this 4-year project, I quickly came to the conclusion that the nanowires could not be formed by Pt atoms, but had to consist of Ge atoms instead. Although the simulated STM images were very convincing, it was really hard to overcome the experimental intuition…and the experiments which seemed to contradict this picture (doi: 10.1016/j.susc.2006.07.055). As a result, I spent a lot of time learning about the practical aspects of the experiments (an STM tip is a complicated thing) and trying to extract every possible piece of information, published and unpublished. Especially the latter provided important support: the “ugly” (= not good for publishing) experimental pictures tended to be real treasures from my computational point of view. Of course, much time was also spent tweaking the computational model to get a perfect match with experiments (e.g. the 4×1 periodicity), and reproducing experiments seemingly supporting the “Ge-nanowire” model (e.g. simulating CO adsorption and identifying the path the molecule follows along the wire).

In contrast to my optimism at the end of my first year (I believed all modeling could be finished before the end of my second year), the modeling work ended up being a very complex exercise, taking 4 years of research. Now I am happy that I was wrong, as the final result ended up being very robust and became “the model” for Pt-induced nanowires on Ge(001).

Upon writing a review article on this field five years after my Ph.D., I was amazed (and happy) to see my model still stood. Even more, there had been complex experimental studies (doi: 10.1103/PhysRevB.85.245438) which seemed to support the model I proposed. However, these experiments were still making an indirect comparison. A direct comparison supporting the Ge nature of the nanowires was still missing…until recently.

In a recent paper in Phys. Rev. B (doi: 10.1103/PhysRevB.96.155415), a Japanese-Turkish collaboration succeeded in identifying the nanowire atoms as Ge atoms. They did this using an atomic force microscope (AFM) and a sample of Pt-induced nanowires in which some of the nanowire atoms were replaced by Sn atoms. The experiment is rather simple in idea (the execution, however, requires rather advanced skills): compare the forces experienced by the AFM when measuring the Sn atoms, the chain atoms, and the surface atoms. The Sn atoms are easily recognized, while the surface is known to consist of Ge atoms. If the relative force on the chain atoms is the same as that on the surface atoms, then the chain consists of Ge atoms; if the force is different, the chain consists of Pt atoms.

*small drum-roll*

And they found the result to be the same.

Yes, nearly 10 years after my first publication on the subject, there finally is experimental proof that the Pt nanowires on Ge(001) consist of Ge atoms. Seeing this paper made me one happy computational scientist. For me it shows the power of computational research, and provides an argument why one should not be shy to push calculations to their limit. The computational cost may be high, but at least one is performing relevant work. And of course, never forget: the most easy-looking experiments are usually not easy at all, so as a computational materials scientist you should not take them for granted, but let the experimentalists know how much you appreciate their work and effort.

Bachelor Projects Completed: 2 new computational materials scientists initialised

The black arts of computational materials science.


Just over half a year ago, I mentioned that I presented two computational materials science projects to the third-year bachelor physics students at UHasselt. Both projects ended up being chosen by a bachelor student, so I had the pleasure of guiding two eager young minds in their first steps into the world of computational materials science. They worked very hard, cursed their machine or code (as any good computational scientist should do once in a while, just to make sure he/she is still at the forefront of science) and survived. They actually did quite a bit more than “just surviving”: they grew as scientists and they grew in self-confidence…given time, I believe they may even thrive in this field of research.

One week ago, they presented their results in a final presentation for their classmates and supervisors. The self-confidence of Giel and the clarity of his story were impressive. Giel has a knack for storytelling (a true Pan narrans, as Terry Pratchett would praise him). His report included an introduction to various topics of solid state physics and computational materials science in which you never notice how complicated the topic actually is. He just takes you along for the ride, and the story unfolds in a very natural fashion. This shows how well he understands what he is writing about.

This in no way means his project was simple or easy. Quite soon after the start of his project, Giel actually ran into a previously unknown VASP bug. He had to play with spin configurations of defects and of course bumped into a handful of rookie mistakes, which he only made once *thumbs up*. (I could have warned him about them, but I believe people learn more if they bump their heads themselves. This project provided the perfect opportunity to do so in a safe environment. 😎 ) His end report was impressive, and his results on the Ge defect in diamond are of very good quality.

The second project was brought to a successful completion by Asja. This very eager student actually had to learn how to program in Fortran before he could even start. He had to implement code to calculate partial phonon densities of states with the existing HIVE code. Along the way he also discovered some minor bugs (thank you very much 🙂 ) and crashed into a rather unexpected, hard one near the end of the project. For some time, things looked very bleak indeed: the partial densities of equivalent atoms differed, and the sum of all partial densities did not add up to the total density. As a result, doubts grew as to whether the goal of the project could be met at all. Luckily, Asja never gave up and stayed positive, and after half a day of debugging on my part the culprit was found (in my part of the code as well). With this fixed, he quickly started torturing his own laptop, calculating partial phonon densities of states for metal-organic frameworks and later also for the Ge defect in diamond, with data provided by Giel. These results are very promising too; they will require some further digging, but they will definitely be very interesting.

For me, it has been an interesting experience, and I count myself lucky with these two brave and very committed students. I wish them all the best of luck for the future, and maybe we meet again.

tUL Life Sciences Research Day 2016

tUL Life Science Research Day 2016 Poster

Yesterday was the tUL Life Sciences Research Day 2016, a conference event built around finding collaboration possibilities between the University of Hasselt (Belgium) and the University of Maastricht (The Netherlands)…after all, tUL is the “transnational University Limburg”, which brings together two universities separated by only some 26 km, albeit across a national border.

Although life sciences is not my personal niche, I went to look for opportunities, as the nano-particles used for drug delivery often consist of metals or oxides, and those materials are my niche. I used my current work on MOFs to show what is possible from the ab-initio point of view, and presented this as a poster.


Poster presented at the tUL Life Sciences Research Day, depicting my work on the unfunctionalized and the functionalized MIL-47(V) MOF.

Call for Abstracts: Condensed Matter Science in Porous Frameworks: On Zeolites, Metal- and Covalent-Organic Frameworks

Flyer for the Colloquium on Porous Frameworks at the CMD26

Together with Ionut Tranca (TU Eindhoven, The Netherlands) and Bartłomiej Szyja (Wrocław University of Technology, Poland), I am organizing a colloquium “Condensed Matter Science in Porous Frameworks: On Zeolites, Metal- and Covalent-Organic Frameworks”, which will take place during the 26th biennial Conference & Exhibition CMD26 – Condensed Matter in Groningen (September 4th – 9th, 2016). During our colloquium, we hope to bring together experimental and theoretical researchers working in the field of porous frameworks, providing them the opportunity to present and discuss their latest work and discoveries.

Zeolites, Metal-Organic Frameworks, and Covalent-Organic Frameworks are an interesting class of hybrid materials. They are situated at the boundary between research fields, with properties akin to both molecules and solids. In addition, their porosity puts them at the boundary between surfaces and bulk materials, while their modular nature provides a rich playground for materials design.

We invite you to submit your abstract for oral or poster contributions to our colloquium. Poster contributions participate in a Best Poster Prize competition.

The deadline for abstract submission is April 30th, 2016.

Update: the deadline for abstract submission has been extended to May 14th, 2016.


CMD26 – Condensed Matter in Groningen is an international conference, organized by the Condensed Matter Division of the European Physical Society, covering all aspects of condensed matter physics, including soft condensed matter, biophysics, materials science, quantum physics and quantum simulators, low temperature physics, quantum fluids, strongly correlated materials, semiconductor physics, magnetism, surface and interface physics, electronic, optical and structural properties of materials. The scientific programme will consist of a series of plenary and semi-plenary talks and Mini-colloquia. Within each Mini-colloquium, there will be invited lectures, oral contributions and posters.


Feel free to distribute this call for abstracts and our flyer, and we hope to see you in Groningen!