May 09 2017
With the recent transfer of this website to a new server, I was able to move it more or less entirely into the HTTPS domain. The main reason for doing this is the recent hobby of Google (and Chrome as its extension) to flag HTTP websites as insecure [1,2,3]. The transfer from HTTP to HTTPS provides some security with regard to man-in-the-middle attacks (or, for those with a background in quantum information and quantum cryptography: it protects Alice and Bob from Eve…). It will unfortunately not protect my website from being hacked, nor protect you if I were to include “evil” scripts in a page or post.
Those who are interested in the “security” of this website may notice that some pages are not yet labeled as secure. The origin lies in the sitemeter used to measure traffic, as it does not allow an HTTPS connection to its script. Other than that, everything on the pages should be served over HTTPS. The sole exceptions are some of the posts, which may still be using the HTTP version of the CSS style sheets and the JavaScript used to securely hide my email address. If you notice something like this, feel free to put it in a comment under the post, and I’ll fix it as soon as possible.
Apr 28 2017
Authors: | Seyyed Amin Rounaghi, Danny E.P. Vanpoucke, Hossein Eshghi, Sergio Scudino, Elaheh Esmaeili, Steffen Oswald and Jürgen Eckert |
Journal: | Phys. Chem. Chem. Phys. 19, 12414-12424 (2017) |
doi: | 10.1039/C7CP00998D |
IF(2017): | 3.906 |
export: | bibtex |
pdf: | <Phys.Chem.Chem.Phys.> |
Nowadays, the development of highly efficient routes for the low-cost synthesis of nitrides is growing rapidly. Mechanochemical synthesis is one of those promising techniques, conventionally employed for the synthesis of nitrides by long-term milling of metallic elements under a pressurized N2 or NH3 atmosphere (A. Calka and J. I. Nikolov, Nanostruct. Mater., 1995, 6, 409-412). In the present study, we describe a versatile, room-temperature and low-cost mechanochemical process for the synthesis of nanostructured metal nitrides (MNs), carbonitrides (MCNs) and carbon nitride (CNx). Based on this technique, melamine as a solid nitrogen-containing organic compound (SNCOC) is ball milled with four different metal powders (Al, Ti, Cr and V) to produce nanostructured AlN, TiCxN1-x, CrCxN1-x, and VCxN1-x (x~0.05). Both theoretical and experimental techniques are implemented to determine the reaction intermediates, products, by-products and, finally, the mechanism underlying this synthetic route. According to the results, melamine is polymerized in the presence of metallic elements at intermediate stages of the milling process, leading to the formation of a carbon nitride network. The CNx phase subsequently reacts with the metallic precursors to form MN, MCN or even MCN-CNx nano-composites, depending on the defect formation energy and thermodynamic stability of the corresponding metal nitride, carbide and C/N co-doped structures.
Mar 08 2017
For a recent project, attempting to investigate Eu dopants in bulk diamond, I ended up simplifying the problem and investigating the C-vacancy in diamond. The setup is simple: take a super cell of diamond, remove 1 carbon atom and calculate. This, however, ended up being a bit more complicated than I had expected.
Removing the single carbon atom gives rise to 4 dangling bonds on the neighboring carbon atoms. The electrons occupying these bonds will gladly interact with one another, giving rise to three different possible spin states:
Starting the calculations without any assumptions gives nice results. Unfortunately, they are wrong: we seem to have ended up in nearby local minima. Luckily, including the spin configurations above as starting assumptions solves the problem.
The electronic structure is, however, still not a perfect match for experiment. But this is well-known behavior for Density Functional Theory with local functionals such as LDA and PBE. A solution is the use of hybrid functionals (such as HSE06). Conservation of misery kicks in hard at this point, since the latter type of calculations are 1000x as expensive in compute time (and the LDA and PBE calculations aren’t finished in a matter of seconds or minutes, but need several hours on multiple cores). An old methodology to circumvent this problem is the use of Hubbard-U like correction terms (so-called DFT+U). Interestingly for this defect system the two available parameters in a DFT+U setup are independent, and allow for the electronic structure to be perfectly tuned. At the end of the fitting exercise, we now have two additional parameters, which allow us to get electronic structures of hybrid functional quality, but at PBE computational cost.
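As an illustration, a two-parameter DFT+U setup in VASP could look like the following INCAR fragment. The tag names are the standard VASP DFT+U tags; the Liechtenstein scheme (LDAUTYPE = 1) is what allows U and J to be varied independently. The numerical values shown here are purely illustrative, not the fitted values from this study:

```
LDAU     = .TRUE.
LDAUTYPE = 1      ! Liechtenstein scheme: U and J are independent parameters
LDAUL    = 1      ! apply the correction to the p orbitals
LDAUU    = 4.0    ! illustrative on-site Coulomb parameter U (eV)
LDAUJ    = 1.0    ! illustrative exchange parameter J (eV)
```

With a single-parameter scheme such as Dudarev (LDAUTYPE = 2), only the difference U-J enters, which is why the two-parameter scheme is needed to tune the electronic structure as described above.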
The evolution of the band-structure as function of the two parameters can be seen below.
Jan 01 2017
2016 has come and gone. 2017 eagerly awaits getting acquainted. But first we look back one last time, trying to turn this into a tradition: what have I done during the last year that is of some academic merit?
Publications: +4
Completed refereeing tasks: +5
Conferences: +4 (Attended) & + 1 (Organized)
PhD-students: +2
Current size of HIVE:
Hive-STM program:
Dec 26 2016
When running programs on HPC infrastructure, one of the first questions to ask yourself is: “How well does this program scale?“
In applications for HPC resources, this question plays a central role, often with the additional remark: “But for your specific system!“. For some software packages this is an important remark; for other packages it has little relevance, as the package performs similarly for all input (or for given classes of input). The VASP package is one of the latter. For my current resource application at the Flemish TIER-1, I set out to do a more extensive scaling test of the VASP package. This is for two reasons: first, I will be using a newer version of VASP, 5.4.1 (I am currently using my own multiply patched version 5.3.3); second, I will be using a brand new TIER-1 machine (the second Flemish TIER-1, as our beloved muk retired at the end of 2016).
Why should I put in the effort to get access to resources on such a TIER-1 supercomputer? Because such machines are the lifeblood of the computational materials scientist. They are the sidekick in the quest for an understanding of materials. Over the past 4 years, I was granted (and used) 20900 node-days of calculation time (i.e. over 8 million hours of CPU time, or 916 years of calculation time) on the first TIER-1 machine.
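The quoted numbers are consistent with a quick back-of-the-envelope check. Note that the cores-per-node count used below is my own assumption; the post does not state it:

```python
# Back-of-the-envelope check of the quoted compute budget.
# Assumption: 16 cores per node (not stated in the original post).
node_days = 20900
cores_per_node = 16  # assumed

node_hours = node_days * 24               # node-days -> node-hours
cpu_hours = node_hours * cores_per_node   # ~8 million core-hours
years = cpu_hours / (24 * 365.25)         # single-core equivalent, in years

print(node_hours, cpu_hours, round(years))
```

With these assumptions the totals come out at roughly 8.0 million CPU hours and about 916 years, matching the figures in the text.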
Now back to the topic: how well does VASP 5.4.1 behave? That depends on the system at hand, and on how well you choose the parallelization settings.
VASP provides several parameters which allow for straightforward parallelization of the simulation, most notably NPAR (which distributes the electronic bands over groups of cores) and KPAR (which distributes the k-points over groups of cores).
In addition, one needs to keep the architecture of the HPC-system in mind as well: NPAR, KPAR and their product should be divisors of the number of nodes (and cores) used.
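The divisor rule of thumb above can be sketched with a small script. This is a minimal illustration of the constraint, not part of VASP itself:

```python
def divisors(n):
    """Return all positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def valid_parallel_settings(n_cores):
    """All (KPAR, NPAR) pairs whose product divides the core count,
    following the rule of thumb stated above."""
    return [(kpar, npar)
            for kpar in divisors(n_cores)
            for npar in divisors(n_cores // kpar)]

# Example: a hypothetical 28-core node
print(valid_parallel_settings(28))
```

Enumerating the valid pairs beforehand makes it easy to set up a scaling test that only visits sensible combinations.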
Both NPAR and KPAR parameters can be set simultaneously and will have a serious influence on the speed of your calculations. However, not all possible combinations give the same speedup. Even worse, not all combinations are beneficial with regard to speed. This is best seen for a small 2 atom diamond primitive cell.
Two things are clear. First of all, switching on a parallelization parameter does not necessarily mean the calculation will speed up; in some cases it may actually slow you down. Secondly, the best and worst performance are consistently obtained with the same settings. Best: KPAR maximal and NPAR = 1; worst: KPAR = 1 and NPAR maximal.
This small system shows us what you can expect for systems with a lot of k-points and very few electronic bands (actually, any real calculation on this system would only require 8 electronic bands, not the 56 used here to assess the performance of the NPAR parameter).
In a medium sized system (20-100 atoms), the situation will be different. There, the number of k-points will be small (5-50) while the natural number of electronic bands will be large (>100). As a test-case I looked at my favorite Metal-Organic Framework: MIL-47(V).
This system has only 12 k-points to parallelize over, and 224 electronic bands. The spread per number of nodes is more limited than for the small system. In contrast, the general trend remains the same: KPAR high and NPAR low, with optimum performance when KPAR = #nodes. Going beyond standard DFT, using hybrid functionals, also retains the same picture, although in some cases about 10% performance can be gained when using half a node per k-point. Unfortunately, as we have very few k-points to start with, this will only be an advantage if the limiting factor is the number of nodes available.
An interesting behaviour is seen when one keeps the k-points/#nodes ratio constant:
As you can see, VASP performs really well up to KPAR = #k-points (>80% efficiency). More interestingly, if the k-point/#node ratio is kept constant, the efficiency (now calculated as T1/(T2*NPAR), with T1 the timing for a single node and T2 for multiple nodes) is roughly constant. That is, if you know the walltime for a 2-k-point/2-node job, you can expect the same for the same system with 20 k-points on 20 nodes (think density-of-states and band-structure calculations, or simply a change in the number of k-points due to symmetry reduction or a change of the k-point grid).
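As a generic sketch, speedup and parallel efficiency can be computed from measured wall times. Here the divisor is written as the number of nodes, and the timings in the example are made up for illustration:

```python
def speedup(t1, tn):
    """Speedup of a multi-node run (wall time tn) vs. a single node (t1)."""
    return t1 / tn

def efficiency(t1, tn, n_nodes):
    """Parallel efficiency: speedup divided by the number of nodes used."""
    return t1 / (tn * n_nodes)

# Example with made-up timings: a 4-node run at 80% efficiency
print(efficiency(100.0, 31.25, 4))
```

An efficiency of 1.0 would be ideal linear scaling; the >80% figure quoted above corresponds to values above 0.8.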
If one thing is clear from the current set of tests, it is the fact that good scaling is possible. How it is attained, however, depends greatly on the system at hand. More importantly, making a poor choice of parallelization settings can be very detrimental to the obtained speed-up and efficiency. Unfortunately when performing calculations on an HPC system, one has externally imposed limitations to work with:
Here are some guidelines (some open doors as well):
In short:
[1] 28 is a lousy number in that regard, as its prime decomposition is 2x2x7, leaving little overlap with the prime decompositions of the number of k-points, which more often than you wish end up being prime numbers themselves.
[2] The small system’s memory requirements varied from 0.15 to 1.09 GB/core for the different combinations.
Dec 21 2016
Today the projects for the third-year bachelor students in physics were presented at UHasselt. I also contributed two projects, giving the students the opportunity to choose a computational materials science project. During these projects, I hope to introduce them to the modern (black) arts of High-Performance Computing and materials modelling beyond empirical models.
The two projects each focus on a different aspect of what it is to be a computational materials scientist. One project focuses on performing quantum mechanical calculations using the VASP program, and analyzing the obtained results with existing software. This student will investigate the NV-defect complex in diamond in all its facets. The other project focuses on the development of new tools to investigate the data generated by simulation software like VASP. This student will extend the existing phonon module in the HIVE-toolbox and use it to analyse a whole range of materials, varying from my favourite Metal-Organic Framework to a girl’s best friend: diamond.
Calculemus solidi
A description of the projects in Dutch can be found here.