Tag: Computational Philosophy

Assigning probabilities to non-Lipschitz mechanical systems

Authors:  Danny E. P. Vanpoucke and Sylvia Wenmackers
Journal: Chaos 31, 123131 (2021)
doi: 10.1063/5.0063388
IF(2021): 3.642
export: bibtex
pdf: <Chaos>   (Open Access) <ArXiv v1>
github: <NortonDomeExplorer>

 

Malament's Mounds
Graphical Abstract: Examples of Malament’s Mounds, of which the Norton Dome is the special case where α=½.

Abstract

We present a method for assigning probabilities to the solutions of initial value problems that have a Lipschitz singularity. To illustrate the method, we focus on the following toy example: \ddot{r} = r^{a}, r(t=0) = 0, and \dot{r}(t=0) = 0, with a ∈ ]0; 1[. This example has a physical interpretation as a mass in a uniform gravitational field on a dome of particular shape; the case with a = ½ is known as Norton’s dome. Our approach is based on (1) finite difference equations, which are deterministic; (2) elementary techniques from alpha-theory, a simplified framework for non-standard analysis that allows us to study infinitesimal perturbations; and (3) a uniform prior on the canonical phase space. Our deterministic, hyperfinite grid model allows us to assign probabilities to the solutions of the initial value problem in the original, indeterministic model.
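As a rough illustration of the finite-difference view mentioned in the abstract (a minimal sketch of my own, not the code from the <NortonDomeExplorer> repository linked above), the snippet below integrates \ddot{r} = r^{a} with a simple explicit scheme; the infinitesimal perturbation of the non-standard treatment is mimicked here by a tiny but finite offset of the initial position, and all numerical choices are purely illustrative.

```python
import numpy as np

def integrate(alpha=0.5, r0=0.0, v0=0.0, dt=1e-3, t_max=10.0):
    """Explicit finite-difference integration of r'' = r**alpha (r kept >= 0)."""
    n = int(t_max / dt)
    r, v = r0, v0
    traj = np.empty(n)
    for i in range(n):
        traj[i] = r
        a = max(r, 0.0) ** alpha   # acceleration along the dome surface
        v += a * dt                # semi-implicit Euler step
        r += v * dt
    return traj

# Unperturbed initial condition: the mass stays on the apex of the dome.
print(integrate(r0=0.0)[-1])       # remains exactly 0.0
# A tiny but finite perturbation, standing in for an infinitesimal one:
print(integrate(r0=1e-12)[-1])     # the mass rolls off to a finite distance
```

With r0 = 0 the discretized mass never leaves the apex, while any non-zero offset, however small, eventually makes it roll off; this is the indeterminism to which the paper assigns probabilities.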


Phase space vector field for all Malament’s mounds

Casting Keynotes: The Virtual Lab

Last Tuesday, I had the pleasure of competing in the Casting Keynotes competition of the TEDx UHasselt chapter. It was an evening filled with interesting talks, on subjects ranging from the FAIR principles of open data (by Liebet Peeters) to the duty not to stay silent in the face of “bad ideas” and leading a life of purpose. Another interesting presentation was the one by Ann Bessemans on visual prosody, used to improve reading skills in young children as well as the reading experience, more specifically the transfer of non-literal content, for non-native speakers. There was also time for some humor, with the dangerous life of Tim Biesmans, who suffers from peanut allergies. For him, death lurks around every corner, even in a first date’s kiss. During my talk, I traced the evolution of computational research as the third paradigm of scientific discovery, showing that you can find computational research in every field, and explaining why it is evolving at break-neck speed.

During the event, both the public and a jury voted on the best presentation, the winner of which would then present at TEDx UHasselt in 2020.

And the Winner is …drum roll… Danny Vanpoucke!

So this story will continue during the 2020 TEDx event at UHasselt, and I hope to see you there 🙂

Casting Keynotes

Top: Full action shots of my presentation: Moore’s Law as driving force behind computational research, and pondering the meaning of Artificial Intelligence. Bottom: Yes, I won 🙂

 

Exa-scale computing future in Europe?

As a computational materials scientist with a main research interest in the ab initio simulation of materials, I consider computational resources the life-blood of my research. Over the last decade, I have seen my resource usage grow from less than 100.000 CPU hours per year to several million CPU hours per year. To satisfy this need for computational resources, I make use of HPC facilities, like the TIER-2 machines available at the Flemish universities and the Flemish TIER-1 supercomputer, currently hosted at KU Leuven. At the international level, computational scientists have access to so-called TIER-0 machines, something I will no doubt make use of in the future. Before I continue, let me first explain a little what this TIER-X business actually means.

The TIER-X notation is used to give an indication of the size of the computer or supercomputer in question. There are four tiers:

  • TIER-3: This is your personal computer (laptop/desktop) or even a small local cluster of a research group. It can contain from one (desktop) up to a few hundred CPUs (local cluster). Within materials research, this is sufficient for quite a few tasks: post-processing of data, simple force-field based calculations, or even small quantum chemical or solid state calculations. A large fraction of the work during my first Ph.D. was performed on the local cluster of the CMS.
  • TIER-2: This is a supercomputer hosted by an institute or university. It generally contains over 1000 CPUs and has a peak performance of >10 TFLOPS (10¹² floating point operations per second; compare this to the 1-50 GFLOPS of an average personal computer). The TIER-2 facilities of the VUB and UAntwerp both have a peak performance of about 75 TFLOPS, while the machines at Ghent University and the KU Leuven/UHasselt facilities both have a peak performance of about 230 TFLOPS (the small sketch after this list shows how such theoretical peak numbers are typically estimated). Using these machines I was able to perform the calculations necessary for my study of dopant elements in cerates (and obtain my second Ph.D.).
  • TIER-1: Moving up one more step, there are the national/regional supercomputers. These generally contain over 10.000 CPUs and have a peak performance of over 100 TFLOPS. In Flanders, the Flemish Supercomputer Center (VSC) manages the TIER-1 machine (which is funded by the 5 Flemish universities). The first TIER-1 machine was hosted at Ghent University, while the second and current one is hosted at KU Leuven; it has a peak performance of 623 TFLOPS (more than all TIER-2 machines combined) and cost about 5.5 million € (one of the reasons it is a regional machine). Over the last 5 years, I was granted over 10 million hours of CPU time, sufficient for my study of Metal-Organic Frameworks and defects in diamond.
  • TIER-0: These are international-level supercomputers. These machines contain over 100.000 CPUs and have a peak performance in excess of 1 PFLOPS (1 PetaFLOPS = 1000 TFLOPS). In Europe, the TIER-0 facilities are available to researchers via the PRACE network (access to 7 TIER-0 machines, with an accumulated 43.49 PFLOPS).
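As an aside on where such peak-performance figures come from (my own back-of-the-envelope illustration, not VSC or PRACE documentation; all hardware numbers below are made up), the theoretical peak of a cluster is simply nodes × cores per node × clock frequency × floating point operations per core per cycle:

```python
def peak_tflops(nodes, cores_per_node, clock_ghz, flops_per_cycle):
    """Theoretical peak performance in TFLOPS; real applications only reach a fraction of this."""
    return nodes * cores_per_node * clock_ghz * 1e9 * flops_per_cycle / 1e12

# Hypothetical TIER-2-like machine: 200 nodes, 28 cores/node, 2.6 GHz,
# 16 double-precision FLOPs per core per cycle (wide vector units with FMA).
print(peak_tflops(200, 28, 2.6, 16))   # ~233 TFLOPS for these made-up numbers
```

The “peak performance” quoted in the list above is always this theoretical number; sustained performance on real workloads is considerably lower.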

This is roughly the status of what is available today to Flemish scientists at the various levels. With the constantly growing demand for more processing power, the European Union, under the banner of EuroHPC, decided in March of this year that Europe will host two Exa-scale computers. These machines will have a peak performance of at least 1 EFLOPS, or 1000 PFLOPS, and are expected to be built by 2024-2025. In June, Belgium signed up to EuroHPC as the eighth participating country, in addition to the initial seven (Germany, France, Spain, Portugal, Italy, Luxembourg and The Netherlands).

This is very good news for all involved in computational research in Flanders. There is a plan to build these machines, there is a deadline, …there just isn’t an idea yet of what these machines should look like (except that they will be big, massively power consuming, and have a target peak performance). To get an idea of what users expect of such a machine, Tier-1 and HPC users have been asked to put forward requests and suggestions for what they want.

From my personal experience as a user, and extrapolating from my own usage, I can easily see myself using 20 million hours of CPU time each year by the time these Exa-scale machines are built. Leading a computational group would multiply this value. And then we are only talking about simple production calculations for “standard” problems.

The claim that an Exa-scale machine runs 1000x faster than a peta-scale machine is not entirely justified, at least not for the software I generally encounter. As software seldom scales linearly, the speed gain from Exa-scale machinery will mainly come from the ability to perform many more calculations in parallel. (There are some exceptions which will gain within the single-job area, but this type of job is limited.) Within my own field, the quantum mechanical calculation of the electronic structure of periodic atomic systems, the required resources tend to grow with the problem size. As such, a larger system (= more atoms) requires more CPU time, but also more memory. This means that compute nodes with many cores are welcome and desired, but these cores need the associated memory: doubling the number of cores would require the memory on a node to be doubled as well. Communication between the nodes should be fast too, as this will be the main limiting factor on the scaling performance. If all this is implemented well, then the time-to-solution of a project (not of a single calculation) will improve significantly with access to Exa-scale resources. The factor will not be 100x compared to a PFLOPS system, but it could be much better than 10x. This factor of 10 also takes into account that projects will, by default, have access to much more demanding calculations (hybrid-functional structure optimization instead of simple density functional theory structure optimization, which is ~1000x cheaper for plane-wave methods but less accurate).
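To make the “software seldom scales linearly” point a bit more concrete: Amdahl’s law is not mentioned in the original post, but it is the standard back-of-the-envelope argument for why a single job does not become 1000x faster on a 1000x larger machine. The serial fraction used below is purely illustrative.

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Amdahl's law: upper bound on the speedup of a single job on n_cores,
    given the fraction of the work that cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# With an illustrative serial fraction of just 1%, throwing 1000x more cores
# at a single job yields nowhere near a 1000x speedup:
for cores in (10, 100, 1000, 100000):
    print(f"{cores:>6} cores -> speedup {amdahl_speedup(0.01, cores):6.1f}x")
# -> roughly 9.2x, 50.3x, 91.0x and 99.9x respectively
```

This is why the main gain of Exa-scale machines sits in throughput: running many more (and more demanding) calculations side by side, rather than making one calculation a thousand times faster.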

At this scale, parallelism is very important, and implementing it in a program is far from a trivial task. While most physicists/mathematicians/chemists/engineers may have the skills to write scientifically sound software, we are not computer scientists, and our available time and skills are limited in this regard. For this reason, it will become more important for the HPC facility to provide parallelization of software as a service, i.e., to have a group of highly skilled computer scientists available to assist with, or even perform, this task.

Next to having the best implementation of the software available, it should also be possible to actually get access to these machines. This should not be limited to a happy few through a peer review process which just wastes human research potential. Instead, access should be a mix of guaranteed access and peer review:

  • Guaranteed access: For standard production projects (5-25 million CPU hours/year), university researchers should have a guaranteed access model. This would allow them to perform state-of-the-art research without too much overhead. To prevent access by people without the proven need or the necessary skills, a user database could be created and extended with each application. Upon first application, a local HPC team (country/region/university Tier-1 infrastructure) would have to provide a recommendation regarding the user, including a statement of the applicant’s resource usage at that facility. Getting resources in a guaranteed access project would also require a limited project proposal (max 2 pages, including user credentials, requested resources, and a small description of the project).
  • Peer review access: This would be for special projects, in which the researcher requires a huge chunk of resources to perform highly specialized calculations or large high-throughput exercises (on the order of 250-1000 million CPU hours, e.g. Nature Communications 8, 15959 (2017)). In this case, a full project proposal with serious peer review would be required (including a rebuttal stage, or the possibility to resubmit after addressing the indicated problems). The goal of this peer review system should not be to limit the number of accepted projects, but to make sure the accepted projects run successfully.
  • Pay per use: This should be the option for industrial/commercial users.

What could an HPC user such as myself do to contribute to the success of EuroHPC? This is rather simple: run the machine as a pilot user (I have experience on most of the TIER-2 clusters of Ghent University and both Flemish Tier-1 machines. I successfully crashed the programs I am using by pushing them beyond their limits during pilot testing, and ran into rather unfortunate issues. 🙂 That is the job of a pilot user: use the machine/software in unexpected ways, so that problems can be resolved/fixed by the time the bulk of the users get access.) and perform peer review of the larger specialized projects.

Now the only thing left to do is wait. Wait for the Exa-scale supercomputers to be built…7 years to go…about 92 node-days on Breniac…a starting grant…one long weekend of calculations.

Appendix

For simplicity, I use the term CPU to indicate a single compute core, even though, technically, a single CPU nowadays contains multiple cores (desktop/laptop: 2-8 cores; HPC compute node: 2-20 cores per CPU, or more). This is to make comparisons a bit easier.

Furthermore, modern computer systems rely more and more on GPU performance as well, which is also a possible road toward Exa-scale computing.

Orders of magnitude:

  • G = Giga = 10⁹
  • T = Tera = 10¹²
  • P = Peta = 10¹⁵
  • E = Exa = 10¹⁸

Annual Meeting of the Belgian Physical Society 2016


Wednesday May 18th was a good day for our little family. Since my girlfriend and I are both physicists by training, we attended the annual meeting of the Belgian Physical Society in Ghent together. What made this event even more special was the fact that both of us had an oral presentation at the same conference, which had never happened before. 🙂

Sylvia talked about an example of indeterminism in Newtonian mechanics, and showed how the indeterminism can be clarified by using non-standard analysis. The example considers the Norton Dome, a hill with a specifically designed shape, y(x) = -\frac{2}{3}\left(1-\left(1-\frac{3}{2}|x|\right)^{2/3}\right)^{3/2}. When considering a point mass experiencing only the gravitational force, there are two solutions for the equation of motion: (1) the mass is at the top and remains there forever ( r(t) = 0 ), and (2) the mass was rolling uphill with a non-zero speed which becomes exactly zero at the top, and it continues over the top ( r(t) = \frac{1}{144}(t-T)^4, with T the time at which the top is reached). Here, r refers to the arc length as measured along the dome (0 at the top). In addition, there also exists a family of solutions that take the first solution for t < T and the second solution for t > T. (As the first and second derivatives of these latter solutions are continuous, Newton will not complain.) This leads to indeterminism in a Newtonian system: for instance, you start with a mass on top of the hill, and at a random point in time it starts to roll off without any external influence setting it in motion. Using infinitesimals, Sylvia showed that the probability for the mass to start rolling off the dome immediately is infinitesimally close to one.
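As a small aside (my own check, not part of Sylvia’s talk), one can verify symbolically that the second solution indeed satisfies the Norton-dome equation of motion \ddot{r} = \sqrt{r} for t ≥ T:

```python
import sympy as sp

# s = t - T, the time elapsed after the mass leaves the apex (so s >= 0)
s = sp.symbols('s', nonnegative=True)

r = s**4 / 144                # the non-trivial solution r(t) = (t - T)^4 / 144
r_ddot = sp.diff(r, s, 2)     # \ddot{r} = s^2 / 12

# Norton's dome equation of motion: \ddot{r} = sqrt(r)
print(sp.simplify(r_ddot - sp.sqrt(r)))   # -> 0, so the equation holds for t >= T
```

Together with the trivial solution r(t) = 0, and their continuous concatenations at an arbitrary time T, this gives exactly the family of solutions that makes the system indeterministic.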

My own talk was on the use of computational materials science as a means for understanding and explaining experimental observations. I presented results on the pressure-induced breathing of the MIL-47(V) MOF, showing how the experimentally observed S-shape of the transition-pressure curve can be explained by the spin interactions of the unpaired vanadium d electrons: it turns out that regions with only ferromagnetic chains already compress at 85 MPa, while the addition of higher and higher percentages of anti-ferromagnetic chains increases the pressure at which the pores collapse, up to 125 MPa for the regions containing 100% anti-ferromagnetic chains. As a second topic, I showed how the electronic band structure of the linker-functionalized UiO-66(Zr) MOF changes: when one or two -OH or -SH groups are added to the benzene ring of the linker, part of the valence band is split off and moves into the band gap. In semiconductors this would be called a gap state; however, in this case, since every linker in the material contributes a single electron state to this gap state, it practically becomes the valence band top. As a consequence, the color of such functionalized MOFs changes from white to yellow and orange. As a third topic, I discussed the COK-69(Ti) MOF. In this MOF, the electrons in the titanium oxide clusters are strongly correlated, just as for pure titanium oxide. Because such systems are poorly described by standard DFT, we used the DFT+U approach, which allowed us to discern between Ti³⁺ and Ti⁴⁺ ions. In practice, this was done by partitioning the electron density using the Hirshfeld-I scheme.

Belgian Physical Society Meeting 2016

Top left: I am presenting computational results on MOFs. Top right: Sylvia presents the Norton Dome. Bottom: Group picture at the central garden in “Het Pand”. (Photos courtesy of Sylvia Wenmackers (TL), Philippe Smet (TR), and Michael Tytgat (B).)

Next to our own talks, the BPS meeting started with two very interesting plenary lectures on the two big machines/facilities of the physics community: ITER (a fusion reactor under construction) and the LHC (a circular collider, under constant upgrade) at CERN. Prof. Jean Jacquinot presented the progress in fusion research (among which simulations of plasma instabilities) and the actual building progress of the ITER facility. Prof. Sergio Bertolucci, on the other hand, informed us about the latest results obtained with the LHC at CERN, but also about future plans (a Future Circular Collider, with a circumference of about 100 km!!). He also showed us the amount of data involved in running the CERN experiments, putting it into perspective: in 2012 the LHC produced about 15 Petabyte of data per year (15.000 Terabyte), which is the same as the amount of data added to YouTube on a yearly basis. At that time the ATLAS experiment had a dataset of 140 Petabyte (compare this to the 100 Petabyte of Google’s search index or the 180 Petabyte of Facebook uploads per year). The presenters, both excellent and enthusiastic speakers, reminded us that these projects thrive on the enthusiasm of young researchers with open minds. But they also noted, something that is rather often forgotten, that it is the journey, not the goal, which is most important. Of course, ITER is the next step on the road to commercial fusion power, but along the way much more is learned as a result of tackling practical problems. This is even more so for the CERN experiments, where the “goal” is not as related to our daily lives (keeping the lights on) but focuses on understanding the world. This is at the core of what it means to be a physicist: the need and drive to understand the world. This is also what should drive research, but it becomes increasingly hampered by the funding question: how much and what kind of profit will it generate in the “real world”? Remember the transistor which makes your computer and smartphone as powerful as they are, the laser in CD/DVD players, the internet allowing you to read this post, and so many more.

Following these plenary presentations, four young scientists competed for the young speaker award, presenting their PhD research. Two presentations (1),(2) focused on vortices in superconductors, a third one discussed the use of plasmons in graphene nanoribbons to enhance telecommunication, while the fourth talk introduced us to the world of string theory.

In the afternoon, there were six parallel sessions, of which I mainly attended the Condensed Matter and Nanostructure Physics session (since I had my own talk there) and the Biological, Medical, Statistical and Mathematical Physics session, rooting for Sylvia. During the condensed matter session, I was mainly fascinated by the presentation of Prof. Sara Bals on coloring atoms in 3 dimensions. She showed how, using energy-dispersive X-ray (EDX) mapping, it is possible to create a 3D atomic lattice of nano-materials and clusters. This is a more direct approach than the usual X-ray diffraction (XRD) approach for identifying a crystal structure. Unfortunately, I am afraid this technique may not be well suited for the MOFs I’m working on, since they contain mainly light elements rather than heavy metals (although it may be interesting to try once the technique is optimized further). It is, however, definitely a technique to remember for future projects, and to suggest to experimental collaborators.


Rationality: A Social-Epistemology Perspective

Authors: Sylvia Wenmackers, Danny E. P. Vanpoucke, and Igor Douven
Journal: Front. Psychol. 5, 581 (2014)
doi: 10.3389/fpsyg.2014.00581
IF(2014): 2.560
export: bibtex
pdf: <Front.Psychol.> (open Access)

Abstract

Both in philosophy and in psychology, human rationality has traditionally been studied from an ‘individualistic’ perspective. Recently, social epistemologists have drawn attention to the fact that epistemic interactions among agents also give rise to important questions concerning rationality. In previous work, we have used a formal model to assess the risk that a particular type of social-epistemic interaction leads agents with initially consistent belief states into inconsistent belief states. Here, we continue this work by investigating the dynamics to which these interactions may give rise in the population as a whole.

Probability of inconsistencies in theory revision,
A multi-agent model for updating logically interconnected beliefs under bounded confidence

Authors: Sylvia Wenmackers, Danny E. P. Vanpoucke, and Igor Douven
Journal: Eur. Phys. J. B 85, 44 (2012)
doi: 10.1140/epjb/e2011-20617-8
IF(2012): 1.282
export: bibtex
pdf: <Eur.Phys.J.B>

Abstract

We present a model for studying communities of epistemically interacting agents who update their belief states by averaging (in a specified way) the belief states of other agents in the community. The agents in our model have a rich belief state, involving multiple independent issues which are interrelated in such a way that they form a theory of the world. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating (in the given way). To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. It is shown that, under the assumptions of our model, an agent always has a probability of less than 2% of ending up in an inconsistent belief state. Moreover, this probability can be made arbitrarily small by increasing the number of independent issues the agents have to judge or by increasing the group size. A real-world situation to which this model applies is a group of experts participating in a Delphi-study.
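For readers who want a feel for the kind of model described here, the following is only a loose toy sketch: the paper’s actual belief states, averaging rule, and consistency notion are defined precisely there, whereas everything below (three issues p, q and “p and q”, Hamming-distance bounded confidence, component-wise rounding, group size) is an illustrative assumption of mine.

```python
import itertools, random

# Toy setting (illustrative, not the paper's model): three yes/no issues
# p, q and "p and q"; a belief state (bp, bq, bpq) is consistent iff bpq == bp*bq.
def consistent(state):
    bp, bq, bpq = state
    return bpq == bp * bq

CONSISTENT_STATES = [s for s in itertools.product((0, 1), repeat=3) if consistent(s)]

def update(agents, confidence=1):
    """One bounded-confidence round: each agent averages the belief states of all
    agents within Hamming distance `confidence` of its own, then rounds each issue."""
    new_states = []
    for a in agents:
        peers = [b for b in agents
                 if sum(x != y for x, y in zip(a, b)) <= confidence]
        averaged = [sum(p[i] for p in peers) / len(peers) for i in range(3)]
        new_states.append(tuple(int(round(x)) for x in averaged))  # rounding rule is a stand-in
    return new_states

random.seed(1)
agents = [random.choice(CONSISTENT_STATES) for _ in range(9)]
after = update(agents)
print(sum(not consistent(s) for s in after), "of", len(after),
      "agents inconsistent after one update round")
```

Even in such a stripped-down setting, averaging over peers and rounding back to a crisp verdict on each issue can occasionally land an agent in a state where its verdict on “p and q” no longer matches its verdicts on p and q separately; it is the probability of this type of event that the paper quantifies analytically and numerically.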