In the previous tutorial, we created a constructor and destructor for our TTimer class. Next, we extend our class with overloaded operators. Depending on the type of object your class represents, you may want to define an addition, subtraction, multiplication, … operator. In addition, the assignment operator deserves some extra attention, as you may want clear control over this operation (e.g., deep copy vs. shallow copy). The full source of this tutorial and the previous one can be downloaded from my github-page.
Let us start with the latter: the assignment operator. As with all other operators, it is possible to overload the assignment operator in modern Fortran.
1. Assignment (=) operator overloading
When dealing with objects and classes, or extended data structures in general, their properties often are (implicit) pointers to the actual data structure. This introduces an interesting source of possible bugs due to shallow copies being made where deep copies are expected (although the problem may be less pronounced in Fortran than it is in Python).
In a Fortran object, the assignment of a pointer component (i.e., a component with the pointer attribute, which may itself be an object) happens via a shallow copy (or pointer assignment). In contrast, for an allocatable component, the assignment operation performs a deep copy by default (i.e., space is allocated, and values are copied). Shallow copies are very useful for quickly creating new handles to the same data structure. However, if you want a true copy, which you can modify without changing the original, then a deep copy is what you want. By implementing assignment overloading for your own classes, you gain control over the actual copying process, and you can make sure you are creating deep copies where those are preferred.
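This difference can be illustrated with a small self-contained example (the THandle type and its pdata/adata components below are hypothetical, purely for illustration, and not part of the TTimer class):

```fortran
module copy_demo
  implicit none
  type :: THandle
    integer, pointer     :: pdata(:) => null()  ! pointer-assigned (shallow) on intrinsic assignment
    integer, allocatable :: adata(:)            ! deep-copied on intrinsic assignment
  end type THandle
end module copy_demo

program demo
  use copy_demo
  implicit none
  type(THandle) :: a, b

  allocate(a%pdata(3), source=[1, 2, 3])
  a%adata = [1, 2, 3]

  b = a                ! intrinsic (default) assignment

  b%pdata(1) = 99      ! a%pdata(1) changes too: both point to the same storage
  b%adata(1) = 99      ! a%adata(1) is untouched: b holds its own copy

  print *, a%pdata(1)  ! 99
  print *, a%adata(1)  ! 1
end program demo
```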
The implementation of overloading for the assignment operator is not too complicated. It requires two lines in your class definition:
type, public :: TTimer
  private
  ...
contains
  private
  procedure, pass(this) :: Copy            !< Make a copy of a timer object
  generic, public :: assignment(=) => Copy !< This is how copy is used.
  ...
end type TTimer
First, you need to define a class method which performs a copy operation, which in a fit of original thought we decided to call "copy" ;-). As you can see, this procedure is private, so it will not be accessible to the user of your class via a call like:

call TimerThis%Copy(TimerFrom)
Secondly, you link this class method via "=>" to the assignment operator. It is a generic interface, which means the assignment operator can be linked to several procedures, of which the relevant one is determined at compile time based on the argument types. This generic is also public (otherwise you would not be able to use it).
The implementation of the class method follows the standard rules of any class method and could look like
pure subroutine Copy(this, from)
  class(TTimer), intent(inout) :: this
  class(TTimer), intent(in)    :: from

  this%firstProperty = from%firstProperty
  ... ! make explicit copies of all properties and components
end subroutine Copy
The “this” object which we passed to our class method is the object on the left side of the assignment operator, while the “from” object is the one on the right side. Note that both objects are defined as “class” and not as “type”. Within the body of this method you are in charge of copying the data from the “from”-object to the “this”-object, giving you control over deep/shallow copying.
In practice the overloaded operator is used as:
type(TTimer) :: TimerThis, TimerFrom

TimerFrom = TTimer() ! initialization of the timers
TimerThis = TTimer() ! (cf. previous tutorial on constructors and destructors)
... ! do stuff with TimerFrom
TimerThis = TimerFrom ! although you type "=", the overloading causes this to be
                      ! executed as-if you wrote: call TimerThis%Copy(TimerFrom)
2. Operator (+,-,*,/,…) overloading
Just as you can overload the assignment operator above, you can also overload all other Fortran operators. However, be careful to keep things intuitive. For example, an addition operation on our TTimer class is strange: what would it mean to add one timer to another? How would you subtract one chronometer from another? In contrast, inside our TTimer class we have a list of TTime objects, which can be used to represent a date and time, as well as a time interval. For the remainder of this tutorial, we will assume the TTime class only represents time intervals. For such a class, it makes sense to be able to add and subtract time intervals.
Let us start with the basic definition of our TTime-class:
type, public :: TTime
  private
  ... ! the properties of the TTime class
contains
  private
  ... ! the methods of the TTime class
  procedure, pass(this) :: copy      ! Copy content from another TTime instance;
                                     ! private, accessed via the assignment statement
  procedure, pass(this) :: add       ! Add two TTime instances.
  procedure, pass(this) :: subtract  ! Subtract two TTime instances.
  generic, public :: assignment(=) => copy    ! This is how copy is used.
  generic, public :: operator(+) => add       ! This is how add is used.
  generic, public :: operator(-) => subtract  ! This is how subtract is used.
  final :: destructor
end type TTime

interface TTime
  module procedure constructor
end interface TTime
The TTime class has a constructor and destructor, implemented as we discussed before. The assignment operator is overloaded as well. The overloading of the "+" and "-" operators follows the same setup as for the assignment operator. First, you define a class method in which you implement the addition or subtraction. Second, you link this class method to the operator as a generic. The main difference with overloading the assignment operator is that, during this second step, you need to use the keyword operator instead of assignment. The class methods are private, while the generic link is public. The only thing left to do is to implement the class methods. In case of operator overloading, the class methods are functions.
pure function add(this, that) Result(Total)
  class(TTime), intent(in) :: this, that
  type(TTime) :: Total

  Total = TTime()
  ... ! implementation of the addition of the properties of
      ! this to the properties of that, and storing them in
      ! Total, e.g.: Total%seconds = this%seconds + that%seconds
end function add
The returned object needs to be defined as a type, and the further implementation of the function follows the standard Fortran rules. It is important to note that, for a function header like this one, the object to the left of the operator is the one taking the role of "this" in the overloaded operator function, so:

Total = this + that

and not

Total = that + this
This may not seem important, as we are adding two objects of the same class, but that is not necessarily always the case. Imagine that you want to overload the multiplication operator, such that you can multiply your time interval with any real value. On paper
Δt * 3.5 = 3.5 * Δt
but for the compiler, in the left product the first operand is a TTime object and the second a real, while in the right product the first operand is the real and the second is the TTime object. To deal with such a situation, you need to implement two class methods, which in practice only differ in their header (the passed object "this" remains the TTime instance in both; only the argument order changes):
pure function MultLeft(this, that) Result(Total)
  class(TTime), intent(in) :: this
  real, intent(in)         :: that
  type(TTime)              :: Total

pure function MultRight(that, this) Result(Total)
  class(TTime), intent(in) :: this
  real, intent(in)         :: that
  type(TTime)              :: Total
In the class definition both functions are linked to the operator as
procedure, pass(this) :: MultLeft
procedure, pass(this) :: MultRight
generic, public :: operator(*) => MultLeft, MultRight
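The bodies of the two functions could then look as follows. This is a minimal sketch, assuming a hypothetical seconds property (as in the addition example earlier); delegating MultRight to MultLeft keeps the actual logic in one place:

```fortran
pure function MultLeft(this, that) Result(Total)
  class(TTime), intent(in) :: this
  real, intent(in)         :: that
  type(TTime)              :: Total

  Total = TTime()
  Total%seconds = this%seconds * that  ! hypothetical property
end function MultLeft

pure function MultRight(that, this) Result(Total)
  class(TTime), intent(in) :: this
  real, intent(in)         :: that
  type(TTime)              :: Total

  ! simply delegate to MultLeft, so the logic lives in one place
  Total = MultLeft(this, that)
end function MultRight
```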
With this in mind, we could also expand our implementation of the "+" and "-" operators by adding functionality that allows for the addition and subtraction of reals representing time intervals. Here too, the left and right versions would need to be implemented.
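A sketch of how the generic could be extended (AddRealLeft and AddRealRight are hypothetical names; only the headers are shown, as the bodies follow the same pattern as add and MultLeft/MultRight):

```fortran
! in the type definition:
procedure, pass(this) :: add           ! TTime + TTime
procedure, pass(this) :: AddRealLeft   ! TTime + real
procedure, pass(this) :: AddRealRight  ! real + TTime
generic, public :: operator(+) => add, AddRealLeft, AddRealRight

! the corresponding headers:
pure function AddRealLeft(this, that) Result(Total)
  class(TTime), intent(in) :: this
  real, intent(in)         :: that  ! a time interval in, e.g., seconds
  type(TTime)              :: Total

pure function AddRealRight(that, this) Result(Total)
  class(TTime), intent(in) :: this
  real, intent(in)         :: that
  type(TTime)              :: Total
```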
As you can see, modern object-oriented Fortran provides you with all the tools you need to create powerful classes capable of operator overloading, using simple and straightforward implementations.
In our next Tutorial, we'll look into data hiding and private/public options in Fortran classes.
In this tutorial on Object Oriented Programming in Fortran 2003, we are going to discuss how to create constructors and destructors for a Fortran class. During this tutorial, I assume that you know how to create a new project and what a class looks like in Fortran 2003. This tutorial is built around a TimerClass, which I wrote as an upgrade for my initial timing module in HIVE-tools. The full source of this TimerClass can be found and downloaded from github.
Where the former two tutorials were aimed at translating a scientific model into classes within the confines of the Fortran programming language, this tutorial is aimed at consolidating a class using good practices: the creation of a constructor and a destructor. As the destructor is the most straightforward of the two in Fortran classes, we'll start with it.
1. The destructor.
A destructor is a method (i.e., a class subroutine) which is automatically invoked when the object is destroyed (e.g., by going out of scope). In case of a Fortran class, this task is performed by the class-method(s) indicated as final procedure. Hence such methods are also sometimes referred to as finalizers. Although in some languages destructors and finalizers are two distinctly different features (finalizers are then often linked to garbage collecting), within the Fortran context I consider them the same.
Within the definition of our TTimer class, the destructor is implemented as:
module TimerClass
  implicit none

  type, public :: TTimer
    ! here come the properties
  contains
    ! here come the methods
    final :: destructor
  end type TTimer

contains

  subroutine destructor(this)
    type(TTimer) :: this
    ! Do whatever needs doing in the destructor
  end subroutine destructor

end module TimerClass
In contrast to a normal class method, the destructor is declared using the final keyword instead of the usual procedure keyword. This method is private, as it is not intended to be used by the user anyway, only by the compiler upon cleanup of the instance of the class (i.e., the object). Furthermore, although defined as part of the class, a final subroutine is not type-bound, and can thus not be accessed through the type.
The destructor subroutine itself is a normal Fortran subroutine. There is, however, one small difference with a usual class method: the parameter referring to the object (i.e., "this") is declared as a TYPE and not as a CLASS. This is because the destructor is only applicable to the properties belonging to this specific class (note that final subroutines are not inherited by a child class). For a child class (also called a derived class), the destructor of the child class should deal with all the additional properties of the child class, while the destructor of the parent class is called to deal with its respective properties. In practice, the destructor of the child class is called first, after which the destructor of the parent class is called (and so on, recursively, up the class's family tree).
So what do you put in such a destructor? Anything that needs to be done to allow the object to be gracefully terminated. Most obviously: deallocating allocatable components, nullifying or deallocating pointer components, closing file handles,…
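Such a destructor could, for example, look as follows. The laps, clock and logUnit components are hypothetical, purely for illustration; note also that allocatable components are deallocated automatically when the object is destroyed, so the explicit deallocate mainly matters when extra bookkeeping is involved:

```fortran
subroutine destructor(this)
  type(TTimer) :: this

  if (allocated(this%laps)) deallocate(this%laps)  ! allocatable array component
  if (associated(this%clock)) nullify(this%clock)  ! or deallocate, if this object
                                                   ! owns the pointer target
  if (this%logUnit /= -1) close(this%logUnit)      ! open file handle
end subroutine destructor
```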
2. The constructor.
Other programming languages may provide an initialization section or access to a keyworded constructor. Although Fortran allows variables to be initialized upon definition, there is no constructor keyword available for use in its classes. Of course, this does not prevent you from adding an "init()" subroutine which the user should call once the new object is allocated. You could even use a private Boolean property (initialized old style) to keep track of whether an object was initialized, check it when entering any of its methods, and, if not, call the init() function there and then. There are many ways to deal with the initialization of a new object. Furthermore, the different approaches put the burden of doing things right either with the programmer developing the class, or with the user applying the class and creating objects.
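A sketch of the guard-Boolean approach could look as follows (the initialized property and the start method are hypothetical, for illustration only):

```fortran
type, public :: TTimer
  private
  logical :: initialized = .false.  ! old-style default initialization
contains
  procedure, pass(this) :: init
  procedure, pass(this) :: start
end type TTimer

! ... and at the start of every public method:
subroutine start(this)
  class(TTimer), intent(inout) :: this

  if (.not. this%initialized) call this%init()  ! initialize on first use
  ! ... the actual work ...
end subroutine start
```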
Here, I want to present an approach which allows you to present a clear set-up of your class and which resembles the instance creation approach also seen in other languages (and which implicitly shows the “pointer”-nature of objects ):
NewObject = TClass()
In case of our TTimer class this will look like:
type(TTimer) :: MyTimer

MyTimer = TTimer()
This means we need to have a function with the exact same name as our class (cf., above), which is achieved through the use of an interface to a module procedure. Just giving this name to the constructor function itself will cause your compiler to complain (“Name ttimer at (1) is already defined as a generic interface“). By using a different name for the function, and wrapping it in an interface, this issue is avoided.
module TimerClass
  implicit none

  type, public :: TTimer
  end type TTimer

  interface TTimer
    module procedure Constructor
  end interface TTimer

contains

  function Constructor() Result(Timer)
    type(TTimer) :: Timer
    ! initialize variables directly
    ! or through method calls
    call Timer%setTime(now)
  end function Constructor

end module TimerClass
Note that the constructor function is not part of the class definition, and as such the object is not passed to the constructor function. In addition, the Timer object being created is defined as a Type(TTimer) not Class(TTimer), also because this function is not part of the class definition.
That is all there is to it. Simple and elegant.
In our next Tutorial, we’ll have a look at operator and assignment overloading. Combined with a constructor and destructor as presented here, you are able to create powerful and intuitive classes (even in Fortran).
On the fifth and final day of the workshop we return to the lab. Our task as a group: optimize our raspberry pink lacquer with regard to hardness, glossiness and chemical resistance.
The four cans of base material made during day 1 of the workshop were mixed, to make sure we were all using the same base material (there are already sufficient noise-introducing variables present, so any that can be eliminated should be). Next, each team got a set of recipes, generated with the ML algorithm, to create. The idea was to parallelize the human part of the process. This would actually also have made a very interesting exercise for a computer science program: it showed perfectly how bottlenecks form, and what the impact is of serial sections and of the access/distribution of resources (or is this just in my mind? 😎 ). After a first round of samples, we already tried to improve the performance of our unit by starting the preparation of the next batch (prefetching 😉 ) while the results of the previous samples were entered into the ML algorithm, and that was run.
At the end of two update rounds we discussed the results. Some clear improvements were already visible, but a few more rounds would have been needed to reach the best formulation. A very interesting aspect to notice during such an exercise is the difference between the concept of accuracy on the experimental and on the computational side of the story. While the computer easily spits out values in grams with 10 significant digits, on the experimental side it was already extremely hard to weigh the same amounts with an accuracy of 0.02 gram (ambient air currents alone give larger changes on the scale).
This workshop was a very satisfying experience. I believe I learned most with regard to Machine Learning from the unintentional observation in the lab. Thank you Christian and Kevin!
Day 4 of the workshop is again a machine-learning-centered day. Today we were introduced to the world of Gaussian Processes, an ML approach which is rooted in statistics and models data by looking at the averages of a distribution of functions… it is a function of functions. In contrast to most other ML approaches, it is also very well suited for small data sets, which is why I have had my eye on it for quite some time. However, Gaussian Processes are not perfect, and interestingly enough, their drawbacks and benefits seem quite complementary to those of neural networks. Deep Gaussian Covariance Networks (DGCN) find their origin in this observation, and were designed with the idea of compensating the drawbacks of both approaches by combining them. The resulting approach is rather powerful and, in contrast to any other ML approach, it does not have any hyper-parameters!
Tomorrow, during the last day of the workshop, we will be using this DGCN to optimize our raspberry pink lacquers.
Today the workshop shifted gears a bit. We left the experimental side of the story and moved fully into the world of machine learning. This change went hand in hand with a doubling of the number of participants, showing what a hot topic machine learning really is. Kevin Cremanns, who is presenting this part of the workshop, started by putting things into perspective a bit, and warned everyone not to hope for magical solutions (ML and AI have their problems), while at the same time presenting some very powerful examples of what is possible. A fun example is the robotic arm learning to flip pancakes:
During the introduction, all the usual suspects of machine learning passed the stage. And although you can read about them in every ML book, it is nice to hear them discussed by someone who uses them on a daily basis, mainly because practical details (often omitted in textbooks) are also mentioned, helping one avoid the mistakes many have made before. Furthermore, the example codes provided are extremely well documented, making them an interesting source of teaching material (the online manuals of big libraries like scikit-learn or pandas tend to be too abstract, too big, and too intertwined for new users).
All in all, a very interesting day. I look forward to tomorrow, as then we will be introduced to the closed-source machine learning library developed at the Hochschule Niederrhein.
Today was the second day of the machine learning workshop on coatings. After having focused on the components of coatings, today our focus went to characterization and deposition. The set of available characterization techniques is as extensive as the set of possible components. There was, however, one thing which grabbed my attention: "the magical human observer". Several characterization techniques were presented as relying heavily on the human observer's opinion and Fingerspitzengefühl. Sometimes this even came with the suggestion that such a human observer outperforms the numerical results of the characterization machinery. This makes me wonder if this isn't an indication of a poor translation of the human concept into the experiment intended to perform the same characterization. Another important factor to keep in mind when building automation frameworks and machine learning models.
In the afternoon, we again put on our lab coats and goggles. The task of the day: put our raspberry pink lacquer on different substrates and characterize the glossiness (visually) and the pendulum hardness.
Tomorrow the machine learning will kick in.
Today was the first day of school…not only for my son, but for me as well. While he bravely headed for the second grade of primary school, I was en route to the first day of a week-long workshop on Machine Learning and Coatings technology at the Hochschule Niederrhein in Krefeld. A workshop combining both the practical art of creating coating formulations and the magic of simulation, more specifically machine learning.
During my career as a computational materials researcher, I have worked with almost every type of material imaginable (from solids to molecules, including the highly porous things in between called MOFs), and looked into every aspect available, be it configuration (defects, surfaces, mixtures,…) or materials properties (electronic structure, charge transfer, mechanical behavior and spin configurations). But each and every time, I did this from a purely theoretical perspective*. As a result, I have not set foot in a lab (except when looking for a colleague) since 2002 or 2003, so you can imagine my trepidation at the prospect of having to do "real" lab-work during this workshop.
Participating in such a practical session— even such a ridiculously simple and safe one— is a rather interesting experience. The safety-goggles, white coat and gloves are cool to wear, true, but from my perspective as a computational researcher who wants to automate things, this gives me a better picture of what is going on. For example, we** carefully weigh 225.3 grams of a liquid compound and add 2.2 grams of another (each with an accuracy of about 0.01 gram). In another cup, we collect two dye compounds (powders), again trying our best to perfectly match the prescribed quantities. But when the two are combined in the mixer, it is clear that a significant quantity (multiple grams) is lost, just sticking to the edge of the container and spatula. So much for carefully weighing (of course a pro has tricks and skills to deal with this better than we did, but still). Conclusion: (1) Error bars are important, but hard to define. (2) Mixtures made by hand or by a robot should be quite different in this regard.
For the theoretical part of my brain, mixing 10 compounds is just putting them in the same box and stirring, mixing or shaking. Practice can be quite different, especially if you need 225 grams of compound A and 2.2 grams of compound B. This means that for the experimentalist there is a "natural order" for doing things. This order does not exist on the theoretical side of the spectrum***, where I build my automation and machine learning. This, in addition to the implicit interdependence of combined compounds, gives the high-dimensional space of possible mixtures a rather contorted shape. This gives rise to several questions begging for answers, such as: how important is this order, and can we (ab)use all this to make our search space smaller (but still efficient to sample)?
At the end of the day, I learned a lot of interesting things and our team of three ended up with a nice raspberry pink varnish.
Next, day two, where we will characterize our raspberry pink varnish.
* Yes, I do see how strange this may appear for someone whose main research focus is aimed at explaining and predicting experiments. 🙂
** We were divided in teams of 2-3 people, so there were people with actual lab skills nearby to keep me safe. However, if this makes you think I was just idly present in the background, I have to disappoint you. I am brave enough to weigh inanimate powders and slow flowing resins 😉 .
*** Computational research in its practice uses aspects of both the experimental and theoretical branches of research. We think as theoreticians when building models and frameworks, and coax our algorithms to a solution with a gut-feeling and Fingerspitzengefühl only experimentalists can appreciate.
Today and tomorrow, there is a 2-day summer school on science communication held at the University of Antwerp: Let’s Talk Science! During this summer school there are a large number of workshops to participate in, and lectures to attend, dealing with all aspects of science communication.
I was invited to represent Hasselt University (and science communication done by its members) during the plenary panel session starting the summer school. The goal of this plenary session was to share our experiences and thoughts on science communication. The contributions varied from hands-on examples to more abstract presentations of what to keep in mind, including useful tips. The central aim of my presentation was directed at identifying the boundary between science communication and scientific communication. Or more precisely, showing that this border may be more artificial than we are aware of. By showing that everyone is unique in his/her expertise and discipline, I provided the link between conference presentations and presentations for the general public. I traveled through my history of science communication, starting in the middle: with the Science Battle. An event, I wrote about before, where you are asked to explain your work in 15 minutes to an audience of 6- to 12-year-olds. Then I worked my way back via my blog and contributions to "Ik heb een vraag" (such as: if you drop a penny from the Eiffel tower, will this kill someone on the ground?) to the early beginning of my research: simulating STM images. In the latter case, although I was talking to experts in their field (experimental growth and characterization), their total lack of experience in modelling and quantum mechanical simulations transformed my colleagues into "general public". This is an important aspect to realize, not only for science communication, but also for scientific communication. As a consequence, this also means that most of the tips and tricks applicable to science communication are also applicable to scientific communication.
For example: tell a coherent story. As noted by one of my favorite authors – Terry Pratchett – the human species might have better been called “Pan Narrans”, the storytelling ape. We tell stories and we remember by stories. This is also a means to make your scien(ce/tific) communication more powerful. I told the story of my passion during science explained and my lecture for de Universiteit van Vlaanderen.
A final point I touched upon is the question of "Why?". Why should you do science communication? Some may note that it is our duty as scientists, since we are paid with taxpayer money. But personally, I believe this is not a good incentive. Science communication should originate from your own passion. It should be because you want to, not because you have to. If you want to, it is much easier to show your passion, show your interest, and also take the time to do it.
This brought me back to my central theme: science communication can be simple and small. E.g., projecting simulated STM images on the walls of the medieval castle in Ghent (Gravensteen) during a previous edition of the Ghent Light Festival.
It is becoming an interesting yearly occurrence: the VSC user day. During this 5th edition, HPC users of the various Flemish universities gather together at the Belgian Royal Academy of Science (KVAB) to present their state-of-the-art work using the Flemish Tier-1 and Tier-2 supercomputers. This is done during a poster-presentation session. This year, I presented my work with regard to vibrational spectra in solids and periodic systems. In contrast to molecules, vibrational spectra in solids are rarely investigated at the quantum mechanical level due to their high cost. I show that imaginary modes are not necessarily a result of structural instabilities, and I present a method for identifying the vibrational spectrum of a defect.
In addition, international speakers discussed recent (r)evolutions in High Performance Computing, and during workshops the participants were introduced to new topics such as GPU computing, parallelization, and the VSC Cloud and data platform. The possibilities of GPUs were presented by Ehsan of the VSC, showing extreme speedups of 10x to 100x, strongly depending on the application and the graphics card. It is interesting to see that simple CUDA pragmas can be used to obtain such effects… maybe I should have a go at them for the Hirshfeld and phonon parts of my HIVE code… if they can deal with quadruple precision and very large arrays. During the presentation of Joost Vandevondele (ETH Zürich), we learned what the future holds with regard to next-generation HPC machines. As increasing speed becomes harder and harder to obtain, people are again looking into dedicated hardware systems, a situation akin to the founding days of HPC. Whether this is a situation we should applaud remains to be seen, as it means that we are moving back to codes written for specific machines. This decrease in portability will probably be alleviated by high-level scripting languages (such as Python), which at the same time erode a significant part of the initial gain. (Think of the framework approach to modern programming, which leads to trivial applications requiring HPC resources to start.)
In addition, this year the HPC team of the Tier-1 machine was present for a panel discussion, presenting the future of the infrastructure. The machine nearly doubled in size, which is great news. Let us hope that, in addition to financing hardware, a significant budget is also considered for a serious extension of the dedicated HPC support team. Running a Tier-1 machine is not something one does as a side project; it requires the constant vigilance of a dedicated team to deal with software updates, resulting compatibility issues, conflicting scripts, and hardware and software running haywire just because they can.
With this hope, I look toward the future. A future where computational research is steadily, and ever more quickly, becoming commonplace in the fabric of academic endeavors.