
Defensive Programming and Debugging

These last few months, I finally was able to strike something off my to-do list that had been lingering there for a very long time: studying debugging in Fortran. Although I have been programming in Fortran for over a decade, and have gotten quite good at it, especially in the more exotic aspects such as OO programming, I never got around to learning how to use decent debugging tools. The fact that I am using Fortran was the main contributing factor. Unlike with other languages, everything you want to do in Fortran beyond number-crunching in procedural code either has very little documentation (e.g., easy DLLs for objects), is not natively supported (e.g., try to find a good IDE for Fortran that also supports modern aspects like OO; only very few attempt it), or you are simply the first to try it (e.g., Fortran programs for Android :o, definitely on my to-do list). In a long bygone past I did some debugging in Delphi (for my STM program), as the debugger was nicely integrated in the IDE. For Fortran, however, I started programming without an IDE, and as such did my initial debugging with well-placed write statements. And I am a bit ashamed to say I am still doing it this way, because it can be rather efficient for a large code spread over dozens of files with hundreds of procedures.

However, I am trying to repent for my sins. A central point in this penance was enrolling in the online MOOC “Defensive Programming and Debugging“. Five weeks of intense study followed, in which I was forced to use command-line gdb and valgrind. During these five weeks I also sharpened my skills at identifying possible sources of bugs (and found some unintentional bugs in the course…but that is just me). After five weeks of hard study and taking tests, I successfully finished the course, earning my certificate as defensive programmer and debugger. (In contrast to my sometimes offensive programming and debugging skills before 😉 .)

Permanent link to this article: https://dannyvanpoucke.be/defensive-programming-en/

Simple Parallelization in Fortran: OpenMP

The first PC we got at our home was a Pentium II. My dad got it because I was going to university, and I would be able to do something “useful” with it. (Yup, I survived my entire high school career searching stuff in the library and the home encyclopedia. Even more, Google didn’t even exist before we got our computer, as the company was only founded in 1998 🙂 ). The machine was advertised as state of the art with a clock rate of a whopping 233 MHz! During the decade that followed, the evolution of the clock rates kept going at a steady pace, until it saturated at about 3-4 GHz (15 times faster than the 233 MHz) around 2005. Since then, the clock rate has not increased a bit. If anything, the average clock rate has even decreased to the range of 2-3 GHz. As power consumption grows quadratically with the clock rate, this means that (1) much more heat is produced, which needs to be transported away from your CPU (otherwise it gets destroyed), and (2) reducing the clock rate by a factor of 2 allows you to power 4 CPUs at half the clock rate, effectively doubling your calculation power. (There are even more tricks involved in modern CPUs which crank up performance such that the clock rate is no longer a real measure for performance, and sales people need to learn ever more new buzzwords to sell your computer/laptop 👿 )

Where in 2005 you bought a single CPU with a high clock rate, you now get a machine with multiple cores. Most machines you can get these days have a minimum of 2 cores, with quad-core machines becoming more and more common. But, there is always a but: even though you now have access to multiple times the processing power of 2005, this does not mean that your own code will be able to use it. Unfortunately there is no simple compiler switch which makes your code parallel (like the -m64 switch which makes your code 64-bit); you have to do this yourself (the free lunch is over). Two commonly used frameworks for this task are OpenMP and MPI. The former mainly focuses on shared-memory configurations (laptops, desktops, single nodes in a cluster), while the latter focuses on large distributed-memory setups (multi-node clusters) and is thus well suited for creating codes that need to run on hundreds or even thousands of CPUs. The two frameworks differ significantly in their complexity; fortunately for us, OpenMP is both the easier one and the one most suited for a modern multi-core computer. The OpenMP framework consists of pragmas (or directives) which can be added to an existing code as comment lines, and which tell a compiler knowledgeable of OpenMP how to parallelize the code. (It is interesting to note that MPI and OpenMP are intended for parallel programming in either C, C++ or Fortran…a hint at what the important programming languages are.)

OpenMP in Fortran: Basics

A. Compiler-options and such

As most modern Fortran compilers are well aware of OpenMP (you can check which version of OpenMP is supported here), you generally will not need to install a new compiler to write parallel Fortran code. You only need to add a single compiler flag: -fopenmp (gcc/gfortran), -openmp (Intel compiler), or -mp (Portland Group). In Code::Blocks you will find this option under Settings > Compiler > Compiler Settings tab > Compiler Flags tab (if the option isn’t present, try adding it to “other compiler options” and hope your compiler recognizes one of the flags).

Secondly, you need to link in the OpenMP library. In Code::Blocks go to Settings > Compiler > Linker Settings tab > Link Libraries: Add, and select the libgomp.dll.a library (generally found in the folder of your compiler; in case of a 64-bit compiler, make sure you get the 64-bit version).

Finally, you may want to get access to OpenMP functions inside your code. This can be achieved by a use statement: use omp_lib.
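
As a quick check that the compiler flag, the library and the omp_lib module are all picked up correctly, a minimal program along the following lines should compile and run (a small sketch; the program and file names are of course arbitrary):

  program hello_omp
      use omp_lib
      implicit none
      !$OMP PARALLEL
      write(*,*) "Hello from thread ", omp_get_thread_num(), " of ", omp_get_num_threads()
      !$OMP END PARALLEL
  end program hello_omp

Compiled with, for example, gfortran -fopenmp hello_omp.f90 -o hello_omp, it should print one line per available thread.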

B. Machine properties

OpenMP contains several functions which allow you to query and set several environment variables (check out these cheat-sheets for OpenMP v3.0 and v4.0).

  • omp_get_num_procs() : returns the number of processors your code sees (on hyper-threaded CPUs this will be double the actual number of processor cores).
  • omp_get_num_threads() : returns the number of threads available in a specific section of the code.
  • omp_set_num_threads(I) : sets the number of threads for the OpenMP parallel section to I.
  • omp_get_thread_num() : returns the index of the specific thread you are in, ranging from 0 to I-1.

 

  1. subroutine OpenMPTest1()
  2.         use omp_lib;
  3.  
  4.         write(*,*) "Running OpenMP Test 1: Environment variables"
  5.         write(*,*) "Number of threads :",omp_get_num_threads()
  6.         write(*,*) "Number of CPU's available:",omp_get_num_procs()
  7.         call omp_set_num_threads(8) ! set the number of threads to 8
  8.         write(*,*) "#Threads outside the parallel section:",omp_get_num_threads()
  9.         !below we start a parallel section
  10.         !$OMP PARALLEL
  11.         write(*,*) "Number of threads in a parallel section :",omp_get_num_threads()
  12.         write(*,*) "Currently in thread with ID = ",omp_get_thread_num()
  13.         !$OMP END PARALLEL
  14.  
  15. end subroutine OpenMPTest1

 

Notice in the example code above that outside the parallel section, delimited by the directives !$OMP PARALLEL and !$OMP END PARALLEL, the program only sees a single thread, while inside the parallel section 8 threads will run (independent of the number of cores available).

C. Simple parallelization

The OpenMP framework consists of a set of directives which can be used to manage the parallelization of your code (cheat-sheets for OpenMP v3.0 and v4.0). I will not describe them in detail, as there exist several very well written and complete tutorials on the subject; we’ll just have a look at a quick and easy parallelization of a big do-loop. As said, OpenMP makes use of directives (or pragmas) which are placed as comments inside the code. As such they will not interfere with your code when it is compiled as a serial code (i.e. without the -fopenmp compiler flag). The directives are preceded by what is called a sentinel (!$OMP). In the above example code, we already saw a first directive: PARALLEL. Only inside blocks delimited by this directive can your code run in parallel.

  1. subroutine OMPTest2()
  2.         use omp_lib;
  3.  
  4.         integer :: IDT, NT,nrx,nry,nrz
  5.         doubleprecision, allocatable :: A(:,:,:)
  6.         doubleprecision :: RD(1:1000)
  7.         doubleprecision :: startT, TTime, stt
  8.  
  9.         call random_seed()
  10.         call random_number(RD(1:1000))
  11.         IDT=500 ! we will make a 500x500x500 matrix
  12.         allocate(A(1:IDT,1:IDT,1:IDT))
  13.  
  14.         write(*,'(A)') "Number of preferred threads:"
  15.         read(*,*) NT
  16.         call omp_set_num_threads(NT)
  17.         startT=omp_get_wtime()
  18.         !$OMP PARALLEL PRIVATE(stt)
  19.         stt=omp_get_wtime()
  20.        
  21.         !$OMP DO
  22.         do nrz=1,IDT
  23.            do nry=1,IDT
  24.               do nrx=1,IDT
  25.               A(nrx,nry,nrz)=RD(modulo(nrx+nry+nrz,1000)+1)
  26.               end do
  27.            end do
  28.         end do
  29.         !$OMP END DO
  30.         write(*,*) "time=",(omp_get_wtime()-stt)/omp_get_wtick()," ticks for thread ",omp_get_thread_num()
  31.         !$OMP END PARALLEL
  32.         TTime=(omp_get_wtime()-startT)/omp_get_wtick()
  33.         write(*,*)" CPU-resources:",Ttime," ticks."
  34.  
  35.         deallocate(A)
  36.     end subroutine OMPTest2

The program above fills a large 3D array with random values taken from a predetermined list. The user is asked to set the number of threads (lines 14-16). The function omp_get_wtime() returns the elapsed wall-clock time in seconds (measured from some arbitrary point in the past), while the function omp_get_wtick() gives the number of seconds between two successive clock ticks. These functions can be used to get some timing data for each thread, but also for the entire program. For each thread, the starting time is stored in the variable stt. To protect this variable from being overwritten by each separate thread, it is declared as private to the thread (line 18: PRIVATE(stt)). As a result, each thread will have its own private copy of the stt variable.

The DO directive on line 21 tells the compiler that the following loop needs to be parallelized. Putting the !$OMP DO directive around the outer do-loop has the advantage that it minimizes the overhead produced by the parallelization (i.e., the resources required to make local copies of variables, to calculate the distribution of the workload over the different threads at the start of the loop, and to combine the results at the end of the loop).

As you can see, parallelizing a loop is rather simple. It takes only 4 additional comment lines (!$OMP PARALLEL, !$OMP DO, !$OMP END DO and !$OMP END PARALLEL) and some time figuring out which variables should be private for each thread, i.e. which variables get updated during each cycle of the loop. Loop counters you can even ignore, as these are considered private by default. In addition, the number of threads is set on another line, giving us 5 new lines of code in total. It is of course possible to go much further, but this is the basis of what you generally need.
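
To give an idea of what “figuring out the private variables” amounts to in practice, here is a small sketch (not part of the example above; tmp and total are assumed to be declared as double precision): the temporary variable must be made PRIVATE, while the sum is safely combined over all threads with a REDUCTION clause.

  total = 0.0d0
  !$OMP PARALLEL DO PRIVATE(tmp) REDUCTION(+:total)
  do nrx = 1, IDT
      tmp = RD(modulo(nrx,1000)+1)   ! tmp is overwritten in every cycle -> PRIVATE
      total = total + tmp*tmp        ! partial sums of each thread are combined -> REDUCTION
  end do
  !$OMP END PARALLEL DO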

Unfortunately, the presented example is not that computationally demanding, so it will be hard to see the full effect of the parallelization. Simply increasing the array size will not resolve this, as you will quickly run out of memory. Only with more complex operations in the loop will you clearly see the effect of the parallelization. An example of a more complex piece of code is given below (it is part of the phonon subroutine in HIVE):

  1. !setup work space for lapack
  2.         N = this%DimDynMat
  3.         LWORK = 2*N - 1
  4.         call omp_set_num_threads(this%nthreads)
  5.         chunk=(this%nkz)/(this%nthreads*5)
  6.         chunk=max(chunk,1)
  7.         !$OMP PARALLEL PRIVATE(WORK, RWORK, DM, W, RPart,IO)
  8.         allocate(DM(N,N))
  9.         allocate( WORK(2*LWORK), RWORK(3*N-2), W(N) )
  10.         !the write statement only needs to be done by a single thread, and the other threads do not need to wait for it
  11.         !$OMP SINGLE
  12.         write(uni,'(A,I0,A)') " Loop over all ",this%nkpt," q-points."
  13.         !$OMP END SINGLE NOWAIT
  14.         !we have to loop over all q-points
  15.         !$OMP DO SCHEDULE(DYNAMIC,chunk)
  16.         do nrz=1,this%nkz
  17.             do nry=1,this%nky
  18.                 do nrx=1,this%nkx
  19.                     if (this%kpointListBZ1(nrx,nry,nrz)) then
  20.                         !do nrk=1,this%nkpt
  21.                         WORK = 0.0_R_double
  22.                         RWORK = 0.0_R_double                                                
  23.                         DM(1:this%DimDynMat,1:this%DimDynMat)=this%dynmatFIpart(1:this%DimDynMat,1:this%DimDynMat) ! make a local copy
  24.                         do nri=1,this%poscar%nrions
  25.                             do nrj=1,this%poscar%nrions
  26.                                 Rpart=cmplx(0.0_R_double,0.0_R_double)
  27.                                 do ns=this%vilst(1,nri,nrj),this%vilst(2,nri,nrj)
  28.                                     Rpart=Rpart + exp(i*(dot_product(this%rvlst(1:3,ns),this%kpointList(:,nrx,nry,nrz))))
  29.                                 end do
  30.                                 Rpart=Rpart/this%mult(nri,nrj)
  31.                                 DM(((nri-1)*3)+1:((nri-1)*3)+3,((nrj-1)*3)+1:((nrj-1)*3)+3) = &
  32.                                     & DM(((nri-1)*3)+1:((nri-1)*3)+3,((nrj-1)*3)+1:((nrj-1)*3)+3)*Rpart
  33.                             end do
  34.                         end do
  35.                         call MatrixHermitianize(DM,IOS=IO)
  36.                         call ZHEEV( 'V', 'U', N, DM, N, W, WORK, LWORK, RWORK, IO )
  37.                         this%FullPhonFreqList(:,nrx,nry,nrz)=sign(sqrt(abs(W)),W)*fac
  38.                     end if
  39.                 end do
  40.             end do
  41.         end do
  42.         !$OMP END DO
  43.         !$OMP SINGLE
  44.         write(uni,'(A)') " Freeing lapack workspace."
  45.         !$OMP END SINGLE NOWAIT
  46.         deallocate( WORK, RWORK,DM,W )
  47.         !$OMP END PARALLEL

In the above code, a set of equations is solved using the LAPACK eigenvalue solver ZHEEV to obtain the energies of the phonon modes in each point of the Brillouin zone. As the calculation of the eigenvalue spectrum for each point is independent of all other points, this is extremely well suited for parallelization, so we add !$OMP PARALLEL and !$OMP END PARALLEL on lines 7 and 47. Inside this parallel section there are several variables which are recycled for every grid point, so we make them PRIVATE (cf. line 7; most of them are work arrays for the ZHEEV subroutine).

Lines 12 and 44 both contain a write statement. Without further action, each thread would perform this write action, and we would end up with multiple copies of the same line (although this would not break your code, it would look very sloppy to any user of the code). To circumvent this problem we make use of the !$OMP SINGLE directive. This directive makes sure only one thread (the first to arrive) performs the write action. Unfortunately, the SINGLE block creates an implicit barrier at which all other threads wait. To prevent this from happening, the NOWAIT clause is added at the end of the block. In this specific case, the NOWAIT clause will have only a very limited impact due to the location of the write statements, but this need not always be the case.

On line 15 the !$OMP DO directive indicates that a loop will follow which should be parallelized. Again we choose the outer loop, so as to reduce the overhead due to the parallelization procedure. We also tell the compiler how the work should be distributed, using the SCHEDULE(TYPE,CHUNK) clause. The three most commonly used types of scheduling are listed below (a sketch of the corresponding directive lines follows the list):

  1. STATIC: best suited for homogeneous workloads. The loop is split into equal pieces (of a size given by the optional parameter CHUNK; otherwise equal pieces with size = total size/#threads).
  2. DYNAMIC: better suited if the workload is not homogeneous (in this case the central if-clause on line 19 complicates things). CHUNK can again be used to define the size of the workload blocks.
  3. GUIDED: a bit like DYNAMIC, but with decreasing block sizes.
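
For the loop of the phonon example, only the SCHEDULE clause on the DO directive changes between these types; a sketch of the three variants (the chunk parameter is optional):

  !$OMP DO SCHEDULE(STATIC, chunk)   ! equal blocks of size chunk, distributed up front
  !$OMP DO SCHEDULE(DYNAMIC, chunk)  ! blocks of size chunk, handed out to threads on demand
  !$OMP DO SCHEDULE(GUIDED, chunk)   ! like DYNAMIC, but with block sizes that shrink toward chunk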

From this real-life example, it is again clear that OpenMP parallelization in Fortran can be very simple.

D. Speedup?

On my loyal sidekick (with a hyper-threaded quad-core Core i7) I was able to get the following speedups for the phonon code (the run was limited to performing only a phonon-DOS calculation):

[Figure: Speedup of the entire phonon subroutine due to parallelization of the main phonon-DOS loop.]

The above graph shows the speedup results for the two different modes for calculating the phonon-DOS. The reduced mode (DM red) uses a spectrum reduced to that of a unit cell, but needs a much denser sampling of the Brillouin zone (the second approach), and is shown by the black line. The serial calculation in this specific case took only 96 seconds, and the maximum speedup obtained was about x1.84. The red and green curves give the speedup of the calculation mode which makes use of the super-cell spectrum (DM nored, i.e. a much larger matrix to solve), and show for increasing grid sizes a maximum speedup of x2.74 (serial time: 45 seconds) and x3.43 (serial time: 395 seconds), respectively. The reason none of the setups reaches a speedup of 4 (or more) is twofold:

  1. Amdahl’s law puts an upper limit on the global speedup of a calculation, by taking into account that only part of the code is parallelized (e.g., sections writing to a single file cannot be parallelized); see the small estimate after this list.
  2. There need to be sufficient blocks of work for all threads (indicated by nkz in the plot).
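
To put a rough number on point 1 (my own back-of-the-envelope estimate, under the assumption that the best result was obtained on the 4 physical cores): Amdahl’s law states that with a fraction p of the runtime parallelized and N threads, the maximum speedup is 1/((1-p)+p/N). Inverting this relation for the measured maximum speedup of the largest DM nored calculation suggests that roughly 94% of its runtime is spent in the parallelized loop.

  program amdahl_estimate
      implicit none
      double precision :: S, p
      integer :: N
      S = 3.43d0   ! measured maximum speedup (DM nored, largest grid)
      N = 4        ! assumed number of physical cores used
      ! invert Amdahl's law S = 1/((1-p) + p/N) for the parallel fraction p
      p = (1.0d0 - 1.0d0/S) / (1.0d0 - 1.0d0/dble(N))
      write(*,'(A,F6.3)') " Estimated parallel fraction of the runtime: ", p
  end program amdahl_estimate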

In the case of the DM nored calculations, the parallelized loop clearly takes the biggest part of the calculation time, while for the DM red calculation the section generating the q-point grid also takes a large fraction of the calculation time, limiting the effect of the parallelization. An improvement here would be to also parallelize the subroutine generating the grid, but that will be future work. For now, the expensive DM nored calculations show an acceptable speedup.

 

Permanent link to this article: https://dannyvanpoucke.be/fortran-openmp1-en/

To x64 or not to x64: Installing a 64-bit fortran compiler

Current-day computers generally have 64-bit processors, and most even have 64-bit operating systems. On such systems, 32-bit programs will run fine, but 64-bit programs can make more efficient use of the underlying system. When we installed a Fortran compiler and the Code::Blocks IDE, the default Fortran compiler generated 32-bit programs. This generally is not an issue, unless you need a large amount of memory, for example to store a temporary array with 400³ double precision coordinates (as I did for a project I’m currently working on). You may first start to look for ways of increasing the stack size of your program, but you will soon discover that the problem is more profound: a 32-bit program cannot access address space beyond 4 GB. (In practice, you will generally not even reach 4 GB before running into problems.) This is because the memory address of your data is stored as a 32-bit value (2³² = 4 294 967 296 = 4 GB), so the only way out of this predicament is a “larger address”, a.k.a. 64-bit. So you need to install a new compiler capable of producing 64-bit programs.
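
To put some numbers on this (a small sketch; the array shape is my own guess at how such coordinates could be stored): a 400x400x400 grid of 3D double precision coordinates already occupies 400³ × 3 × 8 bytes ≈ 1.4 GB, so a handful of such temporary arrays quickly exhausts a 32-bit address space.

  program address_space_demo
      implicit none
      double precision, allocatable :: coords(:,:,:,:)
      integer :: ierr
      ! 3 coordinates per point of a 400x400x400 grid: roughly 1.4 GB in a single array
      allocate(coords(3,400,400,400), stat=ierr)
      if (ierr /= 0) then
          write(*,*) "Allocation failed: the (32-bit?) address space is too small."
      else
          write(*,*) "Allocated about ", 3.0d0*400**3*8.0d0/1024.0d0**3, " GB"
          deallocate(coords)
      end if
  end program address_space_demo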

  1. Installing minGW64 for code::blocks
    1. Installing the compiler
    2. Setting the PATH-variable (win10)
    3. Adding the compiler to code::blocks
  2. Upgrading Lapack to 64-bit


Permanent link to this article: https://dannyvanpoucke.be/install-fortran64bit-en/

Start to Fortran

[Figure: Code statistics for the hive3 code (Feb. 2015).]

If you are used to programming in C/C++, Java or Pascal, you probably do this using an Integrated Development Environment (IDE), such as Dev-C++/Pascal, NetBeans, Eclipse, … There are dozens of free IDEs for each of these languages. When starting to use Fortran, you are in for a bit of a surprise. There are some commercial IDEs that can handle Fortran (MS Visual Studio, or the Lahey IDE). Free Fortran IDEs are rather scarce and quite often are the result of the extension of a C++-focused IDE. This, however, does not make them less useful. Code::Blocks is such an IDE. It supports several programming and scripting languages, including C and Fortran, making it also suited for mixed-language development. In addition, this IDE has been developed for Windows, Linux and Mac OS X, making it highly portable. Furthermore, installing this IDE combined with, for example, the gcc compiler can be done quickly and without much hassle, as is explained in this excellent tutorial. In 5 steps everything is installed and you are up and running:

  1. Get a gfortran compiler at https://gcc.gnu.org/wiki/GFortran

    Go for the binaries and get the installer if you are using Windows. This will provide you with the latest build. Be careful if you are doing this while upgrading from gfortran 4.8 to 4.9 or 4.10: the latter two are known to have a (recently fixed) compiler bug related to the automatic finalization of objects. A solution to this problem is given in this post.

    UPDATE 03/02/2017: As the gcc page has changed significantly since this post was written, I suggest following the procedure described here for the installation of a 64-bit version of the compiler.

  2. Get the Code::Blocks IDE at http://www.codeblocks.org/ or http://cbfortran.sourceforge.net/ (preferred)

    Since version 13.12, the FortranProject plugin is included in the Code::Blocks installation.

  3. Setup gfortran

    Run the installer obtained at step 1…i.e. keep clicking OK until all is finished.

  4. Setup Code::Blocks for fortran
    1. Run the installer or Unzip the zip-file obtained in step 2.
    2. Run Code::Blocks and set your freshly installed GNU fortran compiler as default.
    3. Associate file types with Code::Blocks. If you are not using other IDEs, this may be an interesting idea.
    4. Go to settings, select “Compiler and Debugger”, click on “Toolchain executables” and set the correct paths.
    5. Code::blocks has been configured.
  5. Your first new fortran program
    1. Go to “File” → “New” → “Project”.
    2. Select “Fortran Application”.
    3. Follow the Wizard: provide a project folder and title.
    4. Make sure the compiler is set to “GNU Fortran Compiler”, and click Finish.
    5. A new project is now being created, containing a main file named “main.f90” (a minimal sketch of this main program is given right after this list).
    6. Click “Build”, to build this program, and then “Run”.
    7. Congratulations, your first Fortran program is a fact.
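
For reference, the generated main.f90 is essentially a minimal “hello world” program along the following lines (a sketch; the exact template text may differ between Code::Blocks versions):

  program main
      implicit none
      ! the wizard-generated template simply prints a greeting
      write(*,*) "Hello world!"
  end program main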

 

Of course any real project will contain many files, and when you start to create Fortran 2003/2008 code you will want to use “.f2003” or “.f03” instead of “.f90”. The Code::Blocks IDE is well suited for these tasks, and we will return to them later. Playing with this IDE is the only way to learn about all its options. Two really nice plugins are “Format Fortran Indent” and “Code statistics”. The first one can be used to auto-indent your Fortran code, making it easier to find those nasty missing “end” statements. The code statistics tool runs through your entire project and tells you how many lines of code you have, and how many lines contain comments.

Permanent link to this article: https://dannyvanpoucke.be/start-to-fortran-en/

CA: Coders Anonymous

It has been two weeks since I last wrote any code; today I started again.

Two weeks ago I started the, for me, daunting task of upgrading my IDE and compiler to their most recent versions. The upgrade itself went smoothly, since it basically consisted of uninstalling the old versions and installing the new ones. The big finale, recompiling my Fortran codebase, went just a little bit less smoothly. It crashed straight into a compiler bug, nicely introduced in version 4.9 of the gcc Fortran compiler, and carefully nurtured up to the version 4.10 I had just installed. The bug reads as follows:

error: internal compiler error: in gfc_conv_descriptor_data_get, at fortran/trans-array.c:145
end module TPeriodicTableModule

Clear-sounding as it is, it required some further investigation to find out what the problem actually was, and if and how it could be resolved. The problem appeared to be a rather simple one: the compiler seems to be unable to generate the finalization code for some object-based constructions involving both fixed-size and allocatable arrays, the strong suit of the Fortran language. A minimal example allowing you to bump into this compiler bug goes as follows:

 

Bug 59765   
  1. module bug59765
  2.   type TSubObject
  3.     integer, dimension(:), allocatable :: c
  4.   end type TSubObject
  5.   type TObject
  6.     type(TSubObject), dimension(1) :: u
  7.   end type TObject
  8. contains
  9.  
  10.   subroutine add(s)
  11.     class(TObject), intent(inout) :: s
  12.   end subroutine add
  13.  
  14. end module bug59765

 

The issue arises when the compiler tries to set up the deallocation of the allocatable arrays c of the TSubObject elements of the array u. Apparently the combination of the static array u and the allocatable arrays c in the elements of u results in confusion. It was suggested that the compiler wants to perform the deallocation procedure as an array operation (one of the neat tricks Fortran has up its sleeve):

deallocate(s%u(:)%c)

Instead, it should just use a normal do-loop and run over all elements of u.
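
Conceptually, the generated clean-up should instead boil down to a plain loop over the elements of u, along the lines of this sketch (using the names of the minimal example above):

  integer :: i
  do i = 1, size(s%u)
      if (allocated(s%u(i)%c)) deallocate(s%u(i)%c)
  end do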

One of the main ironies of this story is that this bug is strongly connected to object-oriented programming, a rather new concept in the world of Fortran. Although introduced in Fortran 2003, more than 10 years ago, compiler support for these features has only reached basic maturity in recent years. The problem we are facing is one in the destructor of an object: the smart compiler wants to make our life easy, and implicitly creates a good destructor for us. As with most smart solutions of modern-day life, such things have a tendency to fail when you least expect it.

However, this bug (and the fact that it persists in more recent versions of the compiler) forces us to employ good coding practice: write a destructor yourself. Where C++ has keywords for both the constructor and the destructor, the Fortran programmer, as yet, only has a keyword for a destructor: final. This finalization concept was introduced in the Fortran 2003 standard as part of the introduction of the object-oriented programming paradigm. A final procedure/function also works slightly differently from what you may be used to in, for example, C++: it is not directly callable by the programmer as an object procedure/function. A final procedure/function is only called automatically when an object is destroyed. So for those of us who also implement ‘free()‘ procedures to clean up objects at runtime, this means some extra work may be needed (I haven’t checked this in detail).
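
One pattern that should work (a sketch with my own procedure names, which I have not tested against every compiler) is to keep an explicitly callable free() type-bound procedure that performs the actual clean-up, and to let the final procedure simply delegate to it:

  type TSubObject
      integer, dimension(:), allocatable :: c
  contains
      procedure :: free => free_TSubObject   ! callable by the programmer at any time
      final :: final_TSubObject              ! called automatically when the object is destroyed
  end type TSubObject

  ! ... and in the contains section of the same module ...

  subroutine free_TSubObject(this)
      class(TSubObject) :: this
      if (allocated(this%c)) deallocate(this%c)
  end subroutine free_TSubObject

  subroutine final_TSubObject(this)
      type(TSubObject) :: this   ! a final procedure takes a type, not a class
      call this%free()
  end subroutine final_TSubObject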

So how is our example healed from bug 59765? Through the introduction of our own destructor:

  1. module fixbug59765
  2.   type TSubObject
  3.     integer, dimension(:), allocatable :: c
  4.   contains
  5.     final :: destroy_TSubObject
  6.   end type TSubObject
  7.   type TObject
  8.     type(TSubObject), dimension(1) :: u
  9.   end type TObject
  10. contains
  11.  
  12.   subroutine add(s)
  13.     class(TObject), intent(inout) :: s
  14.   end subroutine add
  15.  
  16.   subroutine destroy_TSubObject(this)
  17.     type(TSubObject) :: this !note: this needs to be a type, not a class
  18.  
  19.     if (allocated(this%c)) deallocate(this%c)
  20.   end subroutine destroy_TSubObject
  21.  
  22. end module fixbug59765

In my own code, both the TSubObject and TObject classes got their own final procedure, due to the slightly higher complexity of the objects involved. The resulting code compiled without further complaints, and what is more, it also still compiled with the recent Intel ifort compiler. Unfortunately, final procedures have only been included in the gcc compiler since version 4.9, making code containing them incompatible with gcc version 4.8 and earlier.

Permanent link to this article: https://dannyvanpoucke.be/ca-coders-anonymous/