Along with the effort of rewriting, there’s also distrust of new models, or of rewrites of existing ones. There are decades of work in the literature publishing and critiquing results from existing models, and there is… none of that for anything new. It’s much less risky, and thus easier to accept, to port an existing model to a new HPC platform or architecture than it is to adopt a new one.
Additionally, HPC is a weird little niche of design and practice at both the software and hardware levels. There are a few hundred customers who occasionally have a lot of money and usually have no money. Spinning that whole ecosystem around is difficult, and breaking a niche off the edge of it (if you decided to take climate modelling into Julia without also bringing petrochemicals, bioinformatics, automotive, etc) is a serious risk.
When NREL finally open sourced SAM (GitHub), one of the standard tools for calculating PV output based on location and other factors, a friend of mine decided to take a look at the code. On his first compilation it was clear no one had ever built it using -Wall: there were thousands of warnings. When he looked more closely he could tell it had been translated, badly, from (probably) MATLAB, and had many errors in the translation, like (this is C++)
arr[x,y]
to access a 2D coordinate in arrays - for anyone playing at home, a,b in C++ means evaluate a, then evaluate b and return its result, so this code was accessing only the y coordinate.
This would be fine if this were undergrad code, but this code had been around for a very long time (decades?), had dozens of papers based on it, and plenty of solar projects relied on it for estimating their ROI. I bring up this anecdote as a counterexample: the age of these libraries and programs does not mean they are high quality, and in fact their age lulls people into a false sense of trust that they actually implement the algorithms they claim to.
He’s since submitted many patches to resolve all the warnings and made sure it compiles with a number of compilers, but I wonder how valid the results over the years actually are; maybe they got lucky, and the simplified model they accidentally wrote turned out to be sufficient.
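To make the comma-operator bug concrete, here is a minimal, self-contained C++ sketch (an illustration of the pattern, not SAM’s actual code; the array and index names are made up):

    #include <cstdio>

    int main() {
        double arr[3][3] = {};
        arr[1][2] = 42.0;
        int x = 1, y = 2;
        // Pre-C++23, the comma inside [] is the comma operator: x is evaluated
        // and discarded, y is returned, so arr[x, y] is just arr[y] (row y),
        // not element (x, y). C++20 deprecates this; C++23 repurposes it.
        double* wrong = arr[x, y];
        // What was presumably intended:
        double right = arr[x][y];
        std::printf("wrong[2] = %g, right = %g\n", wrong[2], right);
    }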
Governments take action because of these models. They affect the lives of every person on the planet and of future generations to come. The “if it’s not broken, don’t fix it” approach doesn’t fit here. Rewriting these models could be done for the cost of a few international conferences – the flights and accommodation of the participants.
critiquing results from existing models
The models should be scrutinized, not the results they give?
Rarely have I read something so optimistic.
As someone who has interacted with people who write these models, “optimistic” is putting it lightly. I think whoever believes that rewriting a bunch of Fortran will be productive is entirely underselling both Fortran and the effort that has gone into making Fortran super fast for simulations.
Rewriting this stuff in JavaScript isn’t realistic, nor will it be fast. And any rewrite is going to have the same problem in 50 years. What, are you going to rewrite it again then? How do you know it’s the same simulation and gives the same results?
Sometimes I think we computer programmers don’t really think through the delusions we tell ourselves.
But by rewriting we can check whether the implementation follows the specification – see this as a reproducibility issue; do you recall when a bug in Excel compromised thousands of research papers? And by not changing anything we may find ourselves in a situation where no one knows how these models work, nor is able to improve them. Something similar to the banking and COBOL situation, but much worse.
The “specification” is “does this code give the same results as it always has”. HPC isn’t big on unit testing or on other forms of detailed design.
Isn’t that a problem? How do we know, then, that they follow the peer-reviewed papers they were supposed to follow?
In general, we don’t know, and in the abstract, that’s a problem (though specifically in the case of weather forecasting, “did you fail to predict a storm over this shipping lane” is a bigger deal than “did you predict this storm that actually happened for reasons that don’t stand up to scrutiny”, and many meteorological models are serviceably good). There are recent trends to push for more software engineering involvement in computational research (my own PhD topic, come back to me in three years!), and for greater discoverability and accessibility of source code used in computational research.
But turning “recent trends to push” into “international shift in opinion across a whole field of endeavour” is a slow process, much slower than the few international flights and a fancy dinner some software engineers think it should take. And bear in mind none of that requires replatforming anyone off of Fortran, which is not necessary, sufficient, desirable, feasible, or valuable to anyone outside the Rust evangelism strike force.
Most climate models are published in papers and are widely distributed. If you would like to remake these models in other languages and runtimes, you absolutely could (and perhaps find gaps in the papers along the way, but that’s a separate matter). The problem is that getting the details right is very tough here. Is your library rounding in the same places, and in the same ways, as the previous library? How accurate is its exponential arithmetic? What’s the epsilon you need to verify against to be confident that the results are correct?
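As a tiny illustration of why bit-identical reproduction is hard: even trivially “equivalent” formulations can disagree, so a rewrite has to be validated against an explicit tolerance rather than exact equality. A minimal C++ sketch (the numbers and the 1e-12 tolerance are arbitrary stand-ins, not anything from a real model):

    #include <cmath>
    #include <cstdio>

    // Relative-tolerance comparison: the epsilon you pick is a modelling
    // decision, not something the language picks for you.
    bool close_enough(double a, double b, double rel_eps) {
        return std::fabs(a - b) <= rel_eps * std::fmax(std::fabs(a), std::fabs(b));
    }

    int main() {
        double a = 1e16, b = -1e16, c = 1.0;
        double left  = (a + b) + c;   // exactly 1.0
        double right = a + (b + c);   // 0.0: b + c rounds back to -1e16
        std::printf("left=%g right=%g close? %d\n",
                    left, right, close_enough(left, right, 1e-12));
    }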
The article links CliMA as a project for Julia based climate models, but remember, most scientific computing libraries use Fortran one way or another. We’re just pushing the Fortran complexity down to LAPACK rather than up into our model code. Though that’s probably enough to greatly increase explainability, maintainability, and iterative velocity on these models.
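For what it’s worth, “pushing the complexity down to LAPACK” usually bottoms out in something like the sketch below: declare the Fortran BLAS symbol and link a system BLAS (e.g. -lblas). Details such as the trailing underscore and hidden string-length arguments vary by compiler and ABI, so treat this as an illustration, not a portable recipe:

    #include <cstdio>
    #include <vector>

    // The Fortran BLAS routine DGEMM as seen from C++: everything is passed
    // by pointer and matrices are column-major, as the Fortran convention dictates.
    extern "C" void dgemm_(const char* transa, const char* transb,
                           const int* m, const int* n, const int* k,
                           const double* alpha, const double* a, const int* lda,
                           const double* b, const int* ldb,
                           const double* beta, double* c, const int* ldc);

    int main() {
        const int n = 2;
        const double one = 1.0, zero = 0.0;
        // Column-major 2x2 matrices: A = [[1 2],[3 4]], B = [[5 6],[7 8]]
        std::vector<double> A = {1, 3, 2, 4};
        std::vector<double> B = {5, 7, 6, 8};
        std::vector<double> C(4, 0.0);
        dgemm_("N", "N", &n, &n, &n, &one, A.data(), &n, B.data(), &n,
               &zero, C.data(), &n);
        // C = A*B = [[19 22],[43 50]], stored column-major.
        std::printf("C = [[%g %g], [%g %g]]\n", C[0], C[2], C[1], C[3]);
    }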
The “if it’s not broken, don’t fix it” approach doesn’t fit here.
What are you even trying to say here? If it’s not broken we should rewrite it because… reasons? It’s not broken, so why would we waste public money rewriting it when we could use that money to further improve its ability to help us?
Rewriting these models could be done for the cost of a few international conferences – the flights and accommodation of the participants.
The article is low on these specific details and I’m not too familiar with climate modelling, but I bet that the models aren’t just fancy Excel sheets on steroids – there’s at least one large iterative solver involved, for example, and these things account for most of the Fortran code. In that case, this estimate is off by at least one order of magnitude.
Even if it weren’t, and a few international conferences and flights were all it took, what actual problem would this solve? This isn’t a mobile app. If a 30-year-old piece of code is still in use right now, that’s because quite a stack of taxpayer money has been spent on solving that particular problem (by which I mean several people have spent 3-5 of their most productive years solving it), and the solution is still satisfactory. Why spend money on something that has not been a problem for 30 years instead of spending it on all the problems we still have?
I was so excited to see the conclusion was not what I expected. Idealism tempered with “you have to get there from here”.
I think that a lot of SEs have a misleading view of what scientific programming and infrastructure are, and a bias against “old languages” that is not common in their field of predilection. Every time I have been confronted with Fortran, it was a codebase regularly updated to a new-ish Fortran standard. Those models are sometimes not easy to wrap your head around, nor to implement efficiently. Especially in our current world, where linear algebra is an underlying base for a lot of algorithms and concepts, Fortran would have been a good candidate to be at the cutting edge instead of, well, C++. Julia has brought back the array/matrix as a core component.
Looking at the alternatives proposed in the article: Chapel is unknown outside some niches. Python is basically an API/DSL over C/C++/Fortran/Rust libraries. Julia is the only one bringing a lot of niceties to the table – performance, Unicode handling and multiple dispatch, to cite a few. R is totally dependent on C++ for anything heavy, with Rcpp integration leading the movement.
Sometimes I really hope that Python will lose the popularity contest to Julia (not going to happen for a few years at least before we see Julia taking over Python in some niches), and before that I hope that Fortran can make a comeback too. Fortran is not suited to a lot of current trends, though; for example, I don’t think NLP or bioinformatics DNA processing will gain a lot from Fortran, due to limitations in string/Unicode handling to begin with.
I think that a lot of SEs have a misleading view of what scientific programming and infrastructure are, and a bias against “old languages” that is not common in their field of predilection.
This! Software engineers and numerical analysis don’t mix.
I actually think this is fairly unfortunate. There are a lot of numerical algorithms that software engineers can use to unlock new features in their code or write better software, but the huge social differences between the groups create a large wall to collaboration.
Interesting post about RESURRECTING FORTRAN
The author of the post works at Cray, so he would have access to the very best Fortran developers in the world (along with the best compilers and OSes for running it) and knows exactly why climate models are still done in Fortran. He also knows how to write a headline that gets a lot of clicks :).
I used to work there, and got to BS with Bill Long at lunch about Fortran (not the Seattle office). Talking to Fortran compiler guys is fun.
Fortran is cool in that the Cray Fortran compiler had ENV switches for, well, basically everything, and I’m not kidding. So it’s “easy” to tune every single compilation to what you need it to do. And you can mix old and new Fortran in the same binary; try that with C++/C. Rust is only now approaching the kind of longevity Fortran has already had.
A few things:
Why do we care when a computer language was created? Is JavaScript better suited for climate modelling because it is newer?
Most of scientific computing heavily relies on Fortran. Some code has been rewritten in other languages, but I do not see the point of porting Fortran to C++. Maybe porting these code bases to Julia, Rust or Zig becomes desirable going forward. Many of the people I know in the scientific community write Fortran because it works for their domain. You cannot sell them additional complexity (look at Rust now) if the outcome for their use case is the same.
The author fails to mention that Python is a glue language; without C++/C and, surprise surprise, Fortran, half of the Python ecosystem would be dead (NumPy, Pandas, etc.).
I am waiting to see if anything emerges in climate modelling in the coming decades other than Fortran.
The title is disingenuous; saying the Fortran of now is the Fortran of 1957 is wrong. It is like saying C++ is from the early ’80s, or that C is from the ’70s (arguably more true out of the three).
Nor did the author compare against commercial Fortran compilers. I understand the motives for the piece, but any effort to use different languages and formalisms in this space should center on reproducibility, provability, stability, etc. It should be about correctness and how much we can trust the results, not raw performance.
Wrong answers faster are a hindrance to preventing catastrophe.
This article is attacking a strawman using lies.
The chief complaint with Fortran is not just that it’s unreadable but also that it’s buggy and slow, which it is. Being less readable than ASM is just the cherry on top of the mound of ****.
The very benchmarks the author cites show that Fortran is slower than C++ and Rust in all tasks and slower than C in all but one. Conceptually, Rust, C++, Fortran and C can aim for about the same speed. C and C++ have the advantage of native APIs and ABIs designed with them in mind; Rust has the advantage of safety when doing weird stuff; C++ has the advantage of library ecosystem, compile-time abstraction and abstraction in general; C has simplicity. Fortran is inferior to all three in every way imaginable.
It should be noted that the speed difference here is very significant, with Fortran being 2-10x slower than the leader on virtually all problems; when the results are taken in aggregate, it barely beats C# (you know, the .NET language made to develop web apps): https://benchmarksgame-team.pages.debian.net/benchmarksgame/which-programs-are-fastest.html
And again, these are the benchmarks the author references, so I assume they are better than average.
Fortran only gets written in fields that are status-driven, where skill and merit are irrelevant; it’s used to keep the “old guard” in academic positions.
If you want to know what the guys who need efficiency are doing (AWS, GCP, Cloudflare, Dropbox, etc.): they are doing C++ and some C, and some are considering some Rust. If you want to know what the guys who need fast complex math are doing (hedge funds, exchanges, brokers, applied ML services, etc.): they are doing C++ and some C.
Nobody is arguing for turning Fortran code into Python; they are arguing for turning it into code that works as intended, can be read by normal humans, and compiles to something that is 2-10x as fast.
You know that LAPACK (itself based on BLAS which is also written in Fortran) is written in Fortran 90 right? Almost all scientific code running on every platform used by every academy, government, or corporation is using LAPACK through their programming language/runtime of choice. If you want to push scientific computing out of the Fortran orbit, there’s a lot of work to be done.
LAPACK is not very widely used for compute-intensive problems. CUDA doesn’t support it; it supports CUBLAS (which is written in C/ASM), so that’s 90% of high-speed computing ruled out.
Then you have the CPU-based stuff, for which LAPACK will be used, but not the original implementation – rather, e.g., Intel’s MKL, or whatever the guys with supercomputers are using. LAPACK is more of a standard than a library, and people have their own implementations.
The Fortran one is indeed used by… ahem, who exactly? A few procedures on Android phones that lack GPUs?
I’m not talking about SOTA. I’m talking about, well, every other application of linear algebra. If you think everything but Android has moved to using CUDA and MKL, I think you need to take a broader look at academia and industry. ARM cores don’t even have MKL available.
I’m not sure if you’re intentionally dense, but let me try to take a stab at this again:
For applications where speed matters and where complex linear algebra is needed (e.g. what the author is talking about), NOBODY is using the default LAPACK implementation.
For applications where you need 2-3 LAPACK function calls every hour, LAPACK is a reasonable choice, since it’s lightweight, compatible with a lot of platforms, and hard enough to write that nobody has bothered to write a free, open-source, cross-hardware replacement.
However, 99% of usage is not 99% of devices, so the fact that LAPACK is running on most devices doesn’t matter, since those devices run <1% of all LAPACK-interface-based computations.
This goes back to my point that Fortran is slow and buggy, so people who have skin in the game don’t use it for mathematical applications. It might be reasonable for a simple decision tree used in Bejeweled, but not, e.g., for meteorological models (at least not if the meteorologists were concerned about speed or accuracy – but see the problem of responsibility/skin-in-the-game).
You seem to be attacking a strawman of my position here. I’m not arguing that libraries written in Fortran don’t work anymore, just that they are very bad – very bad in that they are slower than alternatives written in Rust, C++ and C, and more on par with fast GCed languages (e.g. C#, Java, Swift).
I’m not sure if you’re intentionally dense, but let me try to take a stab at this again:
Not everyone on the internet is trying to fight you, please relax. Assume good faith.
I was responding in particular to your claim that we should just rewrite these routines out of Fortran. When we need to use specific CPU or GPU features, we reach for MKL or CUBLAS. For applications that need the speed or CPU/GPU optimizations, this is fine. For all other applications, Fortran is sufficient. I’m only saying that rewriting Fortran into other languages is a lot of work for dubious gain. Folks that need the extra speed can get it. It sounds like we largely agree on this.
I am a bit dubious about your claim that Fortran is kept alive by academic status-seekers, because it assumes that industry is smart and efficient while academia is bureaucratic and inefficient, and it leans on results from the Benchmarks Game to make the point. I realize TFA linked to these benchmarks, but I took a cursory glance at them, and some of them don’t even parallelize across CPUs and some don’t even compile! Given that Fortran is a niche language, an online benchmark shootout is probably not the best place to judge it. That said, I don’t have a justified idea of Fortran’s numbers either, so I can’t do anything more than say I’m dubious.
Based in part on the comment orib made, and looking at some of the code myself, I actually agree with you that those benchmarks might not be a fair comparison.
I guess the thing I still disagree on is that there’s little purpose to most scientific applications if people can’t easily read and understand their code. Especially in domains where practical applications are 30 years away, I feel like it would make sense for third parties to be able to validate the logic as easily as possible.
Much like I would protest to someone using mathematical notation from the late 19th century, even though in principle it would be sufficient for a vast swath of all the equations being represented in physics.
The chief complaint with Fortran is not just that it’s unreadable but also that it’s buggy and slow, which it is.
Prove it: how is it buggy and slow for its domain? Give 3 examples each of buggy and slow; that should be easy for you, given how determined your claim is. Showing the benchmarks game isn’t showing real-world code/simulations in any language. Matrix multiplication is what you want to benchmark, so why are you using benchmark “games”? Those “games” are hardly readable after people golf them to insanity. If your argument is that I should trust anything from those “games”, well, I’ve got ice cubes in Antarctica to sell you.
And demonstrate how the restrict keyword in C, which is relatively recent mind you, can overcome what Fortran can guarantee inherently; how that affects aliasing and ultimately the generated code; and how that squares with your statement that Fortran is inferior to all of C/C++/Rust. Have you ever talked to a scientist writing Fortran for simulation? Ever asked them why? Methinks the answer to those last two questions is no. Because I have, and the people writing Fortran for simulations are insanely smart. And bear in mind the goal isn’t 100% “it benchmarks faster than language xyz”; it’s “I run this simulation for $N months/weeks on end, it needs to be fast enough, and I’ll profile it to see what could be improved”. And for each supercomputer they rebuild and retune every library and object for the new processor as needed. I gotta be honest, the amount of ignorance in the general computer programming population about how traditional HPC works is disheartening.
And how are simulations being run by scientists using the literal Fortran COMPLEX type not, by definition, “guys that need fast complex math”? Fortran specifically handles complex math better than C. My mind boggles at some of the statements here.
Less supposition and hearsay and more demonstrable proof, please; this comment is atrocious for having basically no depth or substance behind its claims.
Think it’s time for me to call Lobsters quits.
FWIW, every other commenter, except for the one you replied to, agreed with your position on Fortran.
I provided examples of Fortran being bad within the terms the author set for himself. Have you heard of arbitrary demands for rigor? If every time an opinion you hold is challenged you start clamouring for more evidence instead of considering whether or not you have any counter-evidence, well, you see the problem.
Are academics with significant clout even writing code these days? I assume they have a small team of grad students and postdocs to do that in their lab for them… Is the fact that everything is written in Fortran the reason they are forced to maintain old code?
Fortran only gets written in fields that are status-driven
Our industry isn’t immune to status-driven endeavors. Think of all the “bad” technology that gets adopted because FAANG uses it, or the proliferation of flavor-of-the-month frameworks in the web space.
[Comment removed by author]
https://blog.janestreet.com/why-ocaml/
Together with the two parallel comments I’m going to conclude you don’t know what you’re talking about.
Even if his assertion were true, assuming that all ‘complex math’ is the same and a language that is good for one is good for another is a pretty good indication that you should disregard the post. For example, a lot of financial regulation describes permitted errors in terms of decimal digits and so to comply you must perform the calculations in BCD. If you’re doing scientific computing in BCD, you have serious problems. Conversely, a lot of scientific computing needs the full range of IEEE floating point rounding modes and so on, yet this is rarely if ever needed for financial analysis.
With C99, C grew the full support for floating-point weirdness that you need to be able to port Fortran code to C, but so few people care about this in C that most C compilers are only now starting to grow acceptable support (GCC has had it for a while, LLVM is gaining it now; in both cases the majority of the work is driven by the needs of the Fortran front end and then just exposed in C). Fortran has much stricter restrictions on aliasing, which make things like autovectorisation far easier than for C/C++. C/C++ developers now use modern OpenMP vectorisation pragmas to tell the compiler what assumptions it is allowed to make (and is unable to check, so miscompiles may happen if you get it wrong), whereas a half-decent Fortran compiler can get the same information directly from the source.
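A small C++ sketch of that aliasing point (Fortran dummy array arguments get the no-overlap guarantee by default; in C/C++ you have to assert it yourself and the compiler cannot check you; __restrict is a common compiler extension, restrict is the C99 spelling, and the OpenMP flag names below are the usual GCC/Clang ones):

    #include <cstddef>

    // Without extra information the compiler must assume x and y might overlap,
    // which constrains how aggressively it can vectorise this loop.
    void axpy_conservative(std::size_t n, double a, const double* x, double* y) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] += a * x[i];
    }

    // __restrict promises the arrays do not alias, the same promise a Fortran
    // routine gets for free on its dummy arguments. If the promise is false,
    // the behaviour is undefined.
    void axpy_restrict(std::size_t n, double a,
                       const double* __restrict x, double* __restrict y) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] += a * x[i];
    }

    // The OpenMP route mentioned above: the pragma tells the compiler to
    // vectorise under assumptions it cannot verify (build with -fopenmp or
    // -fopenmp-simd on GCC/Clang).
    void axpy_omp(std::size_t n, double a, const double* x, double* y) {
        #pragma omp simd
        for (std::size_t i = 0; i < n; ++i)
            y[i] += a * x[i];
    }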
I’m wondering how many languages like Chapel Fortran has outlived. In 10 years there will still be Fortran and OpenMPI; not sure about Chapel =)
In my opinion, Climate scientists did not lose the software sweepstakes
I find this weird. A tool can be a good (or the best) tool for a job (and I guess Fortran is), and the developers can still be unhappy because it’s a pain to write and clunky. I guess there’s a reason the rest of the world doesn’t use it; the rest of the world also doesn’t use Assembler for most tasks, because it’s tedious.
Interesting, well-argued and informative.