Old, not particularly good article remains old and not particularly good. “You’re wrong because my one unpublished benchmark says so.”
[Comment removed by author]
The content is also quite low quality.
Talk of a benchmark with no code, so there is no way to verify how legitimate the implementations are. And talk of C without reference to the restrict keyword, which lets you tell the compiler exactly what the author is talking about (that keyword was added in C99).
On that note, the title should have a (2006) in it. The landscape of C has changed quite a bit; a whole new language revision was released in that time, and compilers have gotten much smarter.
The author appears to be unaware of C’s “restrict” keyword which addresses the specific optimisation issue he raises.
The use of the restrict keyword in C, in principle, allows non-obtuse C to achieve the same performance as the same program written in Fortran.
Besides that, there have been compiler flags to that effect since well before C99, and performance critical code is almost always compiled with those flags turned on.
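To make the restrict point concrete, here is a minimal sketch (the function and array names are made up for illustration, not taken from the article):

```c
#include <stddef.h>

/* Without restrict, the compiler must assume `out` might alias `a` or `b`,
 * so it has to reload a[i] and b[i] after every store to out[i].
 * With restrict, the programmer promises the pointers don't alias, and the
 * loop can be vectorized much like the equivalent Fortran array operation. */
void add(size_t n, double *restrict out,
         const double *restrict a, const double *restrict b)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}
```

Compilers expose the same promise globally via flags like GCC's -fstrict-aliasing, which is the kind of switch the parent comment is referring to.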
This article hasn’t aged well, and for some reason the formatting is terrible. (I read it a few years ago, and it wasn’t as bad.)
I’ve reached the point where I refuse to talk about languages as being fast or slow. What seems to be true may not be– there may be someone who can use language features that I don’t know about– and what is true now will not be, 10 years from now.
I think that there was a much stronger C++ bias in 2006. Back then, there were a lot of people using C++ “because it’s fast” for web programming projects where it was inappropriate. And I agree that the quality of the programmers has always mattered more than traits of the language, whether we’re talking about performance or aesthetics. That will always be true. I’ve seen dozens of businesses fail because they hired bad programmers, and none fail because they chose a language that “wasn’t fast enough” (if you have good engineers, they can rewrite performance-critical stuff in other languages; it’s not a big deal.) That being said, I find it a bit audacious to claim that “C and C++ suck rocks as languages for numerical computing”. To my knowledge, that is not true. In fact, most of the scientific programmers whom I know use Numpy/Scipy, which use a lot of C libraries (and probably some Fortran).
Thank you for your comment.
It seems they changed their site at some point (even the link scheme); I could still figure out the old link, which is far better styled.
Interestingly enough, D is challenging C/C++ in a remarkable manner….
Anything that can be computed at compile time is…. So in principle, for example, an extremely long-running task could be instantaneous at run time…. with a very long compile time. (This is under the control of the programmer.)
A more practical example of this power is….
For example, D now has the fastest regex engine: https://dlang.org/regular-expression.html
There are many examples within the standard library of the ability to easily and understandably special-case template instantiations for the particular types used “on the day”.
Can you substantiate that claim?
(This is from last year’s DConf; I believe this library is now part of the standard library.)
Looking at the video again, it seems it is not the fastest for all tasks.
That is impressive, but two benchmarks really aren’t sufficient to justify slinging around claims like “the fastest regex engine,” especially if one of the benchmarks is as simple as regex-dna. Allow me to demonstrate. Consider Rust’s regex library, compiled normally, running single threaded (like D’s benchmark):
$ time ./regex-dna < /data/regex-dna.fasta
Now watch what happens when I enable simd acceleration (which, to be clear, is using a special algorithm):
$ time ./regex-dna < /data/regex-dna.fasta
Now compare that with dmd (compile line taken from build.sh, I’m not a D programmer, so please correct me if I have this wrong):
$ dmd -O -release -inline -version=ct_regex d_dna.d
$ time ./d_dna /data/regex-dna.fasta
Both Rust’s regex engine and Intel’s Hyperscan regex library have this particular optimization, which applies especially well to the regex-dna benchmark, but doesn’t apply to all regexes.
PCRE2 is another regex library that is screaming fast when using its JIT.
Long running tasks generally depend on data you can only gather at runtime, like the contents of a file of the user’s choosing. I can’t really think of any long running tasks which could be done at compile time, do you have any examples?
In the “silly” category: factorizing the product of two large primes.
In the “not so silly” category: precomputing and memoizing values, e.g. primes, n-factorial… In C you’d write a C program to generate the values and print them as a const array to #include.
In D you can do that more directly.