When I read “time safety”, I mentally prepared for a discussion of timing attacks, which some libraries available in both C and Rust can harden potentially affected code against. But no! This was the “young languages not going to survive” angle.
So let me offer a prophecy: Rust is here to stay. Yes, there will be other languages that may offer even more enticing tradeoffs (one should hope), but just as C will survive competition from Rust, Rust will survive competition from those new languages.
I thought the author meant spatial/temporal safety. Papers on the topic talk about spatial errors (eg buffer overruns) and temporal errors (eg use-after-frees). Then, there are time-related attacks like time-of-check-to-time-of-use attacks, timing channels, etc. The phrase @Hales was looking for is software longevity.
I think most of us had the same thought(s).
Slack happened to publish a post about adoption of new technologies that is sort of interesting on this front.
I think Rust and Go (and some other newish langs like TypeScript) fill niches that will give them an active community for a good while, but even if we assume they go out of fashion and stop growing, there is more than enough code already written in them to economically justify companies keeping them maintained a long time. That seems like more or less how older langs got established, too.
If you need examples, Docker, Kubernetes, and a ton of internal code at Google, CloudFlare, etc. are in Go, and Firefox and various companies’ critical infra depends on Rust. Rust also benefits from the yet-more-established LLVM project handling a lot of lower-level bits.
For comparison, in the early 2000’s there was incredible hype around XML and fancy object models like COM. They turned out not to be the future: HTML evolved into HTML5 instead of XHTML; REST and JSON ended up a more popular base for APIs than XML-RPC or SOAP; and Mozilla ended up removing a lot of XPCOM use from Gecko. But you can still use XML, SOAP, or COM! And, for that matter, Fortran, Lisp, and awk! Putting down enough roots to ensure a technology’s long-term survival is different from the tech remaining super popular forever.
I can’t learn everything that’s coming out, but I do find today’s diversity of languages and compilers heartening. A couple decades back it felt like programming was kind of “stuck” because a decent compiler was a huge project and network effects made only a few languages and platforms viable. I think a lot of things have come together to change that, which is great, since we collectively learn by experimentation.
Another way of looking at it is generational. How many people are learning C vs C++ vs Rust? I’m prepared to go out on a limb and say the next generation of “non-GC” language users will be a predominantly Rust crowd. That’s the generation that will be writing new software.
Niche language changes happen all the time; look at the death of Perl in favor of Python. PHP to NodeJS. Now: C/C++ to Rust.
This does not account for the young folks currently learning and using C or C++. Or for the Perl coders currently working in banks. PHP is alive and well, too. Sure, there are certain shifts happening, but the death of all of those technologies has been greatly exaggerated.
If you want to emulate current software in the future, use virtual machines.
From a headline going around lobste.rs some time ago: “Memory Safety Bugs Form 70 Percent Of Vulnerabilities” https://www.i-programmer.info/news/149-security/12538-memory-safety-bugs-form-70-percent-of-vulnerabilities.html
Is it really worth keeping broken stuff around? 🤔
Philosophically I’m not sure if using virtual machines to get old programs working is any different to having to modify code to get old programs working. Sometimes you are lucky and a standard VM setup fixes the problem, other times it’s much more complicated (eg games like Midtown Madness). Sometimes you are lucky and the code changes are simple, other times they’re complex and insidious.
That’s looking at software from now and ignoring the past. An equally accurate headline is “90% of software ever made broken due to changes over time”.
To flip your line on its head: is it worth trying to make memory-safe (or sim.) programs if they won’t stick around?
What’s the difference between code breaking or becoming insecure from memory-safety issues versus becoming broken or insecure with time (external changes)? Both lead to the same result.
Memory unsafe code can cause huge harm in the real world through exploitation and information theft. I’d rather my browser from 10 years ago didn’t work than my current browser getting my identity stolen or my parent’s computer infested with ransomware.
Yes, they will be memory safe while they are used.
Memory-safe software won’t become less safe with time.
Software is a tool to be used now, first and foremost. I may or may not care whether the Firefox I use today still compiles in 50 years, but I sure care that it works and doesn’t crash today.
Disagree. New kinds of attacks will be developed, just like they always have.
They might have meant less memory-safe over time.
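To make the property being debated here concrete, here is a minimal Rust sketch (my own illustration, not code from the thread): the dangling-reference / use-after-free class of bug is rejected by the compiler before the program ever runs, and that guarantee does not erode as the code ages, whereas the external interfaces around a program still can.

```rust
// Minimal sketch of the compile-time guarantee under discussion.
fn main() {
    let kept: &str;
    {
        let short_lived = String::from("freed at the end of this block");
        // If the next line replaced the one after it, rustc would refuse to
        // compile with "`short_lived` does not live long enough" -- the
        // dangling-reference bug is ruled out before the program ever runs.
        // kept = short_lived.as_str();
        kept = "string literals live for the whole program";
        println!("inside the block: {}", short_lived);
    }
    println!("after the block: {}", kept);
}
```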
My Go programs from ~7 years ago still compile and run, with one example needing a small non-language-related change.
I don’t have any Rust programs that have been untouched for that long, but if you check out 0.1.2 of ripgrep (released ~3.5 years ago), then it compiles just fine after running cargo update -p rustc-serialize.
I can see how time safety could be an issue with some languages, but I think both Rust and Go are at a point where one can be fairly confident that they will be around ten years from now.
In any case, this strikes me as a primitivist argument. If time safety is more important than memory safety, then that rules out every possible solution to memory safety that involves a new language.
I mean, I don’t get why this isn’t an obvious case of risk profiles. At least, it would be better phrased that way than bombastically claiming time safety as some kind of trump card over some very important kinds of progress in programming languages.
It should also be noted that (from what I understand) the Rust and Go toolchains provide a much cleaner path for upgrading code.
In my personal projects I use some C libraries that were written before I was born. The libraries compile and run just fine, but if I wanted to port them to a modern (post-C99) dialect of C it would take a lot of work, and there would surely be some mistakes that slip past the compiler.
Meanwhile Rust and Go have strong tooling built into the language infrastructure that will automate a majority of the work in porting to newer language specifications. In a way, software developed in these languages becomes more robust against the test of time, because the burden for upgrading a codebase does not fall solely on the programmer.
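As a hedged illustration of that point, taking the Rust half as the example (a made-up snippet, not one mentioned in the thread): code written in the older 2015-edition style keeps compiling under its declared edition, and the bundled migration tooling (cargo fix --edition) can rewrite most of it mechanically when the author decides to move forward.

```rust
// A 2015-edition-style snippet: bare trait objects like `Box<Display>` were
// the normal spelling back then. Under edition 2015 this still compiles today
// (with a deprecation warning); `cargo fix --edition` rewrites it to the
// `Box<dyn Display>` form that newer editions expect (a hard error in the
// 2021 edition), so most of the porting work is automated rather than manual.
use std::fmt::Display;

fn print_boxed(value: Box<Display>) {
    println!("{}", value);
}

fn main() {
    print_boxed(Box::new(42));
    print_boxed(Box::new("still compiles under edition 2015"));
}
```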
Is there actual data supporting the notion that code in memory-safe languages rots more than C code?
Java and JavaScript have been around for long enough to have a track record, and both are very good at keeping old code working. Rust and Go haven’t been around for as long but both seem to take care to keep old code working.
A great point that, IMO, should flag this post as inaccurate.
That is one area in which Java has impressed me. Not sure about the latest versions, but if you refrained from internal APIs, your program wouldn’t break with a new version.
That was _not_ true for C++, which was an older language in 2005 when I had to make sure that C++ code would work with newer Windows/AIX versions. I think that the priorities of the language designers matter a lot.
Go seems relatively conservative, with only slow language and library evolution - so to an outsider like me, it looks relatively time safe.
Rust changes a lot but is committed to keeping old code working. But idiomatic Rust and the library ecosystem are still evolving rapidly. Therefore, I think your code might have a fair chance to keep working in 10 years, but only with ancient library versions.
I just incorporated C++ code I wrote 18 years ago into a new project without effort. The C++ core language API does have this property. External libraries on the other hand might not, but this is also true for Java. Java has one advantage here in that the graphics APIs are part of the core language.
Go has an extreme compatibility guarantee. Rust’s is much weaker - compiling older Rust code with a new toolchain can require quite a bit of fiddling (much less once you know the language well).
I’m not sure why you say it’s weaker for Rust. This is the point of their “editions”. https://doc.rust-lang.org/edition-guide/editions/index.html
Software authors have to opt-in to breaking compatibility.
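A minimal sketch of what that opt-in looks like in practice (my own example): under the crate’s declared edition, new compiler releases keep accepting the old meaning; the code only breaks if the author edits the edition field in Cargo.toml.

```rust
// In a crate whose Cargo.toml says `edition = "2015"`, `async` is still an
// ordinary identifier, so this compiles with the newest stable rustc.
// Only when the author opts in by changing the manifest to `edition = "2018"`
// (or later) does `async` become a keyword and this stop building -- the
// breakage is chosen by the author, not imposed by a toolchain upgrade.
fn main() {
    let async = "just a variable name in the 2015 edition";
    println!("{}", async);
}
```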
I say that because I’ve tried to get a rust program written 18 months ago to compile, and had to fight all kinds of mess.
This is largely because in order to target the microcontroller I was using, you needed to run nightly. Original author didn’t pin the nightly version they were using, though.
I think you should have mentioned the circumstances in the initial comment.
That said, go as a language does not evolve much and that makes it a better platform right now if you are change averse.
Go doesn’t have a “this may (will) be broken in 6 months” mode; rust does. That’s the weaker guarantee.
Personally I like rust better than go, but that wasn’t the question being asked.
Sorry for the “should”. I still think that misrepresents stable rust. Probably there is some misunderstanding between us. Or I was very lucky. So you think that it is common that your code breaks with a new stable update? This happened once years ago to me with an obscure crate because of a fixed compiler bug and the language team had offered a fix to me (!!!) as a pull request upfront.
For stable rust, they strive for high compatibility. As with go, they may fix for safety reasons and some others. Otherwise, the language itself has strong stability guarantees and the language team had upheld those well as far as I know.
For libraries, it’s different. The ecosystem is evolving.
As an outsider trying to get existing rust code to run, there isn’t an obvious distinction.
The existence of stuff that breaks under the “rust” name is significant.
Possibly worse because for a long time nightly was required to target microcontrollers, so lots of things used unstable features.
Which distinction?
Between the language and the ecosystem? => I somewhat agree. The language being stable is an important ingredient in getting the ecosystem to mature, though.
Between stable and nightly? => well, using nightly screams, “I care more about some fancy new experimental feature than about stability.”
As an outsider, I come to rust when there’s an existing project I’d like to get running.
In the rust projects I’ve looked at, the ratio of ‘needs nightly’ to ‘runs on stable’ is about 50/50.
To me, that means that the stability of rust is only as good as the stability of nightly. That would be different if I were writing my own rust, but I’m not.
As a person who wants to compile software written in rust: It’s quite likely that the software doesn’t compile without hacking it.
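For what it’s worth, here is a small sketch of why the nightly dependence bites (an illustrative crate, not one from the thread): a single feature attribute ties the whole build to the nightly channel, where the gated items carry no stability promise.

```rust
// The one line below is what makes a crate "needs nightly": stable rustc
// rejects the whole crate (error E0554), and the gated API itself carries no
// stability promise, so it can change or disappear between nightlies.
#![feature(test)]
extern crate test;

pub fn sum_to(n: u64) -> u64 {
    (0..=n).sum()
}

#[cfg(test)]
mod benches {
    use super::sum_to;
    use test::Bencher;

    // `#[bench]` and `test::Bencher` are only reachable behind the gate.
    #[bench]
    fn bench_sum(b: &mut Bencher) {
        b.iter(|| sum_to(1_000));
    }
}
```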
I understand your point now.
I think that you are just exposed to different parts of the ecosystem. I don’t know what is more typical.
I think that if your perspective is typical, that’s quite problematic and I have been blind to that before.
This idea of being able to have your code still work when you want to show it to your kids also leads me to thinking about scientific and academic software and publishing. A lot of papers out there are hard to reproduce because the code isn’t available, or even if it is available there isn’t a documented list of the versions of every dependency. People include the seed values they used for random number generators in their Monte Carlo problems sometimes without actually listing the exact version of everything else they were using, which I think shows that people do care about reproducibility but just don’t really know how to get it.
I should be able to pick up a paper and reproduce exactly what the authors did 20 years later, 40 years later, 60 years later. There are lots of physical science experiments you can still reproduce hundreds of years after they were first carried out.
I recently took a job at a lab and hope to help solve this exact problem eventually. I was hoping to attack it with guix (or nix).
I think virtual machine images will be the de facto way of resolving this. Problem is long-term storage, among other things.
Virtual machines are one of many methods to get old code working. They’re not a full solution, at least not in my experience (some programs are still a pain/impossible).
For mainstream stuff that’s not too far gone: running old Windows or DOS in a VM is not that hard. Now look to more obscure systems and hardware, eg game consoles, and how poor some of their emulation is (ie many programs/games not working).
Given how complex modern operating system stacks are: creating this whole environment and getting it to work reliably, just for one program to run, is going to get harder and harder. Some language environments are essentially operating systems of their own; and others are VMs themselves. OS in OS, VM in VM. Possible to set up, but progressively harder.
True, but we need the underlying system being virtualised to itself be both thoroughly specified, and simple enough for new implementations to be both built and checked against specification. The closest thing we have to this would probably be JVM, but it’s not so small these days and a bit too high-level.
Agree 100%. This is a big issue, and it’s going to get bigger and bigger as more and more research is going to be essentially the output of random Python scripts, written by researchers who really aren’t programmers.
Also, I remember years ago I was trying to compile C code written in the ’80s with a modern version of clang. I had to fiddle quite a while with the CFLAGS to get the pre-C90-style code to compile. Very large portions of the code are not valid C anymore and need explicit support from the compiler.
My experience with “digital archeology” and raising old C programs from the dead confirms this. You can write future-proof C, but then you can write future-proof anything, as long as a spec and/or open-source compiler exists for it.
“In contrast my old C projects from 5-8 years ago still compile and run”
You just made a great argument for Ada. Its standards range from 1983 to 2012. Companies that chose it had time safety (your need), more memory safety, better support for reusing/integrating code (your need), and built-in concurrency.
“newer and better Rusts and Gos are likely to form in the years ahead”
This is sort of misleading. You’re talking about how your C code lasted 8 years or so. You’re implying other languages might not have that effect. Then, you bring up that new languages might be invented or get popular. That doesn’t seem to matter for your argument, given it happened after C was popular, too.
The real question is whether anything better than C will be around for over a decade. Several already have been. One sign they’ll keep getting supported somehow is if a huge tech company pushes them, with lots of enterprises writing software in them. They stick around a long time. COBOL, C/C++, Java, VB/C# in .NET, and JavaScript are already there. Go has a good chance of it given it was designed for cheap labor with good tooling. Its simplicity vs other platforms means new maintainers could keep it alive if it gets abandoned, too. Much like FreeBASIC and Free Pascal w/ Lazarus took over for QBasic and Delphi respectively.
A bit of a counterpoint to the new language thing: Javascript from 10 years ago just works! I’m pretty sure this is the same for Haskell, Java, and probably PHP as well.
There are definitely languages that break a lot of their stuff, but it’s not impossible to keep stuff working out of the box.
That being said, there is a certain magic to “ANSI C” stuff feeling like it will always compile in any environment until the end of time
Not all JavaScript from 10 years ago runs. There is a fairly steady stream of browser APIs as well as base JS APIs that are deprecated. I just encountered this yesterday at work with Chrome 80 having removed a browser API that we use for debugging remote Chrome apps.
The thought of trying to compile Haskell from last year fills me with existential horror, never mind Haskell from 10 years ago. Though, this is more a problem with the ecosystem and dependency management rather than the language itself.
It’s good to hear there’s some stability out there. I don’t think I use anything Haskellish; I have definitely encountered some Java code that can’t be run any more (a Palm Pilot 2 ARM game called CounterSheep, IIRC), but generally desktop stuff has been OK, and PHP’s practical culture seems to centre a lot around not rocking existing stuff/servers that work.
I’m not a JS dev, but what about dialects such as jscript?
https://en.wikipedia.org/wiki/Jscript
I suspect there might be some survivorship bias here. What we consider canonical/main/normal javascript history are the parts that survived and that new dialects/features/etc were built on. Not the bits that died and many websites that only worked in IE.
Last week, I was able to take C++ code that I wrote 18 years ago and incorporate it into a new project. This is precisely why I consider C++ to be one of the greatest languages and Bjarne Stroustrup one of the best, if not the best, language designers in the world.
It’s even better than C, because C++ is improving at a faster rate while making sure your code that you wrote 18 years ago still works. People severely underestimate how hard and impressive this is.
This is why I trust C++ with my new code because I trust that it will work 20 years from now.
I totally agree with your comment; I just wanted to point out that the evolution of C++ is managed by a committee, not just Bjarne Stroustrup. Bjarne was the initial language designer, but we should make sure we acknowledge the hard work and effort of everyone involved in the evolution of C++ (or really any community-driven technology).
:-)
Yes definitely! I don’t want to diminish the herculean efforts of the committee. Just pointing out that Bjarne could have given up long ago or created other languages like so many language designers do. He is now just a member of the committee but I think it’s fair to say his opinions still carry a lot of weight. The committee embodies a lot of the values he has always advocated for:
You pay for only what you use.
Don’t break old code.
Make things as simple as possible but not simpler.
If you keep rubbing a stone, eventually it will shine.
I too would trust C++ in this fashion. What I don’t trust is all the meat to write C++ code without all the CVEs.
Curious, which OS and browser are you using?
Those that I distrust the least. The paucity of my available options has no impact on the quality of code being written around the world, the causal chain goes in the other direction if at all.
Sure it does! Have you wondered WHY there are so few options? There are social and economic reasons. In other words, security requires investment that no government or corporation have interest in. They WANT leaky software.
Yes, they do. And they also explicitly want everybody to throw it out and replace it constantly, which is the opposite of what we’re talking about.
I guess, who is “they” in your response?
This article is a different take and perspective from a lot of what I read regarding (memory-)safer languages. I hope it doesn’t grate with people too much. I’m regularly trying to fix things set up by other people, whether it’s systems at work or old programs and games for fun, so I probably have different priorities and views than many other authors.
As a programming language developer, “time safety” to me really does refer to certain sorts of time-isolating techniques akin to memory safety. This blog post about the paper Time Protection is a good introduction to the topic.
What you claim as C’s time-tested durability, I see as a powerful series of network effects around C, starting from the fact that C was designed to be part of UNIX. Other languages with the same trajectory, notably AWK and sh, have the same staying power over the decades. It was possible to ride along this memetic current if one designed their language appropriately; Perl is probably the best example.
I dislike these sorts of appeals to positive usage. It is relatively trivial to get code that has positive functionality; shove a bunch of combinators into a genetic evolution chamber, apply trials and fitness, and evolve programs that positively satisfy some given property or specification. The problem, of course, is that the evolved programs have no constraints on what they may not do. (If you dislike evolution, then you may instead imagine a feature factory where junior developers continually adjoin new modules and let unused modules rot.) When we design languages these days, we are extremely preoccupied not just with what one may build, but with what one may not build.
I, too, am a maintenance programmer; when I am fixing code, I want to first know what it does. I need to know how code behaves apart from its specification, and this usually leads to considering the breadth of the range of the program’s effects. I want to know both positive and negative properties.
It’s not directly espoused by the article, but one good argument I had after reading this is the “not using $X is irresponsible!!!” argument is not constructive. There may be a better Rust in the future. Does that make everyone who used Rust irresponsible? Or if they choose it for a new project, is that irresponsible? Of course not.
Now apply that argument backwards in time.