This video’s title is like the opposite of click-bait. Bryan Cantrill is an awesome presenter.

The title sets off my silliness alarm … But now that I see it’s from Cantrill, it’s intriguing because I watched his previous talks on Rust.
He’s stated the sensible position that “no we’re not going to rewrite tons of working code in Rust”. He was thinking more along the lines of writing some new parts in Rust and interfacing with C code, which I thought reflected a more mature viewpoint (i.e. from someone who actually writes kernel code).
But I’ll watch this take to see if that position has really changed, or if it’s just something fun to talk about …
I just watched this video. I liked the part about failed C++ operating systems from the early 90’s (Apple’s Copland, an effort from Sun, and Taligent).
Overall he’s excited by Rust, but points out some problems with using it for kernel development:
Multiply-owned data structures are all over Unix kernels, and Rust’s ownership system doesn’t like those (i.e. the doubly linked list problem).
Rust doesn’t allow you to handle memory allocation failure. This is being worked on? I didn’t know this about Rust and it seems odd for a low level language.
He says that instead of kernel development, Rust could be used for:
user-level services like systemd, or
firmware.
This doesn’t seem like anything more than a wish though. I don’t see that Rust really shines in those areas.
I think you could probably write systemd in Go. I don’t see that Rust has any advantage there. Whereas a kernel in Go has some obvious downsides (I know it’s been tried).
Rewriting firmware in Rust sounds nice, but I don’t think vendors will do it. They have enough problems writing decent C, and they don’t write open source code. He is wishing for them to write Rust and release it as open source, but I’m not optimistic about either of those things.
I think the kernel and the browser are where Rust is interesting – when you need both performance and memory safety. I’m not really excited about Rust for other applications like web services, i.e. doing the job that Go, Python, or Ruby can do. So it does seem unfortunate that there are still some open problems with using Rust for the kernel.
He also points out the many nascent Rust OS kernels, and the fact that they are “from scratch” systems, i.e. not compatible. I agree that you need a compatible kernel to have any hope of adoption.
Rust doesn’t allow you to handle memory allocation failure. This is being worked on? I didn’t know this about Rust and it seems odd for a low level language.
Small correction: Rust the language can deal with failure to allocate memory. It is the standard library that doesn’t provide a real strategy for dealing with it. It was an explicit trade off made for several reasons.
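For what it’s worth, newer stable Rust does expose fallible allocation for the common containers via `try_reserve`. A small sketch (my example, not the commenter’s):

```rust
use std::collections::TryReserveError;

// Allocate a zeroed buffer, reporting allocation failure as a Result
// instead of aborting the process (the std containers' default on OOM).
fn zeroed_buffer(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve(len)?; // returns Err if the allocation can't be made
    buf.resize(len, 0);
    Ok(buf)
}

fn main() {
    // A modest request succeeds...
    assert!(zeroed_buffer(4096).is_ok());
    // ...while an impossible one surfaces as an error, not an abort.
    assert!(zeroed_buffer(usize::MAX).is_err());
}
```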
Opinion difference: I don’t really agree with you that Rust’s use cases need to be so narrow. But we have very different ideas about programming language choices I suspect. For example, I will never use a unityped language for a big project ever again. :)
Please just call them dynamically typed languages. It’s a much more well understood term, and is much more accurate as well. ‘Unityped’ is basically a shibboleth that says ‘dynamically typed’ but in a way that only makes any sense if you’re part of the ‘in-crowd’ of people that think that because they’ve heard of type theory they’re better than the plebians writing Javascript or Python. It’s also inaccurate, because it’s based on the quite incorrect assumption that ‘type’ can only mean what type theorists mean when they say ‘type’, which is patent nonsense.
The funny thing is that I actually wrote “dynamically typed language” initially, and then replaced it with “unityped” because I was sure the former was bound to attract a pedant’s notice. Little did I know that using “unityped” would get me labeled a nonsensical snob. Lose, lose.
In any case, if lobsters had an ignore list, I would have put you on it the second I saw that you joined. As far as I’m concerned, the majority of the content you put out onto the Internet under that username is complete trash. So let’s just please go back to ignoring each other. Things are more pleasant that way.
It’s still sensible to rewrite it so long as you can show equivalence with the old code, and to use automated tools that make the rewrite easier. Then it’s done gradually over time, starting with the most critical parts first (80/20 rule). I mean, let’s say volunteers can just do the networking stack and filesystem. Those were the first components that got put on separation kernels, since they were critical in just about every app. Even if that’s all they did, a project could follow Poly2’s lead, stripping out everything they don’t need in the kernel for their specific app or appliance. Most of the unsafe code is gone. The stuff the app uses most is in safe Rust. That’s a big win for all kinds of deployments.
Then, just gradually add to the list a system call or module at a time. Also, I’d say improve static analysis for Rust, especially unsafe Rust, to achieve whatever capabilities C has. Meanwhile, apply the best static analyzers, test generators, and fuzzers to the existing C code, fixing everything they find. That helps immediately, plus later on if the code is converted to Rust.
As a dyed-in-the-wool rustacean, I love it. “No don’t Rewrite It In Rust. It already works, what’s the gain? Now if you’re writing something new, maybe consider Rust as an option…”
I’ll give this presentation as a counterpoint showing how they’re constantly adding more bugs with both new code and changes to existing code. We need to make sure these people making these changes are using a language immune-by-default to common problems they can’t seem to avoid. So, that justifies gradually rewriting existing code.
Getting them doing it for new code first is a smart strategy, though. They’ll also see the benefits of the safe language as they deal with problems in modifications to unsafe, lower-level code more than modifications to safe, higher-level code.
That’s a very good point and a convincing presentation. That said, Rust and other safe-by-default languages are not a panacea. I would expect this to be especially true in things like operating systems where you will have a fair amount of unsafe code floating around, trying to have safe semantics while often not quite managing it. That said, while OS dev in Rust is definitely of interest to me it’s not something I have dug into too much yet, so I shouldn’t try to extrapolate too much.
Good point. I’ll note that a conversion that keeps some unsafe blocks will still be safer overall, with those blocks being, at worst, no less safe than the original. However, unsafe Rust still has some safety advantages over unsafe C, so even the unsafe parts might improve. Finally, there are automated techniques for verifying many of those unsafe components. That verification becomes far more feasible once the part that needs it is a tiny fraction of the codebase.
So, there’s still some potential to reduce risk even in the presence of unsafe blocks. Those blocks will still carry risk, though. As far as OS dev goes, you might find the book on embedded Rust useful, since it targets lots of low-level interactions. That community is both trying to do them and figuring out ways to leverage the type system when doing them.
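As a sketch of why the unsafe surface shrinks: idiomatic Rust pushes `unsafe` behind small safe wrappers whose invariants are stated locally, so the part needing heavyweight verification stays tiny. A made-up example:

```rust
// The only unsafe line sits behind a safe API, and the invariant it
// relies on (the bounds check) is visible right next to it. Auditing
// or verifying this means reading a few lines, not a whole codebase.
fn fast_get(slice: &[u8], idx: usize) -> Option<u8> {
    if idx < slice.len() {
        // SAFETY: idx was checked against slice.len() just above.
        Some(unsafe { *slice.get_unchecked(idx) })
    } else {
        None
    }
}

fn main() {
    let data = [10u8, 20, 30];
    assert_eq!(fast_get(&data, 1), Some(20));
    assert_eq!(fast_get(&data, 9), None);
}
```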
Is there any evidence that people are adding more bugs than people rewriting things in Rust are? Rust isn’t immune to bugs, far from it, nor is it immune to security bugs. Everyone knows rewriting code will result in introducing some bugs. Can you really be confident they’re fewer and less severe than the bugs already there? Can you be confident they won’t just reintroduce the same bugs into the Rust version?
Yeah. Rust is immune by default to problems many codebases keep having. Many lead to code injection whereas Rust’s just lead to panics or something. The default going from hackers controlling our boxes to applications crashing would be an improvement.
I’ll also note that the temporal errors the borrow checker catches are the source of many heisenbugs. Those are errors that are just hard to find or reproduce. Even OpenBSD had a bunch of them despite their attention to code quality. So, a language and/or tooling that prevents them makes more sense than trying to hunt for them.
It’s just not true that Rust is immune to security bugs. Isolating a particular set of security bugs and claiming they’re worse, conveniently also being the ones that Rust can’t have (if you follow a long set of restrictions that nobody follows, like not using unsafe code)? I think that’s intellectually dishonest.
Rust fixes some things, sure, but there are lots of issues it doesn’t fix and cannot fix, and many of them are just as bad or worse than the issues it does fix.
“It’s just not true that Rust is immune to security bugs.”
You’re being intellectually dishonest by misquoting me, setting up a strawman like that, and knocking it down. I’ll restate what I said so you can reply to that instead:
“Rust is immune by default to problems many codebases keep having. Many lead to code injection whereas Rust’s just lead to panics or something”
I didn’t say all security problems: just many that are common. As described here, Safe Rust blocks spatial errors (i.e. it’s memory-safe) and common types of temporal errors (e.g. null dereference, use-after-free, some races). These are blocked by design, where the compiler either adds checks or forces code to be structured so that detection is automatic. C allows these problems by default. Most vulnerabilities people find are these kinds of vulnerabilities. Most of the really clever attacks start with one of them before building a chain. So, making a language immune to the specific classes of vulnerability that turn up all the time in C code will reduce those classes of vulnerability. That’s what you need to argue against.
The next claim I make is there will be reliability and security failures left due to stuff the type system can’t cover. The benefit a safe, system language retains is you can spend your bug hunting time on those other things. You don’t have to check the code for the same stuff that keeps getting people in C. I’ll add that arrays, stacks, and so on are common primitives that people have to use constantly. Whereas, the esoteric errors will be in less common code. It’s easier to find them when one has more time with less code to look at. So, making the majority of code safe even helps one potentially catch the other errors for those reasons. Obviously, we’ll also develop more checkers for stuff like that on the side.
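The “panics instead of code injection” claim can be seen in miniature: an out-of-bounds access in safe Rust is a defined, catchable panic rather than silent memory corruption. A small sketch:

```rust
fn main() {
    let v = vec![1, 2, 3];

    // Indexing past the end trips the bounds check the compiler inserts;
    // the result is a panic, not an out-of-bounds read.
    let result = std::panic::catch_unwind(|| v[10]);
    assert!(result.is_err()); // defined failure, no undefined behavior

    // In-bounds access is unaffected.
    assert_eq!(v[1], 2);
}
```

The classes of bug Rust can’t rule out (logic errors, injection at higher layers, etc.) are untouched by this, which is the other side of the thread’s argument.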
Rust is immune to an arbitrary subset of security issues. We both agree with that.
What I am saying is that taking that fairly arbitrary subset and suggesting they’re the more important issues is just not true. ‘The issues that Rust happens to prevent’ is actually characterised by anything other than that Rust happens to prevent them.
The other thing I’m saying is that there’s no evidence, as far as I am aware, that taking away those issues reduces the overall prevalence of security issues in software, or reduces their severity. Maybe it does? But I haven’t seen any evidence of it. And even if such evidence exists, is there evidence that Rust is the best way of achieving that reduction? Could the same reduction in security issues be achieved by doing something much simpler and smaller like standardising some safer string and buffer operations and types in the next C standard and promoting their use?
“arbitrary subset of security issues”

We do not agree on that. Rust followed the path of many safe languages to look at most common failures to block them first. Saying “arbitrary” implies they picked stuff at random, with unknown effects on the code out there. No, they picked the memory-safety and temporal errors that were hitting people constantly, including experts at secure coding. The errors that are in CVEs with code injections all the time. That’s not arbitrary: it’s evidence-based mitigation, focusing on stopping the most attacks with the fewest language or security features.
“The other thing I’m saying is that there’s no evidence, as far as I am aware, that taking away those issues reduces the overall prevalence of security issues in software, or reduces their severity.”
Most of the reported vulnerabilities that lead to code injection are due to unsafe languages having no mitigations. Rust mitigates those by design. That’s evidence it reduces code injections overall. I’ll add that I keep mentioning code injections because a hacker taking over your box in secret is much worse than it crashing, optionally telling you where it crashed. Both Ada and Rust prioritized stopping the most common bugs and most severe outcomes.
We do not agree on that. Rust followed the path of many safe languages to look at most common failures to block them first.
That’s literally just not true. That’s not what the design process for Rust looked like.
Most of the reported vulnerabilities that lead to code injection are due to unsafe languages having no mitigations. Rust mitigates those by design. That’s evidence it reduces code injections overall.
Again, what Rust mitigates by design is not a special class of security issue to anyone except Rust advocates who like to pretend that ‘memory safety’ is a special class of security issue that far surpasses any other.
A hacker taking over the box so they can DDOS someone is a far less serious security issue than personal data of customers being leaked, IMO. But because Rust allegedly prevents one and not the other, Rust advocates reorient their world view around the former being qualitatively worse.
That sounds scary, to be honest.
I bet. Even more scary to me was a cliche that said we’ll keep programming like we only have PDP-11’s, no matter what changes. Lots of damage followed that mindset. Whereas hardly any damage followed the alternative practice of giving people stuff that was safe by default. So, I’m less scared about pushing it, given the better results.
There are no positive results from attempts to do back-seat driving.

You mean seatbelt laws and regulations improving crash safety? Definitely been positive results from those.

Then again for software in DO-178B, because it mandates quality with high penalties for mistakes. Folks naturally started using better tooling.

I mean you telling people to write operating systems in your language of choice.
Really? The topic is about whether it’s worthwhile to write operating systems in Rust, and you’re trying to call someone out for saying “yeah, it makes sense to write operating systems in Rust”? When he’s even got a reasonable supporting argument?
The topic is about whether it’s worthwhile to write operating systems in Rust, and you’re trying to call someone out for saying “yeah, it makes sense to write operating systems in Rust”?
No, I never objected to writing new software (specifically operating systems) in Rust.
Maybe. Let’s judge different contexts:
If it’s a company paying for it, they can dictate the language to developers. We saw this with Microsoft’s .NET and Sun’s Java. It’s happening selectively at Mozilla with Rust and at some companies who do Ada/SPARK (e.g. for Muen).
If it’s FOSS and mostly paid developers (eg Linux), they might be able to get more code in a specific language if offering to match rewrites or threaten to pull support. This has lower odds of success than No 1.
If it’s FOSS and done by volunteers, then odds of pushing a switch is close to nothing. In that case, the route would be to fork it rewriting legacy and recently-added code incrementally.
As a Rustacean, I agree with you and with Bryan. The economics of taking an existing piece of software that works and rewriting it in Rust aren’t good and the second system syndrome might make Rust look like a terrible choice. Rather, we should include Rust in the pool of options when building new software, and not be zealous when it turns out that Python is a better choice.
There’s no point rewriting existing systems in rust, because there are people with veto power who will veto it. Better IMO to write new systems that by exhibiting their difference can lure users and devs away.
That sounds true in most cases. Greenfield is easier to support. Genera (Common LISP), kdb+ (Q), TrustDNS (Rust) and maybe Caddy Web Server (Go) are examples of that. Maybe throw in CompCert to represent the math-heavy stuff.
Yes, but why stop at rust?

Indeed, I’d like to see some systems written in zig!

[Comment removed by author]

Does it have first class functions? Can functions be composed?
I’m less interested in OS re-implementation in the programming language du-jour, and more interested in novel interfaces that operating systems might present to the user.
That’s a really good point. I suspect that the vast majority of severe (*) security bugs are not operating system or library bugs but application bugs. And a lot of those could probably be fixed with different operating system and library interfaces.
Standardising a few new string manipulation functions in C2x for manipulating length-prefixed strings and buffers, for example, would be hugely impactful. A new (mostly) safe string and buffer API in a C standard would be considerably easier to adopt into existing software written in C, a lot of which implements string and buffer operations poorly and in ways that confuse contributors into creating security issues.
Would it solve all security issues? No. But neither does Rust.
(*): by ‘severe’ I mean ones that end up having severe consequences for innocent end users, not theoretical issues that nobody has provably exploited in the wild
Maybe, maybe not. This presentation shows thousands of bugs. Many turn into vulnerabilities. Quite a few apps don’t have thousands of bugs the way the kernel does. The memory-safe apps might not have even one exploitable bug. If they do, the number will be far lower, because they’re designed not to be exploitable. They just crash or raise exceptions.
No.