I’ve found that when programming in Rust I tend to use the type system to my advantage, rather than sprinkling runtime asserts everywhere.
Looking at some examples from your blog post:
For example, if I’m writing a warehouse stock system, parts of my program might assume properties such as “an item’s barcode is never empty”.
In that case you can represent your barcode as Option<Barcode> where it can be empty, and use Barcode where it cannot. Then you assert as needed by expect-ing the barcode. Though you cannot make it into a debug_assert!, and rightly so - reading from None in release mode would result in reading from potentially uninitialized memory, which is undefined behavior.
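As a rough sketch of that split (the item types here are made up for illustration, not from the original example):

struct Barcode(String); // placeholder payload type

struct IncomingItem { barcode: Option<Barcode> } // barcode may still be missing here
struct StockedItem { barcode: Barcode }          // barcode is guaranteed present here

fn stock(item: IncomingItem) -> StockedItem {
    // The single place where the "never empty" assumption is asserted:
    StockedItem { barcode: item.barcode.expect("item must have a barcode") }
}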
For example, I might have written a function which runs much faster if I assume the input integer is greater than 1.
For this case even the stdlib has built-in types (which are similar but not exactly fitting this particular example) - std::num::NonZero{U,I}{8,16,32,64,size} - whose new functions return an Option<Self>. Likewise you could create your own type like GreaterThanOne<T> which provides the exact functionality you described.
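A minimal sketch of such a type, assuming a concrete u32 payload rather than the generic GreaterThanOne<T> mentioned above:

#[derive(Clone, Copy, Debug)]
pub struct GreaterThanOne(u32);

impl GreaterThanOne {
    // Returns None unless the invariant holds, mirroring NonZero*::new.
    pub fn new(value: u32) -> Option<Self> {
        if value > 1 { Some(Self(value)) } else { None }
    }

    pub fn get(self) -> u32 {
        self.0
    }
}

// The fast function can then demand the invariant in its signature:
// fn fast_path(n: GreaterThanOne) -> u32 { /* ... */ }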
There are also some things that can’t really be checked with a runtime assertion - right now I’m writing a parser for a certain old game scripting language. I have a bunch of functions like Keyword::matches, which take in the keyword substring. A mistake I sometimes make, however, is that I pass the parser’s entire input instead of just the keyword substring to the function, so Keyword::matches always returns false. This is not something you can reasonably check with a runtime assertion, but you could check against it at compilation, by defining a struct InputFragment(str); with a Deref<Target = str>, which could only be produced by slicing the full input, and the type checker would rightly complain whenever you forget to slice the input before passing it in somewhere.
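One way to realize that, sketched with a borrowing newtype (the unsized InputFragment(str) form needs a bit more ceremony to construct; the lifetime, the constructor name, and the Range parameter are my assumptions):

use std::ops::Deref;

pub struct InputFragment<'a>(&'a str);

impl<'a> InputFragment<'a> {
    // The only public way to obtain an InputFragment: slice the full input.
    pub fn slice(full_input: &'a str, range: std::ops::Range<usize>) -> Self {
        Self(&full_input[range])
    }
}

impl Deref for InputFragment<'_> {
    type Target = str;

    fn deref(&self) -> &str {
        self.0
    }
}

// Keyword::matches can then take an InputFragment instead of a bare &str,
// so passing the entire parser input no longer type-checks.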
And just so you know, I’m not saying runtime assertions, and especially debug_assert!, are completely useless 😄 I have used them before myself. It’s just that I tend to use them much less in the presence of Rust’s strong typing and explicit error handling culture.
Yes, this is the way. Make illegal states unrepresentable. By leveraging Rust’s powerful sum-types, you can eliminate almost all input checking and assertions and the complexity that goes along with it. It is mind-blowing how many corner cases and error conditions simply cannot exist anymore and how much safer and denser the code is when this is done right.
Sorry, not following your example:
A mistake I sometimes make, however, is that I pass the parser’s entire input instead of just the keyword substring to the function, so Keyword::matches always returns false. This is not something you can reasonably check with a runtime assertion, but you could check against it at compilation, by defining a struct InputFragment(str); with a Deref<Target = str>, which could only be produced by slicing the full input, and the type checker would rightly complain whenever you forget to slice the input before passing it in somewhere.
Is the full input a String, so you use InputFragment to ensure you’re matching a slice?
Just a thought, why not have a debug_assert that looks for a space? Wouldn’t this let you know that it’s just the keyword instead of the full input?
I have a Vec in my code that contains u32 IDs which should always be in sorted order, and that I need to be in sorted order to perform a merge. I could simply call sort on both, but that’s a wasteful runtime cost. Instead I have a debug_assert that simply ensures each value is larger than the previous (using the amazing iterators available). This is compiled away, but ensures the lists are sorted in all my tests.
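Something like this, as a guess at the shape of that assertion (ids being the hypothetical Vec<u32>):

debug_assert!(ids.windows(2).all(|pair| pair[0] < pair[1]));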
Is the full input a String, so you use InputFragment to ensure you’re matching a slice?
Yeah, that’s the idea.
Just a thought, why not have a debug_assert that looks for a space? Wouldn’t this let you know that it’s just the keyword instead of the full input?
That’s a good idea, and definitely shorter than creating a newtype. However, I still prefer my approach, because it catches those mistakes at compilation, so I need to write fewer tests to cover the bad cases.
Regarding your example, the way I would do it is something like:
pub struct SortedIds(Vec<u32>);

impl SortedIds {
    pub fn new(maybe_sorted: Vec<u32>) -> Result<Self, Vec<u32>> {
        // is_sorted(&[u32]) -> bool is a small helper, omitted here.
        if is_sorted(&maybe_sorted) {
            Ok(Self(maybe_sorted))
        } else {
            Err(maybe_sorted)
        }
    }

    pub unsafe fn new_unchecked(maybe_sorted: Vec<u32>) -> Self {
        debug_assert!(is_sorted(&maybe_sorted));
        Self(maybe_sorted)
    }

    pub fn sort(unsorted: Vec<u32>) -> Self { /* ... */ }
}
So still using a debug_assert in the new_unchecked function to signal loudly that an invariant was broken, but also using a strong type so that we only need to verify the invariant at the edges of the API.
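Call sites would then look something like this (hypothetical usage; merge and the variable names are assumed):

let ids = SortedIds::new(incoming_ids).expect("IDs must already be sorted");
let merged = merge(&ids, &other_ids); // merge takes &SortedIds and can trust the invariant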
Hope that clears it up!
This was the perfect reminder for starting my Friday morning. I’m going through a similar situation right now. Historically I’ve jumped in and pulled the heroic “fix it at the 11th hour” bullshit that the article talks about at the end after having the system fail in the ways I’ve predicted it would; this time around we’re communicating clear requirements to the other team. While I’ll still likely be keeping an eye on things and preparing for that 11th hour firefighting, I’m going to be taking a more passive approach in the lead-up and let the root causes see the light of day before just fixing it.
Yeah. You will destroy yourself if you try to fix everything on overtime. When things stay broken for a while, people suddenly get some appreciation for all the systems that “magically” just work all the time. Suddenly there’s talk about scheduled maintenance and which resources are assigned to keep things running.
On my current project, working with an external team, I’ve been taking a more stern approach when things fall apart: writing up very detailed reports on everything that went wrong leading up to this, and action items that need to be taken so that we never have to deal with it again. They’ve taken to those much more than the up-front stuff, and things have actually gotten better.
I fully agree with this article. One of the most annoying things about the C standard library, apart from all the things mentioned here, is that the atoi function returns 0 for invalid strings. So there is no way to distinguish the input string horse from 0, and there is no way to properly verify input with that function. The entire C standard library is fraught with such really quite obvious and terrible API mistakes, and it sucks the fun out of writing correct software in C. I usually just reimplement the standard library functions with correct and sane replacements… But you really don’t want to reimplement the standard library as your first task when you write a C program. The C standard library didn’t just not age well, it was terrible from the get-go.
What I’m really curious about is whether the author has some kind of standard library alternative that he is using, and if so, which one and can we use it too?
I agree with your frustration at much of the standard library, but the solution to atoi’s crap is to use strtol and friends (along with errno checking) - POSIX even requires atoi to be equivalent to

(int) strtol(str, (char **)NULL, 10)

except for error handling. The only reason atoi remains in POSIX is “because it is used extensively in existing code”.
I know, but atoi just strikes me as a prime example of what is wrong with the C standard library design. A lot has to go wrong for such a function to find its way into a language’s standard library. And as you said, it is used extensively in existing code - probably all of it buggy and full of parsing issues.
C was around for over 15 years, maybe close to 20, before the first standard was released, and all the C compiler vendors at the time wanted to do the least amount of work to conform to the standard, so a lot of compromises were made. Also, a lot of C libraries prior to standardization were implemented right from K&R, and guess what? atoi() is right there in the book (the first edition was published in 1978, to put things into perspective).
I mean. It’s not like these optimizing compilers were something programmers didn’t ask for. We chose to use aggressively optimizing compilers and eschew ones that focused more on being simple and predictable.
The problem is not the eagerness and aggression with which the compiler optimizes. The problem regarding undefined behavior is shaped by the gap between the logic the programmer intends to express and the logic that the compiler understands, which is vast in the case of C. It is this gap which causes confusion and surprises, because the programmer clearly intended one behavior, but the compiler was smart enough to detect a crack in that logic and optimized the whole thing out. It is a problem of communication and expressiveness of the language and it is possible to write languages and compilers that optimize aggressively without becoming adversarial logic corruption machines.
Just a note on UB…
As a C and C++ programmer, UB is something I rarely think about during day-to-day development. Maybe it has just been decades of muscle memory for knowing what to avoid. Or maybe I’ve written code that uses UB without knowing? I don’t think I’ve ever read a comment in a C or C++ code base where someone indicated that some segment of code invokes UB but that was their only option. So I think I’m not unusual in this way. Maybe such comments are in compilers, though.
I feel like UB is thrown around as some scary thing on HN and this site but it’s talked about much less on the C and C++ subreddits.
In my experience people don’t generally realize that they’re invoking UB. I primarily work with C and C++, and often the first thing I’ll do when I start on a new project is build and run any unit tests with -fsanitize=undefined. It pretty much always finds something, unless the people working on the system before me were already doing something similar, and people are often surprised.
I mean yes it is ultimately a language problem that C UB is so broadly defined and stuff. That said, you actually do want compilers to optimize out checks that they can prove aren’t necessary, and in order to do that they need rules for what you can and can’t do. Unsafe Rust is eventually going to run into a lot of these same issues as the model for what’s okay and what isn’t gets more and more sophisticated and the compiler learns to better take advantage of it.
Chromium has one of the longest compile times of any open source project; I’m not sure Rust would make it substantially worse.
As a Gentoo user I regularly witness hours-long Chromium compile times and know the pain all too well, even when it’s running in the background. Isn’t it scary to think that we might reach new extremes in Chromium compile times now?
From what I’ve heard, this might not be a relevant metric for the Chromium project? AFAIU most of the contributors compile “in the cloud” and don’t care about downstream compilation :/
I have first-hand experience with this. My compiling machine was a jaw-droppingly beefy workstation which sat at my desk. A cached compilation cycle could take over an hour if it integrated all of Chromium OS, but there were ways to only test the units that I was working on. Maybe this is a disappointing answer – Google threw money at the problem, and we relied upon Conway’s Law to isolate our individual contributions.
Chromium’s build tool Goma supports distributed remote builds. So as long as you have a server farm to support Goma, the build is actually pretty fast.
Similarly, at Mozilla they used distcc+ccache with Rust. So the compilation has always been distributed to the data center instead of running locally.
Either you own your computers or you do not. If I need a datacenter/cluster to make software compilation at least half-bearable, the problem is software complexity/compilation speed and not too little computational power. And even in the cloud it takes many minutes, including for incremental builds.
The first step to solving a problem is admitting there is one.
The reason Chrome exists is to allow a multinational advertising company to run their software on your computer more efficiently. Slow compile times are not a problem they experience and they are not a problem they will address for you.
I agree that the software complexity has grown. I don’t necessarily think of it as a problem though. Chromium to me is the new OS of the web. You build apps running on top of this OS, and there are complex security, performance, and multi-tenancy concerns.
IMO, modern software has gotten complicated but that simply follows a natural growth as technologies mature and progress over time. You could solve this “problem” by inventing better tools, and better abstractions… that help you navigate the complexity better. But I don’t think reducing the complexity is always possible.
This is where I fundamentally disagree. There are many good examples which show that complexity is often unnecessary. Over time, software tends to build up layer upon layer, often for no reason other than historic growth and cruft.
The system will not collapse, I think, but fall into disrepair. Currently, there is enough money in companies to pay armies of developers to keep this going, but now that we are already deep in a recession and going into a depression, it might be that the workload to feed these behemoths exceeds the available manpower. This might then motivate companies to give more weight to simplicity, as it directly affects the bottom line and competitiveness.
Systems tend to collapse, replacing complex mechanisms with simpler equivalents. This used to be called “systems collapse theory” but apparently is now called collapsology. For example, we are seeing an ongoing migration away from C, C++, and other memory-unsafe languages; the complexity of manual memory safety is collapsing and being replaced with automatic memory management techniques.
This is a bit like saying “well the browser already has so many features: versions of HTML and CSS to support, Bluetooth, etc. – could adding more make it substantially worse?”
Yes, it could – there’s no upper bound on compile times, just like there’s no upper bound on “reckless” features.
That said, I only use Chrome as a backup browser, so meh
Rust compile times are really good now. It’s not C fast but a lot better than C++. At this point it’s a non-issue for me.
(YMMV, depends on dependencies, etc etc)
Is there any evidence to support the claim that replacing C++ with Rust code would substantially slow down compile times? As someone who writes C++ code every day and also has done some Rust projects that see daily use at work, I really don’t see much of a difference in terms of compile time. It’s slow for both languages.
In the context of Gentoo, you now need to compile both clang and rustc; this is probably a substantial increase.
It appears to be about the same, perhaps slightly worse.
Builds would be way quicker if they just ported everything from C++ to C.
Memory safety problems would be way worse, but this thread seems unconcerned with that question.
My takeaway is that Gnome and KDE are doing a great job at evolving their respective desktops and ecosystems.
Meanwhile, my own desktop at least looks consistently crude.
GTK4 cannot even scroll lists [1], a bug that has been open for literally years. Gnome-based UIs are basically falling apart; the situation is extremely dire. KDE and Qt, on the other hand, are doing better.
[1] https://gitlab.gnome.org/GNOME/gtk/-/issues/2971 (“Scrolling in GtkListView is broken. The scroll is set to random places”)
Oh good, interesting to know that seeing this in GTK4 Nautilus wasn’t just me. It is concerning to find out that it’s a fundamental toolkit bug. I usually don’t run into those on Windows or macOS…
If you study the GTK4 API, you will notice that they painted themselves into a corner in terms of API specification and performance guarantees. The API is very ambitious and I commend them for that, but it will be nearly impossible for the GTK4 library implementers to actually follow through with a sane, correct, robust and complete implementation. Perhaps it’s even mathematically impossible. Furthermore, they are severely understaffed, with only a handful of key contributors (who I respect a lot for their hard work) doing almost everything.
Rust’s standard library is really big. The “core language” is billed as small and that may be true, but to do anything with the language, you need to use the standard library…and it is quite big and not always easily discoverable, in my experience.
Not sure where I’m going with this, just an observation.
Interesting, I would probably say that from a learnability perspective, the Rust stdlib is perfectly sized: it has the basic things (hashmaps, subprocesses, TCP), but doesn’t have anything extra (CLI argument parsing, JSON, HTTP). I’d say that going in either direction would make stuff harder to learn.
But yeah, learning a stdlib is always hard, because it’s just a lot of stuff. What worked for me is just reading the docs for all stdlib modules. It’s not that big a pile of text, and, once you have glanced through everything, it becomes much easier to find stuff later.
Rust’s standard library is really big.
Compared to what? When I look at the usual suspects in the industry (C++, C#, etc.), Rust seems slim. If you cut things out of Rust’s standard library, you will have a mess of conflicting crates that try to patch the holes in the standard library. This is not without precedent; there’s already a mess with time and chrono [1].
[1] https://www.reddit.com/r/rust/comments/ts84n4/chrono_or_time_03/
When you’re planning an event with 15,000 hackers in a tight space these days, the COVID logistics can take the wind right out of your sails.
It’s important to understand the context here. While the pandemic has been declared over in large parts of the world (e.g. with 60,000 World Cup fans being hosted in the stadium in Qatar [1], tightly seated together, basically without any Covid measures), Germany is still in the firm grip of excessive worries. Germany still has some measures and is only at the beginning of a very long process of people healing psychologically, i.e. losing their fear and returning to normalcy. The return to normalcy is particularly difficult there because there is some cognitive dissonance and pain in realizing how obsolete the measures really are, so this realization must propagate very slowly so as not to upset the populace too much.
The moral of the story is, rabbit holes are crucial
This is so, so important. Always dig into your problems and dig deep. If you don’t know how something works in a dependency, find out; the code is right there.
Absolutely, and it doesn’t just apply to Open Source. I have too many anecdotes of “huh, that’s odd… It doesn’t bother me right now and it’s not a part I’m working on right now, but let’s look into it…” leading to crucial insights, bug fixes or even design decisions that saved me or my employer tons of time and money. And whenever I’m too busy to do this kind of work, I can feel how everything around me gets inefficient, unreliable and brittle. It’s an essential part of good software development.
I would say that around 90% of the paid open-source work that I’ve done has been working on things that I learned about going down rabbit holes.
In other words nothing was stolen but a mirror/proxy was set up.
In a way, that’s the same thing one pays Cloudflare for, and what Google does with cached URLs, or what archive.org etc. do. I think some of the measures mentioned would therefore prevent that too.
And yes, reporting to their DNS and/or hosting provider is the right way to go. Of course, depending on persistence, it’s whack-a-mole. In the end, one could download the whole website (just like your browser does) and upload it somewhere. On the topic of JS: an “attacker” could run a headless browser. That’s something some front-end frameworks do/did so that websites can be indexed by search engines.
So whatever you put on the internet, people can download and re-upload or proxy in some form.
In other words nothing was stolen but a mirror/proxy was set up.
Insert obligatory “You wouldn’t steal a car” here
https://www.youtube.com/watch?v=HmZm8vNHBSU
The words “theft” and “steal” are inaccurate when duplication is essentially free and nobody lost their copy.
The correct word is probably impersonation. They are impersonating the author. In certain cases impersonation is indeed a crime.
I think the better word is plagiarism. They are not pretending to be the author (good.com), they are trying to be another, better ranked, full of ads website (proxy.com) that happens to have the same content (it’s just plagiarized).
Damn. I remember when people were writing articles like this about Python 2.7. Time really does fly.
Checked the comments for this exact sentiment. I want to say “but I just cut everything over from 2.x!”, but then I realize that was 4-5 years ago…
The introduction of Python 3 was traumatic for the entire ecosystem. I still have to install packages like python-is-python3 [1] on some machines because otherwise, some things just don’t work. This was more than a decade of pain.
[1] https://packages.ubuntu.com/jammy/python/python-is-python3
I still have to add hacks in things like build scripts to accommodate tools which “work” with Python 3 but expect the shebang #!/usr/bin/env python to work. All of Google’s C++ stuff, for example, assumes python is Python 3, which it just isn’t on most systems.
Personally, I don’t care about any of the improvements in Python 3, they just made string handling a bit more annoying. But it’s impressive that the roll-out was so incredibly botched that it’s still causing major problems 15 years later.
Well, the lengthy treatise can be summarized as: C++ is a language which has quirks and with which you can do potentially dangerous things. This is also true of any kitchen knife or car. The way move semantics was solved in C++11 may not seem very elegant to many. What is often overlooked (including in the article) is that it was already possible to do something similar with previous C++ versions. Rust has advantages in terms of performance, if one compares e.g. with shared and weak pointers. However, Rust also has various disadvantages or requires restrictions in terms of architecture, and certain things are very difficult or not feasible to do. “There is no silver bullet”.
The way move semantics has been solved in C++11 may not seem very elegant to many.
It is elegant, given the constraint of backwards compatibility to C++98. It’s not elegant compared to what modern languages can do.
What is often overlooked (including in the article) is that it was possible to do something similar already with previous C++ versions.
This is not entirely true. Move semantics brought things to the table that could not be done in C++98. In particular, std::unique_ptr is literally impossible to implement in C++98. Despite all its shortcomings, C++11 was a huge improvement.
It is elegant, given the constraint of backwards compatibility to C++98. It’s not elegant compared to what modern languages can do.
Maintaining backwards compatibility is a challenge indeed. I’m glad they’re doing it; it’s an essential feature to protect investments made. Regarding modern languages: C++20 is obviously “modern”, but certainly not an example of beauty.
Move semantics brought things to the table that could not be done in C++98
I can’t explain why this rumor is so persistent; someone has apparently invested a lot of money in marketing. It is entirely possible to implement move semantics and even unique pointers with pre-2011 C++, and there is even a Boost library for it.
I can’t explain why this rumor is so persistent
This rumour is so persistent because it is true. The std::auto_ptr in C++98 has entirely different semantics. You cannot implement a std::unique_ptr in C++98. And the smart pointer collection that Boost provided back then was also just a band-aid at best. Trust me, I have developed C++ software in my spare time and professionally for many years, and I maintain codebases that are locked into C++98. This is my daily life; I feel the pain every day.
C++20 is obviously “modern”, but certainly not an example of beauty.
I don’t think C++20 is modern. You still have all the legacy burden, unsafe memory management, insane integer promotion rules, strings without encoding, crazy locale library, the list goes on. C++20 is as modern as a cassette player with a Bluetooth 5.2 module soldered on.
This rumour is so persistent because it is true.
No. I’m not talking about auto_ptr. Have e.g. a look at https://www.boost.org/doc/libs/1_63_0/doc/html/move.html. Every C++ programmer likely uses or at least knows Boost.
I don’t think C++20 is modern
It’s modern by definition; there are a lot of very important people on the committee who push for the latest craze with every release. Even C++98 was officially called “modern C++”; the term is arbitrarily stretchable. Personally I don’t care whether it’s modern; most features we have in today’s languages were invented 40 to 60 years ago.
Have e.g. a look at https://www.boost.org/doc/libs/1_63_0/doc/html/move.html.
Just like the Boost pointers, this is a band-aid at best. First of all, you now have to put macros everywhere in your code (also, have fun debugging and stepping through that). And then you still don’t have support for e.g. move-appending a std::vector element and things like that. As I said, many aspects of move semantics are impossible without actually using C++11. At best, you’re emulating C++11 behaviour with macros and elaborate C++98 incantations, while fighting tooth and nail against both the standard library and the language itself. Did you have good experiences with Boost.Move?
We had many libraries in the nineties, and there still are some today, where a lot of macros are used all over the code, some even with pre-processors or code generators. That’s absolutely no problem; on the contrary, it makes the code clearer. The so-called new features of C++11 are primarily syntax sugar. Only in C++14 or 17 did a few things start to appear that you couldn’t already do with C++98.
I remember well the time when trees were wasted on programming journals, and where there were regular puzzle columns on C++ that demonstrated its surprising capabilities. This was before Alexandrescu’s famous book and anticipated much of what we later had in Boost. Yes, Boost is an amazing and qualitatively impressive library in every respect.
This is a good post. The C++ question of what remains in the “moved from” object is a big, nasty can of worms. Sometimes it’s well-defined: e.g. a std::unique_ptr is nulled when it is moved from (at the cost of performance). Other classes are less clear on the specifics, so you have to consider it on a case-by-case basis. Ideally, of course, you never use “moved from” objects, and that is how Rust does it, enforced by the compiler. This is faster and safer. In C++, the entire thing makes my head spin, so I usually just never use “moved from” objects again, unless I know the guaranteed behaviour off the top of my head.
It’s a difficult problem in general once you start to have structure fields. The example of the string in Rust is fine for a local variable, but presumably this means that you can’t move a string out of a field in an object in Rust, because doing so would require the field to become invalid. C++ works around this by effectively making it a sum type of string and invalid string (which, as the author points out, is deeply unsatisfying and error prone). Pony does this a lot better by effectively making = a swap operator, so you have syntax for cheaply replacing the string with an empty string or other canonical marker.
The other problem with C++ that the author hints at is that move isn’t really part of the type system. R-value references are part of the type system, move is a standard library idiom. This means that you can’t special case this in useful places. For example, it would be nice to allow destructive moves out of fields when implementing the destructive move of an object. A language with move in the type system could implement a consume and destructure operation that took a unique reference to an object and returned its fields in a single operation. This would let you safely move things out of all fields, as long as you could statically prove that the object was not aliased.
The example of the string in Rust is fine for a local variable, but presumably this means that you can’t move a string out of a field in an object in Rust, because doing so would require the field to become invalid. C++ works around this by effectively making it a sum type of string and invalid string (which, as the author points out, is deeply unsatisfying and error prone).
I’m not a Rust expert by any means, but I think in Rust you could achieve this pretty easily (and safely and explicitly) by having the struct field be an Option<T> and using Option::take().
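A small sketch of that pattern (Holder and its field are illustrative names):

struct Holder {
    value: Option<String>,
}

fn consume(holder: &mut Holder) -> Option<String> {
    // take() moves the String out and leaves None behind, so the struct
    // stays valid and every later access is forced to handle the None case.
    holder.value.take()
}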
The problem then is that it’s now a union type and you need an explicit check for validity on every single field access. This is safer than C++ where you need the check but the type system doesn’t enforce it, it’s just undefined behaviour if it’s omitted, but it’s incredibly clunky for the programmer.
Well sure, but it sounds like the alternative is “deeply unsatisfying and error prone”? If you want the Pony approach (by my understanding of your description; I’m not familiar with the language) of using some special sentinel value instead of an Option<T>, you could also use std::mem::replace(), or more concisely std::mem::take() if the type has a meaningful default value like "" for String.
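A sketch of that variant, for contrast with the Option approach (again with illustrative names):

use std::mem;

struct Holder {
    value: String,
}

fn consume(holder: &mut Holder) -> String {
    // Leaves String::default() ("") behind as the sentinel, so there is
    // no Option wrapper and no validity check on later field accesses.
    mem::take(&mut holder.value)
}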
Yep a good post, bad title :-).
It makes a good observation: types must have specific, clearly expressed “capabilities” (or traits) that make them usable or non-usable for a specific algorithmic or technical operation.
In a way, C++’s template system is a mechanism for expressing whether a given type “fits” a specific “operation” or algorithm. And if it does not, there was a (often difficult to read) compilation error. But still, there was a compilation error.
C++ allowed one to express the fitment of user types to algorithms, but did not allow expressing “fitment” to technical operations (like move, parallel access, allocation model, exception model, etc.).
And there is an impedance mismatch between built-in types and user-defined types in how these capabilities are expressed.
Now, as the number and complexity of these technical operations grows, the inconsistency in expressing capabilities through the type system causes a higher and higher cognitive load on developers (and, I am sure, compiler designers).
The problem with the language now is that it cannot evolve in a way where the complexity is constrained but backward compatibility is maintained. It seems that we have to start choosing one over the other, as our cognitive faculties cannot keep up with the complexity caused by the desire to maintain the language’s compatibility with previously written code.
This might be borderline off-topic but with all the interest and hopes around the Fediverse as a total replacement for mainstream social media, these kinds of hands-on stories might serve as a reality check.
If you give an advertisement company the keys to your kingdom, you should not be surprised to find it defiled by its henchmen.
Good point, some of our tech giants are actually mega (or “meta”) advertising companies. Ads should be fine, but when advertising starts to mingle with maths, neuromarketing, and technology, everything gets dystopian.
Didn’t you reinvent docker with your solution? Docker also uses pivot_root and essentially solves the exact same problem with very similar methods. A simple Dockerfile with FROM ubuntu:focal would probably give you the same thing you outlined here, unless I missed some crucial requirement that cannot be satisfied with docker.
The similarities to Docker end at the systemd-nspawn step. After the systemd pivot-root step, the host userland is shut down entirely and the Ubuntu userland takes over, including privileged hardware access such as the graphics system.
Yeah, as hufman says this isn’t running both OSes in parallel, it’s having them installed in parallel – but mimicking a “normal” installation much more closely than one could ever achieve with docker. None of the namespace or cgroup stuff – real direct access to everything the kernel can provide.
Sounds like it’s because of COVID-19 infection risk for attendees though they don’t say it explicitly.
It’s fascinating that this is still such a concern after everyone who wanted (and even more) got their shots.
It’s not surprising – the Congressseuche (“congress plague”) was a kind of flu that was common during previous congresses, and while being out sick for a week was already not great, with Covid and the risk of long-term damage, the tradeoff has changed quite a bit. With vaccines, the morbidity risk of Covid is mostly solved and long-term damage has been reduced, but it’s still not entirely gone.
I guess the bigger issue is that COVID is never going away, so the tradeoff at this point is: do you want to do something now, with reasonable precautions like wearing masks in crowded halls, or just never do it in person again? The never-do-it-in-person option makes sense for lots of things. There are tons of conferences that could just be webinars. But if you think doing it in person is good, the risks from COVID are going to be more or less identical in 2023, 2024, etc. Like, I hope they do come out with that vaccine that’s nasal and addresses all variants, but uh, even after that it’s not realistically going to get 100% uptake.
DEF CON (similar size) in August had close to 700 of 25,000 people report positive cases, and within the volunteer “goons”, a group with a better reporting rate, over 12% reported positive.
The main Congress event isn’t held in a wildly different space (a big convention center), and while it does have fewer cramped, hot, and sweaty hotel room parties than DC (I’m pretty sure I got COVID at one this year), it instead has more mixing of attendees with the general public on public transport.
By contrast, Camp is entirely outdoors (to the point that during a thunderstorm there’s nowhere really safe to go), with lots of fresh air and space for everyone.
Yeah, after Oktoberfest in Munich the numbers were spiking. Hospitals are full, and they assume it will only get worse later this year. I think it is the right move, but I am still infinitely sad about it being cancelled.
Windows builds of the Rust compiler now use profile-guided optimization, providing performance improvements of 10-20% for compiling Rust code on Windows.
Indeed, I do see about 10% faster builds on Windows. Nice.
Nothing?
Widely deploying remote attestation as described in the doomsday scenario here is probably not possible, even if Microsoft wanted it. The blog post largely just poses questions about potential danger without really describing how any of this would be achievable. It’s lazy, really.
It’s simply too brittle outside of tightly controlled environments and would break far too often to actually give any consumer value.
Can a school run some WPA2 endpoint and restrict access based off on some Chromebook the school issues and validate it with an attestation protocol? Sure. It’s tightly controlled.
Is this going to be achievable with everything from my self-built desktop running Windows to my random consumer-grade laptops? It would be an engineering marvel. Microsoft would need to be supplying tightly controlled hardware configurations to all consumers, and, uh… I don’t see how that would happen. The infrastructure needed to even begin validating this would be an interesting problem on its own.
I still think Matthew’s take on this is the better one: “Pluton is not (currently) a threat to software freedom”.
If you also care about the opinion of the FSF/Richard Stallman: They went back on their stance about TPMs in 2015. https://www.gnu.org/philosophy/can-you-trust.en.html
The TPM has proved a total failure for the goal of providing a platform for remote attestation to verify Digital Restrictions Management. […] The only current uses of the “Trusted Platform Modules” are the innocent secondary uses—for instance, to verify that no one has surreptitiously changed the system in a computer. […] Therefore, we conclude that the “Trusted Platform Modules” available for PCs are not dangerous, and there is no reason not to include one in a computer or support it in system software.
Widely deploying remote attestation as described in the doomsday scenario here is probably not possible
It already happened on Android with many apps. There are no modern Android phones that aren’t shipped with Google certified keys inside the TPM. The claim that a large-scale deployment of such systems is not possible is quite foolish, given that it already happened to various ecosystems and such trends are only accelerating. The threat is real and serious. May I ask, do you have a smartphone? Android, iPhone? Do you have a banking app? Did you try to assert your ownership of the device by installing an operating system of your choice? How many apps were you unable to use afterwards?
Do you have a banking app? Did you try to assert your ownership of the device by installing an operating system of your choice? How many apps were you unable to use afterwards?
What you are demanding here is not “ownership of the device”, but ownership or ownership-like rights to third-party services and hardware. The bank will, I am certain, still serve you via their web site, via phone call, probably these days via SMS, and certainly if you just show up in-person at a branch. They are free to set out the terms on which you get access to their systems, and for some banks and some apps, one of the terms is “app, but only if we can verify you haven’t messed with it or with crucial system libraries it depends on”.
You’re free to dislike that. But you don’t have an inherent moral right to access their systems without their consent. They have the same ownership rights to their systems and networks and devices that you have to yours, and your unfettered right to tinker with your stuff ends where their stuff begins.
Also, in my experience most people have completely wrong ideas about the incentives that lead to things like this – it’s not that the bank hates your freedom, or that Google hates your freedom, or that either of them wants to take your freedom or your “ownership of the device” from you. That’s the mustache-twirling hyperbolic strawman I’ve already pointed out in my other comment.
Instead, it’s that large corporate environments have weird incentive systems to begin with, and for large corporate environments in heavily-regulated industries that weirdness is generally at least squared if not cubed. So, say, the website might get declared a low-trust environment and they check some boxes to say they’re properly securing it, while the app gets declared a high-trust environment and they just forbid access from a modified version or from a version where they can’t reliably detect if modification has occurred, and that checks the required corporate and regulatory boxes to ship the app, which is less effort than they had to put in for the site. Even if the app literally just embeds a web view. Even if it makes no sense right now, it’s often the cumulative result of a bunch of seemed-reasonable-at-the-time decisions that most people go through life unaware of.
(my personal favorite example of this, from healthcare – the highly-regulated industry with which I’m most familiar – is that in the US many entities clung to faxes well past the time when it made any technical sense, largely because of a weird quirk where US health-care regulations treated fax systems differently than basically all other electronic transmission methods, for reasons that make no sense whatsoever today but presumably did multiple decades ago when the decision was made)
Meanwhile, the nightmare dystopian scenario that, for years, people have been asserting would be the endgame for the deployment of all this stuff… has still not materialized. If it had, you wouldn’t have been able to “assert ownership of the device” in the first place, remember.
You’re free to dislike that.
Although I don’t disagree with the main argument, saying things like “you are free not to use an iPhone or Android” is like saying “you are free to become a monk or live secluded in a jungle”. It is unrealistic and perpetuates the poor argument that most “people have a choice” when it comes to social media, online services, identity, etc. They don’t. Not even geeks. Try creating an e-shop without integrating with Google (ads, analytics), Facebook, Twitter, Stripe, PayPal, Amazon… if you are Walmart you MIGHT pull it off, but otherwise good luck.
All of the things you mention in your example of setting up an online shop are pushed on you by social forces, not by technological handcuffs.
There is no technological solution to the social forces.
This is why it’s important to keep the door open to compatible third party implementations. Because without that, social forces become technological handcuffs.
Technology forms part of the social fabric, and therefore can interact with social forces. The classic example of this is copyleft and the free software movement. I’m not saying that FOSS was a great success, but it’s certainly true that it influenced the direction of software for 20 years or more.
As technologists, we should remember more often that technology does not exist outside of society and morality.
What you are demanding here is not “ownership of the device”, but ownership or ownership-like rights to third-party services and hardware
I wasn’t aware that my phone was a third party service or hardware.
I already explained this in a way that makes it hard to take your reply as being in good faith, but I’ll explain it again: you’re free to modify the things you own. You’re not free to demand unlimited/unrestricted access to the things other people own.
So the bank is free to set hours when their branch is open and say they won’t provide in-person service at the branch outside of those hours, no matter how much some people might insist this infringes their freedom to come in when they want to. They’re free to set a “shirt and shoes required” policy for receiving service at the branch, no matter how much some people might insist this infringes their freedom to come in dressed as they please.
And they’re free to set rules for accessing their systems via network connection.
Sometimes those rules include “mobile app, but only if we can verify it hasn’t been tampered with”, and it’s their right to do that no matter how much some people might insist this infringes their freedom to tinker with their devices.
As I said already, that freedom to tinker ends where someone else’s systems and devices begin. You don’t own the bank’s systems. Therefore you don’t get to dictate to them how and on what terms you’ll access those systems, no matter how much you might like to, because they have the same ownership rights to their systems and devices that you have to yours.
So the bank is free to set hours when their branch is open and say they won’t provide in-person service at the branch outside of those hours
But it isn’t free to demand complete control of the contents of cars on nearby roads. No matter how much the ability to inspect them may reduce bank robberies.
The bank may want the ability to inspect your car, but society doesn’t need to say yes to every misguided request.
But it isn’t free to demand complete control of the contents of cars on nearby roads.
That analogy doesn’t work, because “nearby roads” aren’t the bank’s property.
So if you want to go with that analogy and make it work: the bank branch may have drive-through facilities, and they may not accommodate all vehicle types. Say, due to lane width, a huge pickup truck or SUV might not fit, or due to the height of the covering over the lane, a very tall vehicle might not fit.
You still have the freedom to buy and drive a vehicle that doesn’t fit in the drive-through lane. But you don’t have the right to demand the bank rebuild the drive-through lane to accommodate you. They’re free to tell you to park and come inside, or use the ATM, or bank online, or any of the other methods they offer.
And, again, there is no situation in which “ownership of your device” creates a moral right to demand access to systems owned by others on terms you dictate. If the bank doesn’t want to grant you access on your preferred terms, they don’t have to; they can set their own terms for access to their systems and (subject to local laws about accessibility, etc.) enforce those terms.
(also, in some jurisdictions the bank absolutely could regulate the “contents of cars” on the bank’s property – for example, the bank could post a sign saying no firearms are permitted on the bank’s property, and that would apply equally to one stored in a car as it would to one brought inside the branch)
That analogy doesn’t work, because “nearby roads” aren’t the bank’s property.
And my phone is?
My phone is an access method, and I have neither sold nor rented it to the bank.
Your car is your property. But when you want to use your car on someone else’s property they can make rules about it. For example, where you can park, how fast you can drive, which direction you can drive, and so on.
Your networked device is your property. But when you want to use your networked device to access someone else’s devices/systems, which are their property and not yours, they can make rules about it.
I’ve explained this now multiple times, and I don’t see how any legitimate difficulty could still exist in understanding the point I’m making.
Yes, I understand that it is technically legal for them to do this. Technically legal is not the same as desirable. It’s a horrifyingly dystopian future being described here, and painted as desirable because it is possible.
I want a way off this ride, and I don’t see one.
No, “stop keeping your money in banks” is not a serious option.
No, I do not use Linux, Windows, or OSX.
Yes, I already refuse to install apps for this on my phone – I use my phone exclusively for tethering, maps, and getting paged when I am on call for work. I do not trust it to act in my best interests, and I do not want enforced software that I dislike spread to the rest of my computing devices.
Your networked device is your property. But when you want to use your networked device to access someone else’s devices/systems, which are their property and not yours, they can make rules about it.
It would probably be legal for a bank to require you to install a GPS tracker on your car to gain access to the bank. It would be safer for the bank if they could track the location of possible getaway cars. It would be safer for the bank to ensure that you didn’t go into sketchy neighborhoods where you could get mugged and have your bank cards stolen.
But I don’t think a future where banks remotely enforcing what you do with your car is a good one. It’s not worth the safety.
Every time I point out that the analogy falls apart when you try to extend control past the bank’s property line, you propose another analogy which extends control past the bank’s property line.
I cannot engage further with this.
What do you mean, “extend control past the bank’s property line”?
In this analogy, the bank allows you to drive cars without GPS trackers; they just require you to have one installed to engage with them. They’re not controlling your property; you’re voluntarily complying with their business requirements. It’s just them choosing how you engage with their business. You can avoid getting a GPS tracker so long as you don’t set foot on a bank’s property.
This is less hypothetical than it sounds. While I’m not aware of banks pushing for GPS information, insurance companies already want this information in order to dynamically adjust rates based on driving habits, and to attribute blame more accurately in collisions.
I’ve interviewed for an offshoot of State Farm that was established to explore exactly this. The interviewer was very excited about the increased safety you’d get because drivers would know they’re being watched. This was a few years ago – today, of course, you’d need to do some remote attestation to ensure that the system wasn’t tampered with and the data was transmitted with full integrity.
Once this pool of data is established for analysis, it becomes very tempting for law enforcement, less pleasant regimes, and three letter agencies to access it.
Also, in my experience most people have completely wrong ideas about the incentives that lead to things like this – it’s not that the bank hates your freedom, or that Google hates your freedom, or that either of them wants to take your freedom or your “ownership of the device” from you.
Let’s ignore the fact that large corporations have a long and well-documented history of nefarious behavior. I mean, one of the first corporations in the west was the British East India Company. Calling it nefarious is a huge understatement. But that’s all not quite relevant to the point I’m making.
Instead, it’s that large corporate environments have weird incentive systems to begin with, and for large corporate environments in heavily-regulated industries that weirdness is generally at least squared if not cubed.
Fine. Does it truly matter if the reason is maliciousness or ignorant apathy combined with perverse incentives, if the end result is still the same? A difference which makes no difference is no difference at all. What I’m seeing is gradual disempowerment of people, not some quick power grab. And I don’t care what the reasons are, if the results are still the same.
Every time this discussion comes up on Lobsters, you trot out the comic-book villain trope as a way to belittle the people you disagree with. A box-ticking technocrat can be just as harmful as a villain.
Does it truly matter if the reason is maliciousness or ignorant apathy combined with perverse incentives, if the end result is still the same?
Except the end result is not the same. The freedom-hating cartoon villain would not give you a way out. Yet out here in the real world you do get a way out. And as I pointed out, it’s been getting finer-grained over time so that you actually have even more control over which security features you want on and which ones you want off.
This is not how an actual “war on general-purpose computing” would be waged!
Every time this discussion comes up on Lobsters, you trot out the comic-book villain trope as a way to belittle the people you disagree with. A box-ticking technocrat can be just as harmful as a villain.
My central assertion is that the Free Software movement and its adherents are actively hostile to security measures that are A) reasonable, B) desired by, and C) accepted by much of the market, and that this hostility goes all the way back to the early days, with Stallman writing purple prose about how he was standing up for “the masses” by having GNU su refuse to support a “wheel” or equivalent group. Today that manifests itself as reflexive hyperbolic opposition to fairly banal system security enhancements, which inspire yet more reams of purple prose.
I further note that this opposition relies on appeals to emotion, especially fear (they’re coming for your freedom!), and on erecting straw-man opponents to knock down, neither of which is a particularly honest rhetorical tactic.
And finally, this opposition also doesn’t stand up to even the slightest bit of actual scrutiny or comparison to what’s occurring in the real world, and on that theme I note you yourself largely refused to actually engage with any of the points I made, and instead went meta and tried to tone-police how I made the points, or the fact that I was making them at all.
Yes, and the industry trend is to slowly extend that to more computing devices, including PCs. And, in this very thread, we have someone who is arguing that not only is it a company’s right, it’s effectively their duty, to ensure users aren’t tampering with their computing devices so that bad actors can’t compromise them.
Yup.
The hyperbole around this stuff runs into the inconvenient fact that all the horrible things have been technically possible for a very long time, using only features that already exist on consumer hardware and consumer operating systems, and yet the predicted dystopia… has not arrived.
Microsoft has been theoretically able to fully lock down laptops and desktops for years. Apple has been theoretically able to fully lock down laptops and desktops for years. The reason they haven’t is not that they lack the one final piece of the freedom-destroying superweapon that will finally let them power it on, and it is not that scrappy freedom-warriors on the internet have pushed back too hard. The reason they haven’t is that destroying freedom is not, and never has been, their goal.
So, so much of the argumentation around this stuff relies on building up strawmen and knocking them down. In the real world, there are no mustache-twirling executives cackling about how this time, finally, they will succeed at destroying freedom forever and bringing an end to general-purpose computing. There are just ordinary, usually really tired, people doing things for honestly pretty ordinary and banal reasons, and we generally will do best by taking their statements at face value: that they’re doing it for security, which is something both corporate and consumer users are loudly demanding.
A lot of people are tired of living in constant fear. Fear that looking at, or in some cases just being a recipient of, the wrong email or the wrong text message or the wrong PDF or the wrong link will silently and completely compromise their systems. Fear of malicious actors, both remote and intimate. Fear of being fired if they slip up and make even the tiniest mistake instead of being a perfect “human firewall”. Fear of all manner of things that we can prevent by default if we just choose to.
So we get more and more systems that have those protections by default. That let you just use it without being afraid. And if you want to live dangerously, you still can. The “I know what I’m doing” escape hatches are there! They’re even getting finer-grained over time, so that you can choose just how dangerously you want to live. You can turn off bits and pieces, or go all-in and replace the OS entirely. This is not the progression we would see in a world where the vendors were waging a “war on general-purpose computing”, and the actual observed state of the world is the strongest possible counterargument against the existence of such a “war”.
And yet we still get hyperbole like this article. I’m so, so tired of it at this point.
I don’t think it’s impossible. Obviously game publishers already want this solution, Windows 11 requires the chip (or will require the new one too, it doesn’t make a difference), and if you have ever owned an Android device, you’ll know how many apps break when you install your own Android build.
I can definitely imagine banks and other services rolling this out as a requirement. Developing a new browser is already hard enough, but if only 10% of the services behind Cloudflare require this in the future, you’re basically locked out of using Linux, your own Android build, or new browsers. We have a regular inspection requirement in Germany for all vehicles on the street, for the safety of others. Maybe this will come one day for the internet, simply because it could reduce the number of spammers and bots.
Let’s spin this idea further: there is a law in Germany that makes you responsible for your network connection. What if, because of that, you’re no longer allowed on public Wi-Fi without such an attestation? No more headaches due to compromised devices.
I am so hyped. I work on embedded devices that run QNX. We have huge, lumbering C++ codebases. For the past couple of years, I’ve been building Rust skills and tooling. This stuff now runs on QNX. Not only will this make our systems safer and more reliable, it will also save us so, so much time as safe Rust cannot cause memory corruptions or segfault and it almost never leaks memory. All of this with the same performance as C++.
I wouldn’t try to sell Rust to the team/org using that particular claim. So-called modern C++ leaks memory about as often as Rust. When convincing others, it’s much better to focus on claims that are very visible and easy to verify, like prevention of memory corruption, data races, and undefined behaviour :)
I say “almost never leaks memory” because technically speaking, safe Rust does not guarantee freeing all the memory. In practice, however, memory leaks in Rust are extremely rare due to lifetime management at compile time. It pretty much never happens because the compiler will complain if you mismanage the lifetimes of your objects. You automatically do it right because the language deliberately makes it hard to do it wrong.
This is in stark contrast to C++, where it is indeed true that std::unique_ptr (etc.) cleans up automatically, but the language does literally nothing to prevent you from just allocating memory willy-nilly and not freeing it.
Saying that modern C++ leaks memory just as often as Rust is like saying that if you always brake carefully and in time, you travel just as safely as if you wore a seatbelt. Which might technically be true, but it does not reflect reality, because humans make mistakes.
A very easy way to leak memory in Rust is to have a cyclical reference with an Arc without having the other side be a Weak. This makes building classic Java-style trees a bit cumbersome, if you need to traverse in both directions.
A good structure is to just use flat vectors for your tree, but if you come from Java, a cyclical reference count is a quite common leak to make.
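A minimal sketch of that cycle (the Node shape is made up for illustration):

use std::sync::{Arc, Mutex};

struct Node {
    parent: Mutex<Option<Arc<Node>>>, // storing Weak<Node> here would break the cycle
    children: Mutex<Vec<Arc<Node>>>,
}

fn main() {
    let parent = Arc::new(Node {
        parent: Mutex::new(None),
        children: Mutex::new(Vec::new()),
    });
    let child = Arc::new(Node {
        parent: Mutex::new(Some(Arc::clone(&parent))),
        children: Mutex::new(Vec::new()),
    });
    parent.children.lock().unwrap().push(child);
    // parent and child now keep each other alive: neither refcount can
    // reach zero, so both allocations leak when the locals go out of scope.
}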
I don’t know what your experience with both languages is, but as far as I’m concerned, allocating a pointer and losing track of it is not the typical leak that I have had to deal with (more common in C++, but possible in safe Rust). In fact, I had to fix a leak caused by an incorrect reference count in Arc due to a bug in an unsafe part using MaybeUninit just last week 🙃
In my experience, most of the time the reason for constantly increasing memory usage (in both Rust and C++) is some long-lived set or map that someone forgot to remove entries from. Since such space leaks are much more common and are equally likely in both languages, I wouldn’t try to sell Rust as a leak-preventing language.
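The shape of that kind of space leak, in a made-up example:

use std::collections::HashMap;

struct SessionCache {
    by_user: HashMap<u64, Vec<u8>>,
}

impl SessionCache {
    fn record(&mut self, user_id: u64, payload: Vec<u8>) {
        // Entries are inserted on every call but never evicted, so memory
        // grows without bound even though no pointer is ever lost - no
        // language's ownership rules can flag this as a leak.
        self.by_user.insert(user_id, payload);
    }
}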
This is in stark contrast to stuff like preventing use-after-free and other UB - I’ve been writing Rust professionally for the last 3 years and I haven’t yet had to deal with any memory corruption or data race, which were recurring issues when I was working in C++ codebases previously.
Interesting. That is precisely the type of leak that I most commonly encounter in C++ and pretty much never in Rust.
I’m talking about leaks where you lose or drop the pointer/handle to the resource. Of course there’s always the “endlessly growing map” kind of leak. This kind of leak cannot be prevented by any language, because the programmer deliberately grows the data structure; it’s an entirely different class of leak, which is way easier to detect because you still have a handle to the data structure.
I agree with you that the main benefit of Rust is the safety that comes from the absence of memory corruptions and data races. And yes these issues appear in pretty much every non-trivial C++ code base no matter how skilled the programmers are.