Threads for algesten

  1. 4

    I always found the “Let it crash” philosophy very convincing. One of the most convincing talks I know is not even about Erlang. But I’ve never actually coded anything substantial in either Erlang or Elixir myself. I went the other way instead, with Rust and static typing.

    But I’m curious, for those of you who do work on such systems, does it deliver on its promise? Is it simpler? More robust? And is modern Elixir done in the same vein of “Let it crash” and very few tests or verification?

    1. 5

      Rust follows the “let it crash” philosophy; its panic system is Erlang-inspired. It used to be baked even more strongly into the language, back when it still had language-level tasking with a runtime. The nomicon chapter on unwinding still calls it out.

      You can see that in the tasking/threading APIs, where a panic crashes that component and another part of the system is responsible for handling it.
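
      A minimal sketch of that with plain std threads (my illustration, not from the parent comment): the panic stays inside the spawned thread, and the parent observes it as an Err from join() and decides what to do.

      use std::thread;

      fn main() {
          // the worker "crashes", but only this thread is affected
          let worker = thread::spawn(|| {
              panic!("worker failed");
          });

          // another part of the system is responsible for handling the failure
          match worker.join() {
              Ok(()) => println!("worker finished"),
              Err(_) => eprintln!("worker panicked; restart it or report the error"),
          }
      }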

      1. 4

        I’ve had to deal with more than one Rust service that takes this philosophy to heart and so will fully crash the entire program in the presence of, say, a network connection timeout to a non-business-critical API endpoint. Maybe this isn’t the intended effect of the panic approach to error management, but it does seem to be a common outcome in my experience.

        The problem here is a mismatch of expectations. It’s nominally OK to crash an Erlang actor in response to many/most runtime faults, because Erlang actors always operate in a constellation of redundant peers, and failure is a first order concern of their supervisor. That crash impacts a single request.

        But e.g. systemd is not the OTP, and OS processes don’t operate in a cluster. A service running as an OS process is expected to be resilient to basically all runtime errors, even if those errors mean the service can’t fulfill its user-facing requirements. If an OS process crashes, it doesn’t impact a single request, it impacts every request served by the process, every other process with active connections to that process for any other reason, assumptions made by systemd about the soundness of that binary, probably some off-host assumptions too, e.g. load balancers shuttling traffic to that instance, everything downstream from them, etc. etc.

        If “crash-only” means “terminate the request” and not “terminate the process” then all good! But then “crash” isn’t the right verb, I don’t think, as crashing is pretty widely understood to mean terminating the OS-level process of the program. Alas.

        1. 7

          Yeah, I think this is an acute case of catchy but wildly misleading terminology. What is really (as in Erlang or Midori) understood as proper “let it crash” is two dual properties:

          • abandoning the current “thing”
          • containing abandonment to some well-defined boundary, such that:
            • abandonment doesn’t propagate outside of this boundary
            • tearing things down at this boundary doesn’t compromise the state
            • restarting at the boundary is a well-defined operation which can fix transient errors
            • the actual blast radius from abandonment is small

          Everyone gets the first point, but it’s the second one which matters, which is hard, and which leads to simplicity and reliability.

          1. 3

            To expand on this, Rust does only marginally better here, if at all, than your average $LANG:

            • the built-in boundary is the OS thread, which is often too coarse-grained; there’s catch_unwind for do-it-yourself boundaries (see the sketch after this list). There’s nothing to protect against a thread monopolizing the CPU due to an infinite-loop bug, and some errors (stack overflow, OOM) abort the process, bypassing the recovery mechanism.
            • the UnwindSafe machinery in theory helps somewhat with the tainted-state problem. In practice, it’s too cumbersome to use and people often silence it. I had one spectacular bug where UnwindSafe would’ve saved a couple of days of debugging, if it hadn’t been silenced due to tripping a compiler bug.
            • there’s nothing to make restart workable; do-it-yourself again.
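
            For reference, a minimal sketch of such a do-it-yourself boundary with catch_unwind (my illustration; as noted above, aborts and runaway loops still escape it):

            use std::panic;

            // hypothetical unit of work standing in for e.g. one request handler
            fn handle_request() {
                panic!("transient failure");
            }

            fn main() {
                // a hand-rolled abandonment boundary around a single piece of work
                let outcome = panic::catch_unwind(|| handle_request());

                if outcome.is_err() {
                    // the process keeps running; only this piece of work is abandoned
                    eprintln!("request handler panicked, returning an error response instead");
                }
            }
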
          2. 2

            But e.g. systemd is not the OTP, and OS processes don’t operate in a cluster. A service running as an OS process is expected to be resilient to basically all runtime errors, even if those errors mean the service can’t fulfill its user-facing requirements.

            I think this might be oversimplifying. Whether it’s reasonable to let a service continue to fulfill tasks despite encountering a serious fault is not clear cut. Example: in a service with a lot of shared state, say thread-bound caches of various sensitive user data, a crash might lead to failed cleanups and subsequent data leaks.

            1. 1

              Let me rephrase my claim to be more precise.

              A service running as an OS process is generally expected to be resilient to runtime errors. If a runtime error puts the service in a state where it can no longer fulfill user requirements, and that state is transient and/or recoverable, it is usually preferable for the service to continue to respond to requests with errors, rather than crashing.

        2. 4

          In my experience across several companies and codebases in Elixir, I’d say the following things.

          “let it crash” can lead to clean code. It also originated out of a design space that I believe doesn’t map as directly onto modern webshit as folks want to believe. This is neither good nor bad, it’s just an occasional impedance mismatch between system design philosophies.

          “let it crash” encourages some people, drunk on the power of OTP and the actor model, to grossly overcomplicate their code. They decide deep supervision trees and worker pools and things are needed when a simple function will do. This is the curse of the beginner Elixir or Erlang developer, and if properly mentored this goes away quickly. If not properly mentored it progresses to a case of conference talks and awkward libraries.

          Testing and verification in the BEAM ecosystem is weird, and until recently was both best and worst in class depending on what languages you were up against. Dialyzer for example is a marvelous typechecker, but there is growing suspicion that it is severely stunted in the sorts of verification it is categorically capable of. On the other side, property-based testing is strictly old-hat over in at least the Erlang ecosystem and has been for quite some time iirc. Other folks are catching up.

          (Testing is also–in my opinion–most often valuable to solve team coordination problems and guard against entropy caused by other humans. This is orthogonal to language concerns, but comes out when you have larger webshit-style teams using BEAM stuff when compared with the origin of Erlang.)

          Robustness is quite possible. I’ve alluded elsewhere to how a running BEAM instance is more of a living thing (gasp, a pet!) than most contemporary app models (pour one out for Smalltalk)…this unlocks some very flexible things you can do in production that I haven’t really seen anywhere else and which make it possible to do things without downtime during an incident that most folks would just look at and go “wat.”. On the other hand, you have to design your systems to actually enable robustness–writing your standard webshit without structuring the application logic to have affordances for interactivity or process isolation or whatever means you’re basically using the BEAM like you would other more conventional systems.

          (You can also, as I’ve done with Python on at least one occasion, build a BEAM-like actor model with fault tolerance. The idioms are baked into Erlang and Elixir, but you can with sufficient effort reproduce them elsewhere.)

          (Also, “let it crash” doesn’t mean you won’t sometimes have to wrap your whole VM in a systemd unit to restart things when, say, an intern pushes to prod and blows all the way up the supervision tree.)

          1. 2

            In a sense, yes—an example I’ve run into several times is that a service you depend on becomes intermittently unresponsive. In a “regular” software service, unless you clutter your code up with retry and “fail soft” logic (basically your own ad-hoc OTP), this usually means a hard error, e.g. an end-user receives an error message or the service needs to be restarted by the OS process manager.
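
            For illustration only (my sketch, not the parent’s code), that kind of hand-rolled retry/“fail soft” wrapper might look roughly like this, with flaky_call standing in for a call to the unresponsive service:

            use std::{thread, time::Duration};

            // hypothetical stand-in for a call to an intermittently unresponsive service
            fn flaky_call() -> Result<String, String> {
                Err("timeout".to_string())
            }

            // the ad-hoc "mini OTP": retry a fallible operation a few times before giving up
            fn with_retries<T, E>(mut attempts: u32, mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
                loop {
                    match op() {
                        Ok(value) => return Ok(value),
                        Err(_) if attempts > 1 => {
                            attempts -= 1;
                            thread::sleep(Duration::from_millis(200)); // crude backoff
                        }
                        Err(err) => return Err(err),
                    }
                }
            }

            fn main() {
                match with_retries(3, flaky_call) {
                    Ok(reply) => println!("got {reply}"),
                    Err(err) => eprintln!("giving up, surfacing a hard error: {err}"),
                }
            }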

            In Erlang the system can usually deal with these kinds of errors by automatically retrying; if the operation keeps failing, the error will propagate up to the next level of the “supervision tree”. Unless it makes it all the way up to the root the application will keep running; sometimes the only indication that something went wrong is some log output.

            The nice thing about “Let it crash” is that you don’t have to consider every possible failure scenario (eg what happens if this service call returns malformed data? What if it times out?). Instead of trying to preempt every possible error, which is messy and basically intractable, you can focus on the happy path and tell OTP what to do in case of a crash.

            That said, “Let it crash” is not a silver bullet that will solve all errors in your distributed system; you still have to be acutely aware about which parts of the system can be safely restarted and how. The nice thing is that the base assumption of OTP is that the system will fail at some point, and it gives you a very powerful set of tools to deal with it.

            Another thing that makes Erlang more robust is the process scheduling model: Processes are lightweight and use preemptive multitasking with fair scheduling, which means you’re less susceptible to “brownouts” from runaway processes.

            1. 1

              But I’m curious, for those of you who do work on such systems, does it deliver on its promise? Is it simpler? More robust? And is modern Elixir done in the same vein of “Let it crash” and very few tests or verification?

              I can only speak to the first half, and it is completely dependent on the team culture. If the team is very OTP/Erlang native, it can work out incredibly well. These teams tend to be extremely pragmatic and focused on boring, obvious, naive ways to solve problems, using global state as needed even!

              However, when OTP/Erlang collide with a team trying to treat it like anything else things can go badly…. quickly.

            1. 30

              Basically, the problems that Rust is designed to avoid can be solved in other ways — by good testing, good linting, good code review, and good monitoring.

                This sounds like a false economy to me. In my experience, catching problems already at the compilation stage is pretty much always a saving over catching bugs later in the life cycle – especially if it goes into production and is caught via monitoring. It might initially feel like higher velocity, but as the code complexity grows, it really isn’t.

              1. 16

                  I’m reminded of the facetious adage from science that “a year in the lab can save a day in the library.”

                1. 11

                  You have reminded me of one of my favorites, which is, “weeks of programming can save you hours of planning.”

              1. 18

                It would be good in general if people were more aware of the political considerations of choosing a TLD and the dangers they might pose to a registration.

                I’ve seen people using .dev and .app a lot, it’s worth considering these are Google-controlled TLDs. What really rubbed me the wrong way about these TLDs is Google’s decision to make HSTS mandatory for the entire TLD, forcing HTTPS for any website using them. I’m sure some people will consider this a feature but for Google to arbitrarily impose this policy on an entire TLD felt off to me. No telling what they’ll do in the future.

                1. 12

                  .app and .dev aren’t comparable to ccTLDs like .sh and .up, however. gTLDs like .app and .dev have to stick to ICANN policies; ccTLDs don’t, and you’re at the mercy of the registry and national law for the country in question.

                  1. 11

                    I was actually just discussing this fact with someone, but interestingly, we were discussing it as a positive, not a negative.

                    All of the new gTLDs are under ICANN’s dominion, and have to play by ICANN’s rules, so they don’t provide independence from ICANN’s influence. Whereas the ccTLDs are essentially unconditional handouts which ICANN can’t exert influence over. So there’s a tradeoff here depending on whom you distrust more: ICANN, or the specific country whose TLD you’ve chosen.

                  2. 10

                    HSTS preload for the entire TLD is a brilliant idea, and I think every TLD going forward should have it.

                    Defaulting to insecure HTTP URLs is a legacy problem that creates a hole in the web’s security (it doesn’t matter what’s on insecure-HTTP sites, their mere existence is an entry point for MITM attacks against browser traffic). TOFU HSTS is only a partial band-aid, and the per-domain preload list is not scalable.

                    1. 1

                      Does HTTPS really count as TOFU? Every cert is ultimately checked against a known list of CAs.

                      1. 4

                        The Trust-On-First-Use aspect is that HSTS is remembered by the browser only after the browser has loaded the site once; this leaves first-time visitors willing to connect over unencrypted HTTP.

                        (Well, except for the per-domain preload list mentioned by kornel.)

                        1. 2

                          Sure, but HSTS is strictly a hint that HTTPS is supported, and browsers should use that instead, right? There is no actual trust there, because the TLS certificate is still authenticated as normal.

                          Compare this to SSH, which actually is TOFU in most cases.

                          1. 3

                            Not quite - HSTS prevents connection over plaintext HTTP and prevents users from creating exceptions to ignore invalid certificates. It does more than be a hint, it changes how the browser works for that domain going forward. The TOFU part is that it won’t apply to a user’s first connection - they could still connect over plaintext HTTP, which means that a suitably positioned attacker could respond on the server’s behalf with messages that don’t include the HSTS header (if the attacker is fast enough). This works even if the site itself isn’t serving anything over HTTP or redirects immediately to HTTPS.

                            Calling it TOFU is admittedly a bit of a semantic stretch as I’m not sure what the specific act of trust is (arguably HSTS tells your browser to be less trustful), but the security properties are similar in that it only has the desired effect if the initial connection is trustworthy.

                            1. 1

                              Okay, I see the point about first-time connections, but that wouldn’t change regardless of the presence or absence of HSTS. So why single that header out? It seems to me that having HSTS is strictly better than not having one.

                              1. 2

                                The discussion was about HSTS preload which avoids the first connection problem just explained by pre-populating HSTS enforcement settings for specific domains directly in the browser distribution, so there is no risk of that first connection hijack scenario because the browser acts as if it had already received the header even if it had never actually connected before.

                                Normally this is something you would opt-in to and request for your own domain after you registered it, if desired… but Google preloaded HSTS for the entire TLDs in question, so you don’t have the option to make the decision yourself. If you register a domain under that TLD then Chrome will effectively refuse to ever connect via http to anything under that domain (and to my knowledge every other major browser uses the preload list from Chrome.)

                                It’s this lack of choice that has some people upset, though it seems somewhat overblown, as Google was always very upfront that this was a requirement, so it shouldn’t have been a surprise to anyone. There is also some real concern that there’s a conflict of interest in Google’s being effectively in total control of both the TLDs and the preload list for all browsers.

                                1. 1

                                  The discussion was about HSTS preload which avoids the first connection problem just explained by pre-populating HSTS enforcement settings for specific domains directly in the browser distribution, so there is no risk of that first connection hijack scenario because the browser acts as if it had already received the header even if it had never actually connected before

                                  Ahh, THIS is the context I was missing here. In which case, @kornel’s original comment about this being a non-scalable bandaid solution is correct IMO. It’s a useful mitigation, but probably only Google could realistically do it like this.

                                  I think the more annoying thing about .dev is that a bunch of local development dns systems like puma-dev and pow used .dev and then Google took it away and made us all change our dev environments.

                                  1. 2

                                    I think the more annoying thing about .dev is that a bunch of local development dns systems like puma-dev and pow used .dev and then Google took it away and made us all change our dev environments.

                                    That seems unfortunate, but a not terribly surprising consequence of ignoring the names that were specifically reserved for this purpose and making up their own thing instead.

                        2. 1

                          I mean a user typing a “site.example.com” URL in their browser’s address bar. If the URL isn’t in the HSTS preload list, then it is assumed to be an HTTP URL, and the HTTPS upgrade is TOFU-like (the first use is vulnerable to HTTPS-stripping). There are also plenty of http:// links on the web that haven’t been changed to https://, because HTTP->HTTPS redirects keep them working seamlessly, but they’re also a weak link if not HSTS-ed.

                      2. 5

                        uh! I chose .app (unaware, stupid me) for a software project that discarded the Go toolchain for this very reason. Have to reconsider, thx!

                        1. 3

                            I have no idea where to even start to research this stuff. I use .dev for my websites but I didn’t know it was controlled by Google. I legitimately thought these were all controlled by some central entity.

                          1. 2

                            I have no idea where to even start to research this stuff.

                            It is not really that hard. You can start with https://en.wikipedia.org/wiki/.dev

                              If you are going to rent property (a domain name) for your www home, and you are going to let your content live in that home for many years, it pays off to research where you are renting the property from.

                            1. 1

                              .test is untainted.

                              1. 6

                                Huh? There’s no registrar for .test, it’s just for private test/debug domains.

                          1. 6

                            This is the tweet that I believe prompted the blog post. https://twitter.com/antirez/status/1587581541022142464

                            I was one of the voices saying that I wish people had some other go-to data structure than linked lists for learning a new language.

                            Mainly because I love Rust (especially the borrow checker), and I hate the idea people get turned off the language due to ownership rules messing with their first attempt coding something.

                            1. 8

                              Now I’m wondering how many C books and tutorials include buggy linked lists with behavior worse than O(n) append.

                              There are some pretty bad C books, and even good ones do have errors. https://wozniak.ca/blog/2018/06/25/1/index.html

                              1. 3

                                Thanks for sharing that link. I fear I have permanently lost some coding iq from stepping through the (mentally corrected) version of the code sample from that page.

                              2. 7

                                messing with their first attempt coding something.

                                It’s totally fine if Rust is not a good choice for somebody’s first language. C++ is a horrible first language, as are several others.

                                1. 2

                                  Yeah. I meant “coding something in Rust”.

                                  Whether Rust is a good first language, I don’t know. I think it could be, because there’s plenty of stuff you can do without being even close to “fight the borrow checker”.

                                  Maybe it comes down to whether teaching pass by reference vs value is something that’s OK to learn early or not.

                                2. 3

                                  The borrow checker vs linked lists conflict is so unfortunate. I wonder if borrow checking hasn’t been done before, because every time a language researcher considered such design they’ve thought “it won’t even work on linked lists, obviously a dead-end idea”.
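
                                  For anyone who hasn’t hit the conflict: a doubly-linked node wants shared ownership plus mutation, which the single-owner borrow rules reject, so the safe version ends up wrapped in Rc<RefCell<…>> with Weak back-links (or you reach for raw pointers). A minimal sketch:

                                  use std::cell::RefCell;
                                  use std::rc::{Rc, Weak};

                                  // a doubly-linked node in safe Rust: shared ownership (Rc), interior
                                  // mutability (RefCell), and Weak back-links to avoid reference cycles
                                  struct Node {
                                      value: i32,
                                      next: Option<Rc<RefCell<Node>>>,
                                      prev: Option<Weak<RefCell<Node>>>,
                                  }

                                  fn main() {
                                      let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
                                      let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

                                      first.borrow_mut().next = Some(Rc::clone(&second));
                                      second.borrow_mut().prev = Some(Rc::downgrade(&first));

                                      println!("{} -> {}", first.borrow().value, second.borrow().value);
                                  }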

                                  1. 6

                                    I wonder if borrow checking hasn’t been done before, because every time a language researcher considered such design they’ve thought “it won’t even work on linked lists, obviously a dead-end idea”.

                                    I think the historical record (read: breadcrumbs through the literature) suggests otherwise; there’s been continual progress on this kind of thing stretching back 30 years (some original type theory work in the early 90’s, some work on applying it to systems programming in the context of Cyclone in the early 00’s, various research-grade projects, and Rust first showed up on the scene in 2010 but it takes a while for a language to become mature). I think the reason it hasn’t been done before is because it wasn’t actually trivial figuring out how, and it took a while to get there.

                                    1. 4

                                      I wonder if borrow checking hasn’t been done before, because every time a language researcher considered such design they’ve thought “it won’t even work on linked lists, obviously a dead-end idea”

                                      This makes me wonder if a prerequisite for Rust’s creation was a critical mass of people who all hold the opinion “eh, linked lists suck anyway, no big deal”.

                                      1. 10

                                        prerequisite for Rust’s creation was a critical mass of people who all hold the opinion “eh, linked lists suck anyway, no big deal”.

                                        I don’t know why this is how people think about Rust. For me, as a low-level dev who does tons of crazy stuff that isn’t necessarily borrowck-friendly, I just figured “free safety checks when/where possible”, then fall back to unsafe when I can’t. It’s not like you lose anything compared to before, you just don’t get to take advantage of the safety guarantees the language has to offer as much as you would like to at all times.

                                        (I also don’t end up reaching for unsafe as often as I thought I would.)

                                        1. 13

                                          The Rust community is pretty big now, and opinions on unsafe vary a lot. Some people write really twisted or inefficient code just to avoid unsafe {}. I guess it depends on whether you see Rust as a safer C, or a faster Python.

                                          1. 1

                                            I’ll bite my tongue here and refrain from saying anything other than I agree with you.

                                          2. 3

                                            It’s not like you lose anything compared to before,

                                            Ah, this is a very good way to look at it

                                      2. 2

                                        Anyone know what Rust blog post antirez was reading?

                                      1. 6

                                        As a smaller company without dedicated sysadmins, we definitely got some alarm fatigue from this. The one week’s notice made it seem like another Heartbleed-level bug with a catchy new name and a dedicated website, and it really wasn’t.

                                        I don’t know what the solution is, and I don’t know the processes here, but I wish the CRITICAL status would have been confirmed and downgraded to HIGH a week earlier.

                                        1. 23

                                          A Rust compiler must also implement unsafe, of course, which disables a lot of the checks that the compiler makes.

                                          This is just wrong, and it’s frustrating to hear it repeated so often. unsafe permits calling other unsafe code, dereferencing raw pointers, reinterpreting unions, and implementing unsafe traits like Send and Sync. It does not “turn off” the type system, borrow checker, or anything else.
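
                                          A tiny sketch of what unsafe actually changes (my own example, not from the article): it permits the raw-pointer dereference, while the type and borrow checks keep running inside the block.

                                          fn main() {
                                              let mut x: u32 = 42;
                                              let p: *mut u32 = &mut x;

                                              // `unsafe` unlocks a short list of extra operations,
                                              // e.g. dereferencing a raw pointer...
                                              unsafe { *p += 1 };
                                              println!("{x}"); // 43

                                              // ...but type checking and borrow checking still run inside the
                                              // block. This is rejected with E0499 exactly as in safe code:
                                              //
                                              // unsafe {
                                              //     let a = &mut x;
                                              //     let b = &mut x;
                                              //     *a += *b; // error: cannot borrow `x` as mutable more than once
                                              // }
                                          }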

                                          1. 9

                                            Was also quite unhappy seeing this incorrect understanding of unsafe repeated.

                                            My favorite explanation is Steve Klabnik’s “You can’t ‘turn off the borrow checker’ in Rust”.

                                            1. 3

                                              Though you can do:

                                              let myref = unsafe {
                                                  let ptr = &self.some_thing as *const SomeThing;
                                                  &*ptr
                                              };
                                              

                                              Which makes Rust forget that myref is borrowing self. Not turning off borrow checker, but at least limiting how much it “knows”.

                                          1. 1

                                            Interesting article!

                                            I’m toying with the idea of using this technique combined with the Sans-IO style (as seen in Quinn: https://docs.rs/quinn-proto/0.8.4/quinn_proto/struct.Connection.html)

                                            1. 2

                                              Isn’t the main issue here that SMTP was a protocol designed back in the day when the internet was nice?

                                              The idea of being able to send an unsolicited message to some random inbox feels outdated. Personally I don’t want any unsolicited calls to my phone, snail mail to my postbox, SMS to my mobile, Whatsapp messages etc. In fact, unless I’ve actively consented to some communication, I don’t want it at all, regardless of medium.

                                              We arrived here by steadily eroding the idea of doing what’s the morally right thing to do. It simply isn’t morally right to push marketing on to people that didn’t ask for it. And there’s sadly no turning back the clock.

                                              In that spirit, I think the concept of “consent” needs to be made into a protocol. Something distributed and technology agnostic that all methods of communication be required to use. No soft opt-ins, no exceptions for b2b.

                                              1. 2

                                                I don’t think it’s the times, but more the cost. It’s crazy cheap to obtain and send emails. Just like with other ads.

                                                Hashcash is what Bitcoin (and thereby others) took the PoW concept from, and unlike with blockchain stuff it’s just per email, so it doesn’t have such an environmental impact. It’s more like how you have a cost for password checks using bcrypt, scrypt, etc. these days. The idea is to severely slow down spammers. I think it could still work well, if widely adopted and required. Might also cut down a bit on annoying newsletters. ;)
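
                                                As a rough sketch of the hashcash idea (my illustration; the stamp format and difficulty are made up): the sender burns CPU searching for a counter whose hash has enough leading zero bits, and the receiver verifies the stamp with a single hash.

                                                // assumes the `sha2` crate; everything else here is illustrative
                                                use sha2::{Digest, Sha256};

                                                fn leading_zero_bits(bytes: &[u8]) -> u32 {
                                                    let mut bits = 0;
                                                    for &b in bytes {
                                                        if b == 0 {
                                                            bits += 8;
                                                        } else {
                                                            bits += b.leading_zeros();
                                                            break;
                                                        }
                                                    }
                                                    bits
                                                }

                                                // find a counter such that SHA-256("recipient:counter") starts with `difficulty` zero bits
                                                fn mint_stamp(recipient: &str, difficulty: u32) -> u64 {
                                                    let mut counter = 0u64;
                                                    loop {
                                                        let digest = Sha256::digest(format!("{recipient}:{counter}").as_bytes());
                                                        if leading_zero_bits(&digest) >= difficulty {
                                                            return counter;
                                                        }
                                                        counter += 1;
                                                    }
                                                }

                                                fn main() {
                                                    // cheap to verify, costly (per message) to produce
                                                    let stamp = mint_stamp("someone@example.com", 20);
                                                    println!("stamp: {stamp}");
                                                }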

                                                Of course that won’t stop spam for good, but I think solving it completely is pretty much impossible. However you can raise the cost and you can educate people thereby working on making it unprofitable even with small margins. Sometimes I wonder whether spam blockers are even a disservice at times. When people are good with handling spam, profitability sinks. And there’s certainly trends in spam, because some approaches don’t work so well anymore.

                                                I am not saying that it will all get better; Gmail and others not displaying the sender’s address, or having it grey on white and tiny, isn’t exactly helping. At the same time I do think that “digital natives” are going to shrink email and maybe also telephone spam.

                                              1. 15

                                                It’s a grandiose extrapolation of what Rust was supposed to be. There is a grain of truth in there — Rust really was supposed to take established post-ALGOL academic research and make it a practical, pragmatic systems programming language.

                                                You can get a better, straightforward explanation straight from the author: http://venge.net/graydon/talks/intro-talk-2.pdf

                                                1. 4

                                                  That PDF is funny, because I don’t think a single code example can be compiled on the Rust which became 1.0. It does however explain the history of how Rust came about.

                                                  1. 21

                                                    Rust explored a lot of designs before 1.0. It tried to have GC pointers, Erlang-like tasks, Golang-like green threads. When people wonder “why won’t Rust just add feature X?”, surprisingly often the answer is “Rust 0.x had it, and decided not to keep it”.

                                                    1. 2

                                                      Classes and interfaces is my favourite feature here.

                                                1. 2

                                                  I think this post maybe misses a reason for doing panic.

                                                      I tend to prefer crashing over recovering. However, expect is usually more useful than unwrap because it provides context. In my production code my “standard” is:

                                                  1. panic = 'abort'
                                                  2. keep assert
                                                  3. keep debug symbols
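
                                                      A tiny sketch of the expect-for-context point (my example; the path is made up): the message ends up in the panic output, which is what you actually get to read when you prefer crashing over recovering.

                                                      use std::fs;

                                                      fn main() {
                                                          // unwrap() would only tell you that *something* was Err;
                                                          // expect() records why this was assumed to be fine
                                                          let config = fs::read_to_string("/etc/myapp/config.toml")
                                                              .expect("config file should exist and be readable at this point");
                                                          println!("{} bytes of config", config.len());
                                                      }
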
                                                  1. 7

                                                        This is honestly the only thing that’s been holding me back from making anything in Rust. Now that it’s going into GCC there’s probably going to be a spec and hopefully slower and more stable development. I don’t know what’s going to come after Rust but I can’t find much of a reason not to jump ship from C++ anymore.

                                                    1. 33

                                                      I doubt a new GCC frontend will be the reason a spec emerges. I would expect a spec to result from the needs of the safety and certification industry (and there already are efforts in that direction: https://ferrous-systems.com/blog/ferrocene-language-specification/ ) instead.

                                                      1. 15

                                                        Thanks for highlighting that. We’re well on track to hit the committed release date (we’re in final polish, mainly making sure that the writing can be contributed to).

                                                      2. 6

                                                          hopefully slower and more stable development

                                                          As per usual, slower and more stable development can be experienced by using the version of Rust in your OS instead of whatever bleeding-edge version upstream is shipping…

                                                        1. 1

                                                          Unless one of your dependencies starts using new features as soon as possible.

                                                          1. 4

                                                            Which is the exact same problem even when using GCC Rust, so it’s not really a relevant argument.

                                                            1. 4

                                                              Stick with old version of dependency?

                                                              1. 21

                                                                  Let’s be honest, Rust uses an evergreen policy, the ecosystem and tooling follow it, and fighting it is needless pain.

                                                                I still recommend to update the compiler regularly. HOWEVER, you don’t have to read the release notes. Just ignore whatever they say, and continue writing the code the way you used to. Rust keeps backwards compatibility.

                                                                Also, I’d like to highlight that release cadence has very little to do with speed of language evolution or its stability. Rust features still take years to develop, and they’re just released on the next occasion. This says nothing about the number and scale of changes being developed.

                                                                  It’s like complaining that a pizza cut into 16 slices has too many calories, and you’d prefer it cut into 4 slices instead.

                                                                1. 2

                                                                  The time it takes to stabilize a feature doesn’t really matter though if there are many many features in the pipeline at all times.

                                                                  1. 10

                                                                    Yup, that’s what I’m saying. Number of features in the pipeline is unrelated to release frequency. Rust could have a new stable release every day, and it wouldn’t give it more or less features.

                                                                2. 3

                                                                  Do that, and now you’re responsible for doing security back-ports of every dependency. That’s potentially a lot more expensive than tracking newer releases.

                                                                  1. 13

                                                                    So then don’t do that and track the newer releases. Life is a series of tradeoffs, pick some.

                                                                    It just seems like a weird sense of entitlement at work here: “I don’t want to use the latest version of the compiler, and I don’t want to use older versions of dependencies because I don’t want to do any work to keep those dependencies secure. Instead I want the entire world to adopt my pace, regardless of what they’d prefer.”

                                                                    1. 1

                                                                      The problem with that view is that it devalues the whole ecosystem. You have two choices:

                                                                      • Pay a cost to keep updating your code because it breaks with newer compilers.
                                                                      • Pay a cost to back-port security fixes because the new version of your dependencies have moved to an incompatible version of the language.

                                                                      If these are the only choices then you have to pick one, but there’s always an implicit third choice:

                                                                      • Pick an ecosystem that values long-term stability.

                                                                      To give a couple of examples from projects that I’ve worked on:

                                                                      FreeBSD maintains very strong binary compatibility guarantees for C code. Kernel modules are expected to work with newer kernels within the same major revision and folks have to add padding to structures if they’re going to want to add fields later on. Userspace libraries in the base system all use symbol versioning, so functions can be deprecated, replaced with compat versions, and then hidden for linking by new programs. The C and C++ standards have both put a lot of effort into backwards compatibility. C++11 did have some syntactic breaks but they were fairly easy to mechanically fix (the main one was introducing user-defined string literals, which meant that you needed to insert spaces between string literals and macros in old code) but generally I can compile 10-20-year old code with the latest libraries and expect it to work. I can still compile C89 code with a C11 compiler. C23 will break C89 code that relies on some K&R features that were deprecated in 1989.

                                                                      Moving away from systems code and towards applications, GNUstep uses Objective-C, which uses late binding by default and (for the last 15 years or so) even extends this to instance variables (fields) in objects, so you don’t even have an ABI break if a library adds a field to a class that you subclass. Apple has been a bit more aggressive about deprecating things in their OpenStep implementation (Cocoa), but there are quite a few projects still around that started in 1988 as NeXTSTEP apps and have gradually evolved to be modern macOS / iOS apps, with a multi-year window to fix the use of features that were removed or redesigned in newer versions of Cocoa. You can still compile a program with XCode today that will run linked against a version of the Cocoa frameworks in an OS release several years old.

                                                                      The entitlement that you mention cuts both ways. If an ecosystem is saying ‘whatever you do, it’s going to be expensive, please come and contribute to the value of this ecosystem by releasing software in it!’ then my reaction will be ‘no thanks, I’ll keep contributing to places that value long-term stability because I want to spend my time adding new features, not playing catch up’.

                                                                      LLVM has the same rapid-code-churn view of the world as Rust and it costs the ecosystem a lot. There are a huge number of interesting features that were implemented on forks and weren’t able to be upstreamed because the codebase has churned so much underneath it that updating was too much work for the authors.

                                                            2. 3

                                                                Corroding codebases! This was my reason too for not switching from C++. Only last week I was thinking of dlang’s -betterC for my little “system programming” projects. It is now hard to ignore Rust. Perhaps after one last attempt at learning ATS.

                                                            1. 9

                                                                I’m really excited about releases like these. It’s just “finishing” the language and not adding any new features conceptually, just making it all more cohesive. The lack of an enum Default derive is one of those things I run into rarely, but adding it now reduces the conceptual overhead of Rust, since deriving Default now works on both enums and structs.

                                                              I also think this shows the benefit of Rust’s release model, which is that smaller fixes and stabilizations can be made over time, and don’t have to be part of a larger release. I’m curious how the potential Rust stabilization in GCC affects things, especially when smaller fixes in a release like this might be nice in older versions of Rust (and as far as I know GCC is targeting an older Rust version).
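
                                                                Concretely, the new bit is that #[derive(Default)] now works on enums too, with #[default] marking the variant to use (a small sketch):

                                                                // deriving Default now works on enums as well as structs
                                                                #[derive(Debug, Default)]
                                                                enum Theme {
                                                                    #[default]
                                                                    Light,
                                                                    Dark,
                                                                }

                                                                #[derive(Debug, Default)]
                                                                struct Settings {
                                                                    theme: Theme,   // Theme::Light
                                                                    font_size: u32, // 0
                                                                }

                                                                fn main() {
                                                                    println!("{:?}", Settings::default());
                                                                }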

                                                              1. 10

                                                                Rust has a fixed 6-week release train model. Nobody decides which release is going to be small or not. When stuff is ready, it lands.

                                                                  Once in a while a large feature that took years to develop lands, and people freak out and proclaim that Rust is changing too fast and rushing things, as if it invented and implemented every feature in 6 weeks.

                                                                  In this release: the cargo add feature request was filed 8 years ago. The implementation issue was opened 4 years ago. It waited for a better TOML parser/serializer to be developed, and once that happened, the replacement work started 5 months ago.

                                                                1. 2

                                                                  a better TOML parser/serializer

                                                                  This piques my interest. What library is this? Does it maintain comments/formatting?

                                                                  1. 3

                                                                    The crate is toml_edit, and it does preserve comments and (most) formatting.

                                                                    Maybe something to format Cargo.toml files could be helpful as well?
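
                                                                      For anyone curious, a minimal sketch of the edit-in-place usage, based on my reading of the toml_edit docs (the API details may differ between versions):

                                                                      use toml_edit::{value, Document};

                                                                      fn main() {
                                                                          let manifest = r#"
                                                                      # keep this comment
                                                                      [dependencies]
                                                                      serde = "1.0" # and this one
                                                                      "#;

                                                                          let mut doc: Document = manifest.parse().expect("valid TOML");
                                                                          doc["dependencies"]["anyhow"] = value("1.0");

                                                                          // untouched comments and formatting survive the round-trip
                                                                          print!("{doc}");
                                                                      }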

                                                                2. 1

                                                                  I’m curious how the potential Rust stabilization in GCC affects things

                                                                  What do you mean? The gcc-rs project? I’d hope it doesn’t affect mainline Rust at all.

                                                                  1. 2

                                                                    Yeah the gcc-rs project. I wonder about certain stabilizations in later versions of Rust which are very easily added to earlier versions of Rust built by gcc-rs. I don’t think it will affect mainline Rust, but if certain nice-to-haves, or more importantly unsound fixes, are backported for gcc-rs that could cause an unfortunate schism in the ecosystem.

                                                                    I haven’t seen or heard of anything indicating this might happen, but with multiple implementations in-use I do think it is something that will eventually occur (especially for safety-related and unsound concerns)

                                                                  2. 1

                                                                    I have never liked the idea of a Default trait or typeclass. Default with respect to what operation? Most times people want defaulting, they seem to have some kind of monoidal operation in mind.

                                                                    1. 10

                                                                      Default with respect to what operation?

                                                                      Initialization, right? I don’t see how one would use the trait for any other operation. To me it seems quite natural that a number has a “reasonable default” (0), as does a string (""). It’s not like the language forces you to use Default in case you have other defaults in mind.
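
                                                                      For instance (a trivial sketch of my own): Default is just the value you get when you don’t care to specify one, and it composes through derives.

                                                                      #[derive(Debug, Default)]
                                                                      struct Config {
                                                                          name: String,  // ""
                                                                          retries: u32,  // 0
                                                                          verbose: bool, // false
                                                                      }

                                                                      fn main() {
                                                                          assert_eq!(u32::default(), 0);
                                                                          assert_eq!(String::default(), "");

                                                                          // initialize from the field defaults, overriding only what matters
                                                                          let cfg = Config { retries: 3, ..Default::default() };
                                                                          println!("{cfg:?}");
                                                                      }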

                                                                  1. 6

                                                                    In Rust, sometimes I feel dirty when I shadow an outer variable in an inner scope. But this syntax basically takes that and runs with it. No shame! :)

                                                                    1. 5

                                                                      It almost looks like flow-based type checks:

                                                                      // animation is nullable here
                                                                      if animation {
                                                                          // animation is non-nullable here
                                                                      }
                                                                      
                                                                      1. 2

                                                                          Yeah. I think “if animation” reads a bit better English-wise than “if let animation”. But I guess that let reinforces that we are indeed doing a new binding, so maybe it’s better for clarity.

                                                                        if let Some(foo) = foo {

                                                                        Do we need a shorthand here I wonder? I guess in some respects it’s more elegant that this is not a special case and I can just as easily plug in my own type in the deconstruction.

                                                                    1. 18

                                                                      /me raises hand.

                                                                            I do need an “ascii gui” platform which I can use to build my personal workflow on top of over the years. But Emacs doesn’t feel good enough to invest significant time into:

                                                                            • architecturally, it’s pretty horrible with asynchrony – things routinely block the GUI. I think this got better recently with the introduction of the new async API, but I don’t think it’s possible to properly retro-fit non-blocking behavior throughout the stack
                                                                            • it’s pretty horrible with modularity – I want to pull other people’s code as self-contained, well-defined modules. The VS Code extension marketplace is doing a much better job at unleashing uncoordinated open-source creativity. I feel Emacs kits like spacemacs are popular because they solve a “composition” problem left unsolved in the core
                                                                            • it’s pretty horrible with modularity – elisp’s single global mutable namespace of things is good for building your tool by yourself, but is bad for building a common tool together
                                                                            • it’s pretty horrible with modularity and asynchrony – poorly behaved third-party code can easily block the event loop (cf. with VS Code, where extensions run in a separate process, and the host/plugin interface is async by construction)
                                                                            • elisp is a pretty horrible language – modularity, performance, lack of types, familiarity
                                                                            • extension APIs make complex things possible, but they don’t make simple things easy. To compare with VS Code again, its well-typed and defined APIs are much easier to wrap your head around: https://github.com/microsoft/vscode/blob/main/src/vscode-dts/vscode.d.ts.
                                                                            • Emacs user features make complex things possible, but they don’t make simple things easy. 90% of configuration should be editing a config file with auto-completion. Emacs customize is overly complicated and unusable, so you end up writing code for 100% of config, not 10%.

                                                                      Still, Emacs is probably closer to a good platform than anything else I know :-)

                                                                      1. 5

                                                                              it’s pretty horrible with modularity – I want to pull other people’s code as self-contained, well-defined modules. The VS Code extension marketplace is doing a much better job at unleashing uncoordinated open-source creativity.

                                                                        I think a big part of this, though, is that compared to Emacs, the extension framework in VS Code isn’t very powerful (i.e., it exposes much fewer internals), a fact that has led to bad performance for widely-used extensions which had to be solved by bringing them into core. If all you want to do is install extensions, the current Emacs package system works well enough. I do agree about the lack of namespaces; honestly, 90% of the things you mention would be fixed if Emacs were simply implemented in Common Lisp (including both modularity and asynchrony). The problem being that there is too much valuable code in Emacs Lisp.

                                                                        1. 4

                                                                                it’s pretty horrible with modularity – elisp’s single global mutable namespace of things is good for building your tool by yourself, but is bad for building a common tool together

                                                                          On the other hand, having all of Emacs itself be available is part of what makes it such a malleable substrate for one to build their own, customized editor out of Emacs. Having things locked down, isolated contexts, all that good “modern” stuff would certainly be nicer from a software engineering perspective, maybe make it easier to write reusable packages (although there are an awful lot of packages out there now), but would, I fear, kill what is so powerful about Emacs. The ability to introspect on just about every aspect of what the editor is doing, see the code, hook into it and change it is nearly unique and is so powerful on its own that (for good and ill) most else falls by the way-side.

                                                                          1. 3

                                                                                  I kind of wish Emacs Lisp had a module system that worked like CL packages. Everything the package exports, and which is intended for other user code to interact with, you access with my-package:my-variable. But you can still get at all the internals with my-package::my-variable.

                                                                            1. 2

                                                                                    Honestly, Emacs would be strictly better were it just written in Common Lisp. Sadly, RMS doesn’t like it, so Emacs Lisp stayed its own separate dialect.

                                                                              But hey, at least it’s a Lisp!

                                                                          2. 2

                                                                                  poorly behaved third-party code can easily block the event loop (cf. with VS Code, where extensions run in a separate process, and the host/plugin interface is async by construction)

                                                                            This works rather well in practice. There have been a couple of bugs in vsc (fixed quite quickly) over the last few years where something caused the event loop to lock up because of the main thread getting blocked, and it’s actually surprising when it happens because it’s rare.

                                                                            1. 1

                                                                                I do need an “ascii gui” platform

                                                                              What would the ideal such platform look like?

                                                                              1. 4

                                                                                It’s hard!

                                                                                I don’t think I am capable of designing one, but I have some thoughts.

                                                                                  1. We need some model for character-grid based GUI. Sort of like how html + dom + event handlers allow programming the web, but simpler and with the focus on rectangular, keyboard-driven interactions. While the primary focus should be the character grid, this thing should support, e.g., displaying images for the cases where you need that.

                                                                                  2. With the GUI model in place, we should implement the core of the application, the event loop. “Nothing blocks the GUI” should happen there, extension management and isolation should happen there, window/frame management should happen there.

                                                                                  3. Given the core, we should implement some built-in components which provide the core value, scaffold for extensions, and in general dictate the feel of the platform. I think this should be an editor, a shell, and a window/frame/buffer manager. And “shell” here is not the traditional shell + emulator of an ancient physical device, but rather a shell built holistically, from first principles, like the thing that the Arcan folks are doing.

                                                                                  4. In parallel, there should be a common vocabulary of “widgets” for user interaction. Things like a command palette, magit-style CLI-flavored GUI dialogs, completion popups, configuration flows.

                                                                                  5. These all should be implemented on top of some sane software component model, with physical separation between components. For a scalable, community-driven, long-lived system I think stability should take preference over the ability to monkey-patch internals, so we really want something like OS + processes + IPC here, rather than a bunch of libraries in the same address space. The good news is that WebAssembly is, I think, building exactly the software component model which we need here. WebAssembly components are still a WIP though.

                                                                                  6. A non-insignificant part of the component model is the delivery mechanism. I think what Deno does might work! Otherwise, some centralized registry of packages is also a tested solution.

                                                                                7. Another important aspect of component model is stability culture. Here I think two-tier model makes sense – core modules are very pedantic about their interfaces, and essentially do vscode.d.ts thing – perpetually stable, thoroughly documented and reified-in-a-single file interface. Modules you pull from the Internet can have whatever guarantees their authors are willing to provide.

                                                                                  8. Hope this system doesn’t get obsoleted by the world of user-facing computing finally collapsing into the “everything is HTML” black hole.

                                                                            1. 11

Another hurdle to learning Rust is how disjoint the sync vs async experience is. A lot of teaching material presupposes sync Rust, but the patterns you learn for sync are only half applicable to async.

                                                                              I love Rust, but more specifically, I love sync Rust. I’m reluctantly pushed into async every day because a lot of crates we rely on have opted to go the async route. This makes it harder for me to bring the rest of the team along.
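
As a small illustration of how the two halves diverge (a sketch, assuming tokio for the async side): the same TCP read uses a different trait, a different call style, and the async version needs an executor to drive it.

    // Sync: std only, blocking calls, the `Read` trait.
    fn fetch_sync() -> std::io::Result<Vec<u8>> {
        use std::io::Read;
        let mut stream = std::net::TcpStream::connect("example.com:80")?;
        let mut buf = Vec::new();
        stream.read_to_end(&mut buf)?;
        Ok(buf)
    }

    // Async: needs a runtime (tokio assumed here), the `AsyncReadExt` trait, and .await.
    async fn fetch_async() -> std::io::Result<Vec<u8>> {
        use tokio::io::AsyncReadExt;
        let mut stream = tokio::net::TcpStream::connect("example.com:80").await?;
        let mut buf = Vec::new();
        stream.read_to_end(&mut buf).await?;
        Ok(buf)
    }

Neither function can call the other directly: the sync one would block the executor, and the async one does nothing until something polls it.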

                                                                              1. 5

                                                                                Do you have any examples of crates pushing you into using async where you otherwise wouldn’t? Not to cast doubt on your experience, I just haven’t noticed this yet and I’m curious what kind of libraries it affects.

                                                                                1. 3

                                                                                  Sure. Latest was rusoto_dynamodb and webrtc-rs. I believe all of the rusoto AWS api crates are async. webrtc-rs is ported from Go, so async is closest to the original code.

                                                                                2. 4

I’m very sad about Rust going with async/await and splitting the library ecosystem in two. I get that javascript/node pretty much has no other choice, and Python should’ve shown how bad it can be. I think Rust is great overall, but I wish for a Go-like experience of not having to worry about async. I get that Rust wants to offer the option of not having a runtime, but for most use cases having the Go runtime take care of it is bliss. Sometimes I also wish for a GC instead of the borrow checker, but I guess a GC couldn’t give the same assurances about safe concurrency that the borrow checker can. I really like the typing / error handling. Wish Go would adopt a more modern type system in addition to generics.
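
One partial escape hatch, if an async-only crate is the only thing pulling you into async, is to confine the runtime to a small blocking wrapper and keep the rest of the code sync. A minimal sketch, assuming tokio as the runtime; `some_async_crate::fetch` is a hypothetical stand-in for whatever async API you are stuck with:

    // A sync facade over an async-only dependency, so callers never see a Future.
    fn fetch_blocking(url: &str) -> std::io::Result<Vec<u8>> {
        // Building a runtime per call is wasteful; a real program would create one
        // once and reuse it.
        let rt = tokio::runtime::Runtime::new()?;
        rt.block_on(some_async_crate::fetch(url)) // hypothetical async API
    }

It doesn’t fix the ecosystem split, but it does keep the async surface area to one function instead of coloring the whole call graph.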

                                                                                1. 2

                                                                                  I love this article (and the previous in the series). I really hope Rust will pick up on the provenance experiment and lead the way here.

                                                                                  1. 5

                                                                                    I’m not against people using SML, but I feel like the language the post author might have wanted but for some reason missed is the other ML—OCaml. ;)

                                                                                    • It has all the features of SML since they share common origin, but it’s evolving and has many useful features that SML doesn’t (polymorphic variants, local module opens, monadic let operators…).
• MLton is a very slow compiler (it’s whole-program optimizing, which as a side effect also makes it slow) and it lacks a REPL, which is why people often use a second compiler for experimentation, e.g. SML/NJ, which is interactive but can’t produce binaries. OCaml bootstraps itself in ~10 minutes, yet the binaries it produces are neither slow nor bloated. It also has a built-in bytecode target and a REPL.
• SML’s packaging is still in its infancy, while OPAM has long been the standard way to install OCaml itself and its packages. There are quite a lot of packages, too.
                                                                                    • There’s a musl compiler flavor readily available for building eternal Linux binaries.
                                                                                    • There’s also a mature tool for trans-compiling to JS that can make a JS version of OCaml’s REPL.

I can also personally attest to it being at least 20% of a 100-year programming language. The Lua-ML project has been dormant for almost two decades. It also makes rather intricate use of the module system and functors to make the standard library of the Lua interpreter reconfigurable.

                                                                                    When I set out to resurrect it, there were very few things I had to update for modern compiler versions, mostly changing String to Bytes when a mutable string was intended. I don’t remember any runtime errors caused by a new compiler. In the compiler, that change was preceded by a years-long deprecation, too.

Generally, both compiler maintainers and library authors take compatibility very seriously. Modules removed from the standard library become OPAM packages. Many third-party library packages have the upper bounds of their dependencies open rather than pinned, and that rarely fails.

                                                                                    Of course, it’s not a perfect language or a perfect ecosystem, but to me it’s quite close.

                                                                                    1. 2

                                                                                      but it’s evolving and has many useful features that SML doesn’t (polymorphic variants, local module opens, monadic let operators…).

                                                                                      I think OP would consider this an anti-feature. The language should NOT evolve. The longer it’s been static the better :)

                                                                                      1. 1

                                                                                        It’s evolving in a backward-compatible manner, so I see no problem with it. Besides, one can always opt out of further evolution by vendoring a specific compiler. ;)

                                                                                        There are tools in the OPAM repo for preparing a tarball with everything including the compiler to reproduce a program build from scratch, so it’s pretty easy to make an “eternal snapshot” of your dependencies.

                                                                                    1. 2

I think you just made the case for WebAuthn look better :) Regardless, very nice analysis!

                                                                                      1. 5

                                                                                        Care to elaborate?

                                                                                        I think the major downsides of WebAuthn for me are:

1. A “regular user” can’t back up or transfer their credentials. Logging in with a new device is a disaster.
                                                                                        2. IIUC no mutual authentication support. (Although maybe this can be added as I would like to do with passwords).
                                                                                        1. 2

A “regular user” can’t back up or transfer their credentials. Logging in with a new device is a disaster.

That’s not currently easy but it’s not a limitation of WebAuthn. Logging in with a new device, for most WebAuthn services I’ve used, means logging in with one of my existing devices and pressing the ‘allow this new device’ button. You could make that completely transparent with some QR-code magic: if you’re logged in, show a QR code that contains a single-use URL that adds a new device to the logged-in account. If you’re not logged in, show a QR code that contains a single-use URL that requires you to log in with an already-authorised device and signs you in.
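
A minimal sketch of the server side of that single-use-URL idea (purely illustrative: the rand crate is assumed for token generation, and all names and the example.com URL are invented):

    use std::collections::HashMap;
    use std::time::{Duration, Instant};

    // Pending device-authorisation tokens, each tied to an account and an expiry.
    struct PendingTokens {
        tokens: HashMap<String, (String, Instant)>, // token -> (account_id, expires_at)
    }

    impl PendingTokens {
        // Mint a single-use token and return the URL to embed in the QR code.
        fn issue(&mut self, account_id: &str) -> String {
            use rand::Rng;
            let raw: [u8; 32] = rand::thread_rng().gen();
            let token: String = raw.iter().map(|b| format!("{:02x}", b)).collect();
            let expires_at = Instant::now() + Duration::from_secs(300);
            self.tokens.insert(token.clone(), (account_id.to_string(), expires_at));
            format!("https://example.com/authorize-device?token={}", token)
        }

        // Redeeming the token consumes it, so the URL really is single-use.
        fn redeem(&mut self, token: &str) -> Option<String> {
            let (account, expires_at) = self.tokens.remove(token)?;
            if Instant::now() <= expires_at { Some(account) } else { None }
        }
    }
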

                                                                                          The flow that you want is not to transfer your credentials because your credentials should always live in a tamper-proof hardware token (TPM, U2F device, enclave on the phone, whatever), it’s to transfer authorisation.

The big problem with WebAuthn is in the opposite direction: revocation. If my phone is stolen, hopefully its credential store is protected by biometrics and something that rate-limits (with exponential back-off) attempts to unlock the device with spoofed biometrics, so it’s probably fine, but there’s no good way (yet) of saying ‘revoke my phone’s WebAuthn credentials for every single service that I can sign on with’.

                                                                                          IIUC no mutual authentication support. (Although maybe this can be added as I would like to do with passwords).

                                                                                          As I understand the WebAuthn flow, this is not quite true. With a password, if I provide my password to a phishing server, it is compromised. With WebAuthn, there is a challenge-response protocol. A phishing site must intercept a valid challenge from the site that it is trying to attack and ask me to sign it. My signature will use a key derived from a local secret and the domain that I am talking to. If I get a challenge that originated from example.com but that was forwarded by evilxample.com that is trying to phish me, then I will sign it with a key that is SomeKDF(evilxample.com, myDeviceSecret). The signature will then not match what example.com expects. The MITM attempt then fails. I don’t get a direct failure indication, but the phishing site can’t then show me anything that I expect to see because it has failed to log in to the site that it is trying to MITM my connection to.

Note that I already get the authentication in the opposite direction from TLS; the problem is not at the protocol level, it’s at the UI level: users can’t tell the difference between example.com and example.phish.com. WebAuthn should at least mean that if they log into example.phish.com, the intercepted credentials can’t be used to log into example.com.
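
A toy model of that domain-binding argument, following the “key derived from a local secret and the domain” description above. This is emphatically not the real WebAuthn/CTAP crypto (which uses public-key signatures over client data that includes the origin); it only uses std hashing to show why a challenge relayed through a phishing domain fails verification:

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Stand-in for SomeKDF(domain, deviceSecret): NOT cryptography, just illustration.
    fn derive_key(domain: &str, device_secret: u64) -> u64 {
        let mut h = DefaultHasher::new();
        domain.hash(&mut h);
        device_secret.hash(&mut h);
        h.finish()
    }

    // Stand-in for signing a challenge with the derived per-site key.
    fn sign(challenge: u64, key: u64) -> u64 {
        let mut h = DefaultHasher::new();
        challenge.hash(&mut h);
        key.hash(&mut h);
        h.finish()
    }

    fn main() {
        let device_secret = 0x5eed;
        let challenge = 42; // issued by example.com, relayed by the phishing site

        // The user is actually on the phishing domain, so the authenticator
        // derives the per-site key from "evilxample.com", not "example.com".
        let phished_sig = sign(challenge, derive_key("evilxample.com", device_secret));

        // example.com verifies against the key it expects for its own origin.
        let expected_sig = sign(challenge, derive_key("example.com", device_secret));

        // The relayed signature doesn't verify, so the MITM login attempt fails.
        assert_ne!(phished_sig, expected_sig);
        println!("phished signature rejected: {}", phished_sig != expected_sig);
    }
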

                                                                                          1. 5

                                                                                            The flow that you want is not to transfer your credentials because your credentials should always live in a tamper-proof hardware token (TPM, U2F device, enclave on the phone, whatever), it’s to transfer authorisation.

The problem here is that it is a disaster to transfer all of these authorizations when you get a new device. And if you miss one before getting rid of your old devices you are now locked out of your accounts. (Unless there are backup authorization options of course.) This is why I only use 2FA on a tiny number of services. Managing which keys are enrolled in which services is a disaster even for a single-digit number of services. I can’t imagine a world where I had to do this for every login.

                                                                                            That is why the password is nice. I can sync my whole list of passwords or even write them down without needing to involve every service.

                                                                                            … With WebAuthn, there is a challenge-response protocol …

                                                                                            I think you are confusing phishing resistance with mutual authentication.

                                                                                            Note that I already get the authentication in the opposite direction from TLS

Yes, I agree that for most use cases one direction of auth provided by TLS and one direction of auth provided by whatever else is sufficient. However it is always nice not to need to rely on the PKI. Things are getting better with certificate transparency and pinning, but the sad fact is still that for any site you haven’t visited before there are dozens of third parties who have the authority to validate that connection. Reducing that set to only the site I want to talk to is an absolutely huge improvement.

                                                                                            1. 1

This is why I only use 2FA on a tiny number of services. Managing which keys are enrolled in which services is a disaster even for a single-digit number of services. I can’t imagine a world where I had to do this for every login.

                                                                                              Because of this, I wrote my own TOTP app that syncs the 2FA with all my devices. Which is totally wrong if you ask any security expert – the TOTP should be one way.

Personally I consider it an acceptable risk, because I ensure I don’t sync the passwords via the same mechanisms as the TOTP. If one sync is hacked, it doesn’t affect both parts of the 2FA.

                                                                                              1. 4

This is why I hate 2FA with TOTP: it loads most of the security obligations onto the user (while having the server store the shared secret in clear text).
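
For context on why the server has to keep that secret recoverable: both sides compute the same HMAC over the current 30-second counter, so there is nothing to hash-and-forget the way there is with a password. A rough sketch of the RFC 6238 computation (assuming the hmac and sha1 crates; not production code):

    use hmac::{Hmac, Mac};
    use sha1::Sha1;
    use std::time::{SystemTime, UNIX_EPOCH};

    // Both the authenticator app and the server run exactly this, with the same
    // plaintext shared secret, which is why the server can't store only a hash of it.
    fn totp(shared_secret: &[u8], unix_time: u64) -> u32 {
        let counter = unix_time / 30; // 30-second time step
        let mut mac = Hmac::<Sha1>::new_from_slice(shared_secret).expect("any key length works for HMAC");
        mac.update(&counter.to_be_bytes());
        let digest = mac.finalize().into_bytes();

        // Dynamic truncation from RFC 4226: take 4 bytes at an offset given by the
        // low nibble of the last byte, mask the sign bit, keep 6 decimal digits.
        let offset = (digest[19] & 0x0f) as usize;
        let code = u32::from_be_bytes([
            digest[offset], digest[offset + 1], digest[offset + 2], digest[offset + 3],
        ]) & 0x7fff_ffff;
        code % 1_000_000
    }

    fn main() {
        let secret = b"12345678901234567890"; // the RFC test secret
        let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
        println!("{:06}", totp(secret, now));
    }
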

                                                                                                1. 2

                                                                                                  I do the same. All of the OTP secrets go into my password manager.

                                                                                                  Most sites don’t even support multiple OTP devices and there is no way I’m getting locked out of sites because my phone was broken or stolen.

                                                                                            2. 1

                                                                                              What do you mean by mutual authentication? Is this not another word for “unphishable”?

                                                                                              1. 3

                                                                                                They are related and I think mutual authentication probably implies unphishable but not the other way around.

For example if I use a password manager I am basically unphishable because it doesn’t fill the password into the wrong site. But mutual authentication provides stronger validation of the site.

For example if you are using WebAuthn I think a phishing site can pretend to log you in without knowing anything about you, then ask for sensitive info. For example you go to g00gle.com, tap your authenticator and then it asks you for sensitive information. It seems legit because you just logged in. With mutual authentication, unless the site actually knows who you are and has pre-exchanged credentials, the browser will fail to perform the authentication and let you know. No website can pretend that you have an account.

                                                                                                Of course getting UX that makes this obvious to the average user is definitely not easy. Maybe we need a new padlock that shows that you are using a password-authenticated connection. And of course we need to make sure that they can’t make a “signup” form look like a “login” form and get that lock for a new account.

                                                                                                1. 1

With this explanation I believe the “no” on mutual authentication for TLS client auth is wrong. TLS client auth sends a signed record of the key exchange up to the auth message. So yes, the client doesn’t directly check the server cert, but that isn’t a problem because the auth message can only be used to authenticate this exact session. So even if you trick the user into authenticating against g00gle.com with the key for google.com, the attacker can’t do anything with it.

                                                                                                  1. 1

                                                                                                    I’m not sure, because the browser prompts you in its sacred area, and also if the domain is new it must go through an enrollment phase which looks distinctly different.

                                                                                                    1. 3

I tested and this doesn’t appear to be the case. The following code seems to result in a “login” dialog on any site when clicking the button, at least for my simple stateless security key (Yubikey Neo 4 IIRC). The request does actually fail, but at least on Firefox there is no user-visible indication of this. Maybe this UX could be added, and at least the site would need to know a valid key for the user.

<button id=b>Click me</button>
<script>
// Fill a buffer with cryptographically random bytes.
function rand(len) {
  let r = new Uint8Array(len);
  window.crypto.getRandomValues(r);
  return r;
}

b.onclick = function() {
  // Ask for an assertion with a random challenge and a made-up credential ID
  // that no authenticator can possibly hold.
  navigator.credentials.get({
    publicKey: {
      challenge: rand(32),
      allowCredentials: [{
        type: "public-key",
        id: rand(64),
      }],
    },
  });
};
</script>
                                                                                                      

                                                                                                      Live demo: https://cdpn.io/kevincox-ca/fullpage/rNpvGNy

                                                                                                      1. 3

                                                                                                        The request does actually fail but at least on Firefox there is no user-visible indication of this

I was able to verify this claim: indeed, Firefox doesn’t show any indication of it failing. However, if I test this in Chromium it does give a good error! Namely:

                                                                                                        Try a different security key. You’re using a security key that’s not registered with this website

Tracking this down, I figured out why Firefox doesn’t do the same. It’s caused by Firefox only implementing CTAP version 1 (aka U2F), which can’t detect this. CTAP 2, however, does support it, and Chromium implements that. You can see how it works by searching for CTAP2_ERR_NO_CREDENTIALS in the CTAP2 specification. See this bugzilla issue for the status of CTAP2 in Firefox. This also prevents username-less login from working in Firefox at the moment.

Given that, I think your mutual authentication criterion on WebAuthn can be dismissed. We might have to wait a bit longer for Firefox to catch up, but the specification is already stable.

                                                                                                        1. 1

It still isn’t clear to me exactly how unguessable these keys are expected to be. I guess they should be basically completely random, since they are public keys of a key pair, but it also depends on whether they are site-specific (I think they are supposed to be); otherwise you could do a 2-part phishing attack where you get the key ID from one site and use it on a different one.

                                                                                                          But it sounds like you are right. I can probably mark WebAuthn as mutual authentication.

                                                                                                          1. 2

Guessing public keys isn’t possible, but it also doesn’t matter, because servers don’t have a way to check whether a public key is valid. If you mean credential IDs instead, those are also unguessable; check the linked specification. And yeah, public keys and credential IDs are site-specific. WebAuthn takes great care to make it hard to track users between multiple websites. Check out this section of the specification, which goes into detail about this concern.

                                                                                                            If you have other questions or concerns about webauthn, I’d love to help. I spent a while with this specification (I worked on a server implementation) so I know it pretty well by now.

                                                                                                            1. 2

Thanks for the info! I did find the spec really hard to navigate and maybe skimmed it faster than I should have. I’ve updated the post to say that WebAuthn provides mutual authentication. It maybe isn’t quite as strong as a TLS connection with a PAKE, but I think it is close enough that for a simple comparison like in this post saying ✅ is more correct than ❌, and not really worth a footnote for the details.

                                                                                              2. 1

                                                                                                [removed accidental duplicate]

                                                                                              1. 6

                                                                                                source-based code coverage

                                                                                                I’m probably dense, but I don’t get it. I read the Rust release notes, I google the clang feature. Code coverage of what? Test coverage?

                                                                                                1. 18

It’s in the linked docs, but “coverage” here means “the code is executed.” It tells you where your dead code is, and can be summarized into four useful high-level statistics (a small example after the list shows how they differ):

                                                                                                  • Function coverage is the percentage of functions executed at least once.
                                                                                                  • Instantiation coverage is the percentage of function instantiations executed at least once (this is for generics and macro-generated functions).
                                                                                                  • Line coverage is the percentage of lines of code executed at least once.
                                                                                                  • Region coverage is the percentage of code regions executed at least once (there’s a particular definition of a “region” but basically it’s more granular than the function-level metrics).
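
To make the distinction concrete, here is a tiny hypothetical crate with a single test; the names are invented for illustration, and the numbers described below are what the metrics would roughly report, not output from an actual coverage run:

    // Hypothetical example to show how the four metrics differ.

    fn classify(n: i32) -> &'static str {
        if n >= 0 { "non-negative" } else { "negative" }
    }

    fn render<T: std::fmt::Display>(x: T) -> String {
        format!("value = {}", x)
    }

    fn never_called() -> String {
        render(1.5) // creates a render::<f64> instantiation that is never executed
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn it_works() {
            assert_eq!(classify(7), "non-negative"); // the "negative" region is never hit
            assert_eq!(render(7), "value = 7");      // only the i32 instantiation of render runs
        }
    }

Roughly: function coverage flags never_called (0 executions), instantiation coverage flags render::<f64> even though render as a function is “covered”, and line/region coverage flag the else branch of classify even though classify itself ran.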
                                                                                                  1. 3

Specifically to the question: it’s quite easy to record coverage in a binary (i.e. which instructions were executed): you add some instrumentation on each branch to record what the target was, and you can then generate a list of the basic blocks that were executed. You need some extra tooling to map these back to locations in the source code; this is what source-based code coverage means. Often it’s implemented by inserting the instrumentation in the front end (which has more relevant information for the coverage), though it’s also possible to use source locations in debug info to map from pure binary coverage info back to the source code. The latter approach is nice if you can make it work because it can be non-disruptive: if your CPU and OS support modern tracing functionality then they can generate basic-block-level traces for arbitrary binaries, and you can then map this back to the source code for the exact version that was executed.

                                                                                                    1. 1

                                                                                                      Thanks!