Threads for valpackett

  1. 2

    I’ve definitely had similar situations with C++, where MSVC and GCC were as useless as human brains while clang instantly made it obvious what the error was.

    1. 1

      What you actually want if you really want a “C style” linked list, esp. in an embedded setting, is intrusive-collections
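
A minimal sketch of what that looks like with the intrusive-collections crate (the Packet type and names here are made up for the example; Box is used for brevity, while an embedded/no_std build would typically use a non-owning pointer type such as the crate’s UnsafeRef):

use intrusive_collections::{intrusive_adapter, LinkedList, LinkedListLink};

// The link lives inside the node, C-style, instead of the list
// allocating separate next/prev cells for each element.
struct Packet {
    link: LinkedListLink,
    len: u32,
}

// Ties together the node type, its embedded link field, and the pointer type.
intrusive_adapter!(PacketAdapter = Box<Packet>: Packet { link: LinkedListLink });

fn main() {
    let mut queue = LinkedList::new(PacketAdapter::new());
    queue.push_back(Box::new(Packet { link: LinkedListLink::new(), len: 64 }));
    queue.push_back(Box::new(Packet { link: LinkedListLink::new(), len: 128 }));

    for pkt in queue.iter() {
        println!("packet of {} bytes", pkt.len);
    }
}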

      1. 1

        Can one claim that the three finger salute holds up better against XScreensaver and this bug? Does the fact that Windows is natively graphical play to its advantage?

        1. 2

I don’t know if it holds up better, but the screensaver runs on a separate desktop from the default desktop, which in turn is separate from the login screen’s desktop. These desktops are different from X11-style virtual desktops, and act as a sort of security boundary. The screensaver simply crashing will not lead to the desktop being switched IIUC, and calling SwitchDesktop from those secured desktops is guarded against.

          1. 1

            This seems similar to having the lock screen be on a different VT, say directly in the login manager – I think gdm should work like that these days…

        1. 3

          I think it probably would have been fine to implement an API where the service simply requested a random password and then stored it in the existing browser password sync.

          Question to lobsters: Would this have been fine? It sounds pretty fine.

          1. 6

No, it does not work. Webauthn etc. isn’t complicated simply for the hell of it.

First, we already have that: every browser supports random password generation; the biggest problem is sites blocking those passwords because of absurd rules - essentially you’re asking for something that browsers already do, except when the site actively breaks secure password use. From a lock-in perspective it’s essentially the same as webauthn: your logins are tied essentially to one password manager.

Further, a simple random key breaks if you ever have a MITM or XSS hole - because the password is tied to the site and is unchanging, either of these attacks leaks the secret, and the secret can be subsequently reused.

            The complexity of webauthn is a baseline requirement for an actual security credential system. The challenge-response handshake ties both ends of the connection together directly, preventing forwarding/proxy attacks (which already happen against existing 2fa systems). Both ends of the handshake verify the domain involved in the handshake, so an error on either end (xss, mitm, phishing, …) can be blocked by at least one party.

This blog post is basically uninformed nonsense that demonstrates a failure to actually understand what they’re complaining about, instead taking an “anything I don’t understand must be wrong” approach to security.

            Now the whole webauthn/fido/passkey system kind of brought some of this nonsense on itself by pushing “biometric” security so hard. Honestly the only thing that requiring user authentication defends against is a person who has already got physical access to your unlocked device, and even then that doesn’t have to be biometric.
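
To make the challenge-response point above concrete, here is a rough sketch of why an origin-bound assertion can’t be replayed or proxied the way a static password can. It is illustrative only: the names are made up, and a keyed hash stands in for the real signature purely to keep the example dependency-free (a real authenticator uses a per-site asymmetric keypair and the server verifies with the public key).

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for a signature over (challenge, origin). Real WebAuthn signs
// with a private key that never leaves the authenticator; a shared key is
// used here only so the sketch compiles without extra crates.
fn toy_sign(key: u64, challenge: u64, origin: &str) -> u64 {
    let mut h = DefaultHasher::new();
    (key, challenge, origin).hash(&mut h);
    h.finish()
}

struct Assertion {
    challenge: u64,
    origin: String, // filled in by the browser, not by the page
    sig: u64,
}

fn authenticator_assert(key: u64, challenge: u64, browser_origin: &str) -> Assertion {
    Assertion {
        challenge,
        origin: browser_origin.to_string(),
        sig: toy_sign(key, challenge, browser_origin),
    }
}

fn server_verify(key: u64, issued: u64, expected_origin: &str, a: &Assertion) -> bool {
    // The server checks its own freshly issued challenge and its own origin,
    // both of which are covered by the "signature".
    a.challenge == issued
        && a.origin == expected_origin
        && a.sig == toy_sign(key, a.challenge, &a.origin)
}

fn main() {
    let key: u64 = 0x5EED_F00D;    // stays on the authenticator
    let issued: u64 = 0xA1B2_C3D4; // fresh random challenge per login attempt

    // Normal login: the browser reports the real origin, the server accepts.
    let ok = authenticator_assert(key, issued, "https://example.com");
    assert!(server_verify(key, issued, "https://example.com", &ok));

    // Phishing proxy: the victim's browser reports the attacker's origin, and
    // that origin is bound into the assertion, so forwarding it fails.
    let phished = authenticator_assert(key, issued, "https://examp1e.com");
    assert!(!server_verify(key, issued, "https://example.com", &phished));

    // Replay: an old assertion fails against a new challenge.
    assert!(!server_verify(key, 0xDEAD_BEEF, "https://example.com", &ok));
    println!("origin-bound checks behave as expected");
}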

            1. 1

              The post gets way better towards the end, it just has some provocative things in the beginning :)

              your logins are tied essentially to one password manager

Thing is that those are way more cross-platform than the current (Apple and Google) passkey providers. But if password managers are allowed by the platforms to handle passkeys, that will be solved.

              1. 1

But if password managers are allowed by the platforms to handle passkeys, that will be solved.

                I was about to say that that would be an obvious step [and was what I intended to imply], but then I remembered capitalism :-/

But the more general issue is that if you have an actual HSM you absolutely do not want any way to extract the private key material from it. What you want is to ask the HSM to give you a handle for a given private key, then ask the HSM to decrypt or sign or what have you, providing the handle and the data to operate over. So any easy migration path runs into that - you can only support easy migration if you’re willing to take a significant security reduction vs what you could theoretically accomplish.

                1. 1

                  See my other comment — the thing with passkeys is they’re already that security reduction. They’re already “cloud” synced — but only within Apple or Google which is what feels unfair to users.

                  It’s not entirely unreasonable to argue that allowing third party apps to handle that instead is a further potential security reduction, however it’s also fair to argue for user choice over security in this specifically.

                  1. 1

                    the thing with passkeys is they’re already that security reduction.

The way syncing of secrets on Apple hardware works, at least, does not extract the raw key material out of the HSM - the material to be synced is encrypted by the HSMs to keys from the already-approved devices.

                    1. 1

                      I thought they might do something like that. But then, does that mean there’s no recovery from losing all devices at once? That’s not great :/

                      1. 1

The general security model for Apple’s end-to-end encrypted services is that loss of all devices is very close to loss of all e2e-encrypted data.

                        Now there is the “cloud key vault” (Matt Green has a good write up: https://blog.cryptographyengineering.com/2016/08/13/is-apples-cloud-key-vault-crypto/) which can recover enough info to recover normal encrypted iCloud data, but I’m not sure whether it contains info that can be used to recover synced keychain items (even just basic passwords).

              2. 1

It is quite easy to export passwords from one password manager and import them into another. There’s a CSV format that’s generally recognized across the industry.

                Personally I think the ability to do that is a baseline requirement of any sort of auth system. If your auth system precludes that, so much the worse for it.

                1. 1

                  Yup, and it’s also quite easy to leak passwords because of that.

Importantly, however, “protect user secrets at all costs” isn’t a requirement of webauthn, so there’s nothing stopping an implementation from providing such export functionality, assuming they believe the usability gain warrants the reduction in security.

              3. 1

If it can be MITM’d once, then you’re screwed. With private keys, that wouldn’t work.

                Also, for what it’s worth, the current JS API for webauthn evolved from a proposal for just the API being talked. The possibility for it is still there, but AFAIK every browser has moved on to only doing webauthn through it.

              1. 1

                I’m a little out of date on the details of WebAuthN… does this not allow for scratch codes that you could print out, keep in a safe, and migrate from one authenticator to another?

                1. 2

                  No, the keys are supposed to not be extractable at all. Backup codes are something a site needs to implement itself.

                  1. 2

                    My understanding was that there’s nothing preventing a pure software client aside from the RP not allowing an attestation chain they don’t recognize.

                    1. 1

                      It’s true that there’s no technical limitation preventing you from making an implementation that gives you a copy of the key, it’s just a really bad idea. Take crypto wallets for example. They give you a backup copy of the key and those get phished all the time.

                      1. 1

I was thinking of “bring your own cross-platform password manager” rather than exporting individual keys from a given manager. (Not to be confused with bulk exporting of keys)

                    2. 1

                      Sure. But the scheme still allows for backup codes that you can use to migrate in case you lose a device, right? With no more friction than TOTP, from the look of it?

                      I’m not sure I yet understand the (perceived) need to extract the keys as long as there’s a migration mechanism for accounts.

                      Edit to add: In a similar vein, I’ve never perceived the fact that I can’t extract keys from a Gemalto smart card as some attempt at lock-in on their part, for similar reasons.

                      1. 4

                        Those backup codes aren’t part of WebAuthn, but rather a different factor that the RP would have to implement in addition to WebAuthn. They are, by nature, weaker than WebAuthn and an additional exploit point – They’re phishable, copyable, visible to the OS, typically shorter, potentially not hashed server-side, etc.

In addition, you would have to do this for every RP that you’re registered with; a significant operational burden. Being able to back up an entire device would be once-per-device, and a concern of the authenticator rather than the RP.

                        1. 1

                          I’ve never perceived the fact that I can’t extract keys from a Gemalto smart card as some attempt at lock-in on their part

Well, people’s perception shifts once “the smartcard” becomes something as cloudy as “your whole Apple ecosystem”: when it becomes multi-device at all, having only devices from the same vendor included in “the smartcard” starts feeling unfair.

                          1. 1

                            The difference is that passkeys are meant to have billions of users. It’s going to be a rude awakening for a lot of ordinary people the first time they switch from iOS to Android or vice versa.

                      1. 3

I really like AWS for personal computing projects because it’s almost impossible to go above the free tier for that. 400,000 GB-seconds of Lambda compute a month? Psssh like I’m ever gonna go above 10,000

                        1. 3

Funny, I hated them for personal projects because they don’t support spending limits. I’m afraid of configuring something wrongly (even though I am careful) and, even with a spending alarm, running up more than I want to pay before I can react to the alarm.

                          1. 1

Funnily enough you can DIY this feature using… (surprise) Lambda.

                        1. 5

                          It looks similar to Kakoune, which I recently started using to try and get some of my coding workflow out of VS Codium and into a terminal. Can anyone compare/contrast Helix vs Kakoune? The homepage says it was heavily inspired by Kakoune’s design, but I’m curious why I’d pick Helix over Kakoune.

                          1. 10

                            I can speak to this a little. I’m a kakoune user who tried helix for a day or two. Helix is a more batteries-included editor. It incorporates LSP support into the editor itself instead of implementing it as a plugin (which kakoune does). It uses tree-sitter for syntax highlighting, which is more accurate than kakoune’s regex-based approach. It also is slightly more discoverable than kakoune, as pressing spacebar opens a command palette that eagerly tries to teach you all of the features. It’s written in Rust, and it works well on Windows (where kakoune struggles both in WSL and Cygwin).

However, I’m sticking with kakoune. I like the low dependency footprint (only a C++ compiler and stdlib at this point), fast build time, client-server architecture, easy extensibility, public domain licensing, and minimal design. Kakoune lets me plug it into my environment in a way of my choosing and wire it into my tools how I’d like. Helix is a more conventional editor with its own internal windowing, file browsing, etc. None of that is inherently bad, but I prefer the kakoune approach. YMMV.

                            1. 3

Adding to the other comment: Helix’s default keymap actually walks back slightly on Kakoune’s; I’ve had to bind some keys to get more selection operations in normal mode instead of select mode and make it more Kakoune-like

                            1. 2

                              Ooh, interesting! How does this approach compare to this bulk one?

                              upd: oh I see

                              ZSTD uses “LD4 u8 interleaved” but SIMDJSON uses the “Pairwise” approach whereas there was no consistent winner across a set of benchmarks

                              1. 4

                                Is this still true now that we have HTTP/2 and QUIC?

                                1. 2

                                  It says so in the article.

                                  1. 3

                                    Except the article seems to sort of handwave TLS away while in practice HTTP/2 only works over TLS…

                                    1. 1

                                      To be pedantic, http2-the-protocol works without tls. The spec does NOT require it. http2-as-implemented by most (all?) browsers requires it.

                                      1. 1

                                        That is precisely what I meant by “in practice” :)

                                        1. 1

                                          Whoops, I missed that :p

                                  2. 1

                                    This slow start algorithm occurs at the TCP level, which underlies HTTP, so I doubt that HTTP/2 or QUIC has any bearing on this.

                                    1. 19

                                      QUIC is on UDP

                                        1. 2

                                          Oh wow, I didn’t know that, thanks!

                                          1. 0

                                            In http3. Everything still sits on tcp in http2.

                                            1. 2

                                              HTTP/2 has nothing to do with QUIC and predates it by quite some years.

                                              1. 1

                                                I think you’re replying to the wrong person?

                                                1. 1

                                                  You are both technically correct, but it’s hard to understand what both of you are trying to convey apart from being technically correct.

                                                  1. 1

                                                    Yes, this thread was a disaster.

                                      1. 4

                                        I strongly recommend yubikey-agent

                                        Unless you really need compatibility with legacy servers that don’t support sk-* keys, you don’t need to use third-party agents to use a Yubikey.

                                        1. 15

                                          Forward Yubikey Agent

                                          I think the warning label on this advice was far too understated. You should only forward your local agent if you trust the admin of the remote machine with your private keys. Because when you forward your agent, root on the box you connect to can use your key as long as you’re connected.

                                          In lots of cases, that’s no problem. The admin of the remote box might be you anyway. Or they might be the person/entity who issued your Yubikey and associated it to your identity. But if you wouldn’t hand that remote admin (or anyone who escalated their privileges to admin on the remote machine) your yubikey and PIN, don’t forward your agent.

                                          1. 1

                                            Yubikeys should be configured to require touch on every operation, in which case the box wouldn’t be able to do operations “behind your back”. (You could still maybe be confused into allowing an unwanted operation but that’s… hopefully difficult enough.)

                                            1. 2

                                              Last time I set it up (I was using the PIV applet on the Yubikey, so that may make a difference) that boiled down to one of two cases:

                                              1. You didn’t require further touches once the agent was authorized and there were no touch requirements for the agent.

You did require those touches, and you were asked for touches without any discernible user action that triggered them, just in the course of normal operations (e.g. you’d transferred enough encrypted data that the ssh agent wanted to re-key), frequently enough that we were afraid requiring touch on every operation would train users to assume touching was the right thing to do no matter what.

                                              Maybe they’ve improved the agent recently, or maybe without using the PIV applet things are different.

                                              I still think this bit of advice needs a bigger warning label than the article gave it.

                                          1. 15

                                            Wasn’t Server Push added to the spec by Google? It’s one of those micro-optimizations that only make sense when you have google-level traffic.

                                            1. 23

                                              I think that describes all of HTTP/2.

                                              1. 13

Pretty much. The HTTP specs have been more or less taken over by Google and are adding features/functionality according to what Google wants/needs. Which is sad to me because the wire protocol has become far less debuggable and explorable than it used to be – I remember the days of doing telnet <host> 80 and typing in a raw HTTP request to learn how it worked (and doing the same to learn how email worked by putting together the HELO, etc.). With later HTTP versions you need tooling to generate even basic requests for you since it’s no longer a plain-text protocol.

                                                1. 8

                                                  The HTTP specs have been more or less taken over by Google and are adding features/functionality according to what Google wants/needs.

                                                  This isn’t true at least for HTTP/3 & QUIC, both of which have been worked on by far more than just Google. (QUIC has actually morphed significantly from the original Google version.)

                                                  1. 5

                                                    Hey I used a telnet mail 143 earlier this week!

                                                    I had the same reservations with HTTP/2. Implemented it anyway because Google said it was good for speed and SEO, discovered the speed gains were dubious, pre-load never actually helped, and it didn’t seem to improve SEO.

                                                    Ask me about AMP.

                                                    1. 3

                                                      AMP and Dart are proof that things don’t just succeed because Google pushes them. It needs to also clear some minimum bar of quality or else it will be rejected by the internet no matter how much Google pushes it.

                                                      1. 2

                                                        The nice thing about AMP is that I can just ignore it. However, my browser will be lugging around a useless layer of HTTP/2 support for a couple decades because it was a “standard”.

                                                        1. 2

                                                          HTTP/2 will likely remain in use for decades.

                                                          HTTP/3 is great but won’t replace HTTP/2, simply because it’s not always feasible to use anything non-TCP. Some network admins block UDP, there will always be hosting environments that can’t do anything other than TCP for reasons, and so on…

                                                    2. 4

                                                      For https: over HTTP/1.0 or HTTP/1.1, you can always do openssl s_client -connect www.google.com:443 and it works just like telnet did for port 80. For HTTP/2+, yes, you need specialized clients.

                                                      Edit: for clarity

                                                    3. 6

HTTP/2 has at least one very useful consequence for everybody: we don’t have to optimize for the number of simultaneous connections anymore. Removing code that was trying to cleverly bundle extra data onto unrelated requests has made a huge impact on the maintainability of our code at $WORK.

                                                      HTTP/3, though, is exactly what you say. Google making everyone’s life more complicated because they settled on a stupid business model, and drowned clients in ad code.

                                                      1. 6

                                                        HTTP3 is great for video delivery performance.

                                                        Actually it improves load times for just about everything, but losing TCP’s head-of-line blocking and replacing TCP’s slow-start and loss-recovery mechanisms with more suitable ones makes a real difference to the quality/latency/buffering-probability tradeoff for DASH and HLS, especially on mobile or otherwise unreliable connections.

                                                  1. 1

                                                    I just remembered another platform that was suffering from the “distributions overriding the default theme” problem.

Android. It was Android.

Before 4.x, which mandated an unmodified Holo theme, phone vendors (in their “distributions”) customizing the default theme was a hilarious trainwreck.

                                                    1. 9

                                                      Prompted by the current top story: The dangers of Microsoft Pluton - would this attack be mitigated by something like Pluton?

                                                      1. 10

                                                        Yes and no. The TPM measures the UEFI code and adds it to a PCR (basically a running hash of everything that’s been fed into it). This means that it would detect a modification of the UEFI code and, because the PCR value for the firmware doesn’t match, wouldn’t release the key for decrypting a BitLocker / LUKS-encrypted volume or any WebAuthn tokens or any other credentials stored in the TPM. There are two possible failure modes:

                                                        First, if the bootkit is installed before the first SecureBoot boot, then the keys will be released only if you boot with the compromised firmware and you’ll need to do the recovery thing to boot with the non-compromised version. If the malware is installed early on in the supply chain before you do the OS install, then Pluton / TPM is no help.

                                                        Second, the symptom that the user sees is likely to be incomprehensible. They will see an error saying BitLocker needs them to enter their recovery key because the TPM has detected a change to the UEFI firmware. For most users, this will read as ‘enter your recovery key because wurble mumble wibble fish banana’ and so they will either enter their recovery key (if they kept it somewhere safe) and grant the malware access to everything or reinstall their OS (if they lost their recovery key) and grant the malware access to everything.

                                                        So, it would be more accurate to say that something like Pluton can detect such malware and prevent it from compromising a user’s data, but it is easy for the user to circumvent that protection.
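
The “running hash” is the key property here: a PCR can only ever be extended, never written directly, so its final value commits to exactly what was measured and in what order. A minimal sketch of the extend operation (illustrative only, using the sha2 crate; this is not actual TPM code):

use sha2::{Digest, Sha256};

// A PCR can only be extended, never set:
//   new_pcr = SHA-256(old_pcr || H(component))
// so the final value depends on every measured component and on the
// order in which the measurements happened.
fn extend(pcr: &[u8], component: &[u8]) -> Vec<u8> {
    let mut hasher = Sha256::new();
    hasher.update(pcr);
    hasher.update(Sha256::digest(component));
    hasher.finalize().to_vec()
}

fn main() {
    let mut pcr = vec![0u8; 32]; // PCRs start at all zeroes on reset
    for component in ["uefi core", "option roms", "bootloader"] {
        pcr = extend(&pcr, component.as_bytes());
    }
    // Secrets sealed to this PCR (e.g. a BitLocker key) are released only if
    // the value matches what was recorded at seal time; tampering with any
    // one component changes this and every subsequent value.
    for byte in &pcr {
        print!("{byte:02x}");
    }
    println!();
}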

                                                        1. 4

                                                          but it is easy for the user to circumvent that protection

                                                          I would even go so far as to say the user is induced to circumvent that protection.

                                                        2. 14

                                                          Pluton is for securing company computers against employees, and streaming video against computer “owners”, not for securing your machine against nation-state and organised crime actors.

                                                          1. 8

                                                            I’m confused why this was downvoted; it’s correct and answers the question. I think someone may have thought this was unrelated political posturing? If so, please read it again. It is a direct answer to the question it’s responding to.

                                                            1. 4

                                                              Not the flagger, but I think a direct answer could refer to the technical differences in protections asserted by Pluton vs these UEFI attacks. Microsoft themselves refer to nation-state actors and cybercriminals in the copy around Pluton, and I remain unclear whether there’s an overlap here.

                                                              1. 9

That’s quite fair. On my own background knowledge, Pluton does not establish a complete chain of trust for the firmware in the way that e.g. ChromeOS does, and therefore does not prevent bootkits. At best it provides a fallible approach to detecting bootkits, but a sophisticated attacker would be able to circumvent this detection in common circumstances.

                                                                Empty rhetoric about all the threats that are out there is quite common in the security world, and Microsoft’s rhetoric about Pluton is in that category. I could get into why this makes sense for them as marketing strategy, but that would perhaps verge on being too much politics.

                                                                1. 1

                                                                  IIRC, currently Pluton firmware just implements a TPM, but they promised to add lots more things in the near future. It’s a bit more than just rhetoric since they have actually built the hardware side of things?

                                                                  1. 1

                                                                    Sorry, just now seeing this! That’s quite fair. I’m not familiar with Microsoft’s future plans, so I’m not able to speak to that.

                                                            2. 2

                                                              How does a UEFI bootkit circumvent the protections offered by Pluton/TPM?

                                                            3. 4

Yes, at least partially. The modification of BIOS code would be detected and access to secrets like Bitlocker or LUKS keys could be denied, if the system was set up correctly. Of course now there’s a question of what the user would do in that case; they might just enter the backup key and re-seal the secret, which wouldn’t do anything. The more proper way would be to check with the BIOS vendor whether the measurement the TPM is getting matches any of their versions, and if not, promptly re-flash the BIOS. This doesn’t need Pluton, any old TPM would do, though Pluton just has more security in the case of physical access.

                                                              1. 1

                                                                Do BIOS flash utilities work in this scenario? It seems like the utility has to be booted with UEFI so it’s too late to trust it…? Though I guess it has to work when the device is bricked by a bad BIOS, so there’s some even lower-level way to boot the utility?

                                                                1. 1

You can of course try booting it from a USB stick and try re-flashing it, and see if that returns it to a good state. If it doesn’t, you could probably re-flash the SPI flash itself with an inexpensive programmer, but that requires some knowledge and definitely isn’t doable by an end user.

                                                                  1. 1

                                                                    What I’m wondering is, couldn’t the bad BIOS just hook the flash utility the same way it does the OS? What is the accepted secure way for an end user to completely factory-restore the machine? Because that seems like the rational and intended response to the Bitlocker TPM change message.

                                                                    1. 2

                                                                      If you have reason to believe the device is compromised at that low a level, don’t keep using the device. Yes, nobody who’s not a big organization can afford to just throw laptops away, but it’s also quite impractical - especially on closed hardware - to be sure you re-flashed everything that needs to be re-flashed. You should be trying really hard to not be in this scenario in the first place.

                                                            1. 16

                                                              After 5 years of using Mercurial I’m now at a new job using git and I want to murder myself. It’s so awful. And I used to work on git tooling two jobs ago so I’m not new to it.

                                                              I’m constantly performing unsafe operations. Rewriting history is somehow both unsafe and extremely painful. Maintaining small, stacked PR branches is nearly impossible without tooling like git-branchless.

I’m convinced that anyone who says “git is not the absolute worst thing ever” has not invested enough time into learning better systems, even closely related ones like Mercurial.

                                                              Everyone using git is so distracted by their accomplishment of learning how to survive git’s UI and by reading blog posts explaining clean history and squashing and all this irrelevant philosophy that they forgot to examine if any of it was necessary.

                                                              1. 11

                                                                Do you know of any good write-ups that explain to git users, in a constructive way, why none of it is necessary? I used SVN up until ~2010 when I switched to git, and my experience using git is far better than it ever was with SVN. I’ve never used mercurial. Any articles I can find that attempt to tell folks about better alternatives usually devolve (like your comment) into some git-bashing piece. Usually if you want to convince someone that they are doing the wrong thing, it’s not helpful to spend a lot of time telling them they are doing the wrong thing.

                                                                Everyone using git is so distracted by their accomplishment of learning how to survive git’s UI and by reading blog posts explaining clean history and squashing and all this irrelevant philosophy that they forgot to examine if any of it was necessary.

                                                                I don’t think it’s fair to say “everyone”, it sounds like you’re now using git after all :P

                                                                1. 6

I’ve only seen rants that give specific examples of how insane the command UI is without giving practical examples of how you’d end up using those commands, and rants that give concrete examples without showing alternatives. I agree with them but they don’t illustrate the problems to git users very well.

                                                                  I’m sure a good rant is out there but I can’t find it. Perhaps I need to write it instead of red-in-the-face ranting to lobsters and my friends :p

                                                                  1. 5

                                                                    Write it, I’d read it! :D

                                                                    1. 2

                                                                      Perhaps I need to write it instead of red-in-the-face ranting to lobsters and my friends :p

                                                                      I’ll be checking your user page so I don’t miss it :D

                                                                  2. 5

                                                                    Maybe jj/Jujutsu (mentioned in the article) is what you need instead of the actual git client. I personally find interactive rebase far more intuitive than branchless/jj commands…

                                                                    1. 5

                                                                      It’s really not just a question of intuitiveness, though. For example, how do you split an ancestor commit into two? An interactive rebase where you edit the commit, reset head, commit in parts, and then continue? What do you do with the original commit message? Store it temporarily in a text file before you reset? That’s madness. And the git add -p interface is embarrassing compared to hg split.

                                                                      I don’t mind interactive rebase but why are there no abstractions on top of it, and why is it so hard to use non-destructively?

                                                                      And thanks for the pointer, I’ll bump checking out jujutsu higher on my todo list.

                                                                    2. 3

                                                                      I’m constantly performing unsafe operations. Rewriting history is somehow both unsafe and extremely painful.

                                                                      I’ve literally destroyed hours or days of work by rewriting history in git so that it would be “clean”.

                                                                      Everyone using git is so distracted by their accomplishment of learning how to survive git’s UI and by reading blog posts explaining clean history and squashing and all this irrelevant philosophy that they forgot to examine if any of it was necessary.

                                                                      It is as though “git log” et al encourage a certain kind of pointless navel-gazing.

                                                                      1. 5

                                                                        I’ve literally destroyed hours or days of work by rewriting history in git so that it would be “clean”.

                                                                        Before doing complex operations, I run git tag x and can reset with git reset --hard x any time. (Using the reflog after the fact is also possible, but having a temporary tag is nicer to use.)

                                                                        1. 3

                                                                          I do the same but with git branch -c backup, and then git branch -d backup when I’m successfully done rebasing. I also often git push backup so I have redundancy outside my current working copy.

                                                                          And to the grandparent post: I find git log invaluable in understanding the history of code, and leave good commit messages as a kindness to future maintainers (including myself) because meaningless commit messages have cost me so much extra time in trying to understand why a given piece of code changed the way it did. The point of all of that is not to enable navel-gazing but to communicate the intent of a change clearly for someone who lacks the context you have in your head when making the change.

                                                                          1. 2

                                                                            This command will show you all the things your branch has ever been, so if a rebase goes wrong you can easily see what you might need to reset to. (Replace <branch> with your branch name.)

                                                                            BRANCH=<branch>; \
                                                                            PAGER="less -S" \
                                                                            git log --graph \
                                                                                    --decorate \
                                                                                    --pretty=format:"%C(auto)%h %<(7,trunc)%C(auto)%ae%Creset%C(auto)%d %s [%ar]%Creset" \
                                                                                    $(git reflog $BRANCH | cut '-d ' -f1)
                                                                            
                                                                          2. 3

                                                                            I’ve done incorrect history edits too, but I don’t think I’ve ever done one that I couldn’t immediately undo using the reflog.

                                                                        1. 3

My best guess is that Git is a local maximum we’re going to be stuck on until we move away from the entire concept of “historic sequence of whole trees of static text” as SCM.

                                                                          1. 4

                                                                            darcs/pijul are the move away from fixed sequences of entire blobs. If only there was some powerful force to drive the adoption of pijul…

                                                                            1. 2

There’s nothing more powerful than people and projects adopting it one by one. If you start a new project, using Pijul and the Nest is the best thing you can do to make the project grow.

                                                                          1. 8

                                                                            Congrats! I believe Hare uses QBE as a backend, and there may be others. Perhaps some users could be listed on the home page?

                                                                            QBE’s goals make a good bit of sense to me; it seems right for the backend to focus on emitting really good machine code, and it’s ok to have a “garbage in, garbage out” philosophy. LLVM was designed to handle almost everything, but recent languages (Swift, Rust, Julia) have their own middle IRs for language-specific optimisations. So it increasingly makes sense to expect that the backend receives relatively good IR, and doesn’t need some of the more magical simplifications. All those arch-specific peephole optimisations provide the real value.

                                                                            I’m only slightly disappointed to see Phi nodes, which are a bit less elegant than block arguments IMO (which MLIR, Swift and Cranelift’s newer IRs use – rationale). But of course it’s no deal-breaker.

                                                                            1. 7

I was quite disappointed to see that there’s no pointer type in the IR. That means that it will never be able to target a CHERI platform (or any other architecture where pointers are not integers), so the Morello system that I’m writing code on right now can never be a target.

                                                                              1. 6

It looks like it changed, but I remember at the beginning the goal of QBE was to be super simple as opposed to the “bloated LLVM”; they were planning to only target amd64 and arm64. It looks like they now also support riscv64, so they might have changed and given up on that “few architectures” goal.

                                                                                1. 3

                                                                                  so the Morello system that I’m writing code on right now

                                                                                  Exciting! If I had a desktop-capable CHERI machine on my desk, I would also think first of coming online to tell the world :-)

                                                                                  1. 2

Unfortunately, the GPU driver doesn’t work yet, but apparently Wayland works once the GPU driver does. I’m hoping to start using it as my work desktop soon; for now I’m sshing into it from a Windows machine. My normal working environment is the Windows Terminal sshing into a FreeBSD/x86-64 VM, so switching it to sshing into a FreeBSD/Morello computer isn’t that big a change…

                                                                                    1. 2

                                                                                      On a quick skim, over the first few search results, CHERI is an ISA, something similar to RISC, just extended with some capabilities around virtualization, memory protection? And Morello is …a CPU? SoC? As in, not exactly ARM, but something like that. Am I in the neighbourhood?

                                                                                      Can you try to explain to a layman, what does it do differently then arm or riscv?

                                                                                      1. 14

                                                                                        CHERI is a set of extensions that add a capability model to memory. The hardware supports a capability type that is protected by a non-addressable tag bit when stored in memory. A capability grants rights to a range of an address space (e.g. read, write, and / or execute permissions). Every memory access (load, store, instruction fetch) must be authorised by a capability, which must be presented for the operation to succeed. For new load and store instructions, the base operand is a capability in a general-purpose capability register. For legacy instructions, the capability is an implicit default data capability.

In CHERI C/C++, every pointer is lowered by the compiler to be represented by a capability. This means that you cannot access any memory except via a pointer that was created by something holding a more powerful capability. For example, the compiler will derive bounded capabilities from the stack capability register for stack allocations. The OS will return a capability in response to mmap system calls, which the memory allocator will then hold and subdivide to hand out object-bounded capabilities in response to malloc. This means that you cannot forge a pointer and you cannot ever access out of bounds of an object (guaranteed by the hardware). With our temporal safety work, you also cannot access an object that has been freed and reallocated (guaranteed by software, accelerated by some hardware features). To be able to compile for this kind of target, the compiler must maintain the integer/pointer distinction all of the way through to the final machine-code lowering (arithmetic on pointers uses different instructions to arithmetic on integers, for example).

The CHERI extensions are designed to be architecture neutral. We originally prototyped on MIPS and are now doing RISC-V prototyping and are in the early stages of an official CHERI RISC-V extension. Morello is a localisation to AArch64 and Arm has produced a few thousand test chips / dev systems based on the Neoverse N1 for software development and experimentation. This is what I have under my desk: a modified quad-core Neoverse N1 running at 2.5GHz with 16 GiB of RAM in a desktop chassis with a 250 GiB SSD. We also have a load of them in a rack for CI (snmalloc CI on Morello) and benchmarking.

If all goes well, I expect to see CHERI extensions on mainstream architectures in the next few years and so developing a compiler toolchain based on an abstraction that can’t possibly support them without significant rearchitecting seems like an unfortunate decision, especially when maintaining a separate pointer type is fairly simple if you design it in from scratch. The fact that LLVM had separate integer and pointer types in IR made the original CHERI work feasible; the fact that it loses that distinction by the time it reaches SelectionDAG and the back end (one of the first questions the target-agnostic code generator asks the back end is ‘so, which integer type do you want to use for pointers?’) made it harder.

                                                                                        1. 1

Thanks for the summary, very cool stuff. Kudos for pushing this for so long.

                                                                                        2. 2

                                                                                          Discussions from around the Morello announcement: https://lobste.rs/s/w32bav/morello_arm_cheri_prototype_hits_major <- I recommend the Microsoft article https://lobste.rs/s/wqts1n/capability_hardware_enhanced_risc

CHERI is (I think) both a security model and a set of hypothetical and/or experimental extensions for multiple ISAs including MIPS, ARM, RISC-V, and x86, the latter of which is currently just a “sketch” [1]

                                                                                          Morello is the realization of actual silicon implementing the extensions [2]

As for an actual description I’d rather point you towards the Microsoft article (I honestly really liked it). That and the discussions were what painted most of my picture of the project(s). There’s also the technical report An Introduction to CHERI which helped fill in other details, but there were things or referenced concepts I wasn’t clear on.

                                                                                          [1] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/cheri-faq.html:

                                                                                          What ISA(s) does CHERI extend?

                                                                                          To date, our published research has been based on the 64-bit MIPS ISA; MIPS was the dominant RISC ISA in use in 2010 when the project began. … However, since that time we have performed significant investigation into CHERI as portable architectural security model suitable for use in multiple ISAs. We have also developed an “architectural sketch” of a CHERI-x86-64 that extends the 64-bit x86 ISA with CHERI support.

                                                                                          [2] https://msrc-blog.microsoft.com/2022/01/20/an_armful_of_cheris/

                                                                                          The Morello CPU is a quad-core, 2.5GHz modified Arm Neoverse N1, a contemporary superscalar server core. Prior to this, the most advanced CHERI implementation was the CHERI version of Toooba, which can run in an FPGA at 50MHz in a dual-core configuration and is roughly equivalent in microarchitecture to a mid-‘90s CPU.

                                                                                          1. 2

                                                                                            Thank you. I’ll put the MS article on a reading list, sounds very interesting.

                                                                                        3. 1

                                                                                          By GPU do you mean the Panfrost port br@ is working on, or an amdgpu in a PCIe slot? How’s the PCIe situation on Morello?

                                                                                          1. 1

                                                                                            The panfrost bit. There are some PCIe slots, but I’ve not tried plugging anything into them yet. We’re hoping to set some of them up with some RDMA-capable smartNICs and see if we can do something interesting eventually.

                                                                                            1. 1

                                                                                              Would be very interesting to try amdgpu :)

                                                                                    2. 5

                                                                                      Congrats! I believe Hare uses QBE as a backend, and there may be others. Perhaps some users could be listed on the home page?

                                                                                      There’s a “Users” tab at the top that lists cproc, hare and others: https://c9x.me/compile/users.html

                                                                                      1. 3

                                                                                        Right there in the menu as well! Thank you for pointing this out to me. I think I had assumed this would be something like a community page (eg “user mailing list”).

                                                                                    1. 1

bytesize, which I’ve been using in systemstat for a long time, does something slightly interesting regarding various types – number + ByteSize impls are explicit for each numeric type via macros, but ByteSize + number is just one impl<T> Add<T> for ByteSize where T: Into<u64>.
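
For anyone wondering why the two directions end up shaped differently, here is a minimal sketch with a made-up ByteSize-like newtype (not bytesize’s actual code): the generic impl is fine when the local type is Self, but the blanket impl in the other direction would have an uncovered type parameter as Self, which the coherence/orphan rules reject, hence the per-type macro.

use std::ops::Add;

#[derive(Clone, Copy, Debug, PartialEq)]
struct ByteSize(u64);

// `ByteSize + number`: one generic impl covers every T that converts to u64.
impl<T: Into<u64>> Add<T> for ByteSize {
    type Output = ByteSize;
    fn add(self, rhs: T) -> ByteSize {
        ByteSize(self.0 + rhs.into())
    }
}

// `number + ByteSize`: `impl<T: Into<u64>> Add<ByteSize> for T` is rejected
// (uncovered type parameter as Self), so each primitive needs its own impl,
// stamped out by a macro.
macro_rules! impl_add_bytesize {
    ($($t:ty),*) => {$(
        impl Add<ByteSize> for $t {
            type Output = ByteSize;
            fn add(self, rhs: ByteSize) -> ByteSize {
                ByteSize(u64::from(self) + rhs.0)
            }
        }
    )*};
}
impl_add_bytesize!(u8, u16, u32, u64);

fn main() {
    assert_eq!(ByteSize(1024) + 512u32, ByteSize(1536));
    assert_eq!(512u32 + ByteSize(1024), ByteSize(1536));
}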

                                                                                      1. 1

I mention in the article that I was able to do something similar where the LHS is a Size: one generic impl<T> Mul<T> for Size where T: IntoIntermediate covers all the primitive integers, but you need the separate impls for each primitive type when Size is on the RHS.

                                                                                        I don’t use Into<u64> because a) PrettySize supports negative sizes (e.g. the difference between two sizes) so the “base” unit is i64, b) rust doesn’t provide impls for Into<uXX> from signed iXX values, c) I also support floating-point sources (e.g. Size::from_mib(1.1)) - all of which just means I have my own (sealed/private) trait called AsIntermediate that I implement via a macro for all the primitive signed, unsigned, and float types (except x128) which does a saturating conversion (e.g. u64::MAX becomes i64::MAX).

I guess again unlike bytesize, I also have a second impl even for the Size-as-LHS case to support ops on a reference - you need impls for Size and for &Size separately (again because of rust’s orphan rule) since you can’t just do impl ... for Borrow<Size> to cover both Size and &Size (this is discussed briefly in the article).

                                                                                      1. 10

Re GDPR, Matrix shouldn’t be directly compared to IRC or XMPP but to email. A Matrix home server is kind of like an IMAP server. Once the message has been sent out to recipients, they have their own copies.

                                                                                        Some of these notes are directed at matrix the protocol, some at synapse the implementation. Many solutions are on the roadmap but not being worked on yet.

                                                                                        1. 7

                                                                                          Here’s an evaluation of Matrix vs. the GDPR by an actual lawyer (German): https://www.cr-online.de/blog/2022/06/02/ein-fehler-in-der-matrix/ – I was unsure if this is on-topic on lobste.rs, so I refrained from posting it as a story, but it does fit in here. Feel free to submit it as a story if you want. The article specifically addresses the e-mail comparison point.

                                                                                          tl;dr: It’s not compliant.

                                                                                          1. 2

                                                                                            I still think the comparison is valid in some senses, though — it’s reasonable to want your instant messages to not live forever in the same way that emails do. (Of course, from a legal standpoint, you might have to use the email comparison to get around GDPR, which is a different thing.)

                                                                                            1. 13

                                                                                              Eh, well, it’s also reasonable to be able to search your history to find that thing from 4 years ago that you suddenly remembered…

                                                                                              1. 3

also all of the IRC channels I frequent have logging bots…

                                                                                          1. 2

                                                                                            Why this over seahorse?

                                                                                            1. 12

                                                                                              Some people are really into “unix minimalism” which usually includes hating on anything that works over D-Bus and reinventing wheels instead :)

                                                                                              secret-service/libsecret/seahorse are pretty great.

                                                                                              1. 3

                                                                                                Thank you for saving me from searching the web for ‘seahorse’ only to find it works over D-Bus. :)

                                                                                                I think that complicated stuff might be useful to someone. It’s not for me, though.

                                                                                                My personal desktop environment lacks a lot of functionality that others enjoy, but I can jump straight to the appropriate line of source for every piece of functionality that I allow in. It’s … relaxing. I’d rather have problems that are my own darn fault than not know how something I rely on worked in the first place–so I have to just search the web when it eventually breaks.

                                                                                                You’ll note that I speak very narrowly there and I quickly draw curtains over huge chunks of machinery that I don’t understand at all (the kernel, the cpu, the bus, the devices, the compiler, everything, everything!). There’s just that one little piece I understand well enough to find relaxing, and I can’t let D-Bus or Gnome’s dconf or KDE’s KIO or any of that cool stuff get in there, or I won’t understand it anymore.