Threads for strugee

    1. 42

      Does this mean that the first frame of Doom can now be rendered in only ~1.2 days?

      1. 26

        For folks lacking context, this is a reference to this video: https://youtu.be/0mCsluv5FXA where the creator emulated DOOM using just TS types. It took 12 days to render the first frame of the game.

        1.  

          The author of that project replied in the comments.

          The thing that bottlenecks Doom (like 60% of total time spent by the type checker) is serializing the multi-megabyte-type to a string, which I bet is going to be much faster than 10x – because it’s not something a typical typecheck has to do at all. But honestly, even if it’s just 10x (look at us… “just” 10x! what a dream today is!!!), that’s still gonna get it down to sub-1-day for the first frame, more than likely.

        2. 16

A Chromecast is just a wireless cable connecting my laptop to my TV. I don’t want it to have a “device authentication certificate” that expires, any more than I want my USB cables to contain these things. The purpose of this certificate isn’t anything that benefits me; it sounds like an anti-consumer measure for enforcing someone’s business model. Can anyone explain?

          1. 16

            Presumably it’s there to prevent anyone from releasing devices that “work with Chromecast” without approval from Google. So yeah, it’s not really in your interest. It definitely means that Cast is not an open protocol, which is a shame.

            1. 3

I wish someone just made a “wireless HDMI cable”, but Chromecasts were never really that. I chose not to buy one once I found out they don’t let you actually use them like an external display - you can’t show anything you want, it has to be a Chrome tab.

                1. 3

The problem with Miracast is that it uses a separate WiFi Direct connection, so it needs OS and hardware support (it can’t just work over a regular TCP/IP network like Chromecast), can’t be routed or sent over Ethernet, competes with client usage of the WiFi interface, etc… I’ve never had it work well.

                  I wish there was a standard like Miracast but over an existing TCP/IP network/AP. The closest thing is AirPlay, but that’s not an open standard either and last I checked there was no open source AirPlay sender out there to cast from Windows/Linux machines…

                  1. 6

                    The closest thing is AirPlay, but that’s not an open standard either

In December, the EU Commission, in relation to the DMA, proposed forcing Apple to open up AirPlay, among other things. My first reaction was that it would be hilarious if AirPlay, against Apple’s will, ended up becoming a better alternative to Google Cast for everyone. So I guess keep your fingers crossed?

                    1.  

                      I think I remember reading that MiracleCast originally only supported that mode of operation due to incompleteness, but it’s been a long time.

                      1. 1

Microsoft actually has a protocol extension to Miracast for that [0]. It does still use WiFi Direct for display discovery though, which annoyingly means your device has to have a WiFi card for it to work. Not very many of the open source implementations support it, though; the only one I know of that does is GNOME Network Displays.

                        [0] https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-mice/9598ca72-d937-466c-95f6-70401bb10bdb

                    2. 3

Chromecasts do let you cast your display to them; you can do that from Chrome (cast display) or Android.

On the receiving end it’s just an instance of Chrome, but I’m guessing casting is implemented by having a web page display a WebRTC feed (which is just as well).

                    3. 1

                      It’s more likely a DRM thing, so that Disney and Netflix will let you play stuff on it, right?

                      1. 4

                        From what I understand of the Cast protocol, in most cases, the client (phone) just tells the Chromecast device what to do; it doesn’t actually stream the content. There is a mode that allows the client to tell the Chromecast to connect back to the client and stream from it directly, which is exposed in the Chrome browser, but that isn’t used by streaming services.

It is possible, I suppose, that Netflix and co. demanded that the communication channel between the client app and the Chromecast device be protected, even if you generally aren’t using it to send protected data.

                        1. 4

                          I don’t think that makes much sense. DRM stuff like Netflix would just rely on Widevine support in the Chromecast itself. So a third party device not licensed to play Widevine content just won’t work with Netflix (but should work with everything else).

                          That is, the device authentication for Chromecast is one thing (and one certificate), and the authentication for Widevine DRM is separate (with its own certificate).

                          1. 1

                            DRM stuff like Netflix would just rely on Widevine support in the Chromecast itself.

Well, yes: different devices (first-party vs third-party) have different Widevine/chrome-cdm “protection” features, and the streaming party (Netflix, Disney+) might check that level against their policies. This is why some apps indeed fail to cast to Xiaomi sticks etc.

                    4. 1

I go somewhat out of my way to avoid building anything usable by commercial entities in my off time, but this one from ffmpeg lives rent-free in my brain.

                      1. 8

                        Time and time again wlroots proves how solid it is as a project. Really outstanding work!

It’s just a shame that Wayland didn’t dare to define such things on the protocol level in the first place. I mean, given the rock-solid colour space support in macOS, any sane engineer designing a new display manager/compositor in the 2010s would have made colour management a design centerpiece. Libraries like Little CMS prove that you don’t even need to do much in terms of colour transformations by hand; simply define your surfaces in a sufficiently large working colour space and do the transformations ad hoc.

                        From what I remember back then, the only thing the Wayland engineers seemed to care about was going down to the lowest common denominator and ‘no flickering’ (which they saw in X in some cases).

                        For instance, it is not possible to portably place an application window ‘at the top’, given one may not dare to assume this even though 99.99% of all displays support this. It would have made more sense to have ‘feature flags’ for displays or have more strict assumptions on the coordinate space.

In the end, a Wayland compositor requires close to 50,000 LOC of boilerplate, which wlroots gracefully provides, and this boilerplate is fragile as you depend on proprietary interfaces and extensions. You can write a basic X display manager in 500 LOC based only on the stable X libraries. With all of X’s flaws, this is still a strong point today.

                        1. 7

In the end, a Wayland compositor requires close to 50,000 LOC of boilerplate, which wlroots gracefully provides, and this boilerplate is fragile as you depend on proprietary interfaces and extensions. You can write a basic X display manager in 500 LOC based only on the stable X libraries. With all of X’s flaws, this is still a strong point today.

                          This instinctually bothers me too, but I don’t think it’s actually correct. The reason that your X display manager can be 500 LOC is because of the roughly 370 LOC in Xorg. The dominance of wlroots feels funny to me based on my general dislike for monocultures, but if you think of wlroots as just “the guts of Xorg, but in ‘window manager userland’”, it actually is not that much worse than Xorg and maybe even better.

                          1. 2

                            I think you mean 370k LOC.

                            1. 1

                              Yes indeed, my bad.

                          2. 6

I don’t really get your criticism. Wayland is used on a lot of devices, including car displays and kiosk-like installations. Does an application window even make sense if you only have a single application displayed at all times? Should Wayland not scale down to such setups?

Especially since it has an extension system that actually works well, so that such functionality can be trivially added (either as a standard if it’s considered widely useful, or as a custom extension if it only makes sense for a single server implementation).

A Wayland compositor’s 50 thousand LOC is the whole thing. It’s not boilerplate; it’s literally a whole display server communicating in a shared “language” with clients, sitting on top of core Linux kernel APIs. That’s it. Your 500 LOC comparison under X is just a window manager; the fact that it operates as a separate binary doesn’t change that it’s essentially the same as a tiling window manager plugin for GNOME.

                            1. 5

                              It’s just a shame that Wayland didn’t dare to define such things on the protocol level in the first place.

                              Then it would have taken 2× as long to get it out of the door and gain any adoption at all.

                              Routine reminder that the entire F/OSS ecosystem worth of manpower and funding is basically a rounding error compared to what Apple can pour into macOS in order to gain “rock-solid colour space support” from day zero.

                              1. 3

                                For instance, it is not possible to portably place an application window ‘at the top’, given one may not dare to assume this even though 99.99% of all displays support this. It would have made more sense to have ‘feature flags’ for displays or have more strict assumptions on the coordinate space.

                                What do you mean by this? I can’t understand it.

                              2. 6

                                Asked me to jump on a call to coach their team on how to use my library in their proprietary, closed source code. In the same proverbial breath, asked if I’d “kindly” add their logo to the readme as a user of the library.

                                I never responded.

                                1. 7

                                  You could have kindly added their logo to the README in the “companies that use this library and made an absurd unsolicited request for free labor” section.

                                2. 16

                                  You already trusted your team with good test/push/deploy discipline. Merge queues, deployment pipelines, and high ceremony CI is … all too much.

                                  No. I don’t. I wouldn’t trust anyone at my company with “good… discipline”, and I wouldn’t trust anyone on this site either, including myself. We’re all humans. Humans get tired, we make mistakes, we make bad judgment calls, we get lazy. We forget things. Mechanized systems like CI, or deploy scripts (it doesn’t have to be a full pipeline!), or infrastructure-as-code, or linters - they exist to deal with the fact that we remain the same gray mud with pretensions of godhood.

                                  People often benefit from structure even if they don’t know to ask for it. I don’t want to think about whether I’ve run the tests correctly on my local machine before I gh signoff because that is an enormous waste of my time and mental energy. I want and expect a computer to do that for me. Not having repeatable CI is robbing your developers of a tool and safety net that they could have relied on. (A perfect example: how many typo fix patches, or comment clarifications, are going to go unsubmitted to Basecamp’s codebase because they’re now more work than just “use the basic GitHub web editor and let CI smoketest it”?)
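As a small concrete illustration of how cheap “repeatable CI” can be, here’s a minimal sketch of a GitHub Actions workflow; the file name and the make test target are assumptions about the project, not anything from the article:

    # .github/workflows/ci.yml - hypothetical minimal smoketest
    name: ci
    on: [push, pull_request]
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make test   # assumes the suite is exposed via make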

                                  Now, I’m all for moving stuff out of the cloud. If that’s your goal, buy a NUC and chuck it in the office closet with Debian and selfhosted CI on it. I think that’s a fantastic idea. But that’s not what this is. (Developer environments moving from localhost to the cloud is also an atrocity… but developer environments not being repeatable compromises productivity, not uptime. Because of repeatable CI.)

                                  1. 10

                                    No. I don’t. I wouldn’t trust anyone at my company with “good… discipline”, and I wouldn’t trust anyone on this site either, including myself.

I especially do not trust myself. I know that I am lazy, sloppy, often mistaken; I forget about things like formatting code or running tests locally.

I set up CI even in solo projects to protect myself from myself.

                                  2. 5

I think this is a great idea, but I am anticipating folks explaining why it isn’t.

                                    1. 22

The main argument against is that even if you assume good intentions, it won’t be as close to production as a hosted CI (e.g. database version, OS type and version, etc).

Lots of developers develop on macOS and deploy on Linux, and there are tons of subtle differences between the two systems, such as filesystem case sensitivity and default sort ordering, just to give an example.

                                      To me the point of CI isn’t to ensure devs ran the test suite before merging. It’s to provide an environment that will catch as many things as possible that a local run wouldn’t be able to catch.

                                      1. 6

                                        To me the point of CI isn’t to ensure devs ran the test suite before merging.

I’m basically repeating my other comment but I’m amped up about how much I dislike this idea, probably because it would tank my productivity, and this was too good an example to pass up: the point of CI isn’t (just) to ensure I ran the test suite before merging - although that’s part of it, because what if I forgot? The bigger point, though, is to run the test suite so that I don’t have to.

                                        I have a very, very low threshold for what’s acceptably fast for a test suite. Probably 5-10 seconds or less. If it’s slower than that, I’m simply not going to run the entire thing locally, basically ever. I’m gonna run the tests I care about, and then I’m going to push my changes and let CI either trigger auto-merge, or tell me if there’s other tests I should have cared about (oops!). In the meantime, I’m fully context switched away not even thinking about that PR, because the work is being done for me.

                                        1. 4

                                          You’re definitely correct here but I think there are plenty of applications where you can like… just trust the intersection between app and os/arch is gonna work.

                                          But now that I think about it, this is such a GH-bound project and like… any such app small enough in scope or value for this to be worth using can just use the free Actions minutes. Doubt they’d go over.

                                          1. 6

                                            any such app small enough in scope or value for this to be worth using can just use the free Actions minutes.

                                            Yes, that’s the biggest thing that doesn’t make sense to me.

I get the argument that hosted runners are quite weak compared to many developer machines, but if your test suite is small enough to be run on a single machine, it can probably run about as fast if you parallelize your CI just a tiny bit.

                                          2. 2

                                            I wonder if those differences are diminished if everything runs on Docker

                                            1. 5

With a fully containerized dev environment, yes, that pretty much abolishes the divergence in software configuration.

But there are more concerns than just that. Does your app rely on some caches? Dependencies?

Were they in a clean state?

I know it’s a bit of an extreme example, but I spend a lot of time using bundle open and editing my gems to debug stuff, and it’s not rare that I forget to gem pristine after an investigation.

This can lead me to have tests that pass on my machine and will never work elsewhere. There are millions of scenarios like this one.

                                              1. 3

                                                I was once rejected from a job (partly) because the Dockerfile I wrote for my code assignment didn’t build on the assessor’s Apple Silicon Mac. I had developed and tested on my x86-64 Linux device. Considering how much server software is built with the same pair of configurations just with the roles switched around, I’d say they aren’t diminished enough.

                                                1. 1

                                                  Was just about to point this out. I’ve seen a lot of bugs in aarch64 Linux software that don’t exist in x86-64 Linux software. You can run a container built for a non-native architecture through Docker’s compatibility layer, but it’s a pretty noticeable performance hit.

                                            2. 13

One of the things that I like about having CI is the fact that it forces you to declare your dev environment programmatically. It means that you avoid the famous “works on my machine” issue, because if tests work on your machine but not in CI, something is missing.

                                              There are of course ways to avoid this issue, maybe if they enforced that all dev tests also run in a controlled environment (either via Docker or maybe something like testcontainers), but it needs more discipline.
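A minimal sketch of what that controlled environment could look like, assuming a hypothetical Dockerfile.test that declares the app’s dependencies - the same image runs the suite locally and in CI:

    # build one image, then run the tests inside it everywhere
    docker build -t myapp-test -f Dockerfile.test .
    docker run --rm myapp-test make test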

                                              1. 2

                                                This is by far the biggest plus side to CI. Missing external dependencies have bitten me before, but without CI, they’d bite me during deploy, rather than as a failed CI run. I’ve also run into issues specifically with native dependencies on Node, where it’d fetch the correct native dependency on my local machine, but fail to fetch it on CI, which likely means it would’ve failed in prod.

                                              2. 4

                                                Here’s one: if you forget to check in a file, this won’t catch it.

                                                1. 3

                                                  It checks if the repo is not dirty, so it shouldn’t.
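For reference, a clean-tree check like that can be a one-liner; a sketch of the idea:

    # fail if there are staged, unstaged, or untracked changes
    if [ -n "$(git status --porcelain)" ]; then
      echo "working tree is dirty; commit or stash first" >&2
      exit 1
    fi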

                                                  1. 1

                                                    This is something “local CI” can check for. I’ve wanted this, so I added it to my build server tool (that normally runs on a remote machine) called ding. I’ll run something like “ding build make build” where “ding build” is the ci command, and “make build” is what it runs. It clones the current git repo into a temporary directory, and runs the command “make build” in it, sandboxed with bubblewrap.

                                                    The point still stands that you can forget to run the local CI.
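The clone-then-run core of that approach fits in a few lines; a rough sketch (not ding’s actual code), assuming git, make, and bubblewrap are installed:

    # build whatever is committed, in a network-less sandbox
    tmp=$(mktemp -d) &&
      git clone --quiet . "$tmp" &&
      bwrap --ro-bind /usr /usr --symlink usr/bin /bin --symlink usr/lib /lib \
            --proc /proc --dev /dev --bind "$tmp" /work --chdir /work \
            --unshare-net --die-with-parent make build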

                                                  2. 1

                                                    What’s to stop me from lying and making the gh api calls manually?

                                                  3. 2

The big lesson from Turing, 1936:

                                                    Anything that’s Turing complete can run DOOM.

                                                    1. 4

                                                      I’m wondering if we’re at the point where we can flip the definition on its head: something is Turing complete if it can run DOOM.

                                                      1. 1

Unfortunately, that is trivially untrue. One could imagine an ASIC that plays DOOM but does not have the capability to do anything else (like the original analogue PONG circuit). Ergo, playing DOOM is not an indicator of being Turing complete. (Note that this does not work with all games; a Factorio ASIC would be Turing complete, because you can build a Turing machine inside Factorio itself.)

                                                        1. 11

                                                          Actually, Doom is Turing-complete: NAND gates are possible, and they can be used to assemble any other gate.

                                                        2. 1

                                                          Trivially though, no real-world system is Turing complete.

                                                          1. 2

                                                            What do you mean? Because the tape isn’t infinite?

                                                            1. 1

Yeah, which in the case of TypeScript types is reflected in things such as limited recursion depth. While Dimitri worked around this during development by modifying the TypeScript compiler, the final thing runs as-is with regular TypeScript, so it is subject to all the limitations (and works regardless). It wasn’t obvious that it would work.

                                                        3. 4

I wonder what the author would think of my blog. Each post has year and month in the URL (but no day; that was so fine-grained I decided it was pointless). You can remove any element from the hierarchy and get a real HTML page - time-filtered indexes of posts. I implemented this so you could filter by time without me having to run a (dynamic) search engine server-side.

                                                          1. 3

Minor bubble: laptop/tablet/phone convergent UIs. Ubuntu Touch was 2011 (so was GNOME 3.0). Windows 8, and the Metro design language, was 2012. According to Wikipedia, where I stole this information from, Google announced Android apps on ChromeOS devices in 2014, though I don’t really see that as part of the hype cycle, honestly - it feels more like a useful feature than an impossible-to-achieve convergence goal. (But maybe that’s a good indication that the hype-cycle technology became boring around 2013, 2014?)

                                                            1. 7

                                                              This is a recipe for an eventual security disaster.

                                                              A little blown out of proportion? I have never seen a service attempt to set up 1-digit TOTP authentication with their users. Pretty much every service on the planet stays on the well-trodden path of 6 digits, 30 seconds.

                                                              So what would a tighter TOTP spec accomplish? Ok, your auth apps might reject dumb TOTP configurations. But who is generating these dumb configurations now? Nobody.

                                                              1. 4

                                                                The point I was making isn’t that a single digit code is bad for security, but that having a loosely defined spec which major implementations disagree on is bad for security.

For example, Yubico generate 7-digit TOTP codes. If you move your secrets from their TOTP app to Google’s, will you be locked out? If you move from Google’s arbitrary number of seconds to an app which only supports 30 seconds, will your codes occasionally be wrong? Can dodgy URL encoding of certain fields be used to trick or confuse users?

                                                                Because major implementations are diverging from the spec and the deficiencies in the original spec, it is possible that unforeseen security issues could arise.

                                                                As I say in my post, choosing a single digit TOTP code is stupid. But relying on a stagnant spec is probably worse.
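The divergence is easy to see with oathtool; a sketch, using a made-up base32 secret:

    SECRET=JBSWY3DPEHPK3PXP                             # example secret, not a real one
    oathtool --totp -b "$SECRET"                        # RFC 6238 defaults: 6 digits, 30 s
    oathtool --totp -b --digits=7 "$SECRET"             # Yubico-style 7-digit code
    oathtool --totp -b --time-step-size=60s "$SECRET"   # non-standard 60 s step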

                                                                1. 1

                                                                  That is a much clearer explanation, thanks. You bring up a good point: it would be annoying if switching TOTP apps also meant losing my ability to log in because of a funky setup for one of my websites. A well-defined standard might help there.

Though I could bet money that all my current TOTP codes would work in every major app, and any apps that break are probably not the high-quality ones you want to rely on anyway.

                                                                  I’m also not worried about, for example, malicious QR codes tricking users. Maybe I’m missing something here, because that implies a website is… attacking its own users? Attacking the users of another site somehow?

                                                                  Anyway, there are some valid security concerns here; I just don’t see how it could possibly amount to disaster. Perhaps the standard is stagnant because nobody sees any practical real-world scenario where things are likely to go sideways?

                                                                  In any case, the poor UX of TOTP codes in general is probably the biggest security concern IMO.

                                                                  1. 1

                                                                    In any case, the poor UX of TOTP codes in general is probably the biggest security concern IMO.

                                                                    No, that would be that they’re still phishable. (Unless that’s what you meant by “poor UX”?)

                                                                    1. 2

                                                                      Yes, I do think poor UX leads to phishability, as well as other problems.

                                                                2. 3

                                                                  Pretty much every service on the planet stays on the well-trodden path of 6 digits

As a small aside, Blizzard/Battle.net had 8-digit TOTP, which was in their exclusive “Battle.net Authenticator” app. I found a tool on GitHub that could act like the Battle.net authenticator app to set it up, then export the secret so I could load it into my standard TOTP app.

Blizzard have since discontinued it, requiring everyone to migrate to their Battle.net app or disable the second factor. Now I only have one factor. Please, everyone, just use standard TOTP.

                                                                  1. 1

Blizzard have since discontinued it, requiring everyone to migrate to their Battle.net app or disable the second factor. Now I only have one factor. Please, everyone, just use standard TOTP.

                                                                    What is with game companies doing this crap? Steam Guard gives you the option to use either email or their proprietary mobile app.

                                                                3. 62

                                                                  There is so much about this story that disgusts me, and that’s before getting to the security issues. Please, if you’re in a financial position to spend money on a product like this: don’t. The “willing to overlook” list should be a blueprint for exactly the kinds of things none of us should ever overlook when considering a purchase in this day and age. As long as people keep overlooking those huge red flags, products like this will continue to create e-waste.

                                                                  1. 7

At the beginning: “willing to overlook […] It won’t function if the internet goes down”, and later: “Personally, I don’t want my bed data accessible to anyone”. I have no idea what’s going on in these people’s heads. How can these two beliefs be held at the same time?

                                                                    1. 2

There’s a difference between those two, though. I can self-host and isolate the first one: if the internet goes down I won’t be able to access it, but it’s not in the cloud and it’s not accessible by everyone.

                                                                      1. 1

                                                                        I think the point is that if your bed (or any other IoT) doesn’t work if it’s not internet connected, you aren’t going to be able to self host (failing some clever rev-eng) and that’s a fair indicator that your ‘bed data’ is already accessible by someone else. So subsequently claiming “I want privacy” is profoundly tone deaf. And in this case, yes, the data is indeed in the cloud (it’s what AWS Kinesis is for). It might not be available to “anyone”, but it is available for “someone” who isn’t sleeping in the bed, and personally, that’s probably too many.

                                                                        Personally, I still can’t get my head past the simple idea of “I can’t control my bed if the internet isn’t up, and I’m good with that”.

                                                                      2. 1

                                                                        The data could be end-to-end encrypted. (Though of course, usually it isn’t.)

                                                                        1. 2

                                                                          Usually done by having the same private key for every device, which inevitably gets leaked. We can’t have nice things. :-)

                                                                    2. 1

                                                                      But if you did [enable ‘Advanced Data Protection’], your backups would be encrypted securely under your phone’s passcode — something you should remember because you have to type it in every day — and even Apple would not be able to access them.

                                                                      Maybe this is a dumb question, but… a passcode? Like, a short-ish digit sequence, a PIN? Because that doesn’t have nearly enough entropy to make a good key for a remote backup. I understand recent iDevices have fancy thumbprint and face ID stuff, so maybe they are using that instead?

                                                                      1. 5

                                                                        Disclaimer: I’m not totally sure this is accurate. But I’m like, 90% sure, both from what I already know and from reading this article.

                                                                        The backup (and other assets) are not literally encrypted with the PIN. If you give the PIN to Apple, they still cannot retrieve your information. Instead, your device purges all the relevant encryption keys from Apple’s HSMs, and then does a key rotation. Meaning that the only place where the active (randomly-generated, high-entropy) encryption keys are is on your device(s). The passcode comes in because these encryption keys are protected by the Secure Enclave, which enforces rate-limiting on the passcode (the passcode or biometric authentication are required to release the encryption keys for use).

                                                                        1. 1

                                                                          Thank you, that makes much better sense. So when the secret police come for my ADP encrypted backup, they’ll have to just beat the PIN out of me the old fashioned way. (j/k, I don’t use any of this stuff, never trusted Apple!)

                                                                          1. 2

Doesn’t the secret police beating secrets out of you apply regardless of what products you do or do not use?

                                                                            1. 1

                                                                              Of course. Quite possible they’ll even beat me if I don’t actually have any secrets; maybe I’m just a fall guy or something.

                                                                              But that doesn’t scale nearly as well as just being able to issue a secret subpoena and log into some backdoor portal.

                                                                      2. 12

Replace the two placeholders with your WiFi’s name and password in plaintext. Yes, you read that right - a PASSWORD stored in PLAINTEXT. I’m pretty shocked by this, but it seems to be a norm for WiFi tools, not sure why.

The client is required to have the password in some form if you want to connect to WiFi, as the access point wants to verify that you have the correct password. How else would you accomplish this?

                                                                        If you were to encrypt it, the decryption password would have to be stored somewhere on disk in plaintext, so that wouldn’t help much.

Another case: username and password for a website. The website’s server can store a hashed password for every user, but the client must have the password in plaintext. The difference being that the plaintext is often “stored” in the user’s mind. If you were to set up a script to automatically log in and post something, then the password would have to be accessible/plaintext.

                                                                        1. 10

                                                                          From the tone of the article, I’m guessing the author is new(ish) to Linux. They don’t seem to understand that if you “roll your own” network connectivity by following random tutorials and running low-level networking utilities like wpa_supplicant directly, then yeah, you also get to figure out how to store the secrets securely, if that’s what you want.

                                                                          I’m not 100% positive about how GNOME stores wifi passwords. I would think in GNOME keyring? But I do know for sure that KDE Plasma prefers (although does not require) you to store wifi passwords securely in the Wallet.

                                                                          1. 2

                                                                            From the tone of the article, I’m guessing the author is new(ish) to Linux.

                                                                            That is a good guess :)

                                                                            They don’t seem to understand that if you “roll your own” network connectivity by following random tutorials and running low-level networking utilities like wpa_supplicant directly, then yeah, you also get to figure out how to store the secrets securely, if that’s what you want.

                                                                            In the last footnote in the post, I mentioned how the machine’s existing WiFi config (NetworkManager) also stored passwords in plaintext. The option of using a keyring didn’t really occur to me because I didn’t see it being used by the machine.

                                                                            Using a keyring also brings up another question - how would wpa_supplicant‘s config get the password from the keyring? I can run a command manually, but is it possible to automate this on boot? (I wouldn’t actually use this as a permanent setup, but just curious)

                                                                            1. 3

                                                                              wpa_supplicant gets controlled either over its control socket or its D-Bus API by e.g. NetworkManager (which in turn communicates with the keychain implementation, often also over D-Bus), and either gets the secret pushed with the connection request, or requests it over those connections once it needs the secret.
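You can watch the control-socket flavour of this by hand with wpa_cli; a sketch, with the interface name and credentials as placeholders:

    # push a network and its secret into a running wpa_supplicant
    wpa_cli -i wlan0 add_network                  # prints a network id, e.g. 0
    wpa_cli -i wlan0 set_network 0 ssid '"MyNetwork"'
    wpa_cli -i wlan0 set_network 0 psk '"correct horse battery staple"'
    wpa_cli -i wlan0 enable_network 0

A keyring-aware tool can do exactly this at connect time, which is how the password stays out of the on-disk config.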

                                                                              1. 1

                                                                                ohh I remember seeing the D-Bus connection by NetworkManager while working on this. This makes sense, thank you!

                                                                          2. 4

                                                                            You can use wpa_passphrase to precompute the WPA-PSK for a network and store that instead. It’s not going to stop a real attacker, but it can keep the password away from shoulder surfers and other low-effort attacks.
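For reference, the output looks like this (SSID and passphrase are placeholders; the actual 64-hex-digit value is elided):

    $ wpa_passphrase MyNetwork 'correct horse battery staple'
    network={
        ssid="MyNetwork"
        #psk="correct horse battery staple"
        psk=<64 hex digits: PBKDF2-SHA1(passphrase, ssid, 4096 iterations)>
    }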

                                                                            1. 2

                                                                              That option was mentioned in footnote 2 of the article, but was dismissed as “this (…) makes hashing pointless” because an attacker can use that value to gain access to the network.

                                                                              1. 3

                                                                                I didn’t see that footnote. Regardless, I agree with your original comment.

                                                                            2. 4

                                                                              Doesn’t every CPU have a “Secure Enclave” for storing secrets? I’m not familiar with how Linux uses it, but on Mac/iOS this is used as the basis of the Keychain, a secure db for keys and passwords.

                                                                              1. 5

                                                                                No. All modern PCs have a TPM, and all modern Intel machines have SGX (for now). Neither of these is exactly equivalent to Apple Silicon’s Secure Enclave. Probably the closest you can come is using the TPM to release a keychain encryption secret only if the computer booted with a genuine software chain (genuine bootloader, genuine kernel, etc. etc.) but this is not done anywhere today to my knowledge. TPM support only just landed in systemd (the production release was a month or two ago).

                                                                                1. 4

                                                                                  SGX has not been supported by Intel desktop CPUs for a few generations now.

                                                                                2. 2

                                                                                  That would at least protect against theft of a powered off laptop, as the keyring would only be accessible after the user has logged in to their machine. If an attacker tried to dump the hard drive, the password wouldn’t be there, as it would in the default scenario.

The same-ish protection would also apply to a regular keyring encrypted with the user’s login password, the difference being that with the TPM there isn’t even an encrypted blob to brute-force.

                                                                                  It wouldn’t do much about a root level attacker, which was the implied threat model, as file permissions only allowed reading by root (and someone reading directly from the drive).

                                                                                3. 4

                                                                                  I think you can do something like store a wifi password in gnome-keyring
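If you want to try it from a shell, libsecret’s secret-tool can store and fetch arbitrary secrets in the keyring; a sketch (the attribute name here is arbitrary, not a NetworkManager convention):

    # store the PSK; secret-tool prompts for it, keeping it out of shell history
    secret-tool store --label='Home WiFi PSK' wifi-ssid MyNetwork
    # retrieve it later
    secret-tool lookup wifi-ssid MyNetwork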

                                                                                  1. 3

                                                                                    Yeah, that is probably right. The passwords would have to be added to each user’s keyring if there are multiple users of the machine. You’d forego connectivity before user login after boot.

I think this isn’t done because whichever keyring might be installed is orthogonal to the WPA supplicant daemon. I’ve also run WPA supplicant on a Raspberry Pi, which doesn’t have any keyring at all – but it could have been an optional feature for supported keyring implementations.

                                                                                  2. 1

                                                                                    This makes sense.

This makes me realize that I probably should have elaborated on the “not sure why”. I was thinking of something along the lines of a keyring (I was thinking of Apple’s Keychain, which is where my WiFi password is stored on macOS). My Linux machine’s existing setup (NetworkManager) also stored passwords in plaintext, so I was unsure why a keyring setup wasn’t the default.

                                                                                    Using a keyring also brings up another question - how would wpa_supplicant‘s config get the password from the keyring? I can run a command manually, but is it possible to automate this on boot? (I wouldn’t actually use this as a permanent setup, but just curious)

                                                                                    1. 1

                                                                                      There’s a bunch of authentication protocols that could be used with WPA Enterprise access points (https://en.wikipedia.org/wiki/Extensible_Authentication_Protocol), so you could have a setup with EAP-TLS where the TLS private key is stored in an HSM/TPM, or EAP-SIM using a SIM card.
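A sketch of what an EAP-TLS network block can look like in wpa_supplicant.conf, with the private key behind a PKCS#11 URI so it never sits on disk as a file; all names and paths are placeholders:

    network={
        ssid="corp-wifi"
        key_mgmt=WPA-EAP
        eap=TLS
        identity="user@example.com"
        ca_cert="/etc/wpa_supplicant/ca.pem"
        client_cert="/etc/wpa_supplicant/user.pem"
        private_key="pkcs11:token=mytoken;object=wifi-key"
    }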

                                                                                    2. 52

                                                                                      Couldn’t agree more! I think I shared this on lobsters years ago, but my favorite thing in my ~/.zshrc is this little guy:

function t {
  pushd "$(mktemp -d "/tmp/$1.XXXX")"
}
                                                                                      

If I run ‘t blah’, I’ll drop into a temporary directory with a name like /tmp/blah.1tyC. I can goof around without worrying about cleaning up the mess later. When I’m done I can popd back to wherever I was. On the off chance I like what I did, I can just move the folder somewhere permanent. I use this every day; my $HOME would be unnavigable without it.

                                                                                      1. 8

                                                                                        I like to automate the “popd && rm” part in the alias directly. This bash function enters a subshell inside the tmpdir and when I exit or ^D, it pops back to the previous directory and deletes the tmpdir – with a little safeguard when you mounted something inside it! Had a bad time when experimenting with mount namespaces and accidentally deleted my home directory because it was mounted inside this tmpdir …

                                                                                        tmp() {
                                                                                          history -w || true
                                                                                          t=$(mktemp --tmpdir -d tmpdir-XXXXXX) \
                                                                                            && { $SHELL -c \
                                                                                             "cd '$t' \
                                                                                              && printf '\033[31m%s\033[0m\n' 'this directory will be removed upon exit' \
                                                                                              && pwd \
                                                                                              && exec $SHELL" \
                                                                                             || true; \
                                                                                            } \
                                                                                            && if awk '{ print $2 }' /etc/mtab | grep "$t"; then
                                                                                              echo -e "\033[31maborting removal due to mounts\033[0m" >&2
                                                                                            else
                                                                                              echo -e "\033[31mremoving temporary directory ...\033[0m" >&2
                                                                                              rm -rf "$t"
                                                                                            fi
                                                                                        }
                                                                                        

                                                                                        Here is a more recent one for fish as well.

                                                                                        2. 5

I have nearly the same function, but I’m using ~/tmp as the base, precisely because /tmp is often emptied on boot and I know that I sometimes want to go back to these experiments.

                                                                                          Using ~/tmp helps keep my home directory clean and makes it obvious that the stuff is easily removable, but if, on an off-chance, I might need one of those experiments again, it’s there, waiting for me in ~/tmp even though I might have rebooted in the mean time.

                                                                                          But in general, yes, this is the way to go. I’ve learned it here on lobsters years ago and I’m using it daily.

                                                                                          1. 1

I like this idea. I came to the same conclusion as you, but instead of doing ~/tmp, I added “tmp” to my personal monorepo’s .gitignore. But sometimes I would put random downloads in that folder, lol. Thanks for the idea; I feel like having two of these makes sense, one for coding stuff and one for everything else.

                                                                                          2. 4

                                                                                            This is absolutely fantastic. I’ve been independently using this trick for years (through ^R instead of an alias) and I love it too. I didn’t know mktemp took an argument though, thank you!

                                                                                            1. 3

                                                                                              I like to do programming-language-specific versions of this. So like “newrust” creates a temp dir, puts a Cargo hello world project in there, cd’s in there, and opens my editor. Similarly “newgo”, “newcpp”, etc. Great for playing around.
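That pattern is a tiny wrapper around the same trick; a sketch of what a newrust could look like (assuming cargo and $EDITOR; the details are guesses, not the commenter’s actual function):

    function newrust {
      pushd "$(mktemp -d /tmp/rust.XXXX)"   # same throwaway-dir trick as t
      cargo init --name scratch             # drop in a hello-world project
      ${EDITOR:-vi} .
    }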

                                                                                              1. 1

                                                                                                I tend to have two use-cases for these, one that’s very very temporary (single shell instance), and one that should stick around a little longer (i.e. duration of the boot). This works out to something along the lines of

                                                                                                t() {
                                                                                                	mkdir -p /tmp/t
                                                                                                	cd /tmp/t
                                                                                                }
                                                                                                tt() {
                                                                                                	local dir=$(mktemp -d)
                                                                                                	[ -d "$dir" ] || return 1
                                                                                                	cd "$dir"
                                                                                                	trap "rm -rf '$dir'" EXIT
                                                                                                }
                                                                                                

                                                                                                Notably, I use cd because I rarely popd at the end, I usually just close the shell once I’m done with it. I probably should do it anyway though :)

                                                                                                1. 1

                                                                                                  Oh man, thank you so much, this is the kind of “small purpose, great impact” tools that I love! Here’s my fish version where the argument is optional.

                                                                                                  function t
                                                                                                      if test -z $argv[1]
                                                                                                          set dirname xx
                                                                                                      else
                                                                                                          set dirname $argv[1]
                                                                                                      end
                                                                                                      pushd (mktemp -d -t $dirname.XXXX)
                                                                                                  end
                                                                                                  
                                                                                                  1. 1

What kind of experiments are you doing with folders exactly? I’m curious about the overall workflow in which you would want to keep a folder’s contents or not.

                                                                                                    1. 7

                                                                                                      Personally I often try out small experiments with different programming languages which need a project structure to be set up before they can work properly. For this a new folder is often needed.

                                                                                                      1. 2

                                                                                                        For me, I just do a lot of stuff directly in /tmp, but this seems nice to keep things organized in case I want to move a directory to somewhere more permanent.

                                                                                                        Scenario A: I’m trying to test something in C/Zig and don’t want to pollute my actual project directory. I just make a /tmp/test.{c, zig} and compile in /tmp. I think putting it in a temp dir would be nice, if unnecessary.

                                                                                                        Scenario B: I, semi-frequently, will clone git repos into /tmp if I’m just trying to quickly look at something. Occasionally, I clone multiple related repos at the same time. Having a temp dir to keep them together would be nice if I ended up wanting to move them out.

                                                                                                        1. 2

                                                                                                          For me it’s poking around in exploded jar files, or tarballs, or other archive formats mostly.

                                                                                                          Sometimes you don’t just want to list the contents, or maybe there are container formats in container formats, a zip inside a tar, inside a cpio for example.

                                                                                                          I want to unpack all these, look around, examine files, etc. without worrying about needing to clean up the mess after.

                                                                                                        2. 1

                                                                                                          I have the exact same function. It’s so freakin’ useful.

                                                                                                          1. 3

                                                                                                            This. Cryptography is quite the exception in software, in that it’s pretty much the only domain where people are crying left and right not to do it. But this indeed applies to pretty much everything. And yet, we don’t hear nearly as much outcry when it’s about “merely” processing untrusted input — though we are being increasingly serious about using memory safe languages for this.

                                                                                                            1. 9

                                                                                                              Security is unusual in that correctness is mostly about what you do in erroneous cases. For most software, if there is a bug of the form ‘a user does this weird and stupid thing and the software doesn’t handle it correctly’, it’s safe to make that low priority and maybe document that users shouldn’t do the stupid thing. In security, you replace ‘user’ with ‘attacker’ in the above and now your code is broken and it’s a high-priority fix.

                                                                                                              1. 2

                                                                                                                Oh, I see. Since the first time I learned to properly test my code was when working on Monocypher, I was kinda blind to the difference. Now I think I get it: for casual stuff, we care about the happy path. For security stuff, the error paths are just as important, if not more.

                                                                                                                I can see how this affects tests: when testing the happy path, you just seek to confirm your theory that your software probably kinda works. When testing the error paths, it’s more about trying to disprove that theory. Both approaches are about correctness, but they’re very different approaches. I just happen to systematically use the second one, except for the most casual stuff.

                                                                                                                I need to update this.

                                                                                                                1. 5

                                                                                                                  Now I think I get it: for casual stuff, we care about the happy path. For security stuff, the error paths are just as important, if not more.

                                                                                                                  Not just casual stuff. Imagine you ship, say, an office suite. In the word processor, if you select the correct three fonts, which are not system fonts on any supported platform, in adjacent text and then mash the keys really quickly while the third one is selected, it crashes. Is this a high-priority bug? Probably not: most users will not hit the first condition and so hitting both in a row is really unlikely.

                                                                                                                  In a security context, that crash may be a symptom of a data corruption that can lead to arbitrary-code execution. Now it’s something you need to care about.

                                                                                                                  This is the biggest issue I see when people start to think about security. It’s not just about being correct, it’s about being correct in the presence of an intelligent adaptive adversary. That’s a very different mindset because most people do not look at a system and immediately think ‘I could break this if I did these four things in a row and these two concurrently’. Those that do either end up in security or law (on one side or the other).

                                                                                                                  1. 1

                                                                                                                    Ok, I see what you mean: in an adversarial context, what should have been an unlikely glitch can quickly transform into an easily exploitable vulnerability leading to remote code execution: both the likelihood and stakes are drastically raised, sometimes to the point of transforming something negligible into something critical.

                                                                                                                    It’s not just about being correct, it’s about being correct in the presence of an intelligent adaptive adversary.

                                                                                                                    That’s where we differ, I think. Personally, I think of correctness irrespective of context. It’s simpler that way: correct software satisfies all requirements, which by my definition include all security requirements. Vulnerable software fails to satisfy at least one security requirement, and is therefore incorrect. The software doesn’t become “more incorrect” when I add an intelligent adversary into the mix.

                                                                                                                    But that’s because I don’t think of correctness in terms of probability of occurrence. The only exception to that rule I allow myself is for stuff like cryptographic hash collisions, which I know are not impossible, but are improbable enough that I can ignore this “bug”. That may be too black&white an attitude.

                                                                                                                    That said, I hate maintaining existing software, and prioritising bugs just hurts me.

                                                                                                                    1. 5

                                                                                                                      Personally, I think of correctness irrespective of context.

                                                                                                                      Nontrivial software is almost never correct. Even formally verified software just guarantees that the bugs are present in the specification as well as the implementation. Most software does not have a formal specification to define correctness, let alone proofs of correctness.

                                                                                                                      In the absence of such a specification, you have to prioritise the kinds of bugs you want to try to eliminate by construction and the ones that you want to ensure are low probability by careful testing. That prioritisation is very different if you assume the person providing the inputs to your program is incentivised to make it work correctly or break it.

                                                                                                                      1. 1

                                                                                                                        You make too good a case for me to disagree. But then we have a problem: the second your program is processing untrusted inputs, it’s a security context, and priorities shift accordingly. Thing is, we process untrusted input everywhere. Anything networked, anything that reads external documents or multimedia content… That is way too much software to ever hope to be secure.

                                                                                                                        I’m guessing the only viable solution is to move as much software as we can out of a security context. An image reader for instance can guarantee the absence of remote code execution if it is implemented in a memory safe language (now the security requirements are on the compiler). One could properly parse & validate data before passing it to the rest of the program, which should severely limit (eliminate if we’re lucky) the possibility of attack if the parser is correct.
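
                                                                                                                        A minimal sketch of that parse-then-trust boundary in Python (the Command shape and its field rules are made up for illustration): everything downstream of parse_command only ever sees a well-formed, typed value.

                                                                                                                            import json
                                                                                                                            from dataclasses import dataclass

                                                                                                                            @dataclass(frozen=True)
                                                                                                                            class Command:
                                                                                                                                action: str
                                                                                                                                brightness: int

                                                                                                                            ALLOWED_ACTIONS = {"on", "off", "dim"}

                                                                                                                            def parse_command(raw: bytes) -> Command:
                                                                                                                                # Untrusted bytes cross this boundary exactly once; past it,
                                                                                                                                # the rest of the program handles only a validated value.
                                                                                                                                obj = json.loads(raw)
                                                                                                                                if not isinstance(obj, dict) or set(obj) != {"action", "brightness"}:
                                                                                                                                    raise ValueError("unexpected shape")
                                                                                                                                if obj["action"] not in ALLOWED_ACTIONS:
                                                                                                                                    raise ValueError("unknown action")
                                                                                                                                if not isinstance(obj["brightness"], int) or not 0 <= obj["brightness"] <= 100:
                                                                                                                                    raise ValueError("brightness out of range")
                                                                                                                                return Command(obj["action"], obj["brightness"])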

                                                                                                                        Though wasn’t it you who said to me that once we have a trusted enclave, everyone wants to be in the enclave? Not that we should allow it, but I sense conflicting incentives.

                                                                                                                        1. 4

                                                                                                                          One could properly parse & validate data before passing it to the rest of the program, which should severely limit (eliminate if we’re lucky) the possibility of attack if the parser is correct.

                                                                                                                          You should read about Qubes OS’ trusted image system.

                                                                                                                          1. 4

                                                                                                                            But then we have a problem: the second your program is processing untrusted inputs it’s a security context, and priorities shift accordingly

                                                                                                                            Absolutely. There are three things that help:

                                                                                                                            • Some programs simply do not process untrusted inputs. Unfortunately, it’s very common for programs to be designed to process only trusted inputs and then to discover that some inputs are untrusted. This is how we got MS Office Macro viruses.
                                                                                                                            • Some programs run sandboxed and so a complete compromise doesn’t matter too much.
                                                                                                                            • Most programs that process untrusted data do so only from a small subset of their inputs. For example, if you open an Office document, anything in that is untrusted, but anything coming from the user is probably fine to trust.

                                                                                                                            An image reader for instance can guarantee the absence of remote code execution if it is implemented in a memory safe language (now the security requirements are on the compiler).

                                                                                                                            Or if it sandboxes the decoder. If you run libpng, libjpeg, and so on in a sandbox where the input is a file in a complex format and the output is an uncompressed bitmap then an attacker who provides a malicious file that gets arbitrary code execution in the image library can generate an arbitrary image as output. Conveniently, that’s exactly the same as an attacker who just provides an image that doesn’t rely on any exploits. This is exactly the kind of thing that Capsicum and CHERI were designed to support.
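
                                                                                                                            To make the shape of that concrete, here is a rough Python sketch. This is only process separation, which is much weaker than the Capsicum/CHERI approach described above (a real sandbox would also confine the child with OS mechanisms), and it assumes Pillow is available as a stand-in decoder:

                                                                                                                                import io, struct, subprocess, sys

                                                                                                                                DECODER = r"""
                                                                                                                                import io, struct, sys
                                                                                                                                from PIL import Image  # assumed available; any decoder works here
                                                                                                                                img = Image.open(io.BytesIO(sys.stdin.buffer.read())).convert("RGBA")
                                                                                                                                w, h = img.size
                                                                                                                                sys.stdout.buffer.write(struct.pack("<II", w, h))
                                                                                                                                sys.stdout.buffer.write(img.tobytes())
                                                                                                                                """

                                                                                                                                def decode_untrusted(data: bytes) -> tuple[int, int, bytes]:
                                                                                                                                    # Run the decoder in a throwaway child process: even if a malicious
                                                                                                                                    # file owns the decoder, all it can hand back is a (garbage) bitmap.
                                                                                                                                    out = subprocess.run(
                                                                                                                                        [sys.executable, "-I", "-c", DECODER],
                                                                                                                                        input=data, capture_output=True, timeout=5, check=True,
                                                                                                                                    ).stdout
                                                                                                                                    w, h = struct.unpack_from("<II", out)
                                                                                                                                    pixels = out[8:]
                                                                                                                                    # Treat the child's output as untrusted too: fixed format, sane bounds.
                                                                                                                                    if w > 16384 or h > 16384 or len(pixels) != w * h * 4:
                                                                                                                                        raise ValueError("decoder returned a malformed bitmap")
                                                                                                                                    return w, h, pixels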

                                                                                                                            Though wasn’t it you who said to me, that once we have a trusted enclave everyone wants to be in the enclave?

                                                                                                                            Sounds like something I say regularly. I think this is different because you’re encouraging people to put things in the sandbox, not the secure world, and it’s possible to have a lot of mutually distrusting sandboxes.

                                                                                                                            Most of what we’ve done in CHERIoT has been around making building software like this easy, rather than just possible.

                                                                                                                            1. 1

                                                                                                                              it’s possible to have a lot of mutually distrusting sandboxes.

                                                                                                                              Got it.

                                                                                                                  2. 1

                                                                                                                    Users should not enter names like Johnny'; DROP TABLE users; … into a form, or id=123%20OR%201%3D1 into an HTTP GET parameter. You can put it in the manual and users may follow this rule… but you can replace “user” with “attacker” everywhere.
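
                                                                                                                    For illustration, the standard fix is to keep user input as data via parameterised queries; a tiny Python/sqlite3 sketch:

                                                                                                                        import sqlite3

                                                                                                                        conn = sqlite3.connect(":memory:")
                                                                                                                        conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

                                                                                                                        def add_user(name: str) -> None:
                                                                                                                            # The placeholder keeps hostile input as data, never as SQL.
                                                                                                                            conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

                                                                                                                        add_user("Johnny'; DROP TABLE users; --")
                                                                                                                        print(conn.execute("SELECT name FROM users").fetchall())
                                                                                                                        # -> [("Johnny'; DROP TABLE users; --",)]; the table survives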

                                                                                                              2. 2

                                                                                                                Summary after a day of comments. Since many claim something is insecure, bad params, bad library, etc, (many unrelated to the use-case in question), I’ll put my money where my mouth is. Bounty for breaking my terrible choices: https://github.com/dsagal/plainopen-mill-blog/discussions/2#discussioncomment-12028272.

                                                                                                                1. 4

                                                                                                                  Cross-posting a GitHub comment I just left (I thought about just writing it on Lobsters, but I wanted people not on Lobsters to see the warning):

                                                                                                                  This contest does not provide the reassurance you think it does. While there are some differences due to the use case, it reminds me strongly of when Telegram ran a contest for people to break their garbage encryption. Here’s a great post (since taken down both from the live internet and the Wayback Machine for some reason…) on why Telegram’s contest - and this one - prove nothing about the security of the cryptography. Borrowing (stealing, really) from that blog post, this contest is missing: no MITM perspective, no known plaintext, no chosen plaintext, no chosen ciphertext, no tampering, no replay access (maybe this one isn’t relevant since it’s not a messenger, but the rest are), etc.

                                                                                                                  You might argue that some of these are unfair. For example, in the use case the blog post describes (two humans manually encrypting blobs to each other), an attacker probably won’t get a chosen plaintext because that would require some social engineering. To which I say: do you really think it is a good idea to have the security of the cryptography rely on nobody ever building some kind of automation wrapper around this to make it a little more convenient to use? Not just anybody, a software engineer?

                                                                                                                  What you really want is a password manager. So just pay for Bitwarden or 1Password and get on with it. But okay, no, the blog post wants something that can be easily understood and audited. So just use age instead. It isn’t as small as this script, sure, but my guess is it’s not too hard to audit. And meanwhile, it’s actually secure and peer-reviewed (plus now you don’t have to maintain it).

                                                                                                                  Using this code is a gamble, and a bad one. This contest does nothing to demonstrate otherwise.

                                                                                                                  1. 3

                                                                                                                    I doubt anyone will break it. Typically attacks on this sort of thing are going to consist of attacking the live system or surrounding code. IMO your live system is informal enough that attacks are probably impractical (it’s one human sending another human a blob, the other human decrypts it, and then tries to use it). The issues would be if this were ever put behind an API or if it evolved at all.

                                                                                                                    If I were to attack a system using this cryptography I’d start with things like:

                                                                                                                    1. Seeing how it handles modifications to the ciphertext. Extending values, etc. If it’s aes-256-gcm, could I collide the tag? I wouldn’t spend the time to do it, but I’d think about it at least.

                                                                                                                    2. I’d see if there’s something in that service that deserializes the value after.

                                                                                                                    But if all I have is a static blob to decrypt, yeah I doubt anyone is going to decrypt it. The fact that you’re using gcm now makes attacks far less practical, assuming you dropped the RSA encryption.

                                                                                                                    That said, people are still going to recommend you use the cryptographic primitives that maximize performance, maximize security properties and their strengths, and minimize footguns. Even without a break, people will recommend that, and they’ll be right.

                                                                                                                    1. 1

                                                                                                                      Finally a comment I mostly agree with! It’s been frustrating, when I say “I use this thing for purpose x”, to get responses “It’s so very wrong to use this thing for y”. There is no service. The use-case is explained.

                                                                                                                      I still believe that both RSA and RSA+AES are equally secure for this use case. (The bounty provides ciphertext produced using both approaches.) Though I succumbed to the pressure to leave in only RSA+AES (and I like that it reduces my tiny script further), that may even be weaker: now either a weakness in RSA or a weakness in AES will make the result weak. As for all the advice to drop OpenSSL, that’s missing the point of the original post. It was about using tools we already have.

                                                                                                                      1. 1

                                                                                                                        Finally a comment I mostly agree with!

                                                                                                                        Probably because I’m a developer so I’ve actually been where you are lol. The reality is that a lot of cryptographers would see something like “2^32 keys before it’s unsafe” as ridiculously bad, but also for a use case where one value gets encrypted one time by a human… 2^32 will never be reached. But obviously 2^48 is better! But is 2^32 so horrific? No, obviously not.

                                                                                                                        idk it’s a whole thing.

                                                                                                                  2. 12

                                                                                                                    I think that the developer being more interested in using Rust is a very good way to pass the torch to new developers. A lot has changed since C, and I think that rewriting some of these historic utilities in a more modern language is a very noble and useful effort.

                                                                                                                    1. 6

                                                                                                                      Agreed. I think this is a much better argument than the security argument. Not that the security argument is entirely false but… I’ve always thought it was a little silly. Really, ls needs to be memory-safe badly enough that you’re going to spend time rewriting it instead of, say, libpng?

                                                                                                                      1. 13

                                                                                                                        Yep, and this is why initiatives focused on security, such as the ISRG’s Prossimo project, are not pouring resources into uutils but instead into Rust replacements for things like OpenSSL, sudo or zlib.

                                                                                                                        Uutils is nifty nonetheless, but I think of it primarily as an educational/fun endeavor, not a security one. Everyone knows coreutils so it’s a good place to cut your teeth with Rust.

                                                                                                                        1. 5

                                                                                                                          I’m not sure if I’ve ever seen the uutils folks talk much about security. I’ve seen a lot of arguments about it in comment sections, but I haven’t seen the project itself argue the security angle.

                                                                                                                          1. 2

                                                                                                            The blog post talks about it a decent amount - 5 times, according to Ctrl-F. BUT, one of those times is an FAQ entry I missed on the first read-through, saying they don’t find security a compelling argument. So, 🤷

                                                                                                                      2. 28

                                                                                                                        I think the reality is that “don’t roll your own crypto” was probably good for getting people to stop rolling their own Caesar ciphers and calling it AES, but has been extremely insufficient as practical advice otherwise. Developers have to “roll their own crypto” by some definition sometimes. The article points this out and I think this is the key:

                                                                                                                        Designing your own cryptography protocol on top of standard cryptography libraries? This is way more novel than you think it is.

                                                                                                                        Most developers think of crypto as being a local property that can be wrapped by a protocol so that all of the safety is encapsulated, but it isn’t. For example, they don’t think about what happens after decryption, like when the data is deserialized - deserializing isn’t crypto, therefore it’s not a security concern, I think, to devs. But of course, as many know, deserialization is extremely sensitive if the data was previously encrypted, even under GCM if you need to care about auth tag collisions.

                                                                                                                        I tend to see two major issues:

                                                                                                                        1. Using a library that sucks like openssl, which does insane things like set a null IV if unset

                                                                                                                        2. Protocol issues where crypto is treated as a black box and everything about the values going in/ coming out is treated as not-crypto-related

                                                                                                                        Comms just need to change. Devs like practical information. They don’t like “this is weak because it doesn’t X”; they like “if it doesn’t X an attacker can do Y, which would undermine Z”. Devs think of things as binary “can decrypt it” vs “can’t decrypt it” vs “reduces the cost of decryption” etc, and that needs to change too.

                                                                                                                        I’m not a cryptographer. I’m largely uncomfortable writing code that does crypto things so I defer to libraries or colleagues where possible, but I have had to do a few things before and it’s been interesting communicating why X is unsafe to developers.

                                                                                                                        1. 18

                                                                                                                          One way I like to try to communicate this is that generalist developers often have a blind spot about crypto code because you can’t test it in the same way.

                                                                                                                          Ciphertext looks like binary nonsense? Job done, it must be encrypted.

                                                                                                                          If you ask a generalist developer how confident they’d be writing the code to implement a client for a communication protocol without ever running or testing it, when it has to work in production anyway, they get a better idea of the challenge of successfully implementing a cryptosystem.

                                                                                                                          (Also the word ‘crypto’ on its own is unhelpful, because it is used to mean both the low-level algorithms and the “whole cryptosystem”)

                                                                                                                          1. 12

                                                                                                                            I agree entirely re: devs seeing “it’s a blob” as “must be working”. And it’s hard to know if it’s a good blob or a bad blob.

                                                                                                                            I think they’d have an easier time testing it if they knew about expected properties. For example, here is a test I have for some loose wrapper I wrote around some cryptography (99% of the code is just providing safe APIs that ensure things like random IVs, a Secret class that ensures it’s not accidentally logged, etc). I’m cutting 99% of the test out, but…

                                                                                                                                # Flip one bit at the end of the ciphertext; authenticated
                                                                                                                                # decryption must reject the tampered blob outright.
                                                                                                                                tampered_ciphertext =
                                                                                                                                  encrypted_data.ciphertext[0...-1] + (encrypted_data.ciphertext[-1].ord ^ 1).chr
                                                                                                                                expect{decrypt(tampered_ciphertext)}.to raise_error(Crypto::Errors::CipherError)
                                                                                                                            

                                                                                                                            Basically “does this property hold?” tests for each expected property of this code. Similarly, I have properties like “encrypting the same plaintext twice leads to two different ciphertexts”. And many of these tests have a 1000.times.each wrapper with randomized inputs to ensure the properties hold up beyond coincidence.
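
                                                                                                                            For readers without the Ruby context, roughly the same two properties can be checked in Python against the cryptography package’s AESGCM; the encrypt/decrypt wrappers below are hypothetical stand-ins for the kind of wrapper being described:

                                                                                                                                import os
                                                                                                                                from cryptography.exceptions import InvalidTag
                                                                                                                                from cryptography.hazmat.primitives.ciphers.aead import AESGCM

                                                                                                                                key = AESGCM.generate_key(bit_length=256)
                                                                                                                                aead = AESGCM(key)

                                                                                                                                def encrypt(pt: bytes) -> bytes:
                                                                                                                                    nonce = os.urandom(12)  # fresh random nonce per message
                                                                                                                                    return nonce + aead.encrypt(nonce, pt, None)

                                                                                                                                def decrypt(blob: bytes) -> bytes:
                                                                                                                                    return aead.decrypt(blob[:12], blob[12:], None)

                                                                                                                                for _ in range(1000):
                                                                                                                                    pt = os.urandom(32)
                                                                                                                                    # Property: same plaintext, two different ciphertexts.
                                                                                                                                    assert encrypt(pt) != encrypt(pt)
                                                                                                                                    # Property: flipping a single bit must make decryption fail loudly.
                                                                                                                                    blob = bytearray(encrypt(pt))
                                                                                                                                    blob[-1] ^= 1
                                                                                                                                    try:
                                                                                                                                        decrypt(bytes(blob))
                                                                                                                                        raise AssertionError("tampered ciphertext was accepted")
                                                                                                                                    except InvalidTag:
                                                                                                                                        pass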

                                                                                                                            To write these tests you have to know what properties you want, though. I’m hesitant to comment on the project that spurred this, but one thing the author was unaware of was that encryption is really more of a read-protection; it’s AEAD that provides write-protection (and strengthens read-protection on top of that!). I think that most developers think of encrypted values as having a sort of tamper-proof property, even though that’s not the case at all.

                                                                                                                            These sorts of properties are things that devs can actually understand and test for, imo. What they tend to have a harder time with is knowing which properties to care about and how much, in my experience. Developers have a very hard time determining risk, which is where a security pal can be super helpful.

                                                                                                                            1. 4

                                                                                                                              I think they’d have an easier time testing it if they knew about expected properties.

                                                                                                                              I’ll add that this is true of pretty much anything. Cryptographic code has higher stakes and is easier to screw up than “ordinary” code, so it needs it more; but personally, whenever I’m doing something even remotely tricky, I use property based tests to validate it. No way I can trust it until I do.

                                                                                                                              Examples of things I wrote that required property based tests to find all the bugs: ring buffers, multiple-writers-single-reader message queues, parsers.
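
                                                                                                                              For flavour, here is what such a property looks like for a parser, using Hypothesis; the encode/decode pair is a toy example, not any of the code mentioned above:

                                                                                                                                  from hypothesis import given, strategies as st

                                                                                                                                  def encode(fields: list[str]) -> str:
                                                                                                                                      # Join fields with commas, escaping backslashes and commas.
                                                                                                                                      return ",".join(f.replace("\\", "\\\\").replace(",", "\\,") for f in fields)

                                                                                                                                  def decode(line: str) -> list[str]:
                                                                                                                                      out, cur, it = [], [], iter(line)
                                                                                                                                      for ch in it:
                                                                                                                                          if ch == "\\":
                                                                                                                                              cur.append(next(it, ""))   # escaped character
                                                                                                                                          elif ch == ",":
                                                                                                                                              out.append("".join(cur)); cur = []
                                                                                                                                          else:
                                                                                                                                              cur.append(ch)
                                                                                                                                      out.append("".join(cur))
                                                                                                                                      return out

                                                                                                                                  @given(st.lists(st.text(), min_size=1))
                                                                                                                                  def test_round_trip(fields):
                                                                                                                                      # The property: decoding what we encoded returns the original.
                                                                                                                                      assert decode(encode(fields)) == fields

                                                                                                                                  test_round_trip()  # Hypothesis generates ~100 adversarial cases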

                                                                                                                            2. 4

                                                                                                                              generalist developers often have a blind spot about crypto code because you can’t test it in the same way.

                                                                                                                              Unless you have test vectors, or a reference implementation to compare to. Then you mostly can test it in the same way. Gotta generate lots of tests of course, your regular unit tests obviously won’t cut it. But it remains a matter of mundane correctness — only the possibility of errors and the stakes are higher.
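
                                                                                                                              The simplest version of this is a known-answer test; for example, Python’s hashlib against NIST’s published SHA-256 vector for “abc”:

                                                                                                                                  import hashlib

                                                                                                                                  # Known-answer test: NIST's SHA-256 vector for the message "abc".
                                                                                                                                  expected = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
                                                                                                                                  assert hashlib.sha256(b"abc").hexdigest() == expected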

                                                                                                                              The one thing that escapes ordinary tests even if you have a reference is side channels. That stuff tends to require knowledge that the side channel might be a problem in the first place (rule of thumb: without physical access you only care about timings; with physical access you also care about energy consumption and EMI), and how to cut all flow of information from secrets to the side channel, which may require intimate platform knowledge.

                                                                                                                              The minute you do something that doesn’t have a reference to compare to however, good luck.

                                                                                                                              1. 7

                                                                                                                                You can test a crypto algorithm (e.g. did I implement AES correctly) with test vectors, you can’t test a cryptosystem (e.g. am I at risk of nonce re-use which will completely invalidate my system).
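
                                                                                                                                Nonce re-use is a good example: an implementation can pass every test vector and still be broken by it. A sketch of the failure with AES-GCM, using Python’s cryptography package:

                                                                                                                                    import os
                                                                                                                                    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

                                                                                                                                    key = AESGCM.generate_key(bit_length=256)
                                                                                                                                    aead = AESGCM(key)
                                                                                                                                    nonce = os.urandom(12)                # the bug: one nonce, used twice

                                                                                                                                    p1 = b"attack at dawn!!"
                                                                                                                                    p2 = b"retreat at dusk!"
                                                                                                                                    c1 = aead.encrypt(nonce, p1, None)
                                                                                                                                    c2 = aead.encrypt(nonce, p2, None)

                                                                                                                                    # GCM is CTR mode underneath: reusing the nonce reuses the keystream,
                                                                                                                                    # so XORing the two ciphertexts cancels it and leaks p1 XOR p2.
                                                                                                                                    leak = bytes(a ^ b for a, b in zip(c1, c2))
                                                                                                                                    assert leak[:len(p1)] == bytes(a ^ b for a, b in zip(p1, p2))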

                                                                                                                                1. 2

                                                                                                                                  Ah, those pesky nonces. I agree, those need to be proven correct in some way (“I’m using random nonces from a trusted random source”), though tests can in some cases increase confidence. For instance, if you’re using a counter that is not transmitted over the network, you can verify that whenever a message is encrypted with the wrong nonce (that is, anything but the previous one + 1), decryption fails. It’s not enough, but it helps.

                                                                                                                                2. 4

                                                                                                                                  See my example above. I could run test vectors against libhydrogen and they’d all be fine. My code is even fine if my threat model is defence against a passive adversary. If my threat model is an active adversary in control of the MQTT server, it is not.

                                                                                                                                  1. 4

                                                                                                                                    Encryption isn’t much of a defence against a passive observer when most of the interesting information is the existence of the message and its sender :-)

                                                                                                                                3. 3

                                                                                                                                  To give a concrete example, I just extended our IoT lightbulb demo to use end-to-end encryption, using libhydrogen’s secret box abstraction (libhydrogen is from the same people as libsodium and is a smaller version for embedded devices with fewer cyphers). The key is randomly generated by the function exposed from the library and communicated out of band (the phone scans a QR code to pair with the device). Messages are relayed via an MQTT server; libhydrogen manages authenticated decryption and will fail if messages are encrypted with the wrong key.

                                                                                                                                  Nice and secure, right?

                                                                                                                                  Well, it depends on the threat model. The demo wants to be able to have multiple phones controlling the light. At the same time, if the device loses network, it will miss MQTT packets. This means that there isn’t any kind of protection against replays. The MQTT server can retransmit any message that it’s seen before to control the light.

                                                                                                                                  Is that a problem? You’re protected against passive snooping of the server, but not against an active adversary. Up to you to decide whether that matters. It is possible to protect against replays but it’s more engineering work (it now requires at least some loose synchronisation, whereas previously the controllers were unidirectional).

                                                                                                                                  1. 2

                                                                                                                                    That sounds like a home-rolled protocol with issues, which is exactly what the article is discussing ;-).

                                                                                                                                    That doesn’t seem like ideal code to have out in the wild. Maybe you would consider writing something more generically safe? I’d expect you to use either an interactive protocol (challenge-response-ish - requires some volatile state) or to keep some per-phone state (separate keys or a per-phone nonce - requires some non-volatile state).

                                                                                                                                    1. 4

                                                                                                                                      A realistic deployment would not use an untrusted MQTT server, it would use one that was either provided by the device vendor or run by the user, so the extra crypto is defence in depth in case an attacker somehow manages to snoop those messages. This is possible due to misconfiguration of the server. If the server is so broken that an untrusted party can send arbitrary messages, you already have a complete denial of service attack on the system.

                                                                                                                                      To be honest, I probably wouldn’t bother with the E2EE for a real use case because sensible ACLs on the server will do a better job and the server has to be in the TCB for availability anyway (even with all of the encryption in the world, it can still drop all messages). The demo is mostly about how you can pick up existing libraries (libhydrogen, a QR Code lib, the LCD drivers) and run them with least privilege. The crypto isn’t the focus of the example. It just serves to remind you that you need to think about threat models when you deploy something like this.

                                                                                                                                      If I wanted to fix it, then the device would publish a monotonically increasing 64-bit counter every ten seconds if it had received any messages in the preceding 20 seconds. It would use this as the context parameter on the secret box and would try both contexts for decryption. You would be vulnerable to replays only if an attacker sent a message that you’d sent in the previous 20 seconds, which is easy to spot (it will happen only while you’re controlling the lightbulb).

                                                                                                                                      I mostly didn’t because writing Android apps (the controller runs on Android) is so much harder than writing CHERIoT device firmware and I didn’t want to touch it more than I had to.

                                                                                                                                      1. 5

                                                                                                                                        You probably know this, but it bears repeating: any “sample” code will end up in production use eventually, including sample keys etc, regardless of warnings and such in the documentation. So be extra careful when providing such examples. It’s almost better to not have any examples at all :(

                                                                                                                                        1. 2

                                                                                                                                          If you have built a system so broken that the second layer of defence in depth having limitations is a problem, nothing I do can save you.

                                                                                                                                        2. 1

                                                                                                                                          If you wanted to fix this, I’d consider going for (wrapping every message for simplicity):

                                                                                                                                          • client sends “request-challenge”
                                                                                                                                          • light sends “challenge: <nonce>”
                                                                                                                                          • client sends “command: <nonce + command>”

                                                                                                                                          Using an incrementing 64-bit counter as a nonce is perfectly fine, but requires nonvolatile state; if you can assume that you have a decent RNG, I’d just pick a 256-bit random value for the nonce (128 bits is also almost certainly enough to avoid collisions given that the lightbulb is slow, but 256 bits is enough that you don’t need to think about it.)

                                                                                                                                          Of course, you can also skip the request-challenge step if you assume clocks are sufficiently-synchronized. Or if the light can send a challenge and leave the challenge enqueued on the MQTT server (I must admit that I’m far from an expert on MQTT…)

                                                                                                                                          [EDITed typo: challenged -> challenge in last line.]
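
                                                                                                                                          A rough sketch of the freshness half of that exchange (HMAC-based authentication only; the encryption layer discussed above is omitted, and all names here are made up):

                                                                                                                                              import hashlib, hmac, os

                                                                                                                                              KEY = os.urandom(32)             # stand-in for the shared pairing secret
                                                                                                                                              outstanding: set[bytes] = set()  # challenges issued but not yet spent

                                                                                                                                              def make_challenge() -> bytes:
                                                                                                                                                  # Light: mint a fresh single-use nonce; replays die here.
                                                                                                                                                  nonce = os.urandom(32)
                                                                                                                                                  outstanding.add(nonce)
                                                                                                                                                  return nonce

                                                                                                                                              def sign_command(nonce: bytes, command: bytes) -> bytes:
                                                                                                                                                  # Client: bind the command to this particular challenge.
                                                                                                                                                  return hmac.new(KEY, nonce + command, hashlib.sha256).digest()

                                                                                                                                              def accept(nonce: bytes, command: bytes, tag: bytes) -> bool:
                                                                                                                                                  # Light: the nonce must be one we issued and not yet seen.
                                                                                                                                                  if nonce not in outstanding:
                                                                                                                                                      return False
                                                                                                                                                  outstanding.discard(nonce)
                                                                                                                                                  return hmac.compare_digest(tag, sign_command(nonce, command))

                                                                                                                                              n = make_challenge()
                                                                                                                                              assert accept(n, b"on", sign_command(n, b"on"))
                                                                                                                                              assert not accept(n, b"on", sign_command(n, b"on"))  # replay rejected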

                                                                                                                                          1. 2

                                                                                                                                            That approach would be hard to make work with MQTT. And using a 128-bit nonce would require using a different set of cyphers.

                                                                                                                                            1. 1

                                                                                                                                              Okay; thanks for entertaining my questions, in any case!

                                                                                                                                              1. 1

                                                                                                                                                huh, I wonder why Denis decided to design libhydrogen’s secretbox like that, since the underlying primitives use 128-bit nonces.

                                                                                                                                      2. 1

                                                                                                                                        Ciphertext looks like binary nonsense? Job done, it must be encrypted.

                                                                                                                                        Like encrypting in Base64? Maybe some government or police officials think like that. But it sounds too dumb even for junior developers. I am not saying they do not exist, but they are not common (at least in my neighborhood).

                                                                                                                                        If you ask a generalist developer how confident they are of writing the code…

                                                                                                                                        In many companies, expectations are too high, budgets too low and deadlines too tight; the pace of development is so fast that developers are not confident about anything. No time to study how things work, no time to test thoroughly, just make it somehow work and skip to another task. This is a hazardous environment that calls for bugs – not only crypto ones but various overflows, injections, omitted checks, improper use of frameworks or libraries, logic mistakes etc. with the same serious impacts (private data leaks, DoS, data integrity breaches etc.). Senior developers are better able to push back on management requests or even to refuse such work.

                                                                                                                                        1. 3

                                                                                                                                          Like encrypting in Base64? Maybe some government or police officials think like that. But it sounds too dumb even for junior developers. I am not saying they do not exist, but they are not common (at least in my neighborhood).

                                                                                                                                          I had an applicant to a senior devops engineer role tell me that it was really important to make sure all your Kubernetes secrets are encrypted with base64.

                                                                                                                                          1. 3

                                                                                                                                            If it’s related to k8s then it’s probably in a YAML file and it may contain special characters, so base64 at least improves correctness, which may be important enough. /s
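
                                                                                                                                            (For the record, the full extent of the “decryption” work an attacker faces:)

                                                                                                                                                import base64

                                                                                                                                                # The entire attack on "base64 encryption":
                                                                                                                                                print(base64.b64decode(base64.b64encode(b"hunter2")))  # b'hunter2'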

                                                                                                                                            1. 1

                                                                                                                                              Have quantum computers cracked rot13 yet?

                                                                                                                                            2. 2

                                                                                                                                              Like encrypting in Base64? Maybe some government or police officials think like that. But it sounds too dumb even for junior developers. I am not saying they do not exist, but they are not common (at least in my neighborhood).

                                                                                                                                              Behold: https://www.cryptofails.com/post/87697461507/46esab-high-quality-cryptography-direct-from