Threads for bryce

  1. 3

    At my work, our backend uses LISTEN/NOTIFY to pick up database changes and tell the UI, over the websocket connection, when the view the user has open in the browser needs to be refreshed.

    I think the idea is good but our implementation is not good. Would love to see better working examples of something like this.
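
    For what it’s worth, here’s a minimal sketch of the listening side in Python with psycopg2 (the DSN, the view_changed channel, and the broadcast() stub standing in for the websocket push are all made up); the NOTIFY side would be a trigger or application code calling pg_notify('view_changed', ...):

      import select

      import psycopg2
      import psycopg2.extensions

      conn = psycopg2.connect("dbname=app user=app")  # hypothetical DSN
      conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
      conn.cursor().execute("LISTEN view_changed;")

      def broadcast(message: dict) -> None:
          ...  # push to the connected browsers over the websocket layer

      while True:
          # Block until the connection becomes readable (or 60 s pass), then drain notifications.
          if select.select([conn], [], [], 60) == ([], [], []):
              continue
          conn.poll()
          while conn.notifies:
              note = conn.notifies.pop(0)
              broadcast({"channel": note.channel, "payload": note.payload})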

    1. 3

      I’m doing the same in a Phoenix app. I start a GenServer as part of the app that handles the LISTENs and sends out an Endpoint.broadcast/3 when a relevant notification comes in (on the busy channel that broadcast includes the query results the clients crave). The LiveView instances the clients are on subscribe to that topic when they start up, so the updated query results generate new HTML that gets banged out to the browsers.

      1. 2

        I dunno about your implementation, but like you said, the idea sounds fine. We did something similar at a previous job: we had a 3rd-party integration which made changes to certain models in the background, and that would then fire a NOTIFY to tell other (LISTENing) parts of the software to restart a computation.

        In my current job, we also have a “main” server that pushes updates to clients; when a command-line task or cron job makes some changes to the db, it informs the main server about them so that it can send the updates out to the clients.

      1. 3

        What do we think the cause is? It seems unlikely to me that physical bit flips would cause this. Lossy compression software gone wrong?

        1. 3

          My guess is they are trying some experimental JPEG XL options for thumbnail level images.

          1. 2

            One thing I’ve seen mentioned is one of those JPEG sequels like HEIF or whatever, so I could see them trying to optimize the hot thumbnails and proxies you get to see and mess with before the original has had time to be retrieved.

            In Lightroom non-classic, I jumped to a photo from Dec. 31, 2020, noticed it was a “Smart Preview” proxy, made some edits, and at some point it had finished downloading the 30mb raw. I didn’t notice any visible change once it quit working with the proxy and started on the raw directly, but the original is bigger than my screen and may not be in the same color space either so ¯\_(ツ)_/¯

            1. 1

              My first thought was that Google has been experimenting with running the images through some sort of lossy NN compression system, but on a second glance I think that’s less likely.

              1. 1

                Some people report that clicking on edit on the photo helps. Not really sure what that implies, but maybe it’s just the exported/cached format causing issues? Some conversion from a format used back then, maybe?

              1. 3

                This is great! One issue that recently came up for me is that the Ubuntu 20.04 LTS docker.io package omits the /etc/init.d/docker script from upstream, so there’s no (easy) way to start the docker daemon in WSL. Honestly, a super odd packaging decision given how much Ubuntu seems to be the first-est class citizen of WSL.

                1. 4

                  In my experience, WSL would really rather you run Docker Desktop (which runs the docker-machine in Hyper-V) and let it push the docker CLI into the WSL guest.

                  1. 1

                    Docker Desktop now uses WSL 2 for its host. So far I haven’t had any problems with it.

                1. 3

                  I’m not sure I understand. If you read only the key ID from the payload, before it’s been authenticated, in order to authenticate it, is there a problem? Or is the problem that this is error-prone for implementers? I’m no crypto expert, but I suppose I care about security more than average, and I thought it was obvious that nothing but the key ID should be used before authentication.
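
                  For concreteness, here’s the kind of flow I have in mind, sketched with the PyJWT library (the KEYS table mapping key IDs to secrets is hypothetical): the only thing read before verification is the kid from the header, and the algorithm is pinned by the verifier rather than taken from the token.

                    import jwt  # PyJWT

                    KEYS = {"2022-03": "server-side secret for that key id"}  # hypothetical key table

                    def verify(token: str) -> dict:
                        # Peek at the header without trusting it; use nothing but the key ID.
                        kid = jwt.get_unverified_header(token).get("kid")
                        key = KEYS[kid]  # unknown kid -> KeyError -> reject
                        # Pin the algorithm ourselves instead of honoring the token's alg field.
                        return jwt.decode(token, key, algorithms=["HS256"])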

                  1. 8

                    My interpretation is that reaching in and grabbing just one thing from the untrusted payload is bad spec design, since it means that API developers are going to want to implement grabbing any ol’ thing out of the untrusted payload.

                    (Facetiously) I’m beginning to think JWT is bad?

                    1. 2

                      Meanwhile I’m beginning to think you can have an implementation of JWT that is non-compliant but good; like ignoring any specified algorithm, and only verifying but never encrypting.

                      1. 1

                        I agree with you… but what’s the point of JWT if the only good implementations are non-compliant? I remember reading good things about paseto but I’ve never actually used it.

                        1. 1

                          The point is to have a tool that can be used to track sessions without maintaining much state on the server (a revocation list is the obvious exception, though depending on your environment it’s plausibly optional). That’s all I need.

                      2. 2

                        I’m really not a fan of JWT, but I have questions here. X.509 certificates also have an issuer field that is part of the signed data even though it doesn’t strictly need to be. Would X.509 be better if we stopped signing the issuer?

                        It has some of the other problems that have gotten JWT in trouble, too: certificates identify their own issuer, leaving it to the application to decide whether that issuer is acceptable, and their own signature algorithm.

                        Of course X.509 is much more tightly specified, and includes a standard way for issuer certificates to identify what kind of key they have. It also doesn’t mix asymmetric and symmetric cryptosystems. But I wonder if the main reason we consider it a reasonable security standard isn’t exactly the same reason developers might prefer JWT—the bar to implement X.509 at all is so high that people aren’t tempted to roll their own.

                    1. 2

                      A lovely little diversion into physical machines. And also a little surreal how much expertise goes into it while at the same time never thinking of ‘turning off’ their solar panels by just putting them face-down on a towel or something instead of carefully moving them somewhere dark. Everyone will always have these “oh why didn’t I think of that” moments sooner or later.

                      1. 2

                        They’re probably worried about light passing through the back of the panel making a current, and also sunlight can absolutely be too bright to work in. If you’ve got a shipping container workshop set up for solar panel work, why not?

                        1. 1

                          Oh, why didn’t I think of that.

                          XD

                      1. 3

                        I’ll bite: this was a ridiculous situation before left-pad broke the world and it’s only more ridiculous since then. That checking for numeric properties is expedited instead of hindered by pulling in multiple dependencies over multiple HTTP requests each is kind of an indictment of JavaScript and its curation over the last thirty years.

                        1. 5

                          Is the problem just that an upgrade requires downtime? Because I’ve never had an issue: just take down the old cluster, run the upgrade, bring up the new cluster. Debian basically walks me through it.

                          1. 5

                            My issue is that the upgrade requires RTFM: my database works, I upgrade the OS or packages, and now my database doesn’t work, and I have to figure out what commands to run to fix it. In my personal projects I don’t have time for this.

                            In a professional setting I’ve never seen Postgres upgraded. Projects are born and die with the same version.

                            1. 10

                              In a professional setting I’ve never seen Postgres upgraded. Projects are born and die with the same version.

                              That’s odd, in my career we’ve typically been eager to upgrade, because almost every release offers performance benefits which are always welcome, and the occasional new feature which would be really nice to have.

                              1. 1

                                This is obviously elitist, but I think that if you’re writing on this site, you’re probably in the top 5% (or higher) of the global sysadmin population. But then again, Postgres is rather good about long maintenance periods for older major versions, so using one of those is usually just fine, at least if you keep up with the minor versions.

                                1. 6

                                  I think that if you’re writing on this site, you’re probably in the top 5% (or higher) of the global sysadmin population.

                                  Even if that were true (I’m not even a sysadmin - I’m a backend developer), that should go for @kornel too; I’m responding to his comment, which he also wrote on this site. After some consideration, I think it’s quite possible that the types of projects we’ve worked on are wildly different, or that the teams/companies we’ve been on are of a different culture.

                                  Most projects I’ve worked on are productivity systems for businesses, where some downtime can usually be coordinated with the customer. The project I’m currently working on is a public system where high availability is so important that it can literally save lives. But that’s also why it’s designed to be fully distributed, so taking down a node for upgrade is “trivial”; when we bring it back up it’ll sync state with its peers. In theory, we can also just completely replace a node with a new one which can then get its state from the other peers.

                                  1. 2

                                    Lack of upgrades is “if it ain’t broke, don’t fix it”. New Postgres versions are cool, but upgrade is a non-zero risk. If there are no performance issues, no features that are really required (not merely nice), then there’s no business case for an upgrade.

                                    I’ve also worked in one place where this hilarious exchange took place:

                                    – Should we be worried about PHP 5.6 getting EOL-ed?
                                    – Nah, we use 5.2.

                              2. 3

                                Well, after the upgrade the database still works, it just still runs on the old postgres version until you run the three commands to switch it.

                                1. 1

                                  Is that so? I thought Postgres would refuse to start until you run pg_upgrade.

                                  1. 1

                                    The new cluster won’t start until the data gets moved, of course, but at least on Debian the old cluster keeps running with no problem until you do that.

                                    1. 1

                                      Oh right, we’re in the thread that talks about Debian upgrades. Still just a pg_upgrade thingy (upgrade strategy n°2 in the post), but now I see what you mean. That’s not specific to Debian: pg_upgrade requires you to have both versions at hand, so you can decide to run the old binary on the old data dir.

                                2. 2

                                  In a professional environment I’d strongly recommend not blindly upgrading an OS. The point of having major versions is that they might break stuff.

                                  I have never worked in a setting where they stuck with the same version. I have just seen things like GCP’s managed versions, for example, doing a horrible job, because they require huge downtimes on major upgrades by design. ([RANT] Yet another area where clouds hold us back sigh[/RANT])

                                  1. 4

                                    The nice thing about Postgres is that the developers maintain official repositories of RPM and Debian packages that are well-tested against existing OS releases. This makes it easy to install different Postgres versions side-by-side on the same OS.

                                    1. 2

                                      Yes, and a lot of others simply support that in a different way.

                                      On top of that, I think there are many situations where your data isn’t that big and you can comfortably do a dump and restore during your maintenance window without having to worry about anything, really. It’s not like you upgrade your DB every day. I’m way happier with good official docs than having to pray that the upgrade doesn’t break stuff and going through issue trackers, big breaking-changes lists, etc.

                                  2. 2

                                    In a professional setting I’ve never seen Postgres upgraded. Projects are born and die with the same version.

                                    I call this the law of the welded monolith. This has also been my experience, and not only with Postgres; other complex systems too.

                                  3. 1

                                    Yeah. When I used to run long-term apps, downtime was the enemy, because what if a potential customer clicked a banner ad and got a 500?

                                  1. 4

                                    Although the keys of this initial encryption are known to observers of the connection

                                    I haven’t looked at the specs yet. Is that true? Isn’t that horrible?

                                    1. 6

                                      I think it’s fundamentally unavoidable. At the point that a browser initiates a connection to a server, the server doesn’t yet know which certificate to present. DH alone doesn’t authenticate that you haven’t been MITM’d.

                                      1. 5

                                          It’s not unavoidable if both parties can agree on a PSK (pre-shared key) out-of-band, or from a previous session - and IIRC, the TLS 1.3 0-RTT handshake which is now used by QUIC can negotiate PSKs or tickets for use in future sessions once key exchange is done. But for a first-time connection between two unknown parties, it is certainly unavoidable when SNI is required, due to the aforementioned inability to present appropriate certificates.

                                        1. 2

                                            On the other hand, if you have been MitM’d you’ll notice it instantly (and know that the server certificate has been leaked to Mallory in the Middle). And now every connection you make is broken, including the ones they did not want to block. I see two ways of avoiding that:

                                          1. Don’t actually MitM.
                                          2. Be a certificate authority your users “trust” (install your public key in everyone’s computers, mostly).
                                          1. 2

                                              No, but DH avoids sending a key across the wire where it would be known, and prevents passive observers from reading ciphertext. Wouldn’t it make sense to talk to the server first?

                                            1. 3

                                              Without some form of authentication (provided by TLS certificates in this case), you have no way to know whether you’re doing key exchange with the desired endpoint or some middlebox, so you don’t really gain anything there.

                                              1. 3

                                                  You gain protection against passive observers, thereby increasing the cost for attackers trying to snoop on what services people connect to. And since you end up receiving the certificate anyway, you could at worst retroactively verify that you weren’t snooped on, which is more than you get when you actually send out a key that allows decryption, which still sounds odd to me.

                                                1. 3

                                                    What you’re suggesting is described at https://www.ietf.org/id/draft-duke-quic-protected-initial-04.html, which leverages TLS’s Encrypted Client Hello to generate QUIC’s Initial keys.

                                              2. 1

                                                I don’t know how much sense it makes? Doing a DH first adds more round trips to connection start, which is the specific thing QUIC is trying to avoid, and changes the way TLS integrates with the protocol, which affects implementability, the main hurdle QUIC has had to overcome.

                                                1. 1

                                                  I get that, but how does it make sense to send something encrypted when you send the key to decrypt it with it? You might as well save that step, after all the main reason to encrypt something is to prevent it from being read.

                                                  EDIT: How that initial key is sent isn’t part of TLS, is it? It’s part of QUIC-TLS (RFC9001). Not completely sure, but doesn’t regular 0-RTT in TLSv1.3 work differently?

                                                  1. 5

                                                    The purpose of encrypting initial packets is to prevent ossification.

                                                    1. 1

                                                      Okay, but to be fair that kind of still makes it seem like the better choice would be unauthenticated encryption that is not easily decryptable.

                                                        I know 0-RTT is a goal, but at least to me it seems like the tradeoff isn’t really worth it.

                                                        Anyway, thanks for your explanations. They were pretty insightful.

                                                        I guess I’ll read through more QUIC and TLS on the weekend if I have time.

                                                      1. 1

                                                        The next version of QUIC has a different salt which prevents ossification. To achieve encryption without authentication, the server and the client can agree on a different salt. There’s a draft describing this approach, I think.

                                                    2. 1

                                                      how does it make sense to send something encrypted when you send the key to decrypt it with it?

                                                      According to https://quic.ulfheim.net/ :

                                                      Encrypting the Initial packets prevents certain kinds of attacks such as request forgery attacks.

                                              3. 2

                                                It’s not more horrible than the existing TLS 1.3 :-) I sent out a link to something that may be of interest to you.

                                                1. 0

                                                  It’s only the public keys that are known, and if they did their job well, they only need to expose ephemeral keys (which are basically random, and thus don’t reveal anything). In the end, the only thing an eavesdropper can know is the fact you’re initiating a QUIC connection.

                                                  If you want to hide that, you’d have to go full steganography. One step that can help you there is making sure ephemeral keys are indistinguishable from random numbers (With Curve25519, you can use Elligator). Then you embed your abnormally high-entropy traffic in cute pictures of cats, or whatever will not raise suspicion.

                                                  1. 1

                                                      This is incorrect; see RFC 9001. As a passive observer you have all the information you need to decrypt the rest of the handshake. This is by design and is also mentioned again in the draft that rpaulo mentioned.

                                                      The problems with this are discussed in RFC 9001, that draft, and the article.
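
                                                      To make that concrete, here’s a rough sketch of the Initial secret derivation described in RFC 9001, Section 5.2: everything it needs (the version-specific salt and the client’s Destination Connection ID, which is sent in cleartext) is public, so any on-path observer can compute the same secrets. The salt constant below is the QUIC v1 value as I remember it from the RFC, so double-check it there.

                                                        import hashlib
                                                        import hmac
                                                        import struct

                                                        def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
                                                            return hmac.new(salt, ikm, hashlib.sha256).digest()

                                                        def hkdf_expand_label(secret: bytes, label: bytes, length: int) -> bytes:
                                                            # TLS 1.3 HKDF-Expand-Label with an empty context (RFC 8446, Section 7.1).
                                                            full_label = b"tls13 " + label
                                                            info = struct.pack(">H", length) + bytes([len(full_label)]) + full_label + b"\x00"
                                                            okm, block, counter = b"", b"", 1
                                                            while len(okm) < length:
                                                                block = hmac.new(secret, block + info + bytes([counter]), hashlib.sha256).digest()
                                                                okm += block
                                                                counter += 1
                                                            return okm[:length]

                                                        INITIAL_SALT = bytes.fromhex("38762cf7f55934b34d179ae6a4c80cadccbb7f0a")  # QUIC v1 salt per RFC 9001 Section 5.2
                                                        client_dcid = bytes.fromhex("8394c8f03e515708")  # whatever DCID the client chose; visible on the wire

                                                        initial_secret = hkdf_extract(INITIAL_SALT, client_dcid)
                                                        client_initial = hkdf_expand_label(initial_secret, b"client in", 32)
                                                        server_initial = hkdf_expand_label(initial_secret, b"server in", 32)
                                                        # The AEAD key, IV and header-protection key follow from further Expand-Label
                                                        # calls ("quic key", "quic iv", "quic hp"), all computable by anyone on the path.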

                                                    1. 1

                                                        Goodness, I’m reading Section 7 of the RFC right now, and it sounds pretty bad. The thing was devised in 2012; by then we knew how to make nice handshakes that leak little information and, for heaven’s sake, authenticate everything.

                                                      As a passive observer you have all the information you need to decrypt the rest of the handshake.

                                                      Now I’m sure it’s not that bad. I said “It’s only the public keys that are known”. You can’t be implying we can decrypt or guess the private keys as well? And as a passive observer at that? That would effectively void encryption entirely.

                                                1. 4

                                                  I have used the C920 on a Mac for years, and it has always been overexposed. I’m not sure whether it’s Logitech or Apple or both to blame here. The solution for me is to install the app “Webcam Settings” from the App Store (yeah, it’s a generic name), which lets you tweak many settings on webcams and save profiles for them. It’s not perfect, but I already have the camera and it’s significantly easier to work with than hooking my DSLR up.

                                                  1. 5

                                                    The equivalent to “Webcam Settings” on Linux is guvcview. I have a Microsoft LifeCam Studio and have to use this tool to adjust the exposure when I plug it into a new machine. Thereafter it persists… somehow.

                                                    1. 6

                                                      Or qv4l2, depending on your taste — but one advantage of qv4l2 is that it lets you set the controls even while another app has the camera open, whereas guvcview wants the camera for its own preview window, and will decline to work at all if it can’t get the video stream.

                                                      1. 3

                                                        Oh very nice, qv4l2 is exactly what I needed to adjust focus during a meeting. Thank you!

                                                        1. 2

                                                          update: someone anon-emailed me out of the blue to mention that guvcview has a -z or --control-panel option that will open the control panel without the preview window, letting you do the same thing as qv4l2. So use the one that makes you happy.

                                                      2. 3

                                                        Congrats, you are working around a hardware problem with a software patch.

                                                        Me, I don’t care enough to spend the effort to get the software working. My audio input is an analog mixer, my audio output the same, and eventually my camera will be a DSLR because that way I don’t twiddle with software for something that really should just work on all my machines without me caring.

                                                        Different tradeoffs in different environments.

                                                        1. 8

                                                          It’s a driver settings tool, not a patch. It doesn’t do post-processing. Every OS just fails to provide this tool, not sure why; possibly because webcam support is spotty and they don’t want to deal with user complaints. Some software (like Teams) includes an interface for the settings, and changing it in Teams will make system-wide changes. Others (like Zoom) only have post-processing effects, and these are applied after the changes you made in Teams.

                                                          1. 2

                                                            I can confirm this tool definitely affects the camera hardware’s exposure setting. I’ve used it for adjusting a camera that was pointed at a screen on a remote system I needed to debug. The surrounding room was dark (yay timezones!) so with automatic exposure settings it was just an overexposed white blur on a dark background. This tool fixed it. There’s no way this would have been possible with just post-processing.

                                                            (No, VNC or similar would not have helped, as it was an incompatibility specific to the connected display, so I needed to see the physical output. And by “remote” I mean about 9000km away.)

                                                            1. 4

                                                              a camera that was pointed at a screen on a remote system

                                                              Sounds like you had some fun

                                                              1. 1

                                                                That’s definitely one way of describing it! Not necessarily my choice of words at the time.

                                                            2. 1

                                                              Oh, Teams can do this? Thanks, I’ll have to check that out as an alternative.

                                                            3. 6

                                                              The DSLR/mirrorless ILC (interchangeable lens camera) route is great for quality but it has its risks. I started off with a $200 entry level kit and now I’ve got two bodies, a dozen lenses, 40,000 pictures, and a creatively fulfilling hobby.

                                                              1. 2

                                                                don’t forget the tripod! I like landscape photography, and a good tripod was surprisingly (> $200) expensive.

                                                                1. 2

                                                                  So the risks are spending too much money?

                                                                2. 1

                                                              I fail to see how you’re going to use a DSLR as a webcam without “twiddling with software”. Sure, you’ll have a much better sensor, lens and resulting image quality. But I’ve yet to see a setup (at least with my Canon) that doesn’t require multiple pieces of software to even make it work as a webcam. Perhaps other brands have a smoother experience. I still question how this won’t require at least as much software as my route.

                                                              There’s also the physical footprint that matters to me. A webcam sits out of the way on top of my monitor with a single cable that plugs into the USB on the monitor. A DSLR is obviously nowhere near this simple in wiring or physical space. The webcam also has a pretty decent pair of microphones that work perfectly for my quiet home office.

                                                                  Are either the audio or video studio quality? Nope, but that’s completely fine for my use case interacting with some coworkers on video calls.

                                                                  1. 1

                                                                My perception has been that a DSLR with HDMI output gives you the ability to capture the HDMI and just feed that in as a webcam stream.

                                                                    The other things that a camera does can be tweaked with knobs instead of software.

                                                              1. 1

                                                                Sometimes I forget everything in elixir is a different process. Articles like this bring me back to reality.

                                                                1. 2

                                                                  They’re called “processes,” but outside the BEAM process they’re really just green threads, implemented well and used exhaustively.

                                                                1. 4

                                                                  This is pretty good, IMO. The take-home is: use font-display: optional; in your CSS, and host your fonts yourself (or at least, the CSS for them). The alternative offered, if you can’t bear to make your custom fonts optional, is to adjust the metrics of your fallback font to match your custom font. This could be ugly, but it probably won’t be, and it prevents a reflow when your custom font finally loads.

                                                                  1. 3

                                                                    Your fonts are always optional, though, no? Because the rendering can’t be enforced by the site. My browser can render it any way I want, and I don’t have to listen to your CSS. So why not make it optional, always? And then get rid of that CSS attribute, and just render and reflow? Because this is all an illusion of control in the first place.

                                                                    As always, the browser and standards should solve this for us. Developers should not have to type “font-display: optional;” as boilerplate in every website they make.

                                                                    1. 3

                                                                      It’s always optional, but that means many things to many people. font-display: optional; has a specific meaning as described in the article: that the CSS author would rather have no custom fonts than custom fonts that block loading or jiggle the layout after a slow load.

                                                                      1. 1

                                                                        Right, but why do we make the CSS author make that decision? Or rather, why do we allow them to? We should ALWAYS do that. Or better, just have browsers implement the throwaway custom font-width-matching tool that this author came up with, for everything? Just forcing every web developer to read this document is THOUSANDS of human-hours that could be better spent on other things. And the millions of web developers who won’t read this document will have fonts that slow the loading of their websites, for what is almost certainly no reason at all.

                                                                        So, just…don’t? Don’t let this CSS setting exist: Always do the jiggle thing, but minimize it with a width-matched fallback font. Then, everyone is happy, and nobody has had to invest any time into learning any of this.

                                                                        There’s so, so many of these “One simple trick to not make your website suck” for users and accessibility that should absolutely just be what happens automatically.

                                                                        1. 3

                                                                          If it’s a landing page for a marketing site, the author (or their customers) probably prefers that it get better metrics on Google by loading faster. If it’s a big interactive experience all on one page load, the custom font is probably important. Letting authors opt in to the former case seems pretty reasonable.

                                                                          The browser can’t do the font-width matching because it requires the metrics of the fancy font, which aren’t available before it loads.

                                                                          In general this stuff can’t be implemented automatically because it’s not universally ideal, or would break the (conflicting, heh) assumptions developers have made over the last 30 years. It’s complicated, but it can also be folded into tools that analyze page speed and provide suggestions, which are in common use.

                                                                  1. 17

                                                                    This rocks, I hate it so much, I need to tell everyone about this. Thanks!

                                                                    1. 6

                                                                      I feel certain I’m missing something… I never cared for Heroku. It always seemed slow, made me think I had to jump through weird hoops, and never seemed to work very well for anything that needed more horsepower than github/gitlab’s “pages” type services. And their pricing always had too much uncertainty for me.

                                                                      Granted, I’m old, and I was old before heroku became a thing.

                                                                      But ever since bitbucket and github grew webhooks, I lost interest in figuring out Heroku.

                                                                      What am I missing? Am I just a grouch, or is there some magical thing I don’t see? Am I the jerk shaking my fist at dropbox, saying an FTP script is really just the same? Or am I CmdrTaco saying “No wireless. Less space than a Nomad. Lame.”? Or is it just lame?

                                                                      1. 5

                                                                              By letting (and making) developers care only about the code they develop, and nothing else, they boost productivity, because you simply can’t yak-shave or bikeshed your infra or your deployment process.

                                                                        Am I the jerk shaking my fist at dropbox, saying an FTP script is really just the same?

                                                                        Yes and you’d be very late to it.

                                                                        1. 3

                                                                          Yes and you’d be very late to it.

                                                                          That’s what I was referencing :-)

                                                                          I think I’m missing this, though:

                                                                          By letting and making developers only care about the code they develop, not anything else, they empower productivity because you just can’t yak shave nor bikeshed your infra nor deployment process.

                                                                          What was it about Heroku that enabled that in some distinctive way? I think I have that with gitlab pages for my static stuff and with linode for my dynamic stuff. I just push my code, and it deploys. And it’s been that way for a really long time…

                                                                          I’m really not being facetious as I ask what I’m missing. Heroku’s developer experience, for me, has seemed worse than Linode or Digital Ocean. (I remember it being better than Joyent back in the day, but that’s not saying much.)

                                                                          1. 2

                                                                            I just push my code, and it deploys.

                                                                            If you had this set up on your Linode or whatever, it’s probably because someone was inspired by the Heroku development flow and copied it to make it work on Linode. I suppose it’s possible something like this wired into git existed before Heroku, but if so it was pretty obscure given that Heroku is older than GitHub, and most people had never heard of git before GitHub.

                                                                            (disclaimer: former Heroku employee here)

                                                                            1. 3

                                                                              it’s probably because someone

                                                                              me

                                                                              was inspired by the Heroku development flow and copied it to make it work on Linode

                                                                              Only very indirectly, if so. I never had much exposure to Heroku, so I didn’t directly copy it. But push->deploy seemed like good horse sense to me. I started it with mercurial and only “made it so” with git about 4 years ago.

                                                                              Since you’re a former Heroku employee, though… what did you see as your distinctive advantage? Was it just the binding between a release in source control and a deployment into production, or was it something else?

                                                                              1. 3

                                                                                Since you’re a former Heroku employee, though… what did you see as your distinctive advantage? Was it just the binding between a release in source control and a deployment into production, or was it something else?

                                                                                      As a frequent customer, it was just kind of the predictability. At any point within the last decade or so, it was about three steps to go from a working Rails app locally to a working public Rails app on Heroku: create the app, push, migrate the auto-provisioned Postgres. Need to back up your database? Two commands (capture & download). Need Redis? Click some buttons or one command. For a very significant subset of Rails apps, even today it’s just that few steps.

                                                                                1. 1

                                                                                  I don’t really know anything about the setup you’re referring to, so I can only compare it to what I personally had used prior to Heroku from 2004 to 2008, which was absolutely miserable. For the most part everything I deployed was completely manually provisioned; the closest to working automated deploys I ever got was using capistrano, which constantly broke.

                                                                                  Without knowing more about the timeline of the system you’re referring to, I have a strong suspicion it was indirectly inspired by Heroku. It seems obvious in retrospect, but as far as I know in 2008 the only extant push->deploy pipelines were very clunky and fragile buildbot installs that took days or weeks to set up.

                                                                                        The whole idea that a single VCS revision should correspond 1:1 with an immutable deployment artifact was probably the most fundamental breakthrough, but nearly everything in https://www.12factor.net/ was first introduced to me via learning about it while deploying to Heroku. (The sole exception being the bit about the process model of concurrency, which is absolutely not a good general principle and only makes sense in the context of certain scripting-language runtimes.)

                                                                                  1. 2

                                                                                    I was building out what we were using 2011-2013ish. So it seems likely that I was being influenced by people who knew Heroku even though it wasn’t really on my radar.

                                                                                    For us, it was an outgrowth of migrating from svn to hg. Prior to that, we had automated builds using tinderbox, but our stuff only got “deployed” by someone running an installer, and there were no internet-facing instances of our software.

                                                                            2. 2

                                                                              By letting and making developers only care about the code they develop

                                                                                      This was exactly why I never really liked the idea of it, even though the tech powering it always sounded really interesting. I think it’s important to have contextual and environmental understanding of what you’re doing, whatever that may be, and although I don’t like some of the architectural excesses or cultural elements of “DevOps”, I think having people know enough about what’s under the hood/behind the curtain to be aware of the operational implications of what they’re doing is crucial to building efficient systems that don’t throw away resources simply because the developer doesn’t care (and has been encouraged not to care) about anything but the code. I’ve seen plenty of developers do exactly that, not even bothering to try to optimise poorly-performing systems because “let’s just throw another/bigger dyno at it, look how easy it is”, and justifying it aggressively with Lean Startup quotes, apparently ignorant of the flip side of “developer productivity at all costs” being “cloud providers influencing ‘culture’ to maximize their profits at the expense of the environment”. And I’ve seen it more on teams using Heroku than anywhere else, because of the opaque and low-granularity “dyno” resource division. It could be that you can granularize it much more now than you could a few years ago (I haven’t looked at it for a while), and maybe even that you could then if you dug really deep into the documentation, but that was how it was, and how developers used (and were encouraged to use) it. To me it always seemed like the inability to squeeze every last drop of performance out of each unit was almost a design feature.

                                                                          1. 3

                                                                                      Between this, Brandur’s post a couple days ago, and a bundler/deploy issue I fought with for a few hours spread over a week, I’m kind of sad it’s feeling like Heroku’s over. I’ve got dozens of stupid little apps that’ve been running for a decade, and nothing else comes close. I’ve looked at the AWS and Azure knockoffs and they seem like so much work. Glitch is neat but it seems like it’s only for “stupid little” apps, nothing with a database or longevity.

                                                                            1. 3

                                                                              This article is very well done!

                                                                              Regarding QUIC itself, though, here’s a radical idea: Instead of creating a new protocol that mixes different OSI layers and complicates things by several orders of magnitude, why not simply stop making bloated websites that pull in dozens of stylesheets and scripts? I know that QUIC shaves off a few milliseconds in RTT, but today’s latency/lagginess comes from other sources.

                                                                              I don’t expect most modern web developers to stop and think about this, given they are so occupied with learning a new JavaScript framework every 4 months. The modern web seriously needs reform.

                                                                              1. 10

                                                                                Regarding QUIC itself, though, here’s a radical idea: Instead of creating a new protocol that mixes different OSI layers and complicates things by several orders of magnitude, why not simply stop making bloated websites that pull in dozens of stylesheets and scripts?

                                                                                QUIC isn’t just for HTTP/3. It’s a lighter-weight protocol than TLS + TCP that can be used for anything that needs end-to-end encryption. Like SCTP, it provides multiple streams, so anything that wants to send independent data streams could benefit from QUIC. It would be great for SSH, for example, because things like X11 forwarding would be in a separate QUIC channel to the TTY channel and so a dropped packet from the higher-bandwidth X11 channel wouldn’t increase latency on the TTY. SSH already multiplexes streams like this but does so over TCP and so suffers from the head-of-line blocking problem (a dropped packet causes retransmit and blocks all SSH channels that are running over the same connection).

                                                                                1. 1

                                                                                  Good luck funneling the QUIC-UDP-SSH stream through any half-decent firewall, and you can always multiplex streams in other ways. Surely there are performance advantages to QUIC, but at what cost? It’s reinventing the wheel (TCP), just much more complicated. The complex state handling will just lead to more bugs, but it’s another toy web developers can play with. Instead of striving for more simplicity, we are yet again heading towards more complexity.

                                                                                  1. 5

                                                                                    Any half-decent firewall is either going to see the QUIC traffic and not be able to know what the contents are, or it’s going to be a bank-type setup with interstitial certificates, because it’s very important for them to enforce which connections are taking place.

                                                                                    It’s more complicated than TCP, sure, but motorized transport is more complicated than walking because the complication provides a compelling speed advantage.

                                                                                    It’s also not really a toy for web developers? My experience with HTTP 2 was that it was basically just a checkbox on a CDN or the presence of a config line for a couple years, and then it was just something I didn’t think about.

                                                                                    I agree that it sucks that the ship has seemingly sailed on simpler sites, but saving round trips is valuable when loading a smaller site off a bad network connection, like you might find in rural or underserved places and also major airports.

                                                                                    1. 3

                                                                                      If you have multiple logical data streams in your protocol then you have the complex state handling anyway, just at the application level rather than the protocol level.

                                                                                  2. 2

                                                                                    here’s a radical idea:…

                                                                                    Chill!! There are other reasons to use this. For example, I’m working on a reverse-tunnel-as-a-service project similar to ngrok or PageKite, aimed at self-hosting and publishing stuff on the fly. Currently its tunnel uses Yamux over TLS – this is fine, but I noticed that it multiplies the initial connection latency quite a bit. When the tunnel server is nearby it’s fine, but someone from Sweden was testing this system and for them the initial page-load lag was noticeable. I know that ultimately having servers in every region is the only “best” solution, but after looking at the Yamux code I believe that by using QUIC for my tunnel instead, I can cut the painful extra latency in half. It’s nice that its internal “logical” connections don’t add extra latency to establish!!

                                                                                    The modern web seriously needs a reform.

                                                                                    I agree!! This is why I’m working on a tool to make it easier for everyone to serve from the home or from the phone.

                                                                                  1. 1

                                                                                      Does this translate to a FreeBSD kernel bug? It’s my understanding the Switch kernel is derived from the FreeBSD kernel.

                                                                                    1. 3

                                                                                      It’s not.

                                                                                      From https://media.ccc.de/v/34c3-8941-console_security_-_switch#t=700

                                                                                      There was this rumor it was running FreeBSD, and everyone was asking, “Does it run…” No. It doesn’t run it and stop asking.

                                                                                      Instead it runs a custom microkernel called Horizon that’s been in development at Nintendo since the 3DS.

                                                                                        1. 2

                                                                                          The copyright notice is required if they copy any code from the FreeBSD kernel. My understanding is that, as with a lot of other systems, they bundle the FreeBSD network stack, but the rest of the kernel is not FreeBSD.

                                                                                        1. 10

                                                                                          Ready to give it a shot? Make sure to update your macOS to version 12.3 or later, then just pull up a Terminal in macOS and paste in this command:

                                                                                           curl https://alx.sh | sh
                                                                                          

                                                                                          Pay close attention to the messages the installer prints, especially at the end!

                                                                                          Installing an operating system with curl | sh? Well, I’ve done riskier things to my machines before.

                                                                                          1. 12

                                                                                             At this point, isn’t that essentially what Homebrew is? :D

                                                                                            1. 7

                                                                                              How’s that different security-wise from downloading an ISO and running it at boot time?

                                                                                              1. 4

                                                                                                 In general a shell script is easier to man-in-the-middle.

                                                                                                 In this specific case, since it’s served over HTTPS, you’re right that there’s not much difference.

                                                                                                 Assuming most people will copy-paste the command from a web browser into the terminal, there is also the possibility of some CSS/Unicode trickery.

                                                                                                1. 3

                                                                                                   You can verify the integrity of the ISO with a SHA hash and/or by verifying signatures with the vendor’s public key. You could do the same with the shell script by downloading it, verifying it, checking that it verifies the ISO it downloads, and then running it.

                                                                                                  Simply doing curl | sh skips all of this.
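
                                                                                                   Something like this minimal download-verify-then-run sketch, say (hypothetical URL, placeholder digest); the point is that the expected hash gets pinned somewhere other than the server that serves the script:

                                                                                                     import hashlib
                                                                                                     import subprocess
                                                                                                     import urllib.request

                                                                                                     URL = "https://example.com/install.sh"  # hypothetical
                                                                                                     EXPECTED_SHA256 = "0" * 64  # placeholder; use the digest the vendor published out-of-band

                                                                                                     script = urllib.request.urlopen(URL).read()
                                                                                                     digest = hashlib.sha256(script).hexdigest()
                                                                                                     if digest != EXPECTED_SHA256:
                                                                                                         raise SystemExit(f"checksum mismatch: got {digest}")
                                                                                                     subprocess.run(["sh"], input=script, check=True)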

                                                                                                  1. 5

                                                                                                    curl verifies the script with the vendor’s public key too, that’s what https does.

                                                                                                    As far as I can tell, the big difference is the sh step gets to live in your running OS with the disks mounted, but that’s it.

                                                                                                    1. 1

                                                                                                      It also skips any passive vulnerability scans such as the known-exploit signature checks and virus scans that are common in browsers now. So if the site is compromised and they are providing file hashes that will validate their exploit script then a browser downloading the same script and/or ISO would have a chance of catching it.

                                                                                                    2. 2

                                                                                                       You can easily do curl > file and verify to your heart’s content. :)

                                                                                                      1. 2

                                                                                                         Yeah, that’s why I’m always surprised to see websites instruct users to curl | sh when they could at least tell users to curl > file && sha256sum file, cross-check the sum, and then sh file.

                                                                                                        1. 9

                                                                                                          I mean, if the script, the checksum, and the instructions to check the checksum are all served from the same https server, you don’t actually gain anything by checking the checksum.

                                                                                                          1. 2

                                                                                                             As the downloader you don’t gain anything directly, but as the publisher it would require an attacker to change the content of the site, which is another opportunity for detection. Not anywhere near complete protection, but an additional low-cost security layer.

                                                                                                1. 7

                                                                                                  I don’t understand the ergonomics of this shape of thing. It doesn’t look comfortable to use hunched over it on a table or your knees, and it looks too bulky to hold like an iPad for a long session.

                                                                                                  1. 7

                                                                                                    It looks cool, but it seems more like a movie prop than something you would actually want to hunch over and use.

                                                                                                    1. 2

                                                                                                      Yeah, make it look more like the Atari Portfolio and I’m in! As the leader of the human resistance once said, they’d make “Easy money.”

                                                                                                    2. 4

                                                                                                      All “cyberdecks” are modelled after the TRS-80 Model 100/Alphasmart Dana. Looks cool, will ruin your neck.

                                                                                                      1. 2

                                                                                                        Yeah, if the keyboard+display were hinged then I might consider buying it, but even as a toy it seems uncomfortable to use.

                                                                                                      1. 4

                                                                                                        What is the EU’s interest in getting involved in certificate issuance? Is it bureaucratic overreach, or is there something else behind this effort?

                                                                                                        1. 24

                                                                                                          My guess would be a genuine feeling that it’s not good for EU people that an American advertising company, an American browser vendor, an American computer company, and an American software company functionally control who’s allowed to issue acceptable certificates worldwide.

                                                                                                          1. 5

                                                                                                            Sure, but then the answer is that the EU should make Mozilla open an office in Brussels or somewhere and then shovel money at Firefox, so that they have their own player in the browser wars. Tons of problems are created for users by the fact that Google and Apple have perverse incentives for their browsers (and that Mozilla’s incentive is just to figure out some source, any source of funding). Funding Mozilla directly would give EU citizens a voice in the browser wars and provide an important counterbalance to the American browsers.

                                                                                                            1. 4

                                                                                                              Directly funding a commercial entity tasked with competing with foreign commercial entities is a huge problem; Airbus and Boeing have had disputes about that for a long time: https://en.wikipedia.org/wiki/Competition_between_Airbus_and_Boeing#World_Trade_Organization_litigation

                                                                                                              On the other side, passing laws that require compliance from foreign firms operating in the EU has been successful; as much as it sucks, and as annoying as it is both to comply with and to use websites that claim to comply with it, the GDPR has been mostly complied with.

                                                                                                              1. 5

                                                                                                                A) In an EU context, it’s hard to argue that Airbus hasn’t been successful at promoting European values. If the WTO disagrees, that’s because the WTO’s job is not to promote European values. I can’t really imagine how Google or Apple could win a lawsuit against the EU for funding a browser since they give their browsers away for free, but anyone can file a lawsuit about anything, I suppose.

                                                                                                                B) I don’t see how anyone can spend all day clicking through pointless banners and argue that the current regulatory approach is successfully promoting EU values. The current approach sucks and is not successful. Arguably China did more to promote its values with TikTok than all the cookie banners of the last six years have done for the EU’s goals.

                                                                                                                1. 4

                                                                                                                  None of this is about “promoting EU values.”

                                                                                                                  The EU government’s goal for Airbus is to take money from the rest of the world and put it in European paychecks.

                                                                                                                  The goal of the GDPR is to allow people in Europe a level of consent and control over how private surveillance systems watch them. The GDPR isn’t just the cookie banners; it’s the idea that you can get your shit out of facebook and get your shit off facebook, and that facebook will face consequences when it comes to light that they’ve fucked that up.

                                                                                                                  Google could absolutely come up with a lawsuit if the EU subsidizes Mozilla enough to let Mozilla disentangle from Google and start attacking Google’s business by implementing the same privacy features that Apple does.

                                                                                                                  1. 2

                                                                                                                    “The goal of the GDPR is to allow people in Europe a level of consent and control over how private surveillance systems watch them.”

                                                                                                                    Yes, and it’s a failure because everyone just clicks “agree”, since the “don’t track me” button is hidden.

                                                                                                              2. 1

                                                                                                                That’s one answer, but why does it have to be “the” answer?

                                                                                                            2. 8

                                                                                                              The ‘A trusted and secure European e-ID’ regulation, linked to in the article’s opening, is a revision of the existing eIDAS regulation aiming to facilitate interoperable eID schemes in Member States. eIDAS is heavily reliant on X.509 (often through smartcards in national ID cards) to provide a cryptographic identity.

                                                                                                              The EU’s interest in browser Certificate Authorities stems from the following objective in the draft regulation:

                                                                                                              1. They should recognise and display Qualified certificates for website authentication to provide a high level of assurance, allowing website owners to assert their identity as owners of a website and users to identify the website owners with a high degree of certainty.

                                                                                                              … to be implemented through a replacement to Article 45:

                                                                                                              1. Qualified certificates for website authentication referred to in paragraph 1 shall be recognised by web-browsers. For those purposes web-browsers shall ensure that the identity data provided using any of the methods is displayed in a user friendly manner.

                                                                                                              Mozilla’s November 2021 eIDAS Position Paper, also linked in the original article, goes into more detail about the incompatibilities between the ‘Qualified Website Authentication Certificates’ scheme and the CA/Browser Forum’s policies.

                                                                                                            1. 10

                                                                                                              “Here’s the thing: nobody is going to build a massive complex distributed system on accident.”

                                                                                                              I feel like the majority of massive complex distributed systems are built on accident? I’ve been party to a few of these; you start with a monolith, add some extra components and start looking outside the system for answers, and then a year later you’re doomed.