Threads for thiht

  1. 20

    I imagine this means that if you had e.g. Disqus embedded on a bunch of sites, you’d need to log into Disqus in each one. Is that correct?

    (I think I’d be fine with that. Just curious what the user-visible effects are.)

    1. 15

      Yes. And Like-buttons will also break.

      1. 6

        Thankfully those seem to have gone out of fashion somewhat?

        It’s kind of ironic that the centralization / silo-ification of the web (“people just stay on facebook all the time and don’t care about interacting with facebook from embedded widgets on random articles”) is making this amazing privacy improvement palatable for mainstream users.

      2. 10

        i admittedly have a very limited understanding of browser technologies, but everything described in the section on what they’re doing was how i imagined cookies already working in my head. i’m kind of … used to being horrified by browsers, by now, but yeah, learning how things used to work was an eye-opening lesson in how awful most browsers are. holy shit.

        1. 10

          In a better world, the way “things used to work” is how you’d want them to work. Shareable cookies do add value, they’re just very easy to abuse. I also don’t think this technically limits the tracking, though it may require it to make more network requests; it’s hard to stop two cooperating websites from communicating in order to track you, and adtech tracking is hosted by cooperating websites.

          1. 1

            I don’t understand why it couldn’t be a permission. « xxx.com wants to access some of your data from yyy.com [Review][Allow][Block] »

            1. 1

Well, it’s more that xxx.com wants to access your data from xxx.com, but one xxx.com is direct and one is embedded in yyy.com’s page. The point I’m making is that this is impossible to block if yyy.com and xxx.com are working together, which in the context of ads they always are. As one possible “total cookie protection” break, yyy.com could set a cookie with a unique tracking ID specific to yyy.com, redirect to xxx.com with the unique tracking ID as a URL parameter, and have xxx.com redirect back to it. Your xxx.com and yyy.com identities are now correlated, and neither site had to do anything browsers could reasonably block.
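The redirect dance described above can be sketched as a toy simulation (the site names, cookie jars, and `/sync` endpoint are all illustrative, not real browser internals or any real ad-tech API):

```python
import uuid
from urllib.parse import urlencode, parse_qs, urlparse

# Toy model: each site keeps its own first-party cookie jar.
# Partitioned storage stops xxx.com's embed on yyy.com from reading
# xxx.com's first-party cookies, but it cannot stop this redirect chain.
cookies = {"yyy.com": {}, "xxx.com": {}}

def visit_yyy():
    """yyy.com assigns the visitor a first-party tracking ID..."""
    tid = cookies["yyy.com"].setdefault("tid", str(uuid.uuid4()))
    # ...then redirects to xxx.com with that ID as a URL parameter.
    return "https://xxx.com/sync?" + urlencode({"partner_id": tid})

def visit_xxx(redirect_url):
    """xxx.com reads the parameter and links it to its own cookie."""
    partner_id = parse_qs(urlparse(redirect_url).query)["partner_id"][0]
    own_id = cookies["xxx.com"].setdefault("tid", str(uuid.uuid4()))
    # The two first-party identities are now correlated server-side;
    # no third-party cookie was ever read.
    return {"xxx_id": own_id, "yyy_id": partner_id}

link = visit_xxx(visit_yyy())
assert link["yyy_id"] == cookies["yyy.com"]["tid"]
assert link["xxx_id"] == cookies["xxx.com"]["tid"]
```

Each cookie here is first-party from the browser’s point of view, which is why partitioning alone can’t break the correlation.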

        2. 9

          As a developer working for a company that makes an embedded video player that’s used across the internet: this semi-breaks some user preferences, like remembering volume and preferred caption language — now they have to be set per embedding domain instead of applying globally when they’ve been set once.

          And it thoroughly breaks our debug flags: during a tech support conversation, we can have users enable or disable certain features to track down where a bug is coming from. The UI for that is a page on our domain (the domain the embed is served from). Now users can set those flags, but they won’t actually do anything, because they won’t be readable on the domain where they’re really needed.

          We could possibly move the UI for that inside of the embed to make it work again, but A) it would look and feel bad, and B) it probably won’t happen for a browser with a 3% share.

          The Storage Access API offers very little help in this context: we can’t have the player pop up a permission request dialog for every user on every player load just to check whether they even have a debug flag set, so there would have to be some kind of hidden “make debugging work right” UI element that would trigger the request.

          1. 6

            Disclaimer: I trust you know this far better than I do, I’m just curious.

            I can see how this Firefox feature breaks that functionality, and it sounds like unfortunate collateral damage.

            For volume control, is that better handled by either the browser or the OS anyway?

            For their preferred caption language, can the browser’s language be inferred from headers?

If a user wishes to override their browser’s language, it sounds plausible that this should be at the domain level anyway. Perhaps I want native captions on one site, and foreign captions on a site I’m learning a language from?

            And it thoroughly breaks our debug flags: during a tech support conversation, we can have users enable or disable certain features to track down where a bug is coming from. The UI for that is a page on our domain (the domain the embed is served from). Now users can set those flags, but they won’t actually do anything, because they won’t be readable on the domain where they’re really needed.

            How does Safari handle this?

            1. 2

              For volume control, is that better handled by either the browser or the OS anyway?

              Arguable. Browsers don’t do anything helpful that I know of, and the OS sees the browser as one application.

              For their preferred caption language, can the browser’s language be inferred from headers?

              We default to the browser language (which generally defaults to the OS language) but there are reasons why some users tend to select something different for captions. It’s not the end of the world, it’s just annoying.

              How does Safari handle this?

              I’m unsure, sorry. I don’t see a ticket about it, and I don’t have any Safari-capable devices on hand.

            2. 1

Interesting, thank you. The caption and volume preferences thing sounds annoying. But on the other hand, it won’t be any worse for you than it is for your competitors, which is… something, at least.

              You may want to take a look at how YouTube and Brightcove (off the top of my head) handle the debug part of this – right-clicking on a video provides all sorts of debug and troubleshooting information.

              1. 2

                We have that too, but it’s a different feature. We didn’t put the controls in there because we can give them a nicer presentation if they’re not stuck inside of an iframe :)

          1. 24

            Am I the only one being completely tired of these rants/language flamewars? Just use whatever works for you, who cares

            1. 11

              You’re welcome to use whatever language you like, but others (e.g. me) do want to see debates on programming language design, and watch the field advance.

              1. 6

Do debates in blogs and internet comments meaningfully advance language design compared to, say, researchers and engineers exploring and experimenting and holding conferences and publishing their findings? I think @thiht was talking about the former.

                1. 2

                  I’m idling in at least four IRC channels on Libera Chat right now with researchers who regularly publish. Two of those channels are dedicated to programming language theory, design, and implementation. One of these channels is regularly filled with the sort of aggressive discussion that folks are tired of reading. I don’t know whether the flamewars help advance the state of the art, but they seem to be common among some research communities.

                  1. 5

                    Do you find that the researchers, who publish, are partaking in the aggressive discussions? I used to hang out in a couple Plan 9/9front-related channels, and something interesting I noticed is that among the small percentage of people there who made regular contributions (by which I mean code) to 9front, they participated in aggressive, flamey discussion less often than those that didn’t make contributions, and the one who seemed to contribute the most to 9front was also one of the most level-headed people there.

                    1. 2

                      It’s been a while since I’ve been in academia (I was focusing on the intersection of PLT and networking), and when I was there none of the researchers bothered with this sort of quotidian language politics. Most of them were focused around the languages/concepts/papers they were working with and many of them didn’t actually use their languages/ideas in real-world situations (nor should they, the job of a researcher is to research not to engineer.) There was plenty of drama in academia but not about who was using which programming language. It had more to do with grant applications and conference politics. I remember only encountering this sort of angryposting about programming languages in online non-academic discussions on PLT.

                      Now this may have changed. I haven’t been in academia in about a decade now. The lines between “researcher” and “practitioner” may have even become more porous. But I found academics much more focused on the task at hand than the culture around programming languages among non-academics. To some extent academics can’t be too critical because the creator of an academic language may be a reviewer for an academic’s paper submission at a conference.

                      1. 2

                        I’d say that about half of the aggressive folks have published programming languages or PLT/PLD research. I know what you’re saying — the empty cans rattle the most.

                2. 8

                  You are definitely not the only one. The hide button is our friend.

                  1. 2

                    So I was initially keen on Go when it first came out. But have since switched to Rust for a number of different reasons, correctness and elegance among them.

But I don’t ever say “you shouldn’t use X” (where ‘X’ is Go, Java, etc.). I think it is best to promote neat projects in my favorite language, or to spend a little time writing more introductory material to make it easier for people interested in Rust to get started.

                    1. 2

I would go further: filtering out rant, meta and law makes Lobsters much better.

rant is basically the community saying an article is just flamebait, while stopping short of outright removing it. You can then choose to filter it out yourself.

                    2. 5

                      I think this debate is still meaningful because we cannot always decide what we use.

Unless there are technical or institutional constraints, you can simply ignore $LANG. But if you’re writing Android apps you will use a JVM language (either Kotlin or Java), and if you’re writing backend services, outside forces may compel you to adopt Go, despite the shortcomings detailed in this post (and others by the author).

                      Every post of this kind helps those who find themselves facing a future where they must write Go to articulate their misgivings.

                    1. 1

                      That’s a weird hill to die upon. What’s wrong with creating a gmail account with the minimum required info and using it just for that? Do you even have to provide real info?

Are you equally offended when you need to create a Stack Exchange account to post CC BY-SA content on Stack Overflow?

                      1. 2

                        It is reasonable that I should need an account on your project’s bug tracker.

It is not unreasonable that that account should be email-address based - after all, that’s the primary means of contact in many cases.

It is absolutely unreasonable that I should have to also create an email account, on Google’s own email service, in which they demand a pile of personal information.

It is unreasonable that they should require you to create another account for their primary tracking platforms. Remember, Google IDs are used for unique tracking across every site with analytics, and Chrome routinely “forgets” settings and logs the browser itself into your Google ID.

                        Given the actual business use for Google IDs, and their interest in pulling people into their mail system, it isn’t reasonable for Google to demand people inflict that on themselves.

                        Obviously people can always choose not to contribute, but it’s so stupid for that to be the required option.

Look at everyone* else: plenty of projects use GitHub for project sources, bug DBs, etc., and interacting with a GitHub-hosted repo or bug DB requires a GitHub account. But a GitHub account doesn’t require you to adopt a Microsoft account as well.

That’s what people have a problem with. Google already has a horrendous track record with user and data privacy; despite that, they still have some good projects that people want to contribute to, but then they require you to submit to their tracking and abuse infrastructure to do so. For no reason.

                        It just seems unnecessary.

* I said everyone and then realized I can’t think of any other project hosts that are subsidiaries of other companies?
                        1. 1

                          I am on board with refusing to use a Google email to contribute to a project. However, in this case, it is to contribute to golang, a project that is created and led by Google, and to which every accepted contribution is, in effect, free labor to the benefit of Google.

                          That doesn’t add up to me.

                          1. 1

Any contribution to an open source project has a “cost” - you have committed time, energy, etc. creating whatever it is you’re contributing. You’re generally OK providing that free labour so other people can use/benefit from your work.

                            What is happening in cases like this (and I include the FSF IP theft) is you are actually being charged to contribute. That is where it has become a problem. Google demanding you use their unrelated services, which have mandatory contract terms that surrender even more rights, and that are primarily designed to support surveillance, is a very heavy price to provide them with your own labour.

                      1. 3

I don’t think there are any angels in the domain registrar world. It’s an unnecessary layer of resellers making a lot of money by making API calls to the actual domain registries. That said, I would second Gandi as not being particularly exploitative and as tangibly supporting FOSS.

                        1. 11

Feel free to get accredited with ICANN and the registries of your choice then, but don’t underestimate the work done by registrars. Off the top of my head:

                          • whois server
                          • contact validation and control (WDRP)
                          • sometimes full contact management (Verisign)
                          • lifecycle management
                          • fraud detection
                          • dispute management (UDRP)
                          • the APIs of the registries suck, there’s the EPP standard (XML over TCP) but the E means « extensible », and registries extend EPP a lot in non standard, sometimes undocumented ways

                          I work for a registrar, and it’s definitely not easy.

                          1. 2

                            You seem to have largely listed the activities of registries, not registrars.

                            1. 4

                              You would think so, but no. Registries love to delegate what should be their job to the registrars.

                        1. 1

I’m writing a Visual Studio Code extension for a declarative integration testing runner written by some colleagues. To my surprise, VSCode’s APIs are way harder to use than I would have expected. The documentation is there, the examples exist and work, and the bootstrapping is great, but it’s still kind of hard to get into. Probably because of some holes in the documentation, but it’s not that easy to pinpoint.

                          1. 7

                            The main request I have for release notes is: please don’t generate them from commits! That’s absolutely unreadable and useless, no matter how good your commit messages are.

                            Please just take a bit of time on each release to manually write proper release notes.

                            1. 2

                              Slightly less useless than “bug fixes and performance improvements”, though.

                            1. 3

                              For example the following routes conflict and require manual ordering to resolve.

• `/email/remove?<proof>`
• `/email/remove?<unsubscribe>`

I don’t use Rust nor Rocket, but that seems sane to me. What should happen if both `proof` and `unsubscribe` are provided? I don’t think the router should rely solely on declaration order in this case (and maybe Rocket can’t do that if it uses some kind of parallelism for route matching). Forcing you to declare a rank mitigates all the issues. But maybe I’m missing something related to the typings? I didn’t really understand the next line about Option.

                              1. 1

That’s a good point. Personally I would be OK throwing a Bad Request or something, but I can see how it isn’t clear what should happen if both are provided.

Declaring a parameter as `Option<&str>` makes the parameter optional. If both were optional, then it isn’t clear which route would be taken if neither argument was passed.
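The ambiguity is easy to reproduce with a toy router (a Python stand-in for Rocket’s matching logic; the route table and `rank` field are made-up illustrations, not Rocket’s actual internals):

```python
# Toy router: each route requires one query parameter. With no ranking,
# a request carrying both parameters matches both routes, and "first
# declared wins" would silently depend on declaration order.
routes = [
    {"path": "/email/remove", "requires": "proof", "rank": None},
    {"path": "/email/remove", "requires": "unsubscribe", "rank": None},
]

def match(path, params):
    hits = [r for r in routes if r["path"] == path and r["requires"] in params]
    if len(hits) > 1 and all(r["rank"] is None for r in hits):
        raise ValueError("ambiguous routes: add explicit ranks")
    # With ranks assigned, the lowest rank wins deterministically.
    return min(hits, key=lambda r: (r["rank"] is None, r["rank"]))

# Both parameters present: ambiguous without manual ordering.
try:
    match("/email/remove", {"proof": "abc", "unsubscribe": "xyz"})
except ValueError as e:
    print(e)

# Assigning explicit ranks resolves the conflict.
routes[0]["rank"], routes[1]["rank"] = 1, 2
winner = match("/email/remove", {"proof": "abc", "unsubscribe": "xyz"})
assert winner["requires"] == "proof"
```

The point of the explicit rank is that the tie-break is stated in the code rather than implied by whichever route happened to be declared first.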

                              1. 4

                                Kubernetes! My org switched to it from Marathon, and after a slight period of annoyance, I’m excited to make the switch. I’ll try to pass the CKAD certification as a related goal.

                                1. 1

                                  My org switched to it from Marathon

                                  Haha that was enough to notice you’re a (distant) coworker of mine!

                                  1. 1

                                    Hehe, it’s a small world!

                                1. 15

                                  You lost me at “the great work from homebrew”

Ignoring UNIX security best practices of the last, I dunno, 30 or 40 years, and then actively preventing people from using the tool in any fashion that might be more secure, and refusing to acknowledge any such concerns, is hardly “great work”.

                                  I could go on about their abysmal dependency resolution logic, but really if the security shit show wasn’t enough to convince you, the other failings won’t either.

                                  But also - suggesting Apple ship a first party container management tool because “other solutions use a VM”, suggests that either you think a lot of people want macOS containers (I’m pretty sure they don’t) or that you don’t understand what a container is/how it works.

                                  The “WSL is great because now I don’t need a VM” is either ridiculously good sarcasm, or yet again, evidence that you don’t know how something works. (For those unaware, WSL2 is just a VM. Yes it’s prettied up to make it more seamless, but it’s a VM.).

                                  1. 23

I don’t know what’s SO wrong with Homebrew that every time it’s mentioned someone has to come and say that it sucks.

For the use case of a personal computer, Homebrew is great. The packages are simple, it’s possible and easy to install packages locally (I install mine in ~/.Homebrew), and all my dependencies are always up to date. What would a « proper » package manager do better than Homebrew that I care about? Be specific please, because I have no idea what you’re talking about in terms of a security « shit show » or « abysmal » dependency resolution.

                                    1. 12
                                      • A proper package manager wouldn’t allow unauthenticated installs into a global (from a $PATH perspective) location.
• A proper package manager wouldn’t actively prevent the user from removing the “WTF DILIGAF” permissions Homebrew sets, or from requiring authenticated installs.
                                      • A proper package manager that has some form of “install binaries from source” would support and actively encourage building as an untrusted user, and requiring authentication to install.
                                      • A proper package manager would resolve dynamic dependencies at install time not at build time.
                                      • A proper open source community wouldn’t close down any conversation that dares to criticise their shit.
                                      1. 11

                                        Literally none of those things have ever had any impact on me after what, like a decade of using Homebrew? I’m sorry if you’ve run into problems in the past, but it’s never a good idea to project your experience onto an entire community of people. That way lies frustration.

                                        1. 5

                                          Who knew that people would have different experiences using software.

                                          it’s never a good idea to project your experience onto an entire community of people

                                          You should take your own advice. The things I stated are objective facts. I didn’t comment on how they will affect you as an individual, I stated what the core underlying issue is.

                                          1. 6

                                            You summarized your opinion on “proper” package managers and presented it as an authoritative standpoint. I don’t see objectiveness anywhere.

                                        2. 3

I don’t really understand the fuss about point 1. The vast majority of developer machines are single-user systems. If an attacker manages to get into the user account, it barely matters whether they can install packages, since they can already read your bank passwords, SSH keys and so on. Mandatory relevant xkcd.

Surely, having the package manager require root to install packages would be useful in many scenarios, but most users of Homebrew rightfully don’t care.

                                        3. 8

                                          As an occasional Python developer, I dislike that Homebrew breaks old versions of Python, including old virtualenvs, when a new version comes out. I get that the system is designed to always get you the latest version of stuff and have it all work together, but in the case of Python, Node, Ruby, etc. it should really be designed that it gets you the latest point releases, but leaves the 3.X versions to be installed side by side, since too much breaks from 3.6 to 3.7 or whatever.

                                          1. 8

In my opinion, for languages that can break between minor releases you should use a version manager (Python seems to have pyenv). That’s what I do with Node: I use Homebrew to install nvm and nvm to manage my Node versions. For Go, in comparison, I just use the latest version from Homebrew because I know their goal is backward compatibility.

                                            1. 5

Yeah, I eventually switched to pyenv, but like, why? Homebrew is a package manager. pyenv is a package manager… just for Python. Why can’t Homebrew just do this for me instead of requiring me to use another tool?

                                              1. 1

                                                Or you could use asdf for managing python and node.

                                              2. 7

                                                FWIW I treat Homebrew’s Python as a dependency for other apps installed via Homebrew. I avoid using it for my own projects. I can’t speak on behalf of Homebrew officially, but that’s generally how Homebrew treats the compilers and runtimes. That is, you can use what Homebrew installs if you’re willing to accept that Homebrew is a rolling package manager that strives always to be up-to-date with the latest releases.

If you’re building software that needs to support a version of Python that is not Homebrew’s favored version, you’re best off using pyenv (with brew install pyenv) or a similar tool. Getting my teams at work off of brewed Python and onto pyenv-managed Python was short work that’s saved a good bit of troubleshooting time.

                                                1. 2

This is how I have started treating Homebrew as well, but I wish it were different and suitable for use as a pyenv replacement.

                                                  1. 2

                                                    asdf is another decent option too.

                                                  2. 5

                                                    I’m a Python developer, and I use virtual environments, and I use Homebrew, and I understand how this could theoretically happen… yet I’ve literally never experienced it.

                                                    it should really be designed that it gets you the latest point releases, but leaves the 3.X versions to be installed side by side, since too much breaks from 3.6 to 3.7 or whatever.

                                                    Yep, that’s what it does. Install python@3.7 and you’ve got Python 3.7.x forever.

                                                    1. 1

                                                      Maybe I’m just holding it wrong. :-/

                                                    2. 3

                                                      I found this article helpful that was floating around a few months ago: https://justinmayer.com/posts/homebrew-python-is-not-for-you/

                                                      I use macports btw where I have python 3.8, 3.9 and 3.10 installed side by side and it works reasonably well.

                                                      For node I gave up (only need it for small things) and I use nvm now.

                                                    3. 8

                                                      Homebrew is decent, but Nix for Darwin is usually available. There are in-depth comparisons between them, but in ten words or less: atomic upgrade and rollback; also, reproducibility by default.

                                                      1. 9

                                                        And Apple causes tons of grief for the Nix team every macOS release. It would be nice if they stopped doing that.

                                                        1. 2

I stopped using Nix on macOS after it started requiring a separate unencrypted volume just for Nix. Fortunately, NixOS works great in a VM.

                                                          1. 2

                                                            It seems to work on an encrypted volume now at least!

                                                      2. 4

I really, really hate how Homebrew never asks me for confirmation. If I run brew upgrade it just does it. I have zero control over it.

I come from zypper and dnf, which are both great examples of really good UX. I guess if all you know is Homebrew or .dmg files, Homebrew is amazing. Compared to other package managers, it might even be worse than winget…

                                                        1. 2

                                                          If I run brew upgrade it just does it

                                                          … yeah? Can we agree that this is a weird criticism or is it just me?

                                                        2. 2

                                                          Overall I like it a lot and I’m very grateful brew exists. It’s smooth sailing the vast majority of the time.

                                                          The only downside I get is: upgrades are not perfectly reliable. I’ve seen it break software on upgrades, with nasty dynamic linker errors.

                                                          Aside from that it works great. IME it works very reliably if I install all the applications I want in one go from a clean slate and then don’t poke brew again.

                                                        3. 4

                                                          you think a lot of people want macOS containers (I’m pretty sure they don’t)

I would LOVE macOS containers! Right now, in order to run a build on macOS in CI, I have to accept whatever the machine I’m given has installed (and the version of the OS) and just hope that’s good enough, or I have to script a bunch of install/configuration stuff (and I still can’t change the OS version) that has to run every single time.

                                                          Basically, I’d love to be able to use macOS containers in the exact same way I use Linux containers for CI.

                                                          1. 1

                                                            Yes!!

1. Headless macOS would be wonderful
                                                            2. Containers would be fantastic. Even without the docker-like incremental builds, something like FreeBSD jails or LXC containers would be very empowering for build environments, dev servers, etc
                                                            1. 1

                                                              Containers would be fantastic. Even without the docker-like incremental builds, something like FreeBSD jails or LXC containers would be very empowering for build environments, dev servers, etc

                                                              These days, Docker (well, Moby) delegates to containerd for managing both isolation environments and image management.

                                                              Docker originally used a union filesystem abstraction and tried to emulate that everywhere. Containerd provides a snapshot abstraction and tries to emulate that everywhere. This works a lot better because you can trivially implement snapshots with union mounts (each snapshot is a separate directory that you union mount on top of another one) but the converse is hard. APFS has ZFS-like snapshot support and so adding an APFS snapshotter to containerd is ‘just work’ - it doesn’t require anything else.
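The “snapshots are trivial to implement with union mounts” claim above can be sketched with a toy model (layers are dicts standing in for union-mounted directories; this is an illustration of the idea, not containerd’s actual snapshotter API):

```python
from collections import ChainMap

class Snapshotter:
    """Toy snapshotter: a snapshot is a stack of layers, where each
    layer is a dict standing in for a directory union-mounted over
    its parent chain."""

    def __init__(self):
        self.snapshots = {}

    def prepare(self, key, parent=None):
        # A new snapshot is just a fresh writable layer stacked on top
        # of the parent's layers -- as cheap as union-mounting an empty
        # directory over the parent chain.
        parent_layers = self.snapshots.get(parent, [])
        self.snapshots[key] = [{}] + parent_layers
        return self.snapshots[key][0]  # the writable top layer

    def view(self, key):
        # Reads fall through the layers top-down, like a union mount.
        return dict(ChainMap(*self.snapshots[key]))

s = Snapshotter()
base = s.prepare("base")
base["/etc/os-release"] = "linux"
top = s.prepare("container", parent="base")
top["/app/config"] = "v2"
# The child snapshot sees its own writes plus the parent's files.
assert s.view("container") == {"/etc/os-release": "linux", "/app/config": "v2"}
```

Going the other way (recovering union-mount semantics from an opaque snapshot primitive like APFS or ZFS snapshots) has no equivalently simple construction, which is the asymmetry the comment points at.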

If the OS provides a filesystem with snapshotting and an isolation mechanism, then it’s relatively easy to add a containerd snapshotter and shim to use them (at least in comparison with writing a container management system from scratch).

                                                              Even without a shared-kernel virtualisation system, you could probably use xhyve[1] to run macOS VMs for each container. As far as I recall, the macOS EULA allows you to run as many macOS VMs on Apple hardware as you want.

[1] xhyve is a port of FreeBSD’s bhyve that runs on top of macOS’s Hypervisor.framework; it is used by the Mac version of Docker to run Linux VMs.

                                                          2. 2

Ignoring which particular bits of Unix security practice is problematic? There are functionally no Macs in use today that are multi-user systems.

                                                            1. 3

All of my Macs and my family’s Macs are multi-user.

                                                              1. 2

The different services in the OS run as different users. It is in general a good thing to run services with the minimal required privileges: different OS-provided services run with different privileges, different Homebrew services run with different privileges, etc. Reducing the blast radius is a win even if there is only one human user, since there are often more users at once, just not all of them are meatbags.

                                                              2. 1

I’ve been a Homebrew user since my latest Mac (2018), but on my previous one (2011) I used MacPorts. Given you seem to have more of an understanding of what a package manager should do than I have, do you have any thoughts on MacPorts?

                                                                1. 4

                                                                  I believe MacPorts does a better job of things, but I can’t speak to it specifically, as I haven’t used it in a very long time.

                                                                  1. 1

                                                                    Thanks for the response, it does seem like it’s lost its popularity and I’m not quite sure why. I went with brew simply because it seemed to be what most articles/docs I looked at were using.

                                                                    1. 3

                                                                      I went with brew simply because it seemed to be what most articles/docs I looked at were using.

Pretty much this reason. Homebrew came out when MacPorts was still source-only installs and had some other subtle gotchas. Since then, those have been cleared up, but Homebrew had already snowballed into “it’s what my friends are all using”.

I will always install MacPorts on every Mac I use, but I’ve known I’ve been in the minority for quite a while.

                                                                      1. 1

                                                                        Do you find the number of packages to be comparable to brew? I don’t have a good enough reason to switch but would potentially use it again when I get another mac in the future.

                                                                        1. 3

I’ve usually been able to find something unless it’s extremely new, obscure, or has bulky dependencies like gtk/qt or specific versions of llvm/gcc. The other nice thing is that if the build is relatively standard, uses ‘configure’ or fits into an existing PortGroup, it’s usually pretty quick to whip up a local Portfile (they’re Tcl-based, so it’s easy to copy a similar package’s config and modify it to fit).

                                                                          Disclaimer: I don’t work on web frontends so I usually don’t deal with node or JS/TS-specific tools.

                                                                          1. 3

On MacPorts vs Homebrew, I usually blame popularity first and irrational fear of the term ‘Ports’ (as in “BSD Ports System”) second. On the second cause, a lot of people just don’t seem to know that what started off as a way to make ‘configure; make; make install’ more maintainable across multiple machines has turned into a binary package creation system. I don’t know anything about Homebrew so I can’t comment there.

                                                                1. 27

                                                                  I suggested a rant tag since this feels like a super vague long form subtweet that likely has a specific story/example behind it. I don’t understand what dhh actually complains about there and whether it’s genuine without knowing that context.

                                                                  1. 11

                                                                    Pretty sure he’s railing against the /r/programmerhumor style “software development is just copy-and-pasting from stack overflow right guiz!” meme. I’m sympathetic to his frustration because this joke (which was never that funny in the first place) has leaked into non-technical circles. I’ve had non techies say to me semi-seriously “programming, that’s just copying code from the internet, right?” and it galls a bit. Obviously we all copied code when we were starting out but it’s not something that proficient developers do often and to assert otherwise is a little demeaning.

                                                                    1. 9

                                                                      Obviously we all copied code when we were starting out

                                                                      Well no, I copied examples from a book. Manually, line by line.

                                                                      1. 6

                                                                        I have 20 years experience and I regularly copy paste code rather than memorize apis or painstakingly figure out the api from its docs. I do the latter too, but if I can copy paste some code as a start all the better.

                                                                        1. 4

The meme is starting to be a bit more condescending now, though. I frequently come across tweets saying things like “lol none of us has any idea what we’re doing, we just copy paste stuff”. The copy pasting part is kinda true in a way (although a bit more complicated: even as a senior dev I copy paste snippets, but I know how to adapt them to my use case and test them), but the incompetence part is not. It sadly starts to feel like there are tons of incompetent OR self-deprecating people in the field. That’s pretty bad.

                                                                          This blog post resonates with me, it really pinpoints something.

                                                                          1. 3

                                                                            It’s cool if that’s what he wanted to say, but the inclusion of impostor syndrome and gatekeeping made me think otherwise.

                                                                            1. 3

                                                                              That was probably just him hedging against expected criticism

                                                                            2. 2

                                                                              Why am I paying this exorbitant salary, to attract people like you with a fancy degree and years of experience when all you ever do is a four-second copy-and-paste job?

You pay it because I spent a long time achieving my degree and accumulating years of experience to be able to judge which code to copy and paste where, and why, in only four seconds at that.

No matter the context, these reductions always boil down to the easy-to-perform operation, never the understanding behind it.

                                                                            3. 5

                                                                              It absolutely feels like a subtweet, but I have no idea what the context was. Did someone at Basecamp just post the no idea dog one time too often?

                                                                            1. 10

                                                                              It doesn’t recognise that software breaks are common and not all equal.

                                                                              I disagree with both parts.

Not all breakage is equal in the sense that library developers should be even more reluctant to introduce a change that breaks, say, 50% of existing projects than a change that only breaks 0.5%.

                                                                              However, for downstream users, all breaking changes are equal. If it’s my code that breaks after updating from libfoo 1.0.0 to 2.0.0, I don’t care how many other projects have the same problem since it doesn’t reduce the need to adapt my code.

                                                                              If you define breaking change severity as the number of affected API functions, the picture doesn’t really change. A small number of affected functions doesn’t always mean a simple fix.

                                                                              I agree that software breakage is common, but the real solution is to stop making breaking changes without a really good reason, not to invent versioning schemes that make breaking changes less visible in the major version number.

                                                                              1. 8

As a consumer of many libraries in dynamically typed languages, “this easily greppable thing has a different call signature, same results” is qualitatively different from “this API disappeared/is now semantically entirely different”.

                                                                                Sure don’t break things without good reasons. But that’s true in any respect!

                                                                                1. 5

                                                                                  However, for downstream users, all breaking changes are equal.

                                                                                  Disagree on that. If a breaking change fixes a typo in a function name, renames something, or even just deprecates something with a clear and easy replacement path, I don’t care as a user. It’s just grooming.

                                                                                  The distinction between major and minor breaking changes fits my expectations both as a user and as a developer.

                                                                                  1. 3

                                                                                    However, for downstream users, all breaking changes are equal.

                                                                                    A breaking change that doesn’t break my code is facially less bad than one that does.

                                                                                    This emerging notion that breaking changes are an Ur-failure of package authors which must be avoided at all costs biases so strongly toward consumers that it’s actually harmful for software development in the large. Software needs to have breaking changes over time, to evolve, in order to be healthy. Nobody gets it right the first time, and setting that as the baseline expectation is unrealistic and makes everyone’s experience net worse.

                                                                                    1. 4

                                                                                      That notion is biased towards the ecosystem as a whole. We all are both producers and consumers.

                                                                                      If there are two libraries with comparable functionality and one makes breaking changes often while the other doesn’t, it’s the latter that brings more benefit to the ecosystem by saving developer time. In essence, compatibility is a feature.

                                                                                      I’m not saying that people should never make breaking changes, only that it requires a really good justification. Fixing unfortunate function names, for example, doesn’t require a breaking change—keeping the old name as an alias and marking it deprecated is all that’s needed, and it adds too few bytes to the library to consider that “bloat”.
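
The alias-and-deprecate approach might look like this in Python (illustrative names, not from any particular library):

```python
import warnings

def compute_checksum(data: bytes) -> int:
    """The correctly named function."""
    return sum(data) % 256

def compute_cheksum(data: bytes) -> int:
    """Deprecated misspelled alias, kept so existing callers don't break."""
    warnings.warn(
        "compute_cheksum is deprecated, use compute_checksum",
        DeprecationWarning,
        stacklevel=2,
    )
    return compute_checksum(data)
```

Existing callers keep working and get a nudge toward the new name; a later major version can drop the alias after a deprecation period, turning a silent breakage into an announced one.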

                                                                                      1. 2

                                                                                        I’m not saying that people should never make breaking changes, only that it requires a really good justification. Fixing unfortunate function names, for example, doesn’t require a breaking change—keeping the old name as an alias and marking it deprecated is all that’s needed, and it adds too few bytes to the library to consider that “bloat”.

                                                                                        The amount of justification required to make a breaking change isn’t constant, it’s a function of many project-specific variables. It can easily be that the cost of keeping that deprecated function around — not just in terms of size but also in API coherence, future maintainability, etc. — outweighs the benefit of avoiding a breaking change.

                                                                                        1. 1

A lot of the time, the “maintaining API coherence” argument is a euphemism for making it look as if the design mistake was never made. Except it was, and now it’s my responsibility as the maintainer to ensure minimal impact on the people who chose to trust my code.

I completely agree that those arguments can be valid, but the breaking changes I see in the wild tend to be of the avoidable variety, where a small bit of effort on the maintainer’s side was all that was needed to fix things for future users without affecting existing ones.

                                                                                          Breaking changes unannounced and perhaps even unnoticed by the authors are even worse.

                                                                                          1. 2

                                                                                            design mistake

                                                                                            I just can’t get on board with this framing. “Design mistakes” are an essential and intractable part of healthy software projects. In fact they’re not mistakes at all, they’re simply stages in the evolution of an API. Our tools and practices have to reflect this truth.

                                                                                            1. 2

                                                                                              Both in this thread and every other time I’ve seen you talk about this topic on this website you’re always on the rather extreme side of “it’s impossible not to have completely unstable software”.

There are many examples of very stable software out there, and plenty of people who are very appreciative of it. There are very stable software distributions too (Debian), and plenty of people appreciate that if they leave auto-updates on for 4 years and leave things alone, there’s a very small chance they will ever have to fix any breakage, and that when they eventually have to update, things will be very well documented and they likely won’t have to deal with a subtle, unnoticed breakage.

                                                                                              Yes, it is a reality that new and bleeding edge concepts need to go through a maturation stage where they experiment with ideas before they can be solidified into a design, but pretending like this is true for every software project in existence and that therefore instability must just be accepted is just flat out wrong.

                                                                                              We have had pretty decent stability for quite some time and plenty of people have been very happy with the tradeoff between stability and the bleeding edge.

                                                                                              What I’m saying here is really that nobody is asking you to take your extremely immature project for which you don’t have the experience or foresight (yet) to stabilise and stabilise it prematurely. Of course there will always be projects like that and like anything they’re going to be bleeding-edge avant-garde ordeals which may become extremely successful and useful projects that everyone loves or end up in a dumpster. What I am saying here is that it’s okay to have an unstable project, but it’s not okay to mislead people about its stability and it is not okay to pretend like nobody should ever expect stability from any project. If your project makes no effort to stop breaking changes and is nowhere near being at a point where the design mistakes have been figured out then it should make that clear in its documentation. If your project looks like it uses a 3 part versioning scheme then it should make it clear that it’s not semver. And most importantly, just because your project can’t stabilise yet doesn’t mean that nobody’s project can stabilise.

                                                                                              1. 2

                                                                                                there are plenty of people who are very appreciative of the fact that if they leave auto-updates on for 4 years and leave things alone that there’s a very small chance that they will ever have to fix any breakages

                                                                                                Let me distinguish software delivered to system end users via package managers like apt, from software delivered to programmers via language tooling like cargo. I’m not concerned with the former, I’m only speaking about the latter.

                                                                                                With that said…

                                                                                                you’re always on the rather extreme side of “it’s impossible not to have completely unstable software”

                                                                                                I don’t think that’s a fair summary of my position. I completely agree that it’s possible to have stable software.

                                                                                                Let’s first define stable. Assuming the authors follow semver, I guess the definition you’re advancing is that it [almost] never increments the major version, because all changes over time don’t break API compatibility. (If that’s not your working definition, please correct me!)

I’m going to make some claims now which I hope are noncontroversial. First, software that exhibits this property of stability is net beneficial to existing consumers of that software, because they can continue to use it, and automatically upgrade it, without fear of breakage in their own applications. Second, it is a net cost to the authors of that software, because maintaining API compatibility is, generally, more work than making breaking changes when the need arises. Third, it is, over time, a net cost to new consumers of that software, because avoiding breaking changes necessarily produces an API surface area which is less coherent as a unified whole than otherwise. Fourth, that existing consumers, potential/new consumers, and authors/maintainers are each stakeholders in a software project, and further that their relative needs are not always the same but can change depending on the scope and usage and reach of the project.

                                                                                                If you buy these claims then I hope it is not much of a leap to get to the notion that the cost of a breaking change is not constant. And, further, that it is at least possible that a breaking change delivers, net, more benefit than cost, versus avoiding that change to maintain API compatibility. One common way this can be true is if the software is not consumed by very many people. Another common way is if the consumers of that software don’t have the expectation that they should be able to make blind upgrades, especially across major versions, without concomitant code changes on their side.

                                                                                                If you buy that, then what we’re down to is figuring out where the line is. And my claim is that the overwhelming majority of software which is actively worked-on by human beings is in this category where breaking changes are not a big deal.

First and foremost, because the overwhelming majority of software is private, written in market-driven organizations, and must respond to business requirements which are always changing by their very nature. If you bind the authors of that software to stability guarantees as we’ve defined them, you make it unreasonably difficult for them to respond to the needs of their business stakeholders. It’s non-viable. And it’s unnecessary! Software in this category rarely has a high consumer-to-producer ratio. The cost of a breaking change is facially less than for, say, the AWS SDK.

                                                                                                By my experience this describes something like 80-90% of software produced and maintained in the world. The GNU greps and the AWS SDKs and those class of widely-consumed things are a superminority of software overall. Important! But non-representative. You can’t define protocols and practices and ecosystem expectations for general-purpose programming languages using them as exemplars.

Second, because consumers shouldn’t have the expectation that they can make blind upgrades across major versions without code changes on their side. It implies a responsibility that authors have over their consumers which is at best difficult in closed software ecosystems like I described above, and actually literally impossible in open software ecosystems like the OSS space. As an OSS author I simply don’t have any way to know how many people are using my software, it’s potentially infinite; and I simply can’t own any of the risk they incur by using it, it doesn’t scale. I should of course make good faith effort toward making their lives easier, but that can’t be a mandate, or even an expectation, of the ecosystem or its tooling.

                                                                                                And, thirdly, because stability as we’ve defined it is simply an unreasonable standard for anything produced by humans, especially humans who aren’t being supported by an organization and remunerated for their efforts. Let me give you an example. Let’s say I come up with a flag parsing library that provides real value versus the status quo, by cleanly supporting flag input from multiple sources: commandline flags, environment variables, and config files. Upon initial release I support a single config file format, and allow users to specify it with an option called ConfigParser. It’s a hit! It delivers real value to the community. Shortly, a PR comes in to support JSON and YAML parsers. This expands the cardinality of my set of parsers from 1 to N, which means that ConfigParser is now insufficiently precise to describe what it’s controlling. If I had included support for multiple config files from the start, I would have qualified the options names: PlainConfigParser, JSONConfigParser, YAMLConfigParser, and so on. But now I’m faced with the question: do I avoid a breaking change, leave ConfigParser as it is, and just add JSONConfigParser and YAMLConfigParser in addition? Or do I rename ConfigParser to PlainConfigParser when I add the other two, in order to keep the options symmetric?

                                                                                                My initial decision to call the option ConfigParser was not a design mistake. The software and its capabilities evolved over time, and it’s not reasonable to expect me to predict all possible future capabilities on day one. And the non-breaking change option is not necessarily the right one! If I have 100 users now, but will have 100,000 users in a year’s time, then breaking my current users — while bumping the major version, to be clear! — is the better choice, in order to provide a more coherent API to everyone in the future.
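
To make the dilemma concrete, here is a hypothetical sketch of the two choices (the names come from the example above, not from any real library):

```python
# Option A: non-breaking but asymmetric. The original, now-too-generic
# name survives alongside the new qualified names.
class FlagSetA:
    def ConfigParser(self, parse): ...      # plain-text parser (legacy name)
    def JSONConfigParser(self, parse): ...
    def YAMLConfigParser(self, parse): ...

# Option B: coherent but breaking. v2 renames the original option so every
# parser is named symmetrically, and existing callers must update.
class FlagSetB:
    def PlainConfigParser(self, parse): ...
    def JSONConfigParser(self, parse): ...
    def YAMLConfigParser(self, parse): ...
```

Option A preserves every existing call site at the cost of a permanently lopsided API; Option B costs a major-version bump now in exchange for a symmetric API for all future users.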

                                                                                                Some people would say that this class of software should stay at major version 0 until it reaches a point of stability. This, too, isn’t reasonable. First, because stability is undefinable, and any judgment I make will be subjective and wrong in someone’s eyes. Second, because it strips me of the power to signal breaking vs. non-breaking changes to my users. Semver stipulates the semantics of each of its integers, but nothing beyond that — it doesn’t stipulate a mandatory rate of change, or anything like that. It leaves stability to be defined by the authors of the software. To be clear, more stable software is easier to consume and should be preferred when possible. But it can’t be the be-all end-all metric that software must optimize for.

                                                                                                1. 2

                                                                                                  So let’s get some things out of the way.

                                                                                                  I don’t care about private spaces, and I don’t care about what people do in them.

I also don’t care about de jure enforcement of stability through limitations or tooling or whatever. My interest is specifically in de facto stability expectations.

                                                                                                  For ease of my writing, let’s define the term interface to mean something which you could reasonably write a standard to describe. This means things like protocols, APIs or programming languages. Let’s define producer to be someone who produces an interface. Let’s define consumer to be someone who uses an interface.

I am of the opinion that in the programming world (encompassing both consumers and producers of interfaces) the following expectations should be de facto:

                                                                                                  • If you are the author of an interface which you publish for users, it is your responsibility to be honest about your stability promises with regards to that interface.

                                                                                                  • People should expect stability from commonly used interfaces.

Just like people enjoy the benefits of the stability of Debian, there are plenty of people who enjoy the benefits of the stability of a language like C. C programs I wrote 5 years ago still compile and run today; C programs I write today will still compile and run in 5 years. I obviously have to follow the standard correctly and ensure my program is written correctly, but if for some reason my program cannot be compiled and run in 5 years, I have nobody to blame but myself. This is an incredibly useful quality.

                                                                                                  The fact that I can’t expect the same of rust completely discounts rust as a viable choice of software for me.

                                                                                                  There is no reason why I should have to be excluded from new features of an interface just to avoid breaking changes.

Moreover, in the modern world where security of software is an ever-growing concern, telling consumers to choose between interface stability and security is an awful idea.

                                                                                                  Now with that out of the way, let’s go over your points (I have summarised them, if you think my summary is wrong then we may be talking past each other, feel free to try to address my misunderstanding).

                                                                                                  Interface stability is beneficial for existing consumers.

                                                                                                  Yes.

                                                                                                  Interface stability is a cost to interface producers.

                                                                                                  Yes.

                                                                                                  Interface stability is a cost for new consumers.

                                                                                                  No.

                                                                                                  Keeping your interface stable does not prevent you from expanding it in a non-breaking way to improve it. It does not prevent you from then changing the documentation of the interface (or expanding on its standard) to guide new consumers towards taking the improved view of the interface. It does not prevent you from stopping further non-essential development on the old portions of the interface. It does not stop you from creating a new major version of the interface with only the new parts of the interface and only maintaining the old interface insofar as it requires essential fixes.

                                                                                                  Interface consumers and producers are both interface stakeholders but have different needs.

                                                                                                  Sure.

                                                                                                  The cost of a breaking change is not constant.

                                                                                                  Sure.

                                                                                                  It is possible that a breaking change delivers more benefit than cost versus avoiding the breaking change.

                                                                                                  I think this is disputable because of what I said earlier about being able to improve an interface without breaking it.

But I can see this being the case when a critical security flaw makes consumers of your interface vulnerable through normal use of the interface. This is a rare corner case though, and doesn’t really make the whole idea or goal of interface stability any less worthwhile.

This can happen if your interface is not used by many consumers.

                                                                                                  Yes, you obviously should feel less obliged to keep things stable in this case, but you should still be obliged to make consumers well aware of just how stable your interface is.

                                                                                                  This can happen if consumers do not have high stability expectations.

                                                                                                  Which you can ensure by making them well aware of what stability expectations they should have.

                                                                                                  But I would argue that in general, consumers should be allowed and encouraged to have high stability expectations.

                                                                                                  especially across major versions, without concomitant code changes on their side

                                                                                                  I don’t see how this is relevant given that you already defined stability a certain way.

                                                                                                  Of course if your interface follows semver and you increment the major then you’ve made your stability expectations clear and if the consumers do not understand this then they can only blame themselves.

                                                                                                  The majority of interfaces belong in a category which excludes them from high stability expectations.

Yes; as interfaces follow a Pareto distribution, very few of them have a significant number of consumers for whom breaking changes will have a significant impact.

                                                                                                  The majority of interfaces are private

                                                                                                  Actually this is irrelevant, you can just categorise this as “tiny userbase”.

                                                                                                  That being said, there’s two sides to this coin.

                                                                                                  Google mail’s internal APIs may have a “tiny userbase” (although google is a big company so maybe not so), on the other hand, google mail’s user facing interface has a massive userbase.

                                                                                                  If you force producers of those interfaces to guarantee stability as defined above then you make it unreasonably difficult to respond to the needs of their business stakeholders.

                                                                                                  Nobody is forcing or suggesting forcing anyone to do anything.

                                                                                                  You can’t use interfaces with enormous amounts of consumers as baselines for how people should handle interface stability of their interfaces with fewer users.

                                                                                                  I’d say this is irrelevant.

                                                                                                  Your hello world script doesn’t need interface stability. That is one extreme. The C standard needs extreme stability, that is the other extreme. If you consider the majority of the things I am actually thinking of when we talk about “things being horribly unstable” then they definitely don’t fall 1% point away from hello world. I talk about things like rust, nim, major libraries within those languages, etc.

Those things have significant numbers of consumers, yet their interfaces seem to come with the same stability expectations as the hello world program.

Yes, stability is a scale, it’s not binary, but the current situation is that a lot of people are treating it as completely optional in situations where it certainly shouldn’t be. I don’t think it’s unreasonable to expect more stability from some of these projects.

Because consumers shouldn’t have high interface stability expectations across major versions, this implies producers have a responsibility which is at best difficult in the closed software ecosystems described above.

                                                                                                  I’m not actually sure how to interpret this but it seems to talk about things irrelevant to the point. Feel free to clarify.

                                                                                                  As the producer of an open source interface I don’t have any way to know how many consumers I have and therefore I cannot own any risk incurred by the consumers consuming my interface.

                                                                                                  Nobody is asking anyone to own any risk (especially in the open source ABSOLUTELY NO WARRANTY case). I am simply saying that if your project is popular and you break stability expectations when you have made it clear that people should have them, you should expect people to distrust your interface, stop using it and potentially distrust future interfaces you produce. I think this is only fair, and really the most I can ask of people.

Moreover, you do have many ways of estimating how many people are using your software. If enough people are using your software for interface stability to matter then you will know about it (or you’re some expert hermit who manages to successfully maintain a key project while never knowing anything about the outside world).

                                                                                                  Stability as defined above is an unreasonable and unreachable standard for human producers of interfaces.

No, I don’t think it is; or rather, I think you have produced a false dichotomy. Correct me if I’m wrong, but you seem to think that because perfect stability is impossible, imperfect stability is no longer worthwhile. I think this is a poor way of looking at things: imperfect stability is achievable to a high level and is worthwhile in many cases.

                                                                                                  … especially humans who aren’t being supported by an organization and remunerated for their efforts.

                                                                                                  Let’s put it this way, just because you decided to volunteer to do something doesn’t mean you have no responsibility. You take on responsibility by making the risk of creating something which people might come to rely on and use. If you do not wish to take on this responsibility, then it is at the very least your responsibility to make THAT clear.

                                                                                                  I would say it is childish to suggest that just because you are doing something for free that therefore you have no responsibility for the consequences of what happens when people come to rely on that free activity. There are a million and one real world examples of how this does not pan out.

                                                                                                  Let me give you an example. Let’s say I come up with a flag parsing library that provides real value versus the status quo, by cleanly supporting flag input from multiple sources: commandline flags, environment variables, and config files. Upon initial release I support a single config file format, and allow users to specify it with an option called ConfigParser. It’s a hit! It delivers real value to the community. Shortly, a PR comes in to support JSON and YAML parsers. This expands the cardinality of my set of parsers from 1 to N, which means that ConfigParser is now insufficiently precise to describe what it’s controlling. If I had included support for multiple config files from the start, I would have qualified the options names: PlainConfigParser, JSONConfigParser, YAMLConfigParser, and so on. But now I’m faced with the question: do I avoid a breaking change, leave ConfigParser as it is, and just add JSONConfigParser and YAMLConfigParser in addition? Or do I rename ConfigParser to PlainConfigParser when I add the other two, in order to keep the options symmetric?

                                                                                                  I would say that it’s actually worthwhile to avoid breaking API here. If your bar for breaking API is this low then you’re going to be breaking API every day of the week. Moreover, you won’t learn anything about API design from your project.

                                                                                                  There is real value in living with your mistakes for a while and letting things like this accumulate for a while. Simply accepting every breaking change as it comes is going to teach you less about which parts of your API are actually good or not than waiting for a good point to create config_parser_of_the_snake_case_variety a few years down the road with all the things you’ve learned incorporated. The end result will be a far better design than whatever you could cobble together with a weekly breaking change.

                                                                                                  You’re also ignoring the possibility of just making a new API (also, surely the correct option is not a name-per-format but rather an option) and wiring the old API to just forward to the new one. You can begin to think about future proofing your API while keeping the old one around (to everyone’s great benefit).
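As a sketch of that "wire the old API to forward to the new one" option (hypothetical names, not taken from any real library): keep `ConfigParser` alive as a deprecated alias that delegates to the new, symmetric name, so no existing consumer breaks while documentation steers new users to the new names:

```python
import json
import warnings

def PlainConfigParser(text):
    """New name for the original "key=value"-style config parser."""
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

def JSONConfigParser(text):
    """Newly added format, introduced alongside the renamed option."""
    return json.loads(text)

def ConfigParser(text):
    """Deprecated alias kept for stability; forwards to PlainConfigParser."""
    warnings.warn("ConfigParser is deprecated; use PlainConfigParser",
                  DeprecationWarning, stacklevel=2)
    return PlainConfigParser(text)

# Existing call sites keep working unchanged:
assert ConfigParser("host=localhost") == {"host": "localhost"}
```

The cost is one extra name in the namespace; the benefit is that no consumer's code breaks on upgrade, and the deprecation warning gives them time to migrate on their own schedule.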

                                                                                                  Finally, if you’ve just written a library, why is it 1.0.0 already? Why would you do that to yourself? Spend a few months at least with it in a pre-1.0.0 state. Stabilize it once you iron out all the issues.

                                                                                                  My initial decision to call the option ConfigParser was not a design mistake.

                                                                                                  Of course not, the mistake was pretending that JSON and YAML are config formats. (This is a joke.)

                                                                                                  The software and its capabilities evolved over time, and it’s not reasonable to expect me to predict all possible future capabilities on day one.

                                                                                                  Of course not. But I think you’re being unreasonably narrow-minded with regards to possible options for how to evolve a project while keeping the interface stable.

                                                                                                  And the non-breaking change option is not necessarily the right one! If I have 100 users now, but will have 100,000 users in a year’s time, then breaking my current users — while bumping the major version, to be clear! — is the better choice, in order to provide a more coherent API to everyone in the future.

It’s also extremely unlikely that you will be able to predict the requirements of your library one year into the future. Which is possibly why your library shouldn’t be a library yet, and should instead wait through a few years of use cases before it solidifies into one.

                                                                                                  There’s too many tiny libraries which do half a thing poorly out there, at least part of this stability discussion should be around whether some interfaces should exist to begin with.

                                                                                                  Some say this class of interface should stay at major version 0.

                                                                                                  Yes, I think this is synonymous with my position that it shouldn’t be a library to begin with.

                                                                                                  This is unreasonable because stability is undefinable.

                                                                                                  No, you already defined it a couple of times in a way I would agree with. A stable interface is an interface where a consumer should not expect things to break if they update the software to the latest version (of that particular major version).

                                                                                                  Any stability judgement I make will be questioned by someone.

                                                                                                  Why leave the house then? You might die. This comes back again to the all-or-nothing mentality I mentioned earlier where you seem to suggest that just because perfect stability is impossible, imperfect stability is not worthwhile.

                                                                                                  Second, because it strips me of the power to signal breaking vs. non-breaking changes to my users.

                                                                                                  It doesn’t do that. You have literally all the tools at your disposal including your changelog and documentation. Moreover, if your project is at 0.x.y then you are completely free to tell people that if x increases then the interface will have changed significantly and if y increases then the interface probably hasn’t changed. Semver does not specify how x and y are to be interpreted for 0.x.y, only for M.x.y where M > 0.
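To illustrate the distinction being drawn here (this is a hedged sketch of semver semantics, not anything the spec itself mandates as code): the semver spec says that during major version zero anything may change at any time, while from 1.0.0 on a shared major version promises compatibility. A minimal compatibility check reflecting that might look like:

```python
def compatible(installed, available):
    """True if upgrading installed -> available is non-breaking per semver.

    Versions are (major, minor, patch) tuples.
    """
    imaj, imin, ipat = installed
    amaj, amin, apat = available
    if imaj == 0:
        # 0.x.y: semver itself promises nothing. A common convention
        # (used by e.g. Cargo, but NOT mandated by the semver spec)
        # treats a bump of x as breaking.
        return (amaj, amin) == (0, imin) and apat >= ipat
    # 1.0.0 and up: same major version means compatible.
    return amaj == imaj and (amin, apat) >= (imin, ipat)

assert compatible((1, 2, 3), (1, 5, 0))       # minor bump: fine
assert not compatible((1, 2, 3), (2, 0, 0))   # major bump: breaking
assert not compatible((0, 3, 0), (0, 4, 0))   # 0.x: minor treated as breaking
```

The point is that within 0.x.y the producer is free to define (and document) whatever convention they like; semver only pins down the meaning of the numbers once the major version is at least 1.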

                                                                                                  Semver explains what the numbers mean but nothing else. It does not specify the rate of change. It leaves stability to be defined by the authors of the software.

                                                                                                  Yes, these are true statements.

Stability should not be the most important goal for all software.

                                                                                                  Again, a true statement. My point is and always has been that the issue is not that all interfaces aren’t perfectly stable, it’s that there’s a lot of interfaces which are far less stable than they reasonably should be and that normalising this means that software is going to get a lot more confusing, a lot more insecure and a lot more broken as time goes on.

Overall I think we’re in agreement on most parts, except for how easy stability is to achieve and how important it is relatively speaking.

                                                                                                  1. 2

                                                                                                    There is real value in living with your mistakes for a while and letting things like [ConfigParserX] accumulate for a while. Simply accepting every breaking change as it comes is going to teach you less about which parts of your API are actually good or not than waiting for a good point to create config_parser_of_the_snake_case_variety a few years down the road with all the things you’ve learned incorporated. The end result will be a far better design than whatever you could cobble together with a weekly breaking change.

                                                                                                    I don’t know how better to say this: the decisions in this example are not mistakes. They are made by a human being at a particular point in the evolution of a software project.

                                                                                                    Finally, if you’ve just written a library, why is it 1.0.0 already? Why would you do that to yourself? Spend a few months at least with it in a pre-1.0.0 state. Stabilize it once you iron out all the issues.

                                                                                                    1.x.y in the semver nomenclature represents a notion of stability which I as the author get to define. It isn’t a universal notion of stability, it’s nothing more than what I define it to be. There is no single or objective denomination of “when I iron out all of the issues” which I can meet in order to qualify a 1.x.y version identifier. It’s all subjective!

                                                                                                    just because you decided to volunteer to do something doesn’t mean you have no responsibility. You take on responsibility by making the risk of creating something which people might come to rely on and use. If you do not wish to take on this responsibility, then it is at the very least your responsibility to make THAT clear.

                                                                                                    Semver provides all of the mechanics I need to signal what you demand. If I make a breaking change, I increment the major version number. That’s it! There’s nothing more to it.

                                                                                                    1. 2

                                                                                                      I don’t know how better to say this: the decisions in this example are not mistakes. They are made by a human being at a particular point in the evolution of a software project.

                                                                                                      If it was reasonable to assume that the API would need that feature eventually then yes it’s a mistake. If it was not reasonable to assume that the API would need that feature (I would err on this side since neither YAML nor JSON are actually meant for configuration) then there’s two possibilities: a, that the design change is unnecessary and belongs in an unrelated project or b, that the design change should be made with no need for breaking the existing interface.

                                                                                                      1.x.y in the semver nomenclature represents a notion of stability which I as the author get to define. It isn’t a universal notion of stability, it’s nothing more than what I define it to be. There is no single or objective denomination of “when I iron out all of the issues” which I can meet in order to qualify a 1.x.y version identifier. It’s all subjective!

                                                                                                      The fact that it’s subjective and you can’t with perfect accuracy determine when it’s appropriate to make a 1.x.y release does not mean that lots of projects haven’t already made good-enough guesses on when this is appropriate and does not mean that a good enough guess is not an adequate substitute for perfect. Once again, you seem to suggest that just because it’s impossible to do something perfectly that it’s not worth doing it at all.

                                                                                                      Semver provides all of the mechanics I need to signal what you demand. If I make a breaking change, I increment the major version number. That’s it! There’s nothing more to it.

                                                                                                      Yeah, so do that. I don’t get where the disagreement lies here.

                                                                                                      *As a completely unrelated side note, the fact that firefox binds ^W to close tab is a hideous design choice.

                                                                                                      1. 2

                                                                                                        you seem to suggest that just because it’s impossible to do something perfectly that it’s not worth doing it at all.

                                                                                                        I am 100% for releasing 1.x.y versions of software projects. What I’m saying is merely that a subsequent release of 2.z.a, and afterwards 3.b.c, and then 4.d.e, is normal and good and not indicative of failure in any sense.

                                                                                                        If it was reasonable to assume that the API would need that feature eventually then yes it’s a mistake.

                                                                                                        It is incoherent to claim that this represents a mistake. If you can’t agree to that then there’s no possibility of progress in this conversation; if this is a mistake then you’re mandating perfect omniscient prescience in API design, which is plainly impossible.

                                                                                                        1. 1

                                                                                                          I am 100% for releasing 1.x.y versions of software projects. What I’m saying is merely that a subsequent release of 2.z.a, and afterwards 3.b.c, and then 4.d.e, is normal and good and not indicative of failure in any sense.

I would say that any interface which has need to change that often is too vaguely defined to be a real interface. Fundamentally rooted in the idea of an interface is the idea of being able to re-use it in multiple places, and the fundamental benefit of re-using something in multiple places is that adjustments to that thing benefit all users (otherwise why not just copy-paste the code). As such, if your interface changes constantly, it does not provide any benefit as an interface and should not be one. Put another way, if every time I try to update your interface I have to change my code, what benefit is there at all in me even calling it a proper interface and not just maintaining my own version (either by periodically merging in new changes to the upstream version or just cherry-picking new features I like)?

                                                                                                          It is incoherent to claim that this represents a mistake.

                                                                                                          How so? Are you suggesting it is impossible to have design oversights?

                                                                                                          If you can’t agree to that then there’s no possibility of progress in this conversation; if this is a mistake then you’re mandating perfect omniscient prescience in API design, which is plainly impossible.

                                                                                                          Just because foresight is impossible, doesn’t mean you’ve not made a mistake. If foresight is impossible, maybe the mistake is attempting to foresee in the first place? In either case, I don’t understand why you think my stance on whether design mistakes are a real thing or not matters in a discussion about whether it’s appropriate for interfaces to be extremely unstable.

                                                                                                          Moreover, given the vagueness of your example config parser case, I never actually said if the scenario you presented constituted a design mistake, I merely outlined a set of possibilities, one of which was that there was a design mistake.

                                                                                                          So far (when it comes to your config parser example), I think that you’re being quite narrow-minded in terms of possible approaches to non-breaking interface changes which would avoid the problem entirely without sacrifices. I also think that the example you provided is extremely contrived and the initial design sounds like a bad idea to begin with (although once again, I don’t think this is the reason why you think there needs to be a breaking change, I think you think there needs to be a breaking change solely because I don’t think you have explored all the possibilities of how a non-breaking change may be implemented, but the example lacks a lot of detail so it’s difficult to understand your reasoning).

                                                                                                          Maybe if you picked a more realistic design which didn’t have obvious problems to begin with and demonstrated a sensible design change you think would necessitate a breaking change (to avoid compromising on API quality) we could discuss how I would solve that and you can point out the disadvantages, according to you, of my approach to solving the breakage?

                                                                                                          1. 2

                                                                                                            if every time I try to update your interface I have to change my code, what benefit is there at all in me even calling it a proper interface

                                                                                                            The benefit that my library provides to you as a consumer is mostly about the capabilities it offers for you, and only a little bit about the stability of its API over time. Updating the version of my library which you use is also an opt-in decision that you as a consumer make for yourself.

                                                                                                            I also think that the example you provided is extremely contrived

                                                                                                            It is adapted directly from a decision I had to make in my actual flag parsing library and chosen because I feel it is particularly exemplary of the kinds of design evolution I experience every day as a software author.

                                                                                                            you can point out the disadvantages, according to you, of my approach to solving the breakage?

                                                                                                            I don’t think we need an example. My position is that as long as you abide semver and bump the major version there is almost no net disadvantage to breaking changes for the vast majority of software produced today. And that avoiding breaking changes is a cost to coherence that rarely makes sense, except for a superminority of software modules. I have explained this as well as I think I can in a previous comment so if that’s not convincing then I think we can agree to disagree.

                                                                                                            1. 1

                                                                                                              The benefit that my library provides to you as a consumer is mostly about the capabilities it offers for you, and only a little bit about the stability of its API over time. Updating the version of my library which you use is also an opt-in decision that you as a consumer make for yourself.

                                                                                                              You can call it a library or an interface all day long, I dispute you calling it that more than I dispute its usefulness. It might function and be useful but it does not function as an interface. It is really not an interface if it does not end up shared among components, if it is not shared among components then there is obviously no real problem (at least in terms of stability, since there’s an enormous number of problems in terms of general code complexity, over-abstraction, security issues and auditability issues).

                                                                                                              It is adapted directly from a decision I had to make in my actual flag parsing library and chosen because I feel it is particularly exemplary of the kinds of design evolution I experience every day as a software author.

                                                                                                              Can you point me to the breaking change?

                                                                                                              My position is that as long as you abide by semver and bump the major version there is almost no net disadvantage to breaking changes for the vast majority of software produced today.

                                                                                                              And like I already explained, talking about “most software” is pointless since almost nobody uses it.

                                                                                                              The discussion is about the small fraction of software which people actually use, where there is a clear benefit to avoiding random breaking changes.

                                                                                                              And that avoiding breaking changes is a cost to coherence that rarely makes sense, except for a superminority of software modules.

                                                                                                              A superminority which anything worth talking about is already part of.

                                                                                                              I have explained this as well as I think I can in a previous comment so if that’s not convincing then I think we can agree to disagree.

                                                                                                              There is nothing left to explain, it is not an issue of explanation, you are simply making a bunch of assertions including the assertion that, given an interface which people actually use, most normal design changes which change the interface are better* than design changes which preserve the interface. You have yet to provide any evidence of this (although hopefully if you send me the commit for your project we will finally have something concrete to discuss).

                                                                                                              *better being defined as “the disadvantages of breaking the interface do not outweigh the disadvantages of the clarity lost by keeping and expanding the interface”

                                                                                                              1. 2

                                                                                                                I don’t care about “interfaces” — that’s terminology which you introduced. We’re talking about software modules, or packages, or libraries, with versioned APIs that are subject to the rules of semver.

                                                                                                                And like I already explained, talking about “most software” is pointless since almost nobody uses it . . . The discussion is about the small fraction of software which people actually use

                                                                                                                The discussion I’m having is about what software authors should be doing. So I’m concerned with the body of software produced irrespective of how many consumers a given piece of software has. If you’re only concerned with software that has enormously more consumers than producers, then we are having entirely different discussions. I acknowledge this body of software exists but I call it almost irrelevant in the context of what programming language ecosystems (and their tools) should be concerned with.

                                                                                                                1. 1

                                                                                                                  I don’t care about “interfaces” — that’s terminology which you introduced. We’re talking about software modules, or packages, or libraries, with versioned APIs that are subject to the rules of semver.

                                                                                                                  If you had an issue with my definition of interface then you should have raised it a bit earlier. That being said, I don’t think that you have a problem with the definition (since it seems completely compatible) but maybe the issue lies somewhere else.

                                                                                                                  The discussion I’m having is about what software authors should be doing. So I’m concerned with the body of software produced irrespective of how many consumers a given piece of software has. If you’re only concerned with software that has enormously more consumers than producers, then we are having entirely different discussions. I acknowledge this body of software exists but I call it almost irrelevant in the context of what programming language ecosystems (and their tools) should be concerned with.

                                                                                                                  Programming ecosystems are interfaces with large numbers of users; their package handling tools exist solely to serve the packages which have a significant number of users. Making package handling tools work for the insignificant packages is a complete waste of time and benefits basically nobody.

                                                                                                                  Your package, like it or not, belongs in the group of packages I’m talking about.

                                                                                                                  Discussions about semver and interfaces and stability and versioning are obviously going to be completely irrelevant if nobody or almost nobody uses your package. More importantly, it’s very unproductive to use this majority of unused or barely used packages to inform design decisions of software tooling or to inform people how they should go about maintaining packages people actually use.

                                                                                                                  1. 2

                                                                                                                    [ecosystem] package handling tools exist solely to serve the packages which have a significant number of users

                                                                                                                    No, they exist to serve the needs of the entire ecosystem.

                                                                                                                    Your package, like it or not, [is insignificant]

                                                                                                                    I guess we’re done here.

                                                                                                                    1. 1

                                                                                                                      No, they exist to serve the needs of the entire ecosystem.

                                                                                                                      Focusing on the needs of wannabe libraries which nobody uses is nonsensical. Ecosystems focus on the needs of packages which people use and the people who use those packages.

                                                                                                                      Your package, like it or not, [is insignificant] I guess we’re done here.

                                                                                                                      Did you intentionally misrepresent me because you were bored of the discussion? Because I literally said the opposite.

                                                                                                                      1. 2

                                                                                                                        I apologize if you were saying that my package represents something with a large number of users. I parsed your sentence as the opposite meaning.

                                                                                                                        Nevertheless, we’re clearly at an impasse. Your position is that

                                                                                                                        package handling tools exist solely to serve the packages which have a significant number of users

                                                                                                                        which is basically the antipode of my considered belief, and it doesn’t appear like you’re open to reconsideration. So I’m not sure there’s much point to continuing this already very long thread :)

                                                                                                                        edit: I can maybe speculate at the underlying issue here, which is that you seem to be measuring relevant software by the number of consumers it has, whereas I’m measuring relevant software by its mere existence. So for you maybe an ecosystem with 1000 packages with 2 consumers each and 2 packages with 1000 consumers each is actually just 2 packages big, so to speak, and those 2 packages’ needs dictate the requirements of the tool; and for me it’s 1002 packages big and weighted accordingly. Does this make sense?

                                                                                                                        1. 1

                                                                                                                          I get your point, I don’t think there’s any misunderstanding there. I just can’t understand why, when it comes to tools which are designed to facilitate interoperation, you care about the packages for which (because almost nobody uses them) interoperation is not as important as it is for the packages which lots of people use. I think the other problem is maybe that you’ve got a skewed idea of the relationship between package producers and users. I would say it’s extremely likely (I don’t have numbers to back this up, but there’s no real reason why this wouldn’t follow a Pareto distribution) that 20% of packages constitute 80% of package usage. These tools are designed to facilitate package use above package creation (since far fewer people will be creating packages than using them), so focusing on the 20% of packages which constitute 80% of the usage would surely make sense?

                                                                                                                          1. 2

                                                                                                                            far fewer people will be creating packages rather than using them

                                                                                                                            Just to make this explicit, I’m speaking from a context which includes open-source software (maybe 10-20% of all software produced in the world) and closed-source software written and maintained at market-driven organizations (maybe 80% of all software).

                                                                                                                            In this context it’s my experience that essentially every programmer is both a consumer and producer of packages. Even if a package has only a single consumer it is still a member of the package ecosystem, and the needs of its (singular) author and consumer are relevant! So I don’t agree that far fewer people are creating packages than consuming them. In fact I struggle to summon a single example of someone who consumes but doesn’t produce packages.

                                                                                                                            20% of packages constitute 80% of package usage

                                                                                                                            I agree that the packages-vs-consumers curve is pretty exponential. I agree that tooling can and should support the needs of those relatively few packages with vastly more consumers than producers. But I don’t think it’s as extreme as you’re describing here. I think the “long tail” of packages consumed by a single-digit number of consumers represents the 80% of overall package consumption, the 80% of the area under the curve.

                                                                                                                            1. 1

                                                                                                                              In this context it’s my experience that essentially every programmer is both a consumer and producer of packages. Even if a package has only a single consumer it is still a member of the package ecosystem, and the needs of its (singular) author and consumer are relevant! So I don’t agree that far fewer people are creating packages than consuming them. In fact I struggle to summon a single example of someone who consumes but doesn’t produce packages.

                                                                                                                              I think the term might be a bit loose here. I struggle to see a scenario where people produce as many packages as they consume. A lot of projects out there consume a lot of packages but only produce one.

                                                                                                                              I agree that the packages-vs-consumers curve is pretty exponential. I agree that tooling can and should support the needs of those relatively few packages with vastly more consumers than producers. But I don’t think it’s as extreme as you’re describing here. I think the “long tail” of packages consumed by a single-digit number of consumers represents the 80% of overall package consumption, the 80% of the area under the curve.

                                                                                                                              Okay, let’s just ignore the smaller packages for a moment, because as I said in my other comment, I don’t see how what I am trying to advocate for here affects those packages negatively in any way.

                                                                                                                              Why is it remotely acceptable for the Rust programming language to be so unstable that, if you install it from the repositories of a non-rolling-release distro, there’s a good chance you won’t be able to build anything of even insignificant size?

                                                                                                                              Why is it remotely acceptable for an average popular rust package to itself include up to hundreds of other tiny dependencies?

                                                                                                                              Why are these not considered the horrible software practices that they are?

                                                                                                                              1. 2

                                                                                                                                Why are these not considered the horrible software practices that they are?

                                                                                                                                Because “horrible” is a subjective judgment, not an objective metric.

                                                                                                                                1. 1

                                                                                                                                  … and?

                                                                                                                                  Having the language be unstable makes it effectively unusable; you can’t write software against a moving target unless you want to live with the fact that if you stop maintaining it, it will stop working. How is this beneficial?

                                                                                                                                  Having packages which themselves include hundreds of other packages makes things impossible to audit, makes security vulnerabilities more difficult to fix and makes maintenance more difficult.

                                                                                                                                  Are you happier with the above elaborations? I really wasn’t expecting to have to explain why these are horrible software practices.

                                                                                                                                  1. 1

                                                                                                                                    I understand and agree that a high rate of breaking changes is horrible for consumers like you, no need to elaborate there :)

                                                                                                                                    It’s just not true to say it’s horrible period, that it’s a horrible software practice in general. Consumers like you aren’t the only members of the software ecosystem; for many stakeholders, breaking changes provide large benefits. They count, too.

                                                                                                                                    I’m guessing you’re going to disagree with that, and say that what I’m describing as beneficial is actually just laziness or something. But I don’t think this is true, and I’m confident it’s not a productive line of reasoning. Regardless of whether it’s due to rational actions or character flaws or anything else, breaking changes are the reality of software development in the large. Human beings can’t write C without memory bugs, and we can’t do software over time without breaking changes. Just the reality :)

                                                                                                                                    1. 1

                                                                                                                                      for many stakeholders, breaking changes provide large benefits

                                                                                                                                      We never really finished addressing this.

                                                                                                                                      Like I said, I am of the opinion that breaking changes are not a given even when you’re trying to make things better. There are many ways to make an API cleaner for example without deprecating the old API immediately.
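
For example (a minimal sketch in Go with invented names, not taken from any real library): a cleaner function can be added alongside the old one, and the old one kept as a deprecated wrapper, so no existing caller breaks:

```go
package main

import "fmt"

// ParseArgs is the hypothetical new, cleaner API: it reports malformed
// input via an error instead of silently ignoring it.
func ParseArgs(args []string) (map[string]string, error) {
	if len(args)%2 != 0 {
		return nil, fmt.Errorf("flag %q is missing a value", args[len(args)-1])
	}
	flags := make(map[string]string)
	for i := 0; i < len(args); i += 2 {
		flags[args[i]] = args[i+1]
	}
	return flags, nil
}

// Parse is the old API. Instead of being deleted in a breaking release,
// it stays as a thin wrapper over ParseArgs.
//
// Deprecated: use ParseArgs instead.
func Parse(args []string) map[string]string {
	flags, _ := ParseArgs(args)
	return flags
}

func main() {
	// Old callers keep compiling unchanged; new callers get error handling.
	fmt.Println(Parse([]string{"-name", "x"})["-name"])
}
```

The old entry point can then be removed in some distant major version, or never, without forcing anyone to rewrite code on the library’s schedule.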

                                                                                                                                      Human beings can’t write C without memory bugs, and we can’t do software over time without breaking changes. Just the reality :)

                                                                                                                                      We have lots of enormous software projects which maintain backwards compatibility for decades.

                                                                                                    2. 2

                                                                                                      I want to revisit two things here…

                                                                                                      I don’t care about private spaces.

                                                                                                      That’s fine! If you’re not interested in this category of software personally then that’s totally groovy. But “private spaces” hold the overwhelming majority of software produced and consumed in the world. Package management tooling for a general-purpose programming language which doesn’t prioritize the needs of these users doesn’t serve its purpose.

                                                                                                      if your project is popular and you break stability expectations

                                                                                                      When you say “stability expectations” do you mean a consumer’s expectation that I won’t violate the rules of semver? Or an expectation that I won’t hardly ever increment my major version number? Or something else?

                                                                                                      As an example, go-github is currently on major version 41 — is this a problem?

                                                                                                      1. 1

                                                                                                        Okay, sorry for the long gap in replies, life became a bit busy before and around Christmas.

                                                                                                        That’s fine! If you’re not interested in this category of software personally then that’s totally groovy. But “private spaces” hold the overwhelming majority of software produced and consumed in the world. Package management tooling for a general-purpose programming language which doesn’t prioritize the needs of these users doesn’t serve its purpose.

                                                                                                        Let me rephrase. I don’t see how an open source software ecosystem in which my concept of stability expectations is the de facto norm, and in which software tooling accommodates those expectations (i.e., project metadata which allows specifying semver-based version requirements, dependency resolution tools which can make semver-based choices, and package managers which can do the same), would in any way negatively impact or prevent private organizations from doing whatever they want (including pinning versions) in any way they want.

                                                                                                        When you say “stability expectations” do you mean a consumer’s expectation that I won’t violate the rules of semver? Or an expectation that I won’t hardly ever increment my major version number? Or something else?

                                                                                                        No, I mean whatever expectations you set out in your project. I won’t depend on a project unless I can see, just from reputation, that the author follows semver, doesn’t break the expectations of semver, and doesn’t bump the major version very regularly for changes which could quite easily have been made in a non-breaking way. Also, for software at v2 and up, I look to see whether the prior versions have some explicitly documented information about long term support, at least in the form “security patches and major bug fixes will be backported for X time”.

                                                                                                        As an example, go-github is currently on major version 41 — is this a problem?

                                                                                                        Yes, because in the long term, if there’s security or general bugs in the library, when updating I would likely have to make changes to the rest of my software and keep re-learning random things about the library. Although I welcome improvements and will happily change my software to make use of new improved APIs, I don’t feel like it’s something I should have to worry about doing every time if I don’t want to. At the end of the day it’s also NOT a problem, as this project is clearly advertising that people like me should NOT use it.

                                                                                                        1. 1

                                                                                                          I don’t see how . . .

                                                                                                          I think we’re in broad agreement, actually: I agree that all packages should strictly follow semver, and that tooling can and should leverage those versions as appropriate. The point of contention is around the rate-of-change of major versions.

                                                                                                          Two facts: (1) a breaking change isn’t well-defined, and could include any change to the package whatsoever if you define API compatibility in terms of Hyrum’s Law; (2) the cost of a breaking change is different from project to project.

                                                                                                          As a consumer you can of course decide what rate-of-change you’re comfortable with. But that’s the point: what constitutes “good” or “bad” rates of change is a subjective decision, not an objective truth. Tooling cannot make that decision for you.

                                                                                                          In short,

                                                                                                          go-github is currently on major version 41 — is this a problem?

                                                                                                          Yes

                                                                                                          For you, okay! Sure! But not for everyone. It’s not a problem for me.

                                                                                                          1. 1

                                                                                                            For you, okay! Sure! But not for everyone. It’s not a problem for me.

                                                                                                            Why not? Genuinely, why is it not a problem for you? How do you justify the wasted time dealing with breaking changes to yourself?

                                                                                                            1. 1

                                                                                                              I don’t waste time with breaking changes. I don’t upgrade dependencies unless there is a specific need, and there is almost never a specific need. I think in the last 10 years, I could count on one hand the number of times I’ve upgraded my project deps for any reason other than I needed a new feature. And when I do upgrade my dependencies, I fully expect that it will require changes in my code, even on patch updates.

                                                                                                              I don’t use dependencies that are subject to security vulnerabilities. I don’t use deps that require constant maintenance to remain functional. (shrug)

                                                                                                              1. 1

                                                                                                                I don’t use dependencies that are subject to security vulnerabilities.

                                                                                                                That’s great but unless you’re only writing single player games then it seems incredibly difficult to be in the ideal situation you describe.

                                                                                                                1. 1

                                                                                                                  For context, I’ve been writing Go for the last ~forever years, which has a comprehensive stdlib and discourages imports kind of philosophically. As a result, in that ecosystem, it’s actually super easy. My projects tend to have on the order of 10 dependencies, and they’re all pretty narrow in scope: something for ULIDs, something to handle flags better, etc.

                                                                                                                  But I’ve just started writing Rust in anger, and it’s definitely made me more sympathetic to the problem. Rust has a small stdlib as an explicit design goal, and that philosophy seems to be transitive throughout the ecosystem. It seems like it’s basically not possible to accomplish anything without a whole slurry of third-party crates, which import their own huge set of crates, and so on. (Most of which are written by single individuals in their spare time, and many of which are abandoned — but that’s a separate discussion!)

                                                                                      2. 2

                                                                                        Bollocks. A minor breaking change is “we reversed the order of parameters on this method to match the other methods”. A major breaking change is “we switched from callbacks to promises”.

                                                                                        1. 2

                                                                                          A switch from callbacks to promises that comes without a compatibility interface should be reflected in the library name, not just the version. It’s not even the same library anymore if literally every line of code that is using the old version must be rewritten to use it again.
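
A compatibility interface can be quite small. A hedged Go analogue (invented names; Go has no promises, so direct-return vs. callback stands in for the promise vs. callback split): the old calling convention survives as a thin shim over the new one:

```go
package main

import "fmt"

// Fetch is the new API: it returns the result and error directly.
func Fetch(url string) (string, error) {
	return "body of " + url, nil
}

// FetchCB preserves the old callback-style API as a shim over Fetch,
// so code written against the old interface keeps working unmodified.
func FetchCB(url string, cb func(body string, err error)) {
	cb(Fetch(url))
}

func main() {
	FetchCB("example.com", func(body string, err error) {
		fmt.Println(body)
	})
}
```

With a shim like this, the migration is opt-in: every line of old code keeps working, which is exactly the difference between "new major version" and "effectively a new library".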

                                                                                          1. 1

                                                                                            I agree, but we’re in the minority, despite Rich Hickey’s efforts.

                                                                                      1. 7

                                                                                        Hey, that’s actually not a bad tip (I’m not 100% sure it’s worthy of its own post, but it’s definitely not worth flagging). My main concern is:

                                                                                        None of the viruses in npm are able to run on my host when I do things this way.

                                                                                        This is assuming a lot about the security of docker. Obviously, it’s better than running npm as the same user that has access to your SSH/Azure/AWS/GCP/PGP keys / X11 socket, but docker security isn’t 100%, and I wouldn’t rely on it fully. At the end of the day, you’re still running untrusted code; containers aren’t a panacea, and the simplest misconfiguration can render privilege escalation trivial.

                                                                                        1. 3

                                                                                          the simplest misconfiguration can render privilege escalation trivial.

                                                                                          I’m a bit curious which configuration that’d be?

                                                                                          1. 2

                                                                                            not OP, but “--privileged” would do it. or many of the “--cap-add” options

                                                                                            1. 1

Not 100% sure here, but lots of containers are configured to run as root, and file permissions are just bits on your disk, right? So a root container basically lets you take control of any mounted volume and do whatever you want with it.

This is of course only relevant to the mounted volumes in that case, though.

I think there’s also a lot of advice in dockerland that sits in the unfortunate intersection of being easier than all the alternatives yet very insecure (most ways to connect to a GitHub private repo from within a container involve exposing your private keys in some form).
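To make the thread concrete, here’s a minimal sketch of a more locked-down invocation. It’s not a complete hardening recipe, and the image name and project layout are placeholders: the idea is just to avoid root inside the container, drop capabilities, forbid privilege re-escalation, and mount only the project directory rather than your whole home.

```shell
# Hedged sketch, not a hardening guarantee: run `npm install` in a
# throwaway container as the calling user (not root), with all Linux
# capabilities dropped and setuid-style re-escalation disabled,
# mounting only the current project directory.
docker run --rm \
  --user "$(id -u):$(id -g)" \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  -v "$PWD":/app -w /app \
  node:20 npm install
```

Even with all of that, the caveat above still applies: you’re running untrusted code, just inside a smaller blast radius.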

                                                                                            2. 1

                                                                                              This is assuming a lot of the security of docker

                                                                                              Which has IMO a good track record. Are there actually documented large scale exploits of privilege escalation from a container in this context? Or at all?

                                                                                              Unless you’re doing stupid stuff I don’t think there’s a serious threat with using Docker for this use case.

                                                                                            1. 3

                                                                                              Looks like custom query is a new attack surface

                                                                                              1. 3

                                                                                                How so? Isn’t this proposal just a way to standardize what we already do in some other ways?

                                                                                                1. 1

                                                                                                  There are two major issues I can see:

                                                                                                  1. developers will need to consider this continuously, and they already have issues ensuring that all verbs’ paths transit security controls
                                                                                                  2. compensating/mitigating controls at the edge will need to consider this, and they’re notoriously bad at keeping up to date with standards.

The first is relatively straightforward: developers will now need to track a new verb and make sure all authentication, authorization, &c. controls are applied to it. For frameworks that tend to have a single object/method per URL/route (endpoint), this is easy: you can have a single entry point for the endpoint and just detect whether the new verb is being used to access it. For frameworks that separate out endpoints, this means a new method that needs all controls applied, tested, &c. It’s not a huge deal, and not too different from our current pain, but it’s an edge case I often see trip up developers.

The second is messier; generally, depending on your devices, servers, security software, &c. you can get all sorts of odd interactions between these devices and the destination applications. For example, years ago many large vendors had very strict rules for SQLi and XSS. So how would we bypass them? Well, we’d just change GET or POST to Foo (or the like). At the time, several application servers used heuristics to decide how a verb should be interpreted: if the request had a body, it was a POST; if not, it was a GET. But how did this bypass edge controls? Well, they also had heuristics: if they could determine what the verb was, they would apply rules; otherwise they would just pass the request along to the downstream application server. If your application server relied on the edge controls to block SQLi or XSS, you were in trouble. We’ve gotten much better about these sorts of things, but they can still lag significantly (HTTP/2 controls come to mind, or SCTP as a TCP replacement for data exfiltration, because many systems simply pass those along).
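To illustrate the shape of that old bypass (all names here are made up, and you should only ever send this to a server you control): the request line simply uses a non-standard verb, and every device in the chain then has to decide for itself what to do with it.

```shell
# Craft a raw HTTP request with a made-up verb. Whether an edge device
# applies its GET/POST rules to this, and how the application server
# interprets it, is exactly the heuristic mismatch described above.
printf 'FOO /search HTTP/1.1\r\nHost: test.local\r\nContent-Length: 9\r\n\r\nq=payload' > request.txt
# Send it only to a server you control, e.g.: nc test.local 80 < request.txt
```

A QUERY-style verb with a body sits right in the gap those heuristics used to mishandle, which is why edge controls need explicit support for it.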

                                                                                              1. 2

                                                                                                That actually seems great. Does anybody see any drawback (besides the overhead of starting a subshell) with using this tip?

                                                                                                1. 9

Forks are slow, so starting a subshell is not an insignificant cost. It also makes it impossible to return values besides an exit status back from a function.
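A quick bash sketch of that limitation: assignments inside a subshell function die with the subshell, so data has to come back via stdout (or the exit status).

```shell
# A function body in ( ) runs in a subshell: its variables vanish on return.
set_result() ( result=42 )
result=unset
set_result
echo "$result"            # still "unset"

# Passing data back through stdout works:
get_result() ( echo 42 )
result=$(get_result)
echo "$result"            # 42
```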

                                                                                                  Zsh has “private” variables which are lexically scoped. ksh93 also switched to lexical scoping instead of dynamic scoping but note that in ksh, you need to use function name { syntax instead of name() { to get local variables.
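For contrast, here is what dynamic scoping looks like in bash (this is what zsh’s private and ksh93’s lexical scoping avoid): a callee can see, and even modify, its caller’s locals.

```shell
# In bash, `local` is dynamically scoped: inner() resolves x at call
# time and finds outer()'s local rather than the global.
inner() { echo "inner sees x=$x"; x=changed; }
outer() { local x=from_outer; inner; echo "after inner, x=$x"; }
x=global
outer                      # inner sees x=from_outer / after inner, x=changed
echo "global x=$x"         # global x=global
```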

                                                                                                  1. 9

                                                                                                    Also, in zsh you can just use always to solve the problem in the article:

                                                                                                    {
                                                                                                         foo
                                                                                                    } always {
                                                                                                         cleanup stuff
                                                                                                    }
                                                                                                    
                                                                                                    1. 3

                                                                                                      Every time I learn a new thing about zsh, I’m struck by how practical the feature is and how amazing it is that I didn’t know about said feature the past dozen times I really, really needed it. I looked around the internet for documentation of this, and I found:

                                                                                                  2. 2

                                                                                                    A guy on the orange site timed subshell functions to take roughly twice as long.
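That’s easy to sanity-check yourself. A rough bash micro-benchmark (exact numbers will vary by machine; the gap comes from fork()):

```shell
# Compare a no-op regular function against a no-op subshell function.
plain() { :; }
sub() ( : )

time for i in $(seq 1 1000); do plain; done
time for i in $(seq 1 1000); do sub; done
# The subshell loop pays for one fork per call, so expect it to be
# noticeably slower; the exact ratio depends on the system.
```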

                                                                                                  1. 3

                                                                                                    I just tried Parcel 2 on one of my side projects already using Parcel 1 and it’s definitely an improvement. I tried it multiple times during the beta but couldn’t get it to work (issues with d3.js) so I’m glad they managed to fix it. The bundled js it produces is now half what it was before thanks to tree shaking.

What I love is that Parcel 2 allowed me to actually remove some of the (already almost non-existent) config that I had with Parcel 1, and I finally got rid of Babel. So the zero-config promise of Parcel 1 still stands with Parcel 2!

                                                                                                    1. 104

                                                                                                      I’m not a big fan of pure black backgrounds, it feels a bit too « high contrast mode » instead of « dark mode ». I think a very dark gray would feel better to the eye. n=1 though, that’s just a personal feeling.

                                                                                                      Thanks for the theme, it’s still great!

                                                                                                      1. 29

                                                                                                        Agreed, background-color: #222 is better than #000.

                                                                                                        1. 15

                                                                                                          I’ll just put my +1 here. The pure black background with white text isn’t much better than the opposite to me (bright room, regular old monitor). I’ve been using a userstyle called “Neo Dark Lobsters” that overall ain’t perfect, but is background: #222, and I’ll probably continue to use it.

                                                                                                          On my OLED phone, pure black probably looks great, but that’s the last place I’d use lobste.rs, personally.

                                                                                                          1. 18

                                                                                                            Well, while we’re bikeshedding: I do like true black (especially because I have machines with OLED displays, but it’s also a nice non-decision, the best kind of design decision), but the white foreground here is a bit too intense for my taste. I’m no designer, but I think it’s pretty standard to use significantly lower contrast foregrounds for light on dark to reduce the intensity. It’s a bit too eye-burney otherwise.

                                                                                                            1. 7

                                                                                                              You have put your finger on something I’ve seen a few times in this thread: The contrast between the black background and the lightest body text is too high. Some users’ wishes to lighten the background are about that, and others’ are about making the site look like other dark mode windows which do not use pure black, and therefore look at home on the same screen at the same time. (Both are valid.)

                                                                                                              1. 10

For me pure white on pure black is an accessibility nightmare: the high contrast triggers my dyslexia, the text starts to jump around, and that induces migraines.

As I default to dark themes system-wide and couldn’t find a way to override the detected theme, this site is basically unusable for me right now. Usually in these cases I just close the tab and never come back; for this site I decided to type this comment before doing that. Maybe some style change happens, a manual override is implemented, or maybe I care enough to set up a user stylesheet… but otherwise my visits will stop.

                                                                                                                1. 1

                                                                                                                  No need to be so radical, you still have several options. Not sure what browser you’re using, but Stylus is available for Chrome/FF:

                                                                                                                  https://addons.mozilla.org/en-US/firefox/addon/styl-us/

It lets you override the stylesheet for any website with just a few clicks (and a few CSS declarations ;))
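For example, a tiny userstyle along these lines (the selectors and colors are guesses; inspect the page for the real ones) would swap the pure black for a dark gray and soften the text:

```css
/* Hypothetical Stylus rule for lobste.rs: dark gray instead of pure
   black, with slightly dimmed text to cut the contrast. */
body {
  background-color: #222 !important;
  color: #ccc !important;
}
```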

                                                                                                                  1. 9

                                                                                                                    I don’t mind the comment. There’s a difference between being radical because of a preference and having an earnest need. Access shouldn’t require certain people to go out of their way on a per-website basis.

                                                                                                                    1. 6

                                                                                                                      It’s not radical, it’s an accessibility problem.

                                                                                                              2. 8

                                                                                                                That’s great, thank you.

                                                                                                                I wonder if I am an outlier in using the site on my phone at night frequently. Alternatively, maybe we could keep the black background only for the mobile style, where it’s more common to have an OLED screen and no other light sources in your environment.

                                                                                                                1. 2

                                                                                                                  I don’t use my phone much, especially not for reading long-form content, so I wouldn’t be surprised if I was the outlier. That sounds like a reasonable solution, but it’s not going to affect me (since I can keep using a userstyle), so I won’t push either way. I will +1 the lower-contrast comments that others have posted, if it remains #000 though - the blue links are intense.

                                                                                                                  1. 1

                                                                                                                    The blue link color brightness is a point that not many have made. I think the reason I didn’t expect it is that I usually use Night Shift on my devices, which makes blue light less harsh at night. Do you think we should aim to solve this problem regardless of whether users apply nighttime color adjustment? Another way to ask this question: What do you think about dark mode blue links in the daytime?

                                                                                                                    1. 2

                                                                                                                      Sorry if I’m misunderstanding, but to clarify, my above comment is in a bright room; I try to avoid looking at screens in dim light/darkness. The blue links just look kind of dark, and intensely blue. Just a wee reduction in saturation or something makes it easier to read.

Thanks for your work on this btw. I looked into contributing something a while back, but was put off after it looked like the previous attempt stalled out from disagreement. I’d take this over the bright white any day (and it turns out this really is nice on my phone, dark blue links notwithstanding). The CSS variables also make it relatively easy for anyone here to make their own tweaks with a userstyle.

                                                                                                                      I feel like I’ve taken up enough space complaining here, so I’ll leave a couple nitpicks then take my leave: the author name colour is a little dark (similar to links, it’s dark blue on black), and the byline could do with a brightness bump to make it more readable, especially when next to bright white comment text.

                                                                                                                      1. 1

                                                                                                                        I appreciate the clarification and other details :)

                                                                                                                  2. 1

                                                                                                                    My laptop is OLED and I’d still appreciate #000 there

                                                                                                                    1. 1

                                                                                                                      +1 to separate mobile style.

                                                                                                                  3. 4

                                                                                                                    I strongly agree.

                                                                                                                    I can’t put my finger on why, but I find very dark gray easier.

                                                                                                                    1. 1

                                                                                                                      #222 is way better! thank you

                                                                                                                    2. 14

                                                                                                                      I strongly disagree, and this black background looks and feels great to me! No one can ever seem to agree on the exact shade or hue of grey in their dark themes, so if you have the general UI setting enabled, you end up with a mishmash of neutral, cooler, hotter, and brighter greys that don’t look cohesive at all. But black is always black!

                                                                                                                      For lower contrast, I have my text color set to #ccc in the themes I have written.

                                                                                                                      1. 6

                                                                                                                        Another user pointed out that pure black is pretty rare in practice, which makes this site stand out in an environment with other dark mode apps:

                                                                                                                        Here’s a desktop screenshot with lobste.rs visible - notice that it’s the only black background on the screen.

                                                                                                                        Does that affect your opinion like it did mine? I do see value in pure black, but suppose we treated the too-high-contrast complaint as a separate issue: Darkening the text could make the browser window seem too dim among the other apps.

                                                                                                                        1. 3

                                                                                                                          I prefer the black even in that scenario. The contrast makes it easier to read imo.

                                                                                                                          1. 2

Not at all. If it gets swapped out for grey I will simply go back to my custom CSS, which I have used to black out most of the sites I visit, so no hard feelings.

                                                                                                                        2. 8

                                                                                                                          Feedback is most welcome! Would you please include the type of screen you’re using (OLED phone, TFT laptop…) and the lighting environment you’re in (dark room, daytime indoors with a window, etc.)? And do you feel differently in different contexts?

                                                                                                                          I’ve got some comments about how I selected the colors in the PR, if that helps anyone think through what they would prefer.

                                                                                                                          1. 4

                                                                                                                            Sure! I’m on my iPhone 12 so OLED phone. I tried in with dimmed lights and in the dark, but in both cases I think I’d prefer a lighter background color.

                                                                                                                          2. 7

I disagree. Black is black. These off-gray variants just look dirty and wrong to me.

                                                                                                                            I love this theme.