Threads for roryokane

    1. 2

      What next? A complete spritemap for Mario on NES? If the Unicode Consortium accepts this they, in my view, will have completely lost their minds.

      1. 13

        If these were actually used in text streams, I think they pretty obviously fall within Unicode’s mission. NES sprites never did, as far as I know.

        1. 2

          Also, many of them seem useful for modern applications too.

      2. 11

        One of the principles of unicode is that it should be possible to re-encode old documents from legacy character sets to unicode, so this seems entirely sensible to me.

      3. 6

        The proposal addresses this concern on page 5’s section “8. Finiteness”:

        We have received concerns that there may be no end to the number of unencoded characters found in old microcomputers and terminals, leading to no end of future proposals should these characters be accepted. We believe this is not the case, for the following reasons.

    2. 6

      It’s an interesting idea, but if you have “Unavailable commands:” lying there without any context, that’s not exactly good usability because it’s impossible to discover the state machine behind it without trying it out.

      The shell has poor discoverability as it is, which means many people coming to the command will have read some documentation about it. Still, from a pure UX perspective, it’s bad practice to put people into a state without any affordance for how to get to a different state.

      1. 6

        if you have “Unavailable commands:” lying there without any context […] it’s impossible to discover the state machine behind it without trying it out.

        Who said you can’t have the reason why the command is unavailable printed right next to it?
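
        For example, nothing stops the --help output from printing the reason right next to each entry. A purely made-up sketch (the command names are invented, not from the article):

        Available commands:
          status     show the current project state
          init       set up a new project

        Unavailable commands:
          deploy     requires init to have been run first
          rollback   requires at least one completed deploy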

        1. 3

          Well, that’s the least you should be doing here.

      2. 5

        Since your comment is overall negative in tone, I want to emphasize that moving some commands to “Unavailable commands:” without further explanation still makes for more usable --help output than otherwise. If it were all you had time to implement, it would be better than the common practice of listing available and unavailable commands together indiscriminately.

        But yes, the problem you describe of being unsure why a command is unavailable does imply the possibility of further improvements to --help output.

      3. 3

        I was thinking the same thing… I mean, you could dump a state-transition diagram/dependency graph in --help output, but I’m not sure if that’s… helpful enough. ;-) One does not usually see such a thing even in more detailed documentation like man pages.

        Similarly, even his guiding examples of fd/rg have pros & cons. Users must learn that honoring .gitignore is the default. In the case of rg this is stated right at the top of a 1000-line --help output (which also has pros & cons: it’s right at the top, but the output is so long it’s easy to miss; inverse text or color embellishment might help, such as user-config-driven color choice for markup, which is a different kind of “missed opportunity based on dynamic state”).

        In the case of fd, since finds, unlike greps, mostly operate on file names/FS metadata rather than file contents, it’s a more dubious and/or surprising default. (So, while hex-coded, hash-named .pack files might make little sense to search for, a substring of “HEAD”, permission problems inside .git, or executable files explicitly gitignored might make a lot of sense.) I say this bit only really as a teaser for how much this kind of context-driven behavior could be ripe for abuse, leading to the issues you highlight from a use-case-variability perspective.

        1. 1

          I’m looking forward to seeing the first command line tool sprinkle AI in there to try to learn what it is that you actually want to do. (Please no!)

      4. 2

        Yeah, as is, I agree information is missing (I mention that in hovernote 6).

        1. 3

          FWIW, I would say, as a document usability feature, that hovernote-only text is tricky. I didn’t even realize there was hidden text to hover over until you just said something. You have a lot of good text there - I particularly liked [3].

          1. 2

            Thanks! Yeah, maybe Bringhurst-style margin notes would be better. Or at least also listing hovernotes at the end of the text.

    3. 19

      Say it with me now, “Just use AGPL.”

      Just use AGPL.

      1. 17

        Yes the AGPL would solve their stated problem, but, not their actual problem. People using licenses like these don’t want their work released as free software; they want it to have that appearance.

        Freewashing, perhaps, after greenwashing?

        1. 14

          It doesn’t solve the stated problem. They don’t want competition. AGPL doesn’t prevent competition, the big cloud players have learned to accept the AGPL. So if you’re a SaaS play releasing a project, there’s nothing about the AGPL that prevents AWS from offering a version and using their scale to underprice and undercut the original company.

          Please don’t read that as a defense of a non-free license. But the fact is AGPL isn’t a silver bullet for the problem as they see it.

        2. 3

          I absolutely agree with you. The virtue signaling of “open source” is good for business.

          The problem, I think, is that projects turn into companies with use and adoption. Like, it remains to be seen that “start a company with a BUSL license” can succeed, right? So we have to normalize on the solution early. Use, and normalize AGPL so that when the thing becomes successful any company that provides support and a “product” around it doesn’t have exclusive right to the collective effort.

          There are ways to make proprietary tools to manage AGPL installations of things, and many other options for business ventures. Distributed changes to AGPL software doesn’t have to hurt a company’s chances at making money.

        3. 1

          Is this what companies do that, say, release software as BSD, but on their website say something to the effect of “if your company makes more than $1 million p.a. you have to buy the professional license”?

        1. 3

          Have you read AGPL? Section 13?

          Yes.

          I am aware of Mongo’s use of the AGPL and then them dropping it. The OSI doesn’t recognize SSPL as an open source license. I am not sure what you are trying to argue?

          Many other companies have also started with permissive, open source licenses and then decided protecting their business was more important than user freedom. Again, this is done after taking full advantage of all the benefits of being an open source company. News at 11.

          1. 5

            “Just use AGPL” doesn’t actually work for that many projects anymore. And AGPL was never well written or well understood.

            There are definitely projects out there under AGPL. But over time the original uses were far outnumbered by defensive business uses. Now that the compliance game has changed, many of those are moving on. The FSF itself ran around talking down services in general. I don’t know who AGPL is for.

            1. 7

              “Just use AGPL” doesn’t actually work for that many projects anymore. And AGPL was never well written or well understood.

              It should work fine for “projects” but you probably mean “businesses”? I want a world where a project can’t turn into a business and relicense unfavorably for users.

            2. 2

              “Just use AGPL” doesn’t actually work for that many projects anymore.

              Rather, “being FOSS” doesn’t work for them. I mean, if you ask me, AGPL isn’t FOSS either, but if a company can’t do what they do with AGPL, their business model is fundamentally incompatible with FOSS.

            3. 2

              AGPL is for Free Software whose authors want to minimize corporate usage. If AGPL isn’t suitable for that purpose, then we will identify other principles which are toxic to corporations but amenable to Free Software, and write new licenses; but AGPL doesn’t seem to actually have concrete shortcomings.

              1. 4

                The FSF has repeatedly and publicly insisted that AGPL is neither anti-commercial nor anti-corporate. Rather, they say, it’s against non-Free software. Meanwhile, to my eye, and at least until Mongo switched, the predominant group of AGPL adopters was small software companies, nearly all of whom encouraged maximum adoption by corporate users in their applications and infra.

                Mongo’s posts to the OSI list that I linked above identified a very concrete shortcoming: AGPLv3, like GPLv3, arguably does not require sharing and licensing source code alike when composing with AGPLv3 code as services, rather than copying code into modules, linking, or reusing as a module on the same real or virtualized machine. For Mongo and other software companies, who overtly are against some corporations—their potential competitors—those shortcomings have been concrete enough to abandon AGPL.

                Seems to me the natural principle amenable to Free Software but not to non-Free software corporations would be to require sharing and licensing more source code in more situations. In other words, strengthening copyleft. Licenses I’ve written that go there haven’t been welcomed, supposedly for going too far or for encouraging copyright holders to use too much of their power.

      2. 4

        If your problem is that people are using and contributions to your software, both APGL and BSL will solve the problem for you.

        1. 2

          I think there’s a typo which is making it hard to understand your comment.

          There’s basically no legal protection for external contributors of BUSL software. (Your use case might be deemed competitive and therefore not licensed) As opposed to the AGPL, where effectively, everyone has the same equal rights.

          I know where I’d rather spend my time.

          1. 3

            There’s no typo. Both AGPL and BSL have clauses that make cursing and contributing to projects using the, too high risk for anyone who has had to ask a lawyer’s opinion before doing either. If you are happy agreeing to a complex legal document without consulting a lawyer, have a much higher risk tolerance than most companies.

            1. 5

              I, too, was confused by your previous comment, specifically its phrase “people are using and contributions to your software”. I think that is a typo. After a few rereads, my guess is that you meant “people are using and contributing to your software”.

              Your most recent comment has even more typos (characteristic of a mobile device). If you’d like to edit your comment within its edit window, here are the fixes I would apply:

              Both AGPL and BSL have clauses that make using and contributing to projects using them too high risk for anyone who has had to ask a lawyer’s opinion before doing either. If you are happy agreeing to a complex legal document without consulting a lawyer, you have a much higher risk tolerance than most companies.

              1. 3

                Ouch. Apparently I’m very bad at proof reading the iPad’s autocorrect results. It seems to have become worse recently. Thank you for the fixes, they are correct, but it’s past the time when I can edit the posts.

    4. 16

      I find it incredibly disrespectful that companies like this are still referring to the BUSL (Business Source License) as BSL (or here, BSL/BUSL) despite the Boost Software License’s long, fruitful history. These corporations can’t even take the first step towards respecting the Free Software ecosystem and properly refer to licenses - there is a “culture war” in software licensing that the corps will lose, because their only culture is producing profit.

      1. 6

        Citations:

        • SPDX identifier BSL-1.0 for the Boost Software License 1.0, which has been OSI-approved since 2008
        • SPDX identifier BUSL-1.1 for the Business Source License 1.1, whose license text is copyrighted 2017
        • searching for “BSL license” with Google Search or DuckDuckGo, all software-license-related results on the first page are about the Business Source License (BUSL)
    5. 58

      Sigh.

      Yet another non-free license announcement designed to build a feel-good case around a fundamental contradiction: you can’t have both free software and non-competes. They literally cannot coexist.

      If companies don’t want to release their software as free software, fine, that’s their choice. But to dress up non-free software as free, and claim you’re addressing the tragedy of the commons problem in the process, is just deceitful.

      1. 14

        you can’t have both free software and non-competes. They literally cannot coexist.

        I agree, and from a corporate perspective the big benefit of open source is an automatic second source. If I buy a F/OSS solution and your terms for the next version (or the features in the next version) are unacceptable, then I can go elsewhere. This is what makes the BSL pretty business hostile. A second source is, by definition, at least a few years behind the main branch. Adding back vendor lock-in to open source does not make it more business friendly.

        As others in this thread have said, the broad scope of the non-compete in the BSL also puts it on the ‘never touch this’ pile for most corporate lawyers.

        I can just about get behind the idea that it’s a good idea to have a level playing field for second sources. There might be some kind of business copyleft that’s interesting, such that people selling value-add services must pay a foundation that owns a piece of software an annual fee that is offset by contributions to the code (e.g. any company paying 10 engineers to work on the open codebase full time doesn’t pay), but I can’t see how to make something like that actually work.

        1. 4

          the broad scope of the non-compete in the BSL

          BUSL’s relation to non-compete terms

          The Business Source License actually does not contain any non-compete terms. Rather, it is a parametrized license where one of the parameters is an Additional Use Grant permitting limited production use (non-production use is unlimited). MariaDB, the creator of the BUSL, uses the BUSL to license MaxScale with only this Additional Use Grant:

          You may use the Licensed Work when your application uses the Licensed Work with a total of less than three server instances in production.

          There is no non-compete requirement there. I presume that companies would not consider it economical to compete with whatever MaxScale services MariaDB offers using only two instances, but they could try if they wanted to.

          Hashicorp, on the other hand, offers this version of the BUSL. Its much longer Additional Use Grant, which links to a binding licensing FAQ, starts with this sentence:

          You may make production use of the Licensed Work, provided Your use does not include offering the Licensed Work to third parties on a hosted or embedded basis in order to compete with HashiCorp’s paid version(s) of the Licensed Work.

          This competition-focused Additional Use Grant led to the forking of the last open source version of Hashicorp’s Terraform software as OpenTofu, as explained in the section “Why was OpenTofu created?” on OpenTofu’s website.

          The Functional Source License’s non-compete

          The Functional Source License has its own way of forbidding competition. Its Permitted Purpose section starts like this:

          A Permitted Purpose is any purpose other than a Competing Use. A Competing Use means use of the Software in or for a commercial product or service that competes with the Software or any other product or service we offer using the Software as of the date we make the Software available.

          I wonder whether OpenTofu’s complaints would apply to the Functional Source License.

          1. 1

            I wonder whether OpenTofu’s complaints would apply to the Functional Source License.

            Wouldn’t this clause make it potentially even worse?

            in or for a commercial product or service that competes

            At least BUSL explicitly carves out internal use as allowed, but that makes it sound like if you just used a tool that in some way contributed to your product you’d potentially be competing.

            1. 2

              A later paragraph explicitly carves out internal use as allowed, so the Functional Source License is not more restrictive than the BUSL in that sense:

              Permitted Purposes specifically include using the Software:

              1. for your internal use and access;
        2. 4

          How is a second source automatic? There are plenty of useful projects out there under totally permissive licenses where the first source breaks down and ceases maintenance. No license binds anyone, or any company, to volunteer as maintainer in perpetuity. Just opening the project up doesn’t spontaneously force users to contribute and self-organize forever. Most would prefer to free-ride—to look for “sources”, the more they can get for $0, the better.

          Company-driven projects suffer neither the no-source-at-all problem nor the user-collective-action problem…so long as their companies stay afloat. Staying afloat motivates the license maneuvers. They’re protecting themselves as first source by blocking potential seconds. Users would of course prefer total commodification and abundant investment by everyone else with no strings attached, but they’re only begging, not choosing. Unless they step up as contributors and coordinators for a credible real or threatened fork.

          As for corporate lawyers being allergic to BSL—or any other scheduled relicensing scheme—the reaction isn’t “never touch”. Once the new license kicks in for a release, the BSL itself no longer matters. You can bring that version in under just the new terms. The lawyers may have already approved those terms, one-off or by policy.

          1. 4

            How is a second source automatic?

            You can always pay engineers to maintain the code and there are a lot of consulting companies that will happily take your money.

            It isn’t always cheap. It may cost more to fund a second source than it costs to just put up with the changes that upstream is making. It may cost less in engineering if you work with other consumers with the same requirements as you to spread the cost.

      2. 12

        I’m glad more businesses are releasing their source code so I can fix it when I need to. Sure, it’s not as good as free software, but boy is it better than ye olde binary blob from sourceforge or elsewhere

      3. 5

        Isn’t AGPL the “perfect” (for the owners, not for anyone else at all) “free” software with a built-in non-compete for the owners?

        Let’s say ACME Inc has an AGPL project called RoadRunner. Another company wants to run a service based on RoadRunner with some improvements. They have to make those improvements available under the AGPL too, or they’re in breach of it.

        Now let’s say ACME Inc wants to host their own service based on RoadRunner, and they want to include a new feature or some other improvement, for high paying customers. There is no requirement for them to release the improvement, at all.

        1. 7

          Neither GPL nor AGPL is a noncommercial or “don’t compete with the developer” license. They say no such thing, though companies have been tempted to imply they do.

          Lots of companies used GPL with these effects, however, especially back when GPL was the most popular FOSS license and most software was distributed as binary executables.

          Assuming competitors would never deign release code under GPL, “you can use this code, but only if you release your own under GPL” meant “competitors can’t use this code”. Assuming potential business customers would never approve the use of GPL code internally, licensing code under GPL while offering paid, commercial licenses meant “call us about a paid, commercial license” in practice.

          All of these assumptions began to break down, not just for GPL but for AGPL, as well. AWS actually read AGPL and decided they could service-ize Mongo without AGPL’ing their empire. So Mongo wrote and switched to SSPL. Companies started approving use of GPL code by policy, so long as their people weren’t modifying the software. So developers moved to _A_GPL, to get back ahead of the legal departments. Legal departments are catching up again, and FSF isn’t writing a new, stronger AGPL. So we’re seeing more interest in proper noncommercial licenses and small-business licenses—terms that spell out the rule these developers want.

        2. 3

          The owner never has to release improvements, no matter the license. So that isn’t AGPL specific. And if they want to do that, AGPL or not, they cannot accept outside contributions without a CLA.

          Though of course AGPL is needed to make this tactic work with SaaS, and a very fun thing ACME Inc could do is to purposefully design RoadRunner to make AGPL compliance harder. There’s AGPL-licensed software out there that has no way to give the user a source code link, so any 3rd-party code change, no matter how minor, now has to implement that functionality.

          1. 3

            Edit: I misunderstood the question above. Skip this reply, or see below.

            The owner never has to release improvements, no matter the license.

            On AGPLv3, I’d encourage you to have another look at section 13, and the way it differs from section 13 of GPLv3. In FSF-speak, “private changes” aren’t private changes when you make them available as a service.

            For whether licenses more broadly can require release or “contributing back”, have a look at RPL and the “External Deployment” rule of OSL, both of which OSI approved way back when, and more recently Parity.

            1. 3

              This is mostly (completely?) irrelevant for the owner though – they have the copyright, so they can release it under two different licenses. Other users don’t have the copyright, they just have a license to use the software – AGPL, or whatever else it might be. This is how software such as Qt can be dual licensed.

              1. 2

                Ah, I see. The comment above was specifically about what the owner can do. Thank you.

                1. 2

                  Pretty important to note that you can’t do this in an OSS project without some sort of copyright attribution to the owner (such as a CLA), though, otherwise the copyright is partly owned by any contributors. It’s why open source projects, such as VLC recently, have such a hard time changing license (this can be either a feature or a bug of the system, you decide!)

                  1. 1

                    Pretty important to note that you can’t do this in an OSS project without some sort of copyright attribution to the owner

                    Key point here is you can’t do it in an OSS project if the project accepts outside contributions. The whole point here is companies using a “free” license as a marketing tool, not using open source to improve their software.

          2. 1

            The owner never has to release improvements, no matter the license.

            My point was about this aspect of the parent comment:

            designed to build a feel-good case around a fundamental contradiction: you can’t have both free software and non-competes

            The AGPL absolutely allows a company to use a “feel good” (for people who think *GPL are ‘good’) license, prevent anyone else from offering any kind of value-add without also making the source available, while allowing the owning company to keep whatever private/paid additions it wants.

            if they want to do that, AGPL or not, they cannot accept outside contributions without a CLA.

            The moment we’re talking about “feel good license”, I think the premise that the company accepts/wants outside contribution is already a stretch. It’s literally a marketing campaign: “look we embrace open source; even if we close down you can still use this!” etc.

    6. 7
      Usage

      I read the entire thing and I’m not sure what it is doing or what we should be doing.

      The only command that’s in there is: nix flake init -t github:nickel-lang/organist but that’s, I guess, how you set up an Organist project, not how you use it? Then you use it regularly with nix develop?

      Update: I think if you read the README here, it becomes clear: https://github.com/nickel-lang/organist. Still, it’s not really clear whether or how it’ll fill many of my development needs.

      Ncl

      I browsed Nickel documentation previously but still, constructs like this leave me rather mystified:

      services.minio = nix-s%"
          %{organist.import_nix "nixpkgs#minio"}/bin/minio server --address :9000 ./.minio-data
        "%
      

      What is happening here with the %s?

      I’d say in general that Nickel may be a great idea and it looks less off-putting than Nixlang, but it’s still very far off from something a large audience of people can use.

      Recently I saw Garn which is a “Typescript eats the entire world” approach to this problem. I’m also very sceptical of it as an abstraction layer, but the choice of language does look like it could be a winner. It reminds me a bit of CDK/Typescript which is a weird imperative/declarative hybrid alternative to the standard terrible Devops ways of defining infrastructure.

      1. 2

        My impressions as well. I’m not sure if this competes with devenv, devbox, and others, or is some completely different thing. If the former, what does it bring over the other tools?

        1. 3

          Similar thoughts. Even as a Nix user I’m confused about some of the syntax I’m unfamiliar with, and generally about what Organist is trying to be.

          If it’s a layer above Nix flakes dev shell configuration like some of the other projects, it seems like a hard sell: if you can do Nickel… you probably can do Nix already, and introducing an extra layer is neither here nor there. If you go json/yaml it will be dumbed down but easier to consume for non-nixers, and if you go Nix - you are 100% seamless with Nix.

          BTW, I’m casually lurking into Nickel and I’m still confused w.r.t. the level of interoperability with Nix. Nickel presents itself as a Nix-like “configuration language”, which means … it can’t really do some of the things Nix does? Or can it? Can it transpile to Nix or something?

        2. 1

          My take is that yes, it’s competing with those tools, but in a (nearly) native Nix way – “nearly” because it depends on Nickel tooling, but the generated flake pulls that in automatically, so there’s nothing else to install.

          At work I am using Devenv mostly for process support (which ironically I don’t need any more) and it fits the bill, but it IS two things to install before team members can start developing (plus direnv). This would only be one thing to install.

          At home I run NixOS and just use a flake for my dependencies but that doesn’t launch any services so I am kind of keen on using organist if I ever need that.

          1. 2

            My take is that yes, it’s competing with those tools, but in a (nearly) native Nix way – “nearly” because it depends on Nickel tooling, but the generated flake pulls that in automatically, so there’s nothing else to install.

            It’s very cool that this works so you can have your flake contents defined in some other language entirely and don’t have to think about it (if it works).

          2. 2

            You can use Devenv as a mkShell replacement when working with Nix Flakes, so you do not need to install anything manually.

        3. 1

          One of the article’s links is to Organist’s README – How does this differ from {insert your favorite tool} ?. In summary, Organist’s closest competitor is Devenv, and its main advantage is the consistency and power of the Nickel language.

    7. 5

      I’ve only read a little about it and never used it, but Dagger might be a solution. Dagger Engine, a “programmable CI/CD engine that runs your pipelines in containers”, allows all CI actions to be written in any programming language Dagger has an SDK for and runs them locally the same as in the CI environment.

      Dagger Engine is open source and self-hostable. There is also a proprietary Dagger Cloud paid service that “complements the Dagger Engine with a production-grade control plane” that provides “pipeline visualization, operational insights, and distributed caching”.

      A similar pair of products is Earthly (open source) and Earthly Cloud (paid service). Where Dagger has you define pipelines in a general-purpose language using an SDK, Earthly has you define pipelines in Earthfiles, which combine elements of Dockerfiles and Makefiles.

    8. 2

      If you’re increasing the bit depth of the single greyscale value 1234 5678, wouldn’t it be more accurate to turn it into 1234 5678 7878 7878 (repeating the last byte) rather than 1234 5678 1234 5678? That’s what my intuition says, but I don’t have a formal argument for it.

      The three-value and four-value CSS hex color syntaxes are defined as repeating the first hex digit for each channel (RGBA). Color #fa0 is equivalent to #ffaa00, and #fa08 is equivalent to #ffaa0088. There is no CSS syntax that turns two digits for a channel into more digits, though, so we can’t compare CSS’s design to my idea above.

      1. 8

        Why just the last byte, rather than, say, the last bit, or the last 4 bits? That seems quite arbitrary.

        Consider what happens when we use your proposed mechanism to extend 3->6 digits (no meaningful difference between decimal and any other base here):

        126 -> 126 666
        127 -> 127 777; delta = 1111
        128 -> 128 888; delta = 1111. so far so good
        129 -> 129 999; delta = 1111
        130 -> 130 000; delta = 1. uh oh
        131 -> 131 111; delta = 1111

        Now use the linked article’s mechanism:

        126 -> 126 126
        127 -> 127 127; delta = 1001
        128 -> 128 128; delta = 1001
        129 -> 129 129; delta = 1001
        130 -> 130 130; delta = 1001
        131 -> 131 131; delta = 1001

        (Not so coincidentally, this mapping can be implemented as—or, rather, is rightfully defined as—a multiplication by 1001.)

      2. 4

        1234 * 99999999 / 9999 = 12341234

        1. 3

          In more detail: first of all, this is in decimal instead of binary/hex, for clarity. 1234 is a 4-digit decimal color, and we want to convert it to 8-digit decimal. Dividing by 9999 (the largest 4-digit decimal value) converts 1234 to a fraction between 0 and 1 inclusive. Multiplying by 99999999 (the largest 8-digit decimal value) converts that fraction to 8 digits. Though you need to do the multiplication before the division because of integer math.
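
          Generalizing to bits, here is a small Go sketch of the same formula (my own illustration, not from the article): scale an n-bit value v to m bits as v * (2^m - 1) / (2^n - 1). When m is a multiple of n this works out to exactly “repeat the bit pattern”, and for 4 -> 8 bits it is just multiplication by 0x11, which is why CSS can expand #fa0 to #ffaa00 by copying digits.

          package main

          import "fmt"

          // expand scales an n-bit value to m bits: v * (2^m - 1) / (2^n - 1).
          // When m is a multiple of n, this is exactly "repeat the bit pattern":
          // for 4 -> 8 bits it is multiplication by 0x11, so 0xA becomes 0xAA.
          func expand(v, n, m uint) uint {
              return v * ((1 << m) - 1) / ((1 << n) - 1)
          }

          func main() {
              fmt.Printf("%#x\n", expand(0xA, 4, 8))        // 0xaa
              fmt.Printf("%b\n", expand(0b10110101, 8, 16)) // 1011010110110101 (the byte repeated)
          }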

      3. 1

        In CSS syntax, for RGBA, each digit in “#ffaa0088” is a hexadecimal digit (4 bits). The last byte is 2 of those digits.

        In the article, for greyscale, each digit in “12345678” is a binary digit (1 bit). The last byte is all 8 digits. Repeating the last (and only) byte would be “12345678 12345678”.

      4. 1

        Moonchild’s sibling comment is a good answer to the accuracy question. I wouldn’t have known myself! CSS colors are actually a good example of the proposed technique in action since each 4-bit hex digit gets expanded to an 8-bit channel intensity by copying. That could’ve been a nice lead to my article.

      1. 4

        I’m a git user myself and have never used mercurial for anything significant, but I’m not pleased to see it losing meaningful mind-share. I don’t think git is the end-all be-all of VCSs, and I respect the hard work the mercurial developers have put into their DVCS solution and would like to see whatever good ideas they have that are available only in mercurial made more widely available.

        I also wonder if Facebook is still using mercurial, I feel like if everyone there was using it I probably would have heard about it somehow.

        1. 4

          Facebook is using Sapling, whose CLI “was originally based on Mercurial, and shares various aspects of the UI and features of Mercurial”. Facebook released Sapling to the public on 2022-11-15. At one point Sapling could only work with its own internal repo format (source), but now it can work with Git repositories.

    9. 12

      FWIW I stumbled upon “The Next Programming Language” talk by Crockford a few months ago -

      https://www.youtube.com/watch?v=R2idkNdKqpQ

      In general I enjoy Crockford’s thoughts, just because they’re strongly held and I can see where I agree and disagree.

      I didn’t really enjoy this talk, and don’t recommend it (unlike his other work) … He’s strongly advocating an actor language, and someone asks at the end “Why not Erlang?” The answer is pretty unsatisfying – I think he said something like “Erlang is close but not the pure actor model”.

      https://www.crockford.com/misty/actors.html

      So yeah I would like to see some kind of clear distinction with respect to prior art. Or at least some statement about exactly what you can do with actors, that you can’t with threads or goroutines or whatever (goroutines being isomorphic to threads!)

      i.e. a side by side of 2 programs, where there is some clear advantage for actors / Misty / etc.

      1. 5

        someone asks at the end “Why not Erlang?” The answer is pretty unsatisfying – I think he said something like “Erlang is close but not the pure actor model”.

        I hunted down Crockford’s exact words. One question like that is at 43:36 in the video. After Crockford says “there’s a lot of brilliant stuff in Elixir”, someone asks “Is there a big difference between what Misty would be trying to accomplish and what Elixir is tackling?” Crockford replies:

        I think so. I have a particular interpretation of the actor model which they could do, but they’re not doing that. So, I think there’s a lot of similarity there.

        At 44:36, the next questioner asks how Misty is going to be different from Erlang and why it would be more successful, as well as an unrelated question. Crockford replies:

        First, on Erlang, so Erlang did not set out to be an actor language. They kind of stumbled on it accidentally. There’s so much brilliance in Erlang and the Erlang team, but they didn’t quite get all of the actor-ness. For example, it’s based on — they don’t have private addresses, they’ve got process IDs, and so those are guessable, so you don’t get the same security. And there are a couple other things that they missed. But they got so much right – I very much admire Erlang.

        1. 4

          Thanks, yeah that has a little more color than I remember.

          And I think it is a substantive criticism. I believe Erlang does have global process IDs to which messages are addressed, and I think they’re not first class either. In contrast, Go has channels as a normal value (with a type – a parameterized type), so you can pass them between functions, and you can pass channels over channels.
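
          To make the Go side concrete, here is a minimal sketch (mine, not from the talk) of passing a channel over a channel: the worker can only answer on a reply channel it was explicitly handed, since there is no global registry of channels to look one up in.

          package main

          import "fmt"

          // worker receives reply channels over the requests channel and answers on them.
          func worker(requests <-chan chan<- string) {
              for reply := range requests {
                  reply <- "done"
              }
          }

          func main() {
              requests := make(chan chan<- string)
              go worker(requests)

              reply := make(chan string)
              requests <- reply // pass a channel over a channel
              fmt.Println(<-reply)
          }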

          I think that is a pretty significant issue and worth pointing out.

          Still, I think comparisons like that should have been in the main part of the talk! I don’t remember specifically since I watched a few months ago, but I kept waiting for him to get to the “meat” / interesting parts of the talk, and that never really came IMO.

          The talk didn’t refer enough to existing practice IMO, or concrete experiences / goals. Also, I would have liked an analysis of why these pretty old ideas haven’t caught on yet.

          1. 6

            We might summarize this by saying that Erlang is not “capability-safe;” if we imagine that actors are addressed by capabilities, then it is possible to forge capabilities by guessing PIDs. In E and its descendants, including Misty (“mist-E”, common pun in this line of research) it’s not possible to do that, and the only way to have a reference to an actor is by one of three routes:

            • Being born with a reference
            • Creating the referent, and receiving a reference at the end of construction
            • Receiving a message with embedded references, and saving them for later
          2. 3

            I think [PIDs are] not first class either.

            In what sense do you mean “first class” here? PIDs are values in Erlang, alongside numbers, closures, atoms, tuples etc.

            1. 1

              I haven’t done much Erlang, but I think the sibling comment about being capability-safe gets more at what Crockford is saying.

              But also, does Erlang have an analogue of a file descriptor? A port that’s independent of a process?

              If not, then I’d say it’s a more “hard-coded” model. Because then ports aren’t as “first class” as they are in Unix (FDs) and Go (channels).

              Basically I think it’s better to send to the equivalent of FDs, and not PIDs. If PIDs are tied to a specific Erlang process.

              For example, the classic inetd receives a connection and then spawns a new PID for the FD. Systemd also does this:

              http://0pointer.de/blog/projects/socket-activated-containers.html

              I will have to think of more examples, but I also think it could inhibit more abstract patterns like this:

              http://catern.com/introduction.html


              The Go designers also made a point of saying that channels can be passed over channels, and that’s useful. A process can abstractly send to a channel, but the other end can be “owned” by different goroutines.

              I think in Go you can’t look up a channel in a global table either. The only way to send something on a specific channel is to create it or be passed a reference to it.

              I don’t think you can kill a goroutine in Go either. I think in Erlang there are global PIDs you can kill.

              Not saying one is better than the other, but I would have liked the Crockford talk to shed some light on these kind of design decisions, and make comparisons.


              A related issue is that I’ve found it’s a lot better to model a build system as a bipartite graph with nouns and verbs, rather than a “hard-coded” graph of “nouns”

              e.g. GNU make basically looks like this:

              File -> File -> File

              Ninja has both build and rule, and you can generate code, so it looks more like

              Files -> Rule -> Files -> Rule

              Having the rules as first class is good!

              1. 1

                Erlang’s PIDs are what you want, yes. They are like object references. Don’t be confused by the name “process”, they’re not like Unix process identifiers. Erlang PIDs denote an actor and its mailbox. You send them in other messages to other actors etc. Actors are extremely fine-grained, and PIDs act like Unix FDs and like Go channels etc. I don’t know what you mean by “hard coded” or “first class”.

                (There are a few things that would be needed to make Erlang an ocap language; among them would be removing the seldom-used but ambiently-available reflective interface that lets you forge PIDs. To first approximation, no Erlang program uses this interface, though.)

      2. 3

        yeah. can’t say I felt very inspired by that talk either. or this reading. a bunch of very technical grammar talk tells me nothing useful about the language.

    10. 2

      I know there’s probably something wrong with me, but where can I find this proposed legislation (containing this sinister Article 45) and read it? Also, if it is really true, I don’t understand how any country in Europe would agree to this. Will e.g. Hungary be able to create a new certificate for riksdagen.se, and will browsers in the whole of Europe just accept it? Or is there any more detail to this story?

      1. 15

        Also, if it is really true, I don’t understand how any country in Europe would agree to this. Will e.g. Hungary be able to create a new certificate for riksdagen.se, and will browsers in the whole of Europe just accept it? Or is there any more detail to this story?

        It looks as if this is another case of well-intentioned legislation being written by people with no understanding of the subject at hand and without proper consultation. I believe (judging from the analysis in the letters) the intent was to require that EU countries are able to host CAs that are trusted in browsers, without the browser vendors (which are all based outside the EU) being able to say ‘no, sorry, we won’t include your certificate’. Ensuring that EU citizens can get certificates signed without having to trust a company that is not bound by the GDPR (for example) is a laudable goal.

        Unfortunately, the way that it’s written looks as if it has been drafted by various intelligence agencies and introduces fundamental weaknesses into the entire web infrastructure for EU citizens. It’s addressing a hypothetical problem in a way that introduces a real problem. I’m aware that any sufficiently advanced incompetence is indistinguishable from malice, but this looks like plain incompetence. Most politicians are able to understand the danger if US companies can act as gatekeepers for pieces of critical infrastructure. They’re not qualified to understand the security holes that their ‘solution’ introduces and view it as technical mumbo-jumbo.

        1. 4

          Ensuring that EU citizens can get certificates signed without having to trust a company that is not bound by the GDPR (for example) is a laudable goal.

          As far as I am aware, the real reason for this is to facilitate citizen access to public services using certificates that actually certify to citizens that they are in fact communicating with the supposed public organization.

          The EU would be well served by a PKI extension that would allow for CAs that can only vouch for a limited set of domains, where the list of such domains is public and signed by a regular CA in a publicly auditable way.

          Or even simpler, they could just define a standard of their own with an extra certificate alongside the regular certificate. Then they could contribute to browsers some extra code that loads a secondary certificate and validates it using a different chain when a site sends an X-From-Government: /path/to/some.cert header, and displays some nice green padlock with the agency name or something.

        2. 2

          intelligence agencies

          Looking at how misguided this legislation is, I’m not sure if these agencies are from any European country. This level of incompetence feels utterly disappointing. Politicians should be obliged to consult experts in any domain that a given piece of new legislation affects. I also don’t understand what’s so secret about it that justifies keeping it behind closed doors. Sounds like an antithesis of what the EU should stand for. Perfect fuel for some people that tend to call the EU a 2nd USSR and similar nonsense like this.

          1. 5

            Looking at how misguided this legislation is, I’m not sure if these agencies are from any European country

            It depends. Several EU countries have agencies that clearly separate the offensive and defensive parts and this is the kind of thing that an offensive agency might think is a good idea: it gives them a tool to weaken everyone.

            Politicians should be obliged to consult experts in any domain that a given piece of new legislation affects

            This is tricky because it relies on politicians being able to identify domain experts and to differentiate between informed objective expert opinion and biases held by experts. A lot of lobbying evolved from this route. Once you have a mechanism by which politicians are encouraged to trust outside judgement, you have a mechanism that’s attractive for people trying to push an agenda. I think the only viable long-term option is electing more people who actually understand the issues that they’re legislating.

            I also don’t understand what’s so secret about it that justifies keeping it behind closed doors. Sounds like an antithesis of what EU should stand for

            The EU has a weird relationship with scrutiny. They didn’t make MEPs voting records public until fairly recently, so there was no way of telling if your representative actually voted for or against your interests. I had MEPs refuse to tell me how they voted on issues I cared about (and if they’d lied, I wouldn’t have been able to tell) before they finally fixed this. I don’t know how anyone ever thought it was a good idea to have secret ballots in a parliamentary system.

            1. 1

              I think the only viable long-term option is electing more people who actually understand the issues that they’re legislating.

              This is the sort of thing that an unelected second chamber is better at handling. Here’s an excerpt from an interview with Julia King, who chairs the Select Committee on Science and Technology in the UK’s House of Lords:

              You get a chance to comment on legislation because we are a revising chamber. We’re there to make legislation better, to ask the government to think again, not to disagree permanently with the government that the voters have voted in because we are an unelected House, but to try and make sure that legislation doesn’t have unintended consequences.

              You look at the House of Commons and there’s probably a handful now of people with science or engineering backgrounds in there. I did a quick tot up - and so it won’t be the right number - of my colleagues just on the cross-benches in the House of Lords and I think there must be around 20 of us who are scientists, engineers, or medics. So there’s a real concentration of science and engineering in the House of Lords that you just don’t get in the elected House. And that’s why I think there is something important about the House of Lords. It does mean we have the chance to make sure that scientists and engineers have a real look at legislation and a real think about the implications of it. I think that’s really important.

              That’s from an episode of The Life Scientific.

            2. 1

              I think the only viable long-term option is electing more people who actually understand the issues that they’re legislating.

              I don’t see how this could ever be viable given existing political structures. The range of issues that politicians have to vote on is vast, and there just aren’t people that exist that are simultaneous subject matter experts on all of them. If we voted in folks that had a deep understanding of technology, would they know how to vote on agriculture bills? Economics? Foreign policy?

              1. 1

                You don’t need every representative to be an expert in all subjects, but you need the legislature to contain experts (or, at least, people that can recognise and properly interrogate experts) in all relevant fields.

                I’m not sure if it’s still the case, but my previous MP, Julian Huppert, was the only MP in parliament with an advanced degree in a science subject and one of a very small number of MPs with even bachelor’s degrees in any STEM field. There were more people with Oxford PPE degrees than the total of STEM degrees. Of the ones with STEM degrees, the number that had used their degree in employment was lower. Chi Onwurah is one of a very small number of exceptions (the people of Newcastle are lucky to have her).

                We definitely need some economists in government (though I’ve yet to see any evidence that people coming out of an Oxford PPE actually learn any economics. Or philosophy, for that matter), but if we have no one with a computer science or engineering background, they don’t even have the common vocabulary to understand what experts say. This was painfully obvious during the pandemic when the lack of any general scientific background, let alone one in medicine, caused huge problems in trying to convert scientific advice into policy decisions.

      2. 8

        You currently cannot as per the first paragraph. The working documents are not public.

        1. 1

          My reading comprehension clearly leaves a lot to be desired, my bad.

      3. 3

        Because, as with all legislation here, every person involved fails to understand the threat model. They believe that obviously this will only do good things, and don’t understand that different places have different ideas of what is “good”. They similarly don’t understand that the threat model includes people compromising the issuer, that given the power of their own CAs those CAs will be extremely valuable targets, and that the part of government running them is generally underfunded in every country.

        Fundamentally they don’t understand how trust works, and why the CARB policies that exist, exist.

      4. 3

        Also, if it is really true, I don’t understand how any country in Europe would agree to this.

        Considering how the EU works, this was probably proposed by a member government, and with how it’s going, many member governments probably support it.

      5. 2

        I don’t know what’s in the proposed legislation, but the version of eIDAS that was published in 2014 already contains an Article 45 about certificates (link via digital-strategy.ec.europa.eu):

        Article 45 - Requirements for qualified certificates for website authentication
        1. Qualified certificates for website authentication shall meet the requirements laid down in Annex IV [link].
        2. The Commission may, by means of implementing acts, establish reference numbers of standards for qualified certificates for website authentication. Compliance with the requirements laid down in Annex IV shall be presumed where a qualified certificate for website authentication meets those standards. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 48(2) [link].

        I suppose the proposed legislation makes this worse.

    11. 1

      no date, no author, no reference. Looks fishy.

      1. 35

        This is legitimately from Mozilla.

        1. 7

          In future, if Mozilla is doing official things on domains unrelated to any existing project domain, it would be helpful to:

          • Link to that domain from one of the official domains
          • Have a link on the page at that domain pointing back to the place on the official domain that links to it.

          Doing this would mean that, in two clicks, readers can validate that this really is Mozilla-endorsed and not someone impersonating Mozilla. Training Mozilla users that anyone who copies and pastes the Mozilla logo is a trusted source is probably not great for security, in the long term.

      2. 18

        There’s literally a date, references at the bottom, and it says Mozilla both at the top and bottom.

        1. 6

          Date acknowledged, but placing a Mozilla logo is too easily faked.

          IMO would be ok on their own domain. But not on a vanity domain.

      3. 7

        I, too, question whether this page was really written by Mozilla, but I did confirm that Mozilla and other companies really do oppose Article 45 of eIDAS.

        This Mozilla URL hosts a 3-page open letter against Article 45 of eIDAS: https://blog.mozilla.org/netpolicy/files/2023/11/eIDAS-Industry-Letter.pdf. It’s a completely different letter from the 18-page letter linked by this story, though both letters are dated 2 November 2023. This story references Mozilla’s letter as if it’s by someone else:

        Their calls have also been echoed by companies that help build and secure the Internet including the Linux Foundation, Mullvad, DNS0.EU and Mozilla who have put out their own statement.

        Some other parties published blog posts against eIDAS Article 45 today:

      4. 2

        There’s a very big Mozilla logo at the top.

        1. 21

          And at the bottom, yet it’s not on a Mozilla domain, it doesn’t name any Mozilla folks as authors, and the domain it is hosted on has fully redacted WHOIS information and so could be registered to anyone. I can put up a web site with the Mozilla logo on it, that doesn’t make it a Mozilla-endorsed publication.

          1. 2

            fully redacted WHOIS information

            As is normal for any domain I order from inside the EU.

            Edit: And the open letters are all hosted on the https://www.mpi-sp.org/ domain. That doesn’t have to make it more credible, but at least that’s another institute.

            1. 9

              As is normal for any domain I order from inside the EU.

              It is for any I do as an individual. Corporate ones typically don’t redact this, to provide some accountability. Though I note that mozilla.org does redact theirs.

              1. 2

                Good to know. The company domains I dealt with all have this enabled. (Some providers don’t even give you the option to turn it off.)

              2. 1

                I’ve found this to be inconsistently administered. For instance, I believe that it is Nominet (.uk) policy that domain registrant information may be redacted only for registrants acting as an individual. But registration information is redacted by default for all domain contact types at the registry level and there is no enforcement of the written policy.

            2. 6

              This is the link that was shared by Stephen Murdoch, who is one of the authors of the open letter: https://nce.mpi-sp.org/index.php/s/cG88cptFdaDNyRr

              I’d trust his judgement on anything in this space.

    12. 6

      I guess I’m a bit curious on a meta level where the aversion to paying for content comes from. Like, I get that someone is inevitably going to reply with “I live in a country/situation where it is impossible to pay, you insensitive clod!” but take as a given that I am asking this question of people who have access to an accepted payment method and who have sufficient income to afford to do so.

      My partner and I watch a lot of stuff together on YouTube, mostly cooking and crafts stuff, and I watch a lot of stuff related to my own hobbies. So I pay for premium to get it ad-free. I’ve historically also watched a fair amount of Twitch streamers related to my hobbies, so I’ve paid for subscriptions to them to get ad-free viewing. I support a podcast that I like, and get ad-free episodes in return. I make a recurring monthly donation to the admin of the Mastodon server I use. I’ve bought merch or other products from artists I liked.

      I have the means to do this. Other people who have the means to do this: why don’t you?

      (I also run heavy ad-blockers on all my devices, of course, if nothing else as a security/privacy measure, but when I find something I like I still tend to seek out a way to pay to support it so that the thing I like will continue to exist)

      1. 15

        where the aversion to paying for content comes from

        I have the means to do this. Other people who have the means to do this: why don’t you?

        I don’t have an aversion to paying for content: I happily do that for music (on bandcamp or on CDs), books, games and would be happy to do that for films and shows given the opportunity.

        I do, however, have an aversion to paying for services. Take Spotify, for example - $10 per month, and you get a pretty good selection of music, but it’s tied to a pretty terrible music player. On top of that, of those $10 I pay, most of it goes to Spotify itself and the top played artists globally, not the ones I actually listen to. So I’m theoretically paying for content, but in practice most of it is for the service (which I’d rather not have – I’d prefer to have my music offline, in a media player that doesn’t suck), and for content creators I don’t care about. $10 for a service I don’t like, and supporting people I don’t want to support. Compared to Bandcamp, which takes ~20% and just gives me the files (and streaming options if I want them), this is a terrible value proposition for either of my goals – supporting the artists and getting a good product out of it.

        Now for YouTube this is a bit of a different story. I don’t know what the revenue share is like when it comes to Premium subscriptions, or how much my favourite channels would really get out of it. But YouTube has become a monopolist and annihilated the competition by being the good, free product with no ads in it. Now that it’s comfy on its throne, it’s pulling the rug out and abusing its position to push whatever it wants – unskippable ads, adblock-blockers, idiotic copyright and monetization policies… and we have nowhere else to go. Is this something you’d want to support? I see a product actively becoming worse over the years, and I’m supposed to believe that once I start paying it’ll become better?

        Were it a good product that becomes better if you pay for it – that’d be something worth considering. That reminds me of Twitter in its golden days. If it had asked for money back then and given extra stuff in return – new functionality, unrestricted Client API access, etc. – I’d happily have thrown my money at them. But nowadays it’s been on an almost comical downward spiral, getting worse and worse every week (not to mention all the people I cared about leaving); it’s taken away most of the good things about it, and now it asks for money? It locked me out of Tweetdeck, leaving me with the absolutely abhorrent default client and promises that if I pay I’ll get it back? No thanks!

        And it’s the same with YouTube. Lock in 120Hz+ streams behind premium and I’ll happily pay extra for the smoother videos. But with what they’ve been doing over the years, paying to get some of the good old days back just doesn’t sit right with me. And if I wanted to support the content (creators), then paying for YouTube is a very suboptimal way of going about it.

        1. 2

          Take Spotify, for example – $10 per month, and you get a pretty good selection of music, but it’s tied to a pretty terrible music player

          I don’t think this is true. I don’t use Spotify, but my partner does, so I have set up spotifyd (runs at least on FreeBSD and Linux) and it seems to work well. We control it with the official client, but I believe there are other things that replace the control interface.

        2. 2

          I do, however, have an aversion to paying for services. Take Spotify, for example

          I guess this is a trendy thing to claim, but I’m not sure I understand the logic behind it.

          For example: I have bookcases full of books that I’ve bought, and also an ebook reader with even more books I’ve bought. But I also have a library card. I only buy a book if I think I’m going to want to re-read it, or be able to instantly refer to it, multiple times. If I think I’m only ever going to read a particular book once, well, I’m probably not going to buy a copy; instead I’ll look to borrow one from the library.

          I see streaming music services as being similar to a library card. They let me sample a lot of things that I wouldn’t ever listen to if my only option were to buy and forever own a copy. And when they do turn up something I like enough to re-listen many times, that’s when I do go buy a copy.

          Were it a good product that becomes better if you pay for it

          I’ve never understood why media needs to go above and beyond to get someone to pay for it. In the old days I could watch a movie when it was shown on a broadcast TV channel and accept the ads the TV channel would insert, or I could pay to watch the movie in a theater without any ad breaks, or pay to rent or own a copy of the movie on VHS without any ad breaks in it.

          I look at YouTube the same way: the cost of “free” content is advertising, and I can pay to remove the ads. I don’t need it to also offer a bunch of other above-and-beyond features on top of that.

          1. 3

            Does your library require you to read the borrowed books under their supervision?

            1. 2

              My library has rules for what I can do with the books I borrow. Which makes sense given that they remain the property of the library.

              So I’m not really sure what your point was here. Yes, libraries impose terms and conditions on their patrons. If you want to argue the nuances of which terms and conditions are morally acceptable to you personally, that’s a completely different topic than what was being discussed.

          2. 1

            I guess this is a trendy thing to claim, but I’m not sure I understand the logic behind it

            I don’t know if it’s trendy or not, but I’m not surprised. And it’s connected to your followup points and examples: library cards, VHSes, etc. – it’s hard to shake off the feeling that things used to be better. Library cards are free. VHSes I can buy and keep. Subscription services give me the worst of both worlds: I have to keep paying, I don’t get to keep anything, and they impose a ton of restrictions on how I get to consume the content.

            If I have some euros to spare, I can buy myself a music album (physical or digital), which I can then listen to whenever I want to, and give it away (discreetly) or sell it (if it’s physical) if I don’t want it anymore. I can buy a book in a similar way. I used to be able to do that with video games too – as a kid I bought a game, played it for a while, then traded it for a comic book with a rabbit samurai which I still have on my shelf (and which has since gone up in value :)). That’s a pretty good deal! Not so much with the subscription-based alternatives though. I guess the upside is that I have access to a vast library and I can access it anywhere I want, but not only is that not that important to me personally, it also has to compete with free services that do the exact same thing.

            This is where we get to the “above and beyond”. Things, in my view, have become worse for the consumers. They may have become slightly more convenient in some cases, but the experience is massively inferior in others. This is why, if I have to splash out my money on something, it’d better be really good, not just acceptably mediocre. This is the case with video games for me – they’re DRM’d and locked to my account forever, but there’s quite a lot of added value (automatic updates, streamlined installations, save syncing etc) that makes it an attractive proposition even compared to the good old days. With music, videos, books? Not so much.

            1. 1

              I would argue that digital distribution (of music, books etc) has opened up a huge market for creators. Physical books and music are heavy and costly to manufacture. Only certain stores could carry them (postal distribution helped a bit). Nowadays an electronic book or music album is among the smallest pieces of digital content there is. This makes it much easier to find and reach a big market.

              As to why gatekeepers and middlemen appear despite the early internet theoreticians saying they wouldn’t, that just means those people didn’t know how economics works.

            2. 1

              Subscription services give me the worst of both worlds: I have to keep paying, I don’t get to keep anything, and they impose a ton of restrictions on how I get to consume the content.

              With a library card you also don’t get to keep the books you borrow, and there are restrictions (return within a certain time, late fees if you don’t, etc.). Again, the problem is you’re treating the streaming service as if it’s your own private owned-by-you record cabinet in your home, when it really is more like the library: it has a far larger catalog of stuff, but you explicitly agree that it is not and won’t become your owned-by-you property.

              I’m also not sure what “restriction” I’m suffering under. I open the app, I browse around until I find something that looks good, I hit “play” and it plays. What do you think I ought to be able to do that I’m not?

              1. 2

                What do you think I ought to be able to do [in Spotify] that I’m not?

                While I’ve never used Spotify, I understand that it does not let you download tracks as DRM-free standalone audio files. Therefore, if I subscribed to Spotify instead of buying standalone audio files, I would miss these features:

                • Creating playlists that contain both tracks on Spotify and tracks unavailable on Spotify
                • Opening tracks in alternative music players
                  • for a better listening experience
                    • iTunes (now Apple Music) tracks the Play Count and Last Played date of each track. Spotify may track this too, but I bet you can’t view those records if your subscription isn’t active.
                    • Some may prefer the keyboard shortcuts of other media apps. mpv’s shortcuts are customizable to jump forward or backward any number of seconds with arbitrary keypresses.
                  • to understand the music better
                    • Viewing the track’s spectrogram to learn the notes using Audacity or Amadeus Pro
                    • Marking measures and listening to sections on repeat or slowed-down using Transcribe!
                • Creating derivative works under fair use (e.g. only for personal use)
                  • Mashing up the music with other tracks in dJay Pro
                  • Sampling the track while creating new music in a digital audio workstation
                1. 3

                  While I’ve never used Spotify, I understand that it does not let you download tracks as DRM-free standalone audio files.

                  I’m not trying to be harsh here, but: have you checked the context of the discussion you’re replying to? My point was that I see streaming services as similar to having a library card which lets me temporarily borrow and enjoy many things that I would never ever go out and buy personally-owned copies of for myself.

                  And through streaming services I’ve discovered quite a few things that I later did buy copies of because I liked them enough, but I never would have even listened to them once if full-price up-front fully-owned purchasing was the only way to do so.

                  So complaining that you do not obtain full “owner” rights over tracks from streaming services does not really have any relevance to what was being discussed. That’s already been acknowledged, and I see streaming and purchasing as complementary things. I was asking for opinions on why they wouldn’t be, or what restrictions streaming has – in the context of the streaming-and-purchasing-are-complementary point – that make it untenable as a complement to purchasing.

                  1. 1

                    I think my comment’s definition of “restrictions” is consistent with the definition in its grandparent comment, tadzik’s comment. That definition is “the restrictions that come with not owning content”.

                    I derive that definition from the sentence in tadzik’s comment that you quoted earlier:

                    • “Subscription services give me the worst of both worlds: I have to keep paying, I don’t get to keep anything, and they impose a ton of restrictions on how I get to consume the content.”
                      • the worst of the world of paid ownership
                        • “I have to keep paying”
                      • the worst of the world of free subscription
                        • “I don’t get to keep anything”
                        • “they impose a ton of restrictions on how I get to consume the content”

                    You replied to that comment asking for examples of “restrictions”. I think I successfully gave them. If you think the restrictions deriving from lack of ownership are off-topic, it would have been clearer to state that in your reply to tadzik’s comment instead of asking for examples.

                    Now, if I try to answer the question you said you meant – why streaming could not be complementary to purchasing – I think we already agree that they are complementary. Streaming is indeed useful for browsing in a way that purchasing is not. Streaming for free and without ads is nicer than streaming with a paid subscription, but depending on the price and the value of the paid content relative to available free content, paying for a streaming subscription may be worth it.

      2. 10

        I have the means to do this. Other people who have the means to do this: why don’t you?

        I try to support the artists who made the art that I like. I try not to support Google. If I had to choose between them I guess I’d reluctantly choose not to support Google, but I don’t have to choose: I can give money less indirectly (via Patreon or whatever) to the YouTubers I want to support.

        More broadly, though, I just really don’t like ads. They make the internet worse to look at, and I don’t like business models based on the premise that my attention is a commodity that people are free to auction off. If there were only a few people doing that I’d just avoid them, but that’s pretty much impossible today. If I had to pay to make the ads go away, I think it would probably harden my stance against the creator in question, since I’d feel like I was rewarding a business model I disapprove of. As it is, I feel good about giving money to the creators I give money to precisely because it feels both mutually beneficial and entirely voluntary.

        1. 3

          I try not to support Google

          If you believe it’s unethical to support Google, then I don’t see how it could be ethical to watch YouTube at all. You still appear in the metrics that they use in their pitches to advertisers, and indeed you still appear in the aggregate metrics even when using an ad blocker. There is no ethical way to consume YouTube in that situation, that I can see.

          1. 6

            And that’s why I try to watch videos on Nebula, when they’re available. But for the most part, YouTube is a monopoly and all the content is there; there’s being principled and then there’s being a masochist like Richard Stallman.

      3. 5

        Let me give you my take on this. What does an ad-view on YouTube cost a company? From what I read it’s $0.010–$0.030 (let’s say $0.020 on average). The average time spent on YouTube is ~20 minutes per day. If the viewer is shown an ad every 4 minutes, this amounts to 5 ads per day and 150 ads per month, which is equal to $3. How much is YouTube Premium? $14. It’s a rip-off. For it to be reasonably priced, the average YouTube Premium subscriber would have to spend roughly an hour and a half per day on the platform, which is quite a lot and way beyond the average (~20 minutes).
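
        Spelled out as a quick sanity check (a rough sketch in Python, using only the assumed numbers above – nothing here is measured data):

          cost_per_ad_view = 0.020   # assumed average advertiser cost per ad view, USD
          minutes_per_ad = 4         # one ad roughly every 4 minutes of viewing
          avg_minutes_per_day = 20   # assumed average daily watch time
          premium_price = 14.0       # assumed monthly price of YouTube Premium, USD

          ads_per_month = avg_minutes_per_day / minutes_per_ad * 30
          print(ads_per_month * cost_per_ad_view)   # -> 3.0 (USD of ad value per month)

          # Daily watch time at which the ad value would match the Premium price:
          break_even_minutes = premium_price / cost_per_ad_view * minutes_per_ad / 30
          print(break_even_minutes)                 # -> ~93 minutes per day
          # (At the $0.010 lower bound this roughly doubles; at $0.030 it drops to about an hour.)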

        The content companies really nailed it with music in the early 2010s. You could buy an album DRM-free and were left alone. Then they messed it up with movies, where you “bought” a movie but only acquired a right-to-use that depends on the platform. Music streaming is a way of dispossessing people again, and of incentivizing users to pay monthly for what is often a relatively static music collection which would have been much cheaper to buy once and extend incrementally. The detrimental effects on music as a medium as a whole are another topic. Movie/series streaming is a bit of a grey zone which I won’t discuss here. What we see, though, is that the once simple streaming landscape has become more and more fragmented, leading to immense costs per month if you want to follow a diverse offering of shows. Surely this is a first-world problem, but you would’ve spent much less back in the day just buying a set of series on DVD, and you would have owned those DVDs forever. You could have also borrowed DVDs and BluRays at your local library.

        To keep it short: I know many people who would be willing to pay for content; however, they don’t like being held hostage or being played for suckers by greedy companies. Music has proven that content can be DRM-free and still be profitable, but it has to be accessible to the user.

        YouTube loses a lot of money to ad-blocking users. Even conservative estimates put ad-blocking users at around a 20% share. If you factor this in, YouTube could reasonably ask for a plan of $1.99/month or so to be ad-free. If they did it in a way that’s not a hassle, and did it fairly, everyone would be happy.

        1. 1

          What does an ad-view on YouTube cost a company?

          Do you mean “earn” rather than “cost”?

          1. 3

            No, the cost to place an ad, of which some is taken by Google as profit for their ad business, some is payment processing fees, some goes to YouTube to run the service and some goes to the content creator. In theory, a company then makes more in increased sales than they’ve spent, but most are really bad at tracking this.

            The ad price is set based on an auction, so varies a lot depending on how many other companies want to place an ad in the same video. I suspect that most of the ads that I see are incredibly cheap because they’re not on mass-market content and few companies are competing for the space.

            1. 1

              Ah, I see, thanks. I was thinking of how much each ad placement earns the video creator, but I see now that the correct analysis is to compare the cost of premium/ad free to the cost of placing the ad.

        2. 1

          Your math is extremely suspect given that AFAIK it’s a competitive bidding process where the most-watched videos command higher rates. And since your whole argument is based on that math, I don’t think it holds up.

    13. 9

      Strange hill to die on. If there’s a systemic problem in academia, fine, argue with the academics. But in my experience touching this stuff for 30 years at different layers of the stack, the job of a programmer, network designer, or any other technical worker is to take a given abstract model, use the useful bits, and understand interfaces and abstractions to the point where you can do useful work. In real-world jobs I’ve never experienced anyone bound by the OSI model to the point that it prevented us getting something built or done. If you can’t take a framework or model, look at your real-world application, and immediately take the useful bits of the model and discard, modify, ignore or reinvent the rest, you’re not going to get much done; that’s a core skill of technical work.

      1. 7

        I understood the content as “it’s not only not matching reality, it’s so disconnected that it’s not useful”. Sure, OSI hasn’t prevented anyone from doing something, but did they actually get any value from it? If you strip things to the point of “there are ways to encode data in physical world and you can stack layers of protocols in the software world”, it’s not really even related to OSI anymore.

        What were the useful bits you got out of the actual generalisation of OSI (rather than a generalisation of Ethernet and TCP/IP)?

        1. 2

          A specific case that comes to mind where ‘Layer 2’ was useful, distinct from ‘Ethernet’, was collective understanding of mesh networking protocols in a metropolitan mesh networking project. My understanding of mesh networking protocols is that they provide virtualised Layer 2 addresses, that sit on top of the real Layer 2 in order to provide mesh functionality. E.g. if I’ve seen some Layer 2 address before and I want to try and reach it again, it might not be available through the same Layer 2 interface that I saw it through before, but maybe it is somewhere else or perhaps even several L2 hops away, so a virtualised ‘mesh’ Layer 2 interface provides additional services to deliver a Layer 2 packet specifically in the mesh use case where peers on the network are coming and going or moving around between locally attached segments.

          Until this point of course ‘Layer 2’ is synonymous with ‘Ethernet’, but here’s where ‘Layer 2’ and ‘Layer 3’ became useful terms: in these kinds of discussions we conceptualised mesh protocols as having inserted a new layer which we called ‘Layer 2.5’ – routing on top of Layer 2, but before Layer 3. Not standard ethernet, but not IP routing either. So the OSI model terms ‘Layer 2’ and ‘Layer 3’ facilitated the discussion and shared understanding of how the mesh protocols worked – and in doing so, were immediately modified/broken by the ‘Layer 2.5’ concept.

          (Totally acknowledge that this is somewhat mixing abstractions because L3 in our context meant specifically IP and L2 meant specifically Ethernet, but nevertheless, the mesh concept was made clear by the L2 and L3 terms and the introduction of L2.5 to mean ‘another one like L3 in between L2 and L3 but isn’t Ethernet and isn’t IP’).

          1. 3

            What you’re saying is that the OSI notion of layer 2 was too anaemic to describe the mesh network’s architecture, so you needed to break the OSI model in order to talk about the network usefully.

            1. 1

              Yep – and I can imagine that if we hadn’t had that starting point and the abstractions that L1–3 provide, it would have been a much longer conversation.

              1. 4

                Your point is related to the introduction in chapter 4:

                The simplest alternative to teaching OSI is nothing – simply remove any mention of OSI without replacing it with anything. OSI isn’t theory, it’s not a framework, and it’s not modern practice, so there’s really no point in keeping it for any of these purposes. There’s no concept that isn’t better taught by not referencing OSI.

                But still, some demand a theory framework. This textbook therefore presents two alternatives. […]

                A synthesis of your view and the book’s would be that the simplest alternative to the OSI model (teaching nothing) will harm students and that teaching a theory framework is indeed helpful.

              2. 1

                I realised some time after making my comment that there’s a kind of Sapir-Whorf thing going on here, where you lack the language to succinctly describe what you are doing, so you have to use circumlocutions instead. And I don’t have better words for you to use instead, which sucks, though I suspect that using concrete nouns would be better than abstract jargon. For instance, when you say “layer 2 address” I guess you mean EUI-48 specifically, not any other kind of address. But you were vague so I have to assume — tho I think it’s a fairly safe guess because you talked about confusing addresses between the underlying networks and your mesh network.

                1. 2

                  I guess you mean EUI-48 specifically

                  In practice, yes, but not necessarily. I can imagine that the ‘L2.5 address’ could have easily been some other new invented address type. But this would have necessitated more work in writing functionality for IP to connect to a new underlying L2 layer.

                  Yes, extra more specific shared language would be really useful! The abstractions that seem useful to me, that likely provide useful shared context are things like using ‘L2’ to refer to ‘the kind of thing that ethernet interfaces and MAC addresses do over local physical or wifi network segments’, and ‘L3’ to refer to ‘the kind of thing that IP addresses, routers and routing tables do in software somewhere above a physical layer’. Giving these concepts better clearer names, and acknowledging that they can be connected in all kinds of ways for different applications, and are not specifically tied to implementations (‘Ethernet’, ‘TCP/IP’) would be useful.

      2. 4

        Strange hill to die on.

        Holy O.o, you’re not kidding. I was expecting a short blog post explaining, tl;dr, that OSI was an overengineered model that never really made it into real life, but looked comprehensive and formal enough that everyone grabbed it, and now we’re stuck with a model that’s both too complicated for the network stack we have and unable to represent some of the quirks of our protocols that well.

        237 pages is not quite what I had in mind for the “tl” part though.

        I took the author’s advice and read through the seventh chapter (Misconceptions). All of it is sound and, yep, it’s all things I was frustrated with.

        Even relatively uncontroversial things, like layering, hit pretty close to home. In my experience, networking libraries that take the layering part too seriously tend to handle things like tunneling or cross-layer dependencies a little awkwardly. The expectations that a packet at layer N will always contain a payload of layer N + 1 or higher, or that parsing layer N information does not depend on data at layers N - 1 and lower, don’t hold true IRL (but see below for why I think it’s still absolutely the right model to apply).

        However, while I agree that the OSI model is not a particularly good framework (like all frameworks that evolved out of aborted real-life implementations) I’m not particularly eager to replace it, for two reasons.

        First, models structure new designs just as much as they represent existing ones. Replacing OSI with a narrower model runs the risk of educating a generation of developers with a narrower understanding of protocols. One with less obvious separation of layer concerns runs the risk of educating a generation of developers who’ll value centralisation of concerns even more than contemporary commercial practices require them to, and who will come up with bulkier protocols that don’t compose well.

        The OSI model is bad in a lot of ways, but I’ve yet to see an alternative good enough that it’s better to teach it instead of teaching the shortcomings of the OSI model. That’s how lots of models in engineering are taught, FWIW – pretty much every piece of professional knowledge about EE that I have (or, rather, that I have left) comes with a “here’s where it doesn’t apply” disclaimer, and figuring out which model to use and how to refine it is like half of the hard part of every design. (Note that the models the author proposes might be among these “better models” – I haven’t read them thoroughly; I’m not trying to set up a clever retort here.)

        Second, while the cost of inertia is real, the cost of churn is also real, and often higher. Like you, I can’t say I’ve seen a case where use of the OSI model prevented something from getting done. Any model that we’ll ever come up with is going to break down against some corner case. A new one won’t just fix all the shortcomings of the OSI model; it’s going to come with a whole new set of shortcomings of its own, except we won’t have forty-odd years of prior art on how to deal with them.

        1. 5

          Replacing OSI with a narrower model runs the risk of educating a generation of developers with a narrower understanding of protocols.

          A major problem with the OSI model is that (aside from being wrong) it is already too narrow and hinders understanding of the breadth and diversity of the protocols that are currently in use.

          One of the important features of the Internet was the notion of a “narrow waist” in the protocol stack, so under TCP/IP there could be many different lower layers, and on top of TCP/IP there could be many different higher layers. But the way people use the OSI model is to bundle all the sub-IP protocol stacks into a single layer. Why does Ethernet get one layer when TCP/IP gets two?

          What works better (as Rob Graham says) is to treat the OSI model as a model of the OSI protocol stack in particular, not as a universal truth. Other protocol stacks have their own models: IP, CSMA/CD Ethernet, switched Ethernet, WiFi, MPLS, etc. Then you can put them side by side and observe their similarities and differences. The important characteristics of a protocol stack are the features they require of their under-layer and the features they provide to their over-layer, and internally how they handle identification, addressing, routing, reliability, ordering, and so on. Inside a protocol stack there are important parts that are not layered, such as STP in Ethernet and BGP in IP.

          In real-world networks there’s an enormous amount of layering and tunnelling that cannot be understood through the OSI lens. There isn’t a fixed set of layers, and the same functionality is not layered in a consistent order in different protocol stacks – encryption is the most glaring example. Instead of OSI, it is better to think of layering entire protocol stacks, as in John Day’s book “Patterns in Network Architecture”. Then you have a model that can describe things like Ethernet-over-MPLS, TLS VPNs, and the like.

          1. 3

            I sort of alluded to that in my comment above but I think I should’ve articulated it explicitly because I agree with pretty much everything you’re saying, and I realise I was bringing two different things under the “OSI is bad but I don’t know of a better model” umbrella: it’s a bad descriptive model (because it was originally a prescriptive model), but it’s not the worst teaching model for some protocol design principles.

            I 100% agree with this:

            What works better (as Rob Graham says) is to treat the OSI model as a model of the OSI protocol stack in particular, not as a universal truth.

            The fact that we ended up teaching it as a universal truth is… I’m not even sure it’s an unfortunate accident, but it’s unfortunate nonetheless.

            However, I think the OSI model (or what we now understand as “the OSI model”) does have some genuine teaching value, as long as it comes with the caveat that it’s a model for thinking about protocol stacks that originated in a particular stack, not the model for thinking about protocol stacks that we came up to model all protocol stacks ever. It’s an abstract model that illustrates many useful design decisions, and provides a useful theoretical vehicle for answering a bunch of questions that are useful to us now, but weren’t so obvious when we were undergrads. I mean things like why TCP is above IP in TCP/IP, or why TCP and UDP are separate protocols instead of a single protocol with different transport modes but otherwise similar semantics.

            Yes, it wasn’t an abstract model from the beginning, and it shows, but it’s a model that nonetheless illustrates enough protocol design decisions to be useful for teaching people who don’t know the first thing about TCP/IP. Day’s model is generic and powerful enough to adequately describe things like Ethernet-over-MPLS but it’s not a model you can use in an undergrad course without everyone in the audience thinking they found the one course that makes even less sense (to put it charitably) than the one where all the sample code is in Haskell.

            A major problem with the OSI model is that (aside from being wrong) it is already too narrow and hinders understanding of the breadth and diversity of the protocols that are currently in use.

            I agree that this is a problem with OSI, but my criticism about narrower models was aimed at the linked document. The author seems to be torn between coming up with a model that is both generic enough to describe e.g. tunneled networks, but also hierarchic enough to preserve some of the structure of the OSI design, like a gradual “descent” from high-level data to EM waves.

            In order to preserve that property, they introduce the concept of layered networks that are decoupled from protocols, which is okay (and obviously reminiscent of Day’s model), but maintains some form of hierarchy among networks (or protocols?) and their layers, to the point where the OSI model explicitly fits part of the author’s model:

            In our thinking, OSI is a seven sublayer model. It’s not defining layers so much as sublayers. All seven of the sublayers are designed to work together as part of a single network.

            A simplified descriptive model like the high-level model that the author introduces (or John Day’s layered networks model, for that matter) is extremely useful to us as developers. But it’s hard to explain universal protocol design practices (like separation of concerns) on it without hopelessly tying down the discussion to specific network stacks (again, to use the author’s terminology – i.e. to things like TCP/IP, LTE or whatever).

            Much of this simplification is done by yanking away things that are either obsolete (e.g. the session layer) or easily folded into more generic layers (e.g. PHY, MAC and LLC). That’s a good way to come up with a more universally descriptive model, but it provides a rather bad framework for explaining to eager students the relative merits of making RRC a separate protocol instead of just folding it, PDCP and RLC into a single “radio control” protocol.

            Edit: FWIW, I used to recommend Patterns in Network Architecture to our interns (and I still would, I just don’t work in a place where we have interns who do networking!) but I don’t think you can start networking education from there, not without providing an addendum that would effectively amount to a more composable version of the OSI model. At that point you might as well just teach the OSI model and generalize from there.

            1. 3

              the OSI model [is] an abstract model that illustrates many useful design decisions, and provides a useful theoretical vehicle for answering a bunch of questions that are useful to us now, but weren’t so obvious when we were undergrads.

              Er, what? No, it’s a concrete model for a particular network stack.

              I would really like a specific example of a current question the OSI model is helpful for answering. Not, oh we had to break the OSI model to get the result we wanted, but, this modern design (more recent than 2003, say) follows the OSI model and the model was a successful guide to the protocol architecture. Like, I want to see exactly 7 layers.

              And I don’t know about you, but I’m 49 years old, and I was an undergrad when the Internet was in the exponential part of its adoption curve. The OSI stack was a joke, a bloated pile of paper protocols with incomplete implementations that did not work. It seemed to me that we were taught about OSI because until 1991 it was the mandated wide-area protocol stack on JANET, and as soon as IP was permitted OSI was no longer the future and IP dwarfed its traffic in a few months. And in the 30 years since then, I have learned more useful lessons from the history of IP than the history of OSI — except for the bits of OSI that have survived, like the x.500 directory and its relation to LDAP, and ASN.1’s legacy of deserialization vulnerabilities in protocols like SNMP, and the intense pain of working with x.509.

              But not the OSI model, because it’s useless except for talking to people who have been taught badly.

              I mean things like why TCP is above IP in TCP/IP, or why TCP and UDP are separate protocols instead of a single protocol with different transport modes but otherwise similar semantics.

              But the OSI model says nothing about IP and TCP and UDP. OSI does not explain why TCP was separated from IP so that UDP could exist alongside it. OSI’s design for network audio was the telephone system, with connection-oriented channels (right down to layer 1 or 2, think ISDN). UDP was created to support packetized network audio, in direct opposition to OSI.

              Encryption is a really great example of a protocol function that the Internet has struggled with for decades. Does OSI help when designing a protocol with encryption? No. Which layer does encryption belong in? WiFi says layer 2. IPSEC says layer 3, er, or maybe layer 4, depending. TLS says layer 5 (if you ignore what the OSI model actually says about layer 5). Telnet and ssh say layer 7. QUIC says, let’s swap layers 4 and 5. (Or maybe OSI says 6 instead of 5? Can you explain which is correct, with reference to the standard?)

              Or how about DHCP? It runs over UDP so it must be an application layer protocol. But it assigns IP addresses, so it must be a layer 3 control protocol. But ethernet switches participate in DHCP so part of it must be layer 2? From the OSI point of view this is nonsense, but if you use a model of protocol stacks, DHCP is a control protocol and part of the Internet protocol suite that helps to adapt IP to a lower level protocol stack, with co-operation from the lower layers.

              But it’s hard to explain universal protocol design practices (like separation of concerns) on it without hopelessly tying down the discussion to specific network stacks

              But the OSI model is a model of a specific network stack. It has been watered down and abused and misapplied as a universal ideal when it really is not.

              easily folded into more generic layers (e.g. PHY, MAC and LLC).

              I am not sure what you mean here. Are you saying that it’s good to ignore the PHY/MAC/LLC structure and treat them as a single layer 2 (i.e. the traditional way the OSI model is applied)? Or are you saying that there are common structures across “layer 2” protocol stacks (which the OSI model does not explain)?

              1. 2

                Er, what? No, it’s a concrete model for a particular network stack.

                Please, try to be a little more charitable here :-). I know it’s a concrete model and I’ve already acknowledged that in my posts above, but for better or, realistically, for worse, it’s taught as an abstract model.

                And I don’t know about you, but I’m 49 years old, and I was an undergrad when the Internet was in the exponential part of its adoption curve. The OSI stack was a joke, a bloated pile of paper protocols with incomplete implementations that did not work.

                That part is something many undergrads today are not aware of. The best-case scenario is that instructors will tell them the OSI model originated in an effort to design a specific communication protocol stack, which didn’t work out, but that the model turned out not to be all that awful. But a substantial number of undergrads don’t know there was actually an OSI protocol stack at one point.

                But the OSI model says nothing about IP and TCP and UDP. OSI does not explain why TCP was separated from IP so that UDP could exist alongside it.

                It doesn’t – it couldn’t; I think both IP and TCP predate the first published version of the model, and UDP was basically contemporary with it – but you can explain some general design principles based on it. You can explain to an audience of twenty-year-olds that separating internet-level protocol functions into network and transport functions enables a packet relaying protocol flexible enough to support multiple higher-level data transport models. You don’t even need to name the actual protocols at that point; it’s a discussion you can have using nothing but the OSI (or, for that matter, the TCP/IP) layer diagram. It’s not “historically accurate” in that TCP, IP and UDP weren’t designed according to the OSI model, but it’s a good enough basic illustration of these concepts.

                Explaining the same principle in terms of the network IPC model in an introductory course isn’t feasible IMHO. PINA spends like 100 pages across three chapters to build that model, and about half of that is just the abstract terminology. And I mean abstract, like, routing is explained in terms of maps between (N)-addresses and either (N-1)-IPC process names or (N-1) addresses, depending on the relation between the (N)-DIF and the (N-1)-DIF. Bad though layered models (like OSI) may be, you can at least introduce them in ten minutes, and they make sense with minimal prior exposure.

                I am not sure what you mean here. Are you saying that it’s good to ignore the PHY/MAC/LLC structure and treat them as a single layer 2 (i.e. the traditional way the OSI model is applied)? Or are you saying that there are common structures across “layer 2” protocol stacks (which the OSI model does not explain)?

                FWIW I’m definitely in the latter boat but I was specifically talking about the document linked in the post. The author proposes a model in which every “layered network” (web, Internet, Ethernet) can be decomposed into its own “sublayers” or “protocols”. One example that the author gives is that the Ethernet “network” can be decomposed into LLC, MAC and PHY “sub-layers”. (Everything in quotes is Graham’s own terminology; I hope I’m using it right). It’s akin to a more hierarchical version of the network IPC model.

                This is a good descriptive model and it clears up a lot of the misunderstandings caused by the OSI model. But it will generate misunderstandings of its own if you try to use it as a teaching model. E.g. chronically sleep-deprived young people who just learned that tunneling is a thing, literally from your previous sentence, will have a hard time coming to terms with the fact that the two Ethernet networks in an Ethernet-over-IP stack over a wired Ethernet network don’t meaningfully decompose into the same sub-layers (as in, what would the PHY layer of the tunneled Ethernet network do?).

                It’s not like the OSI model handles this any better, so I’m not saying we don’t need a better model – but I am saying that even a model that clears up all of OSI’s ambiguities will still have ambiguities of its own, which are hard to spot in the “design phase”, when you’re super enthusiastic about your model.

                I don’t mean to trash the author’s work with this – as I mentioned above, I really think it’s a better, as in, more descriptive model, a generic model with less concrete baggage. I just don’t think it’s easy to teach it to people who don’t have Robert Graham’s experience. We teach imperfect models everywhere, just because they’re easier – hell, we’re still teaching Newton’s bullshit, to school children no less! Like @james here, I think teaching the OSI model well would resolve much of the criticism in the linked document and still provide a lot of value to an audience of people who don’t intend to make a career out of designing and implementing new networking protocols.

                1. 6

                  While I am fond of the Pratchett/Cohen/Stewart idea of “lies to children”, I don’t think it means that you should deceive them. By the time they are expected to learn and apply Newton’s laws and equations, children are already aware that relativity is a thing - they have heard of the speed of light as an absolute limit - so they know that Newton’s laws are a simplification. And they are taught that there are more immediate complications such as friction. Nevertheless, Newton’s laws are actually useful in practice. Unlike the OSI model.

                  As far as I can tell, there are two generally useful ideas in the OSI model:

                  • protocols are designed in layers, which are often visible in the nested headers of packets

                  • the absolute bottom layer is physical and the absolute top layer is application

                  Everything else about it is wrong when applied to other network suites. Sadly there’s a third practical reason for learning the OSI model, to be able to communicate with people who have been taught to apply the OSI model to non-OSI networks.

                  I don’t think John Day’s RINA model is any more useful than the OSI model. It has the same problem of being a model for a specific network stack, that is dressed up as being more universal than it is - e.g. it doesn’t make sense to treat everything as IPC. (Day was one of the authors of the OSI model, so experience suggests we should treat his architectural fancies with pragmatic suspicion.) But RINA has a generally useful idea about layering that is missing from OSI:

                  • protocol stacks can be piled up recursively

                  Then there’s the Internet model which has another generally useful layering idea that OSI lacks:

                  • an hourglass model, where there is diversity of protocols above and below a narrow waist

                  The narrow waist pattern also happens in Ethernet, whose framing and addressing is a narrow waist below which there are diverse physical and virtual sublayers, and above which are diverse higher layer protocols. It explains what happens to the PHY layer when Ethernet is tunnelled.

                  I think if you are going to teach this stuff without misleading or overwhelming students, it should be fine to introduce each model with about as much detail as the OSI model is currently taught, but with a change of emphasis:

                  • make it clear that each model applies to a particular network stack

                  • make it clear which parts of the model are parochial and which are universal

                  • more compare-and-contrast to illustrate which parts of each model fit foreign protocol stacks and which parts do not

                  For example, immediately after showing the OSI 7 layer model, show the Internet 4 layer model (conventionally numbered 2, 3, 4, 7) explaining that unlike OSI the Internet says nothing about layer 1 and very little about layer 2, being below the waist of the hourglass; and that the functions of OSI layers 5 and 6 are replaced by Telnet’s feature negotiation, and don’t occur elsewhere in the Internet stack.

                  I’m not sure that a universal model even exists, but I am confident that a small collection of network architecture patterns is a better tool of thought than misapplying an inappropriate model.

                  1. 1

                    I’m not sure that a universal model even exists, but I am confident that a small collection of network architecture patterns is a better tool of thought than misapplying an inappropriate model.

                    That is a fair point. Teaching patterns instead of grand truths isn’t very common in undergraduate courses, but it probably should be. That’s actually the main reason why I recommend that people read PINA – I’m pretty fond of the RINA model because lack of composability is the thing I hate the most about OSI layering, but Day’s discussion of protocol and network elements and patterns is readily applicable to a much wider set of problems.

                    That would also make it a lot easier to explain that there are different models, and which parts are universal and which aren’t. I’m not sure the academic world is ready for this kind of approach in a technical setting – most “competing” models that they teach are either compatible or complementary – but that’s on academia.

                    On this side of the academia-industry divide I’ve generally been able to explain away most of the OSI fanfic. If we can do it, I’m sure they can do it in class, too :-).

    14. 6

      I never thought of this, it’s brilliant. (I kind of wish Firefox had a “form history” UI, especially given that it already saves/caches this stuff anyway).

      I get hit by this at least once per year. Form history is only kept when you press the back button if it’s static HTML (not generated by JS), the page is cached, and there was no redirect tomfoolery.

      1. 7

        There used to be an addon for this, ‘Lazarus’, but modern browser policies (probably rightly) don’t allow it to run any more.

        1. 13

          For Firefox on the desktop there’s still ‘Form History Control’, which has saved my butt on multiple occasions.

          1. 6

            Links:

      2. 1

        I used to use the Lazarus plugin mentioned in the other comment.

        These days I solve this the simple way - write large things in a text editor with autosave, paste into browser later.

    15. 4

      It feels like everyone is reverse-engineering git. Isn’t there technical documentation somewhere describing how it works internally?

        1. 3

          Sadly, that book is not comprehensive. It says nothing about how Git represents stashes internally.

    16. 4

      In case someone’s wondering why medical trial statistics appear on a tech site — the same paradoxes can appear in benchmarks. For example if you want to show off a new compression algorithm, testing it on a mix of small and large files can lead to opposite conclusions depending on how you sum up the data.
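
      To make the trap concrete, here is a made-up illustration (hypothetical numbers in Python, not a real benchmark): algorithm A compresses better than B on both small and large files, yet looks worse if each was benchmarked on a differently skewed mix of files and you only compare the overall averages.

        # Hypothetical results: (original_size_kb, compressed_size_kb) per test file.
        # A happens to be tested mostly on large files, B mostly on small ones.
        results_a = [(10, 4.0)] * 2 + [(1000, 700.0)] * 8
        results_b = [(10, 4.5)] * 8 + [(1000, 750.0)] * 2

        def avg_ratio(results):
            # Mean compressed/original ratio over the test files (lower is better).
            return sum(c / o for o, c in results) / len(results)

        print(avg_ratio(results_a))  # 0.64 -- A looks worse in aggregate...
        print(avg_ratio(results_b))  # 0.51 -- ...even though A beats B on small files
                                     # (40% vs 45%) and on large files (70% vs 75%).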

      1. 6

        The analogous mistake in a compression benchmark would be a claim that because the average .zip file found in the wild is smaller than the average .7z file found in the wild, compressing with zip will result in a smaller file. I doubt anyone would create such a flawed compression benchmark by accident, as all the compression benchmarks I’ve seen use the same set of files as inputs to each algorithm.

        Perhaps a more realistic use case for applying Simpson’s Paradox to software dev would be evaluating software based on reviews. You might read more complaints about PostgreSQL being unable to handle large datasets than the same complaint about SQLite, but PostgreSQL could still have better support for large datasets if all the SQLite users who ran into scaling problems already mitigated those problems by migrating to PostgreSQL.

    17. 2

      Is there a reason we shouldn’t just embed these kinds of badges?

      1. 3

        What do you mean by embed? Embedding the image data of the badge into your README.md file as a data: URL rather than using an image URL of https://casuallymaintained.tech/badge.svg? Copying the SVG file into your repo and using a local relative image URL in your README.md? Some other kind of embedding?
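
        (If the data: URL option is what was meant, here’s a minimal Python sketch of generating one – the file name and alt text are hypothetical:)

          # Inline a local badge SVG into a Markdown image tag as a data: URL,
          # instead of hot-linking the remote image.
          import base64

          with open("badge.svg", "rb") as f:
              encoded = base64.b64encode(f.read()).decode("ascii")

          print(f"![Casually Maintained](data:image/svg+xml;base64,{encoded})")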

    18. 3

      Because this language model uses only one word of context, it is an example of a Markov chain.

      1. 2

        This is commonly done with multiple words of context, producing higher quality output, and I’ve seen those called Markov chains too. I think you just have to expand the notion of “state” to be a tuple of past words.
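
        As a rough sketch of that generalisation (illustrative Python, not from the linked project; the corpus and names are made up), the state is a tuple of the previous N words – the order parameter below – with order=1 being the single-word case:

          import random
          from collections import defaultdict

          def build_chain(words, order=2):
              # Map each tuple of `order` consecutive words to the words that follow it.
              chain = defaultdict(list)
              for i in range(len(words) - order):
                  chain[tuple(words[i:i + order])].append(words[i + order])
              return chain

          def generate(chain, length=20):
              state = random.choice(list(chain))
              out = list(state)
              for _ in range(length):
                  nxt = random.choice(chain.get(state, ["."]))  # fall back if state is unseen
                  out.append(nxt)
                  state = tuple(out[-len(state):])
              return " ".join(out)

          corpus = "the cat sat on the mat and the dog sat on the rug".split()
          print(generate(build_chain(corpus, order=1)))  # order=1: one word of context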

    19. 1

      I never needed ligatures in my life anyway but this st is just an abomination.

      1. 3

        I don’t understand. What does this article have to do with ligatures? By “st” did you mean “site”? I don’t see any ligatures anywhere on the page (in Chrome on macOS; the rendered body font is Open Sans Light).

        1. 3

          there is an st ligature in the “just” of the title (firefox, linux)

          edit: and it is just an abomination

          1. 4

            It’s Firefox-on-Linux specific for me, and can probably be removed by disabling the “liga” font feature for Montserrat.

            1. 2

              The site doesn’t use any ligatures for me on iOS Safari.