Threads for adamgordonbell

    1. 2

      Author here. Thanks for sharing! I see you are Dutch @j11g ! I spent a lot of time with Dutch-to-English Google Translate to put this episode together. Eric Smit helped correct some of my pronunciation, but otherwise I was a bit lost and tried to skip saying places like Nieuwegein.

      1.  

        Yes I am Dutch and a big big fan of your podcast!

        I have blogged (in Dutch) about your Yann Collet episode: https://janvandenberg.blog/yann-collet/ but also blogged about the Eric Smit book: https://janvandenberg.blog/de-broncode-eric-smit/

        So this episode was right up my alley! The story is part of Dutch tech lore; you have, however, given me a new insight: you are actually the first to come up with a reasonable explanation of what was really happening with the demos (the book leaves this part open). It was all a ruse from a guy who didn’t really have the tech. So thank you for that!

        Also if you ever need help with Dutch translations, let me know ;)

        1.  

          Will do! Thanks for listening

    2. 9

      There are at least two problems with YAML:

      1. the error-prone syntax
      2. the fact that people use it as a base for a programming language, with conditionals and control flow

      e.g. I wrote up an example here of how GitHub, GitLab, etc. do it – https://lobste.rs/s/v4crap/crustaceans_2021_will_be_year_technology#c_t7tj0u


      This page is focusing on the first part, which is fine.
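
      A minimal sketch of that first problem, assuming PyYAML is installed: unquoted scalars silently change type.

      $ python3 -c 'import yaml; print(yaml.safe_load("country: no"))'
      {'country': False}
      $ python3 -c 'import yaml; print(yaml.safe_load("version: 1.10"))'
      {'version': 1.1}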

      But there’s also the problem of programmability. Sections 3, 4, and 5 of this survey deal with this:

      https://github.com/oilshell/oil/wiki/Survey-of-Config-Languages

      For example, Go templates / CMake / M4 and Cue / Dhall / HCL / Jsonnet / Nickel / Nix / Starlark all have some sort of programmability.

      And G-expressions are used by Guix in Guile Scheme …

      So IMO changing the syntax is the smaller part of the problem. Most apps that use YAML, like CI systems or Kubernetes, empirically can’t use pure data.

      They also have code, at multiple levels:

      1. small scale / serial / single node – shell scripts
      2. large scale / parallel / multiple nodes – coordination among separate processes, dependencies
      1. 6

        the fact that people use it as a base for a programming language, with conditionals and control flow

        This is the one that - to me - is the most valid complaint. I’ve been trying to explain this to people, in writing and in person: it’s not necessarily YAML, it’s tools that need a full programming language but want to say they just need to be configured, so they have a YAML config with if and such.

        (Also, YAML templating: how is that a thing?)

    3. 3

      Author here. CNCF has a ‘cloud native’ glossary, and with all the cloud native terms laid out, it all sounded a bit too important.

      Also, I love the A Brief, Incomplete, and Mostly Wrong History of Programming Languages article from back in the day. So I tried to combine those things together.

      I only covered a very small part of the CNCF glossary because it turns out that being funny is hard work and it’s not clear I’m good at it.

      1. 2

        Needs more “The S stands for Simple” to be the ultimate troll post.

        1. 1

          I’ve never seen that before, but it very much rings true.

          I tried using SOAP tooling from Microsoft to talk to a Java and Tomcat stack a long, long time ago, and it was a huge pain.

    4. 0

      Great post! I like the concrete example.

      FWIW this is what Hay is about:

      Discussion last year: https://lobste.rs/s/phqsxk/hay_ain_t_yaml_custom_languages_for_unix

      Basically it allows you to use the syntax of YSH to declare data. So you have both declarative data and control flow, interleaved, in the same language.


      It’s not done yet, but I just translated the evaluator to C++. I think it could almost directly express the syntax you wrote a parser for:

      job first PARALLEL {
         echo "Hello, World!"
      } when {
         echo 'First Condition' 
      }
      
      job second PARALLEL {
         ls
      } when {
         echo 'Second Condition'
      }
      
      wait
      
      job third {
         echo "Third thing" 
      }
      

      It would probably look something more like this:

      Job first PARALLEL {
        ACTION { echo "Hello, World!" }
        WHEN   { echo 'First Condition' }
      }
      
      Job second PARALLEL {
        ACTION { ls }
        WHEN   { echo 'Second Condition' }
      }
      
      wait
      
      Job third {
        ACTION { echo "Third thing"  }
      }
      

      (Upper case means that the thing in {} is a shell block, not a data block.)

      But yeah I think we could run this … Definitely something to test out.

      The idea is of course that you can avoid writing a parser, which is not that easy – what people come up with usually has bad interactions with shell!


      I wonder if you can fill in what “First Condition” and “Second condition” are? This seems a bit unrealistic – why not include what you’re actually doing?

      1. 3

        Hey, thanks for your thoughts.

        The real-life examples are a bit more complex, but it might be something like ‘find the correct blog post, check if it has an excerpt in the frontmatter. If it doesn’t, then call GPT-4 to get a summary and add it to the frontmatter.’ Or check the file for any headings without a timestamp, and for any that are missing one, run an AWK script that adds it. Things like that.

        1. 1

          Thanks, that makes sense.

          After seeing the awkwardness of GitHub Actions, I was expecting a connection to Earthly? But maybe this is a more imperative problem with side effects, and Earthly (“cross between Dockerfile and Makefile”) is more about constructing artifacts in parallel?

          i.e. a more functional / dataflow / incremental approach?


          I’d be interested in any feedback on Hay … the examples in the doc should all run, although we are actively working on it as mentioned. I expect there is a bunch of polish to do based on real examples.

          It is an independent open source project, not tied to a cloud like e.g. Deno :-) (I think we need a name for that)

          The idea is of course that you don’t have to write your own parser to make DSLs around shell. Makefiles and Dockerfiles both embed shell.

          The $$$$ problem in Makefiles is “the original sin” IMO – Make is poorly layered on top of shell. With Hay, everything is parsed all at once, you get a big tree.
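
          To make the $$$$ problem concrete, here is a hedged sketch: Make consumes one level of $-escaping before handing the recipe to the shell, so the shell’s $$ (its own PID) has to be written $$$$ (and note the recipe line must start with a tab).

          $ cat Makefile
          # $$$$ -> $$ after Make's expansion; the shell then expands $$ to its PID
          pid:
          	@echo "shell pid is $$$$"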

          It’s very Lisp-y:

          $ cat _tmp/h
          hay define Package/ACTION
          
          Package {
            version = '42.0'
            ACTION {
              echo hi >
            }
          }
          

          If you make a syntax error, it’s caught at parse time, before you push, even though the code is not executed. The block is basically parsed into a closure-like “code object”, or the parser catches the syntax error:

          $ ysh  _tmp/h
                echo hi >
                         ^
          '_tmp/h':6: Invalid token after redirect operator
          
          1. 2

            I’ll take a look. I like your Oilshell work!

            I do use Earthly for some of this stuff, but in the article it was more of a sideways look at what I don’t like about devops tools always using YAML.

    5. 3

      Hey, I wrote this and built the translator. One thing I’ve been thinking about is that we now live in a world where code translation, even using local idioms, seems possible. It takes some effort but is totally doable.

      What could you do with that? Could you bootstrap a programming language by building such a tool and then bootstrapping the libraries required?

    6. 2

      Amir Rajan was a burnt-out C# consultant who tried to change the .NET world through open-source projects and concluded that he had failed. So he took a sabbatical and embraced indie game development.

      And luckily for him, his game was eventually a wild success. He was getting written up in the New York Times and The New Yorker.

      Now, Rajan builds DragonRuby, a game runtime, to help others follow his path.

    7. 6

      Ben is the longest-tenured Stack Overflow employee. He got started at the company because he was a reader of Coding Horror, hung out on Meta, and ended up building a unicorn web service for an early April Fools’ joke. The early story of Stack Overflow, and how Joel and Jeff talked it out on a Skype podcast, was something I wasn’t aware of.

      The other thing that’s interesting, though, is how Jeff and Joel’s efforts to build a dev-first place ended up backfiring in a way. Ben’s opinion was that devs, including him, became sort of assholes to other employees, and even to devs on lesser teams, because they were elevated so much. So is implying that developers at a company are more equal than others a problem? At a company like SO it may literally be the truth, but it can have ramifications on how people interact with each other.

    8. 2

      Yann Collet’s story is crazy. I really loved the interview, thank you.

      One thing that resonates with me a lot: to stay motivated in the long run (years and years) on a dedicated and hard subject, your driver should be deep inside you. It’s not the money that will keep you digging into the little details, gaining a small perf percentage at each step.

      1. 2

        Agreed, that was a big takeaway of mine as well. You need to be in it for the long term and have deep motivations for that.

    9. 7

      Host here. Yann Collet was bored and working as a project manager. So he started working on a game for his old HP 48 graphing calculator.

      Eventually, this hobby led him to revolutionize the field of data compression, releasing LZ4, ZStandard, and Finite State Entropy coders.

      His code ended up everywhere: in games, databases, file systems, and the Linux Kernel because Yann built the world’s fastest compression algorithms.

      And he got started just making a fun game for a graphing calculator he’d had since high school.

      1. 3

        Thanks, it’s a great story. LZ4 really was a game changer (ZSTD is nicer). It was so fast that even a slow CPU wasn’t a bottleneck, so unless you really cared about latency, enabling it probably made things faster. I remember it becoming the default for ZFS and it really was a huge win: things got a bit smaller, but they got faster by the same amount. The total amount of data read or written was reduced and so the disk became less of a bottleneck but, unlike gzip mode, it wasn’t replaced by a bottleneck on the CPU.

        As he mentions, doing less was important, but the big win for LZ4 was the early abort. If you’re writing a zip file or an MPEG stream to disk, LZ4 bails early. It isn’t going to get a good compression ratio on already-compressed (or encrypted) data, so it detects that and bails. In contrast, gzip mode just sat there eating your CPU (and often made things bigger!).
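
        A rough way to see that behaviour from the command line (a sketch; it assumes the lz4 and gzip CLIs, and exact timings will vary):

        $ head -c 100000000 /dev/urandom > random.bin   # already-high-entropy data
        $ time lz4 -f random.bin random.bin.lz4         # bails early: runs near I/O speed, ratio ~1.0
        $ time gzip -kf random.bin                      # grinds the CPU for almost no size win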

        It’s also great to read that even people like Yann suffer from imposter syndrome at times!

        1. 1

          It’s also great to read that even people like Yann suffer from imposter syndrome at times!

          For sure! Going to work as a professional developer for the first time in your 40s at Facebook sounds very intimidating.

          LZ4 bails early

          I didn’t even know about that. Very cool!

        2. 1

          Tangentially, I remember seeing it mentioned in some benchmark of LZO that LZO decompression could occasionally be a little faster than memcpy, because it needed to read fewer bytes from memory.

    10. 18

      I was a NixOS user for a year or so, maybe 4 years ago. I recently learned how containers work at the kernel level, which deepened my understanding of Nix.

      I’m still not sure this is right, but this is how I’ve been thinking about Nix:

      Packaging things is hard on Linux because you have dynamic dependencies. Everything written in C probably loads libc. And it gets worse from there. You have all kinds of dependencies that are loaded dynamically at runtime: some crypto library, pthreads, et cetera. If everything were linked into a static fat binary, it would be easy to package and deploy things onto Linux machines, but that’s not the case.

      So one way to view Docker is as a hack around this issue. You can ship an application easily, and package it up easily, if you put it inside of a box, and inside that box you put an entire Linux file system and all its dependencies.

      A second solution is the one that Nix has, which is to rethink all this. When you build things, be very explicit about what the dependencies are. And when you link them, link them by a hash that’s made of all the inputs; then you don’t have collision problems, and you’ve solved the packaging problem for Linux. But it requires changing how you build programs.
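
      A sketch of the difference (the paths here are illustrative and vary by distro): ldd shows the dynamic dependencies, and on NixOS they resolve into the hash-addressed store.

      $ ldd /bin/ls                           # conventional distro: shared mutable paths
              libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)
      $ ldd /run/current-system/sw/bin/ls     # NixOS: every input pinned by hash
              libc.so.6 => /nix/store/<hash>-glibc-2.37/lib/libc.so.6 (0x...)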

      I’m sure Nix experts on here have a deeper understanding of things. But it’s been a useful mental model for me.

      1. 19

        That’s a pretty good way to put it. I think of it even more in terms of namespaces: In the traditional world, the filesystem is one big shared mutable namespace, and everyone has to agree on what e.g. /usr/lib/libssl.so.1 means. This causes the usual problems with shared mutable state that we know from programming languages, but at the systems level instead. Docker takes that shared mutable namespace and turns it into a private mutable namespace - implementation-wise, it’s even literally a “mount namespace” on Linux. So you get to keep your existing software that cares about /usr/lib/libssl.so.1, but you lose the global coherent view on things. Nix takes that shared mutable namespace and turns it into a shared immutable namespace. Everyone still has to agree on what /nix/store/xzn56dy54k0sdgm4lx98c20r81hq41nl-openssl-3.0.8/lib/libssl.so.3 means, but because of the hash addressing it could really be only one thing. Your existing software now breaks and needs to be rebuilt, but you get to have a globally consistent namespace, which makes the final running system (but not the build system) easier to reason about.
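
        You can poke at the “private mutable namespace” directly with util-linux’s unshare (a sketch; it needs root, and the exact mount options in the output will differ):

        $ sudo unshare --mount sh -c 'mount -t tmpfs none /mnt; grep /mnt /proc/mounts'
        none /mnt tmpfs rw,relatime 0 0
        $ grep /mnt /proc/mounts    # back in the shared namespace: no output, the mount was private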

        In a way, if Docker is the Erlang of software packaging, Nix is the Haskell.

        1. 9

          This is a great read!

          In a way, if Docker is the Erlang of software packaging, Nix is the Haskell.

          Lol, I am stealing this.

        2. 4

          How is Docker Erlang?

          A Dockerfile looks like BASIC to me – no abstraction. Less abstraction than shell.

          Erlang is a functional and concurrent language with Prolog-like syntax. I don’t see the relation to Docker!

          I definitely see Nix and Haskell. Nix is based on lazy expressions, and Haskell is too, although Haskell has a bunch of other stuff like I/O and so forth.

          1. 6

            Dockerfiles indeed offer little to no abstraction. But Docker is more than that. I was thinking more of the runtime view - the way state is isolated into smaller components that run and can fail independently of one another.

            1. 7

              OK I see what you mean, but that “failing independently” is due to the operating system itself – Unix processes, and Linux containers (which fix the “leaks” in processes).

              Let’s not give Docker too much credit, OR too little! It’s really a layer on top of the OS, not the OS itself.

              It should be a small layer, but the implementation is pretty bad and tightly coupled, so it’s a big layer.


              Too much credit: Docker is like Erlang! No, Unix processes are like Erlang – Erlang itself uses the word “process” since the VM is like a little “monoglot” operating system.

              Fault-tolerant Linux clusters (“cattle not pets”) are well known at Google / Facebook these days, but they go back to Inktomi in the 90’s:

              https://www.usenix.org/conference/usenix-1997-annual-technical-conference/inktomi-search-engine

              http://now.cs.berkeley.edu/webos.html

              As a data point, Google was using Linux containers / cgroups in their clusters starting ~2005, almost a decade before Docker launched.

              Docker had literally nothing to do with this. They actually failed to build their own cloud – the company was called “dotCloud” before they pivoted to Docker.


              Too little credit: Docker is just LXC, or a bad version of Solaris Zones or Jails (I’ve heard this a lot). No, I would say the central innovation of Docker, and a very useful one, is LAYERS – and the DSL for specifying layers.

              Layers are important for both storage and networking. They solve the “apt-get sprays files all over the file system” problem, and so you can just use apt packages instead of rewriting your whole build system. (This has advantages and disadvantages, but it’s clearly here to stay)

              FWIW I built containers from scratch with shell scripts, pre-Docker, and Docker definitely adds something. But it’s more about tools and less about the runtime architecture of your software.


              tl;dr Please don’t say Docker is like Erlang :) It’s sort of a category error, because you can build OCI/Docker containers with both Nix and Bazel. Nix has a functional language, and Bazel has a Python-like language.

    11. 2

      Triggering steps based on manually specified globs makes me wince a bit, but it’s unclear what people should do instead. Bazel is the “proper” solution but it’s cruel to recommend that to someone who does not have a build team to look after it. Even once you have Bazel, you need more plumbing to get conditional CI jobs.

      Is there a good middle ground out there?

      1. 2

        This is why Earthly was created: to give a version of proper caching, isolation, and build parallelism in a more approachable manner than Bazel.

        https://earthly.dev/

        If you are working in Python, Pants is also worth a look.

        1. 1

          https://earthly.dev/

          Too hard to compete with GitHub Actions Marketplace at this time. I don’t want to rewrite custom plugins myself.

      2. 1

        Triggering steps based on manually specified globs makes me wince a bit,

        Why? Should the Python linter run when the Go code changes? Or should the TS linter run when the Rust code changes?

        1. 1

          Because it’s something the build tool can calculate for you. Anything worth its salt can tell you exactly which files it used once you have a complete product, so the sensitivity list could be generated automatically. This is a classic strategy for traditional Makefiles when compiling C.
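
          The classic C version of that, sketched as a Makefile (file names are hypothetical, and the recipe line starts with a tab): the compiler records exactly which headers it read, and Make picks that list up on the next run.

          $ cat Makefile
          CFLAGS += -MMD -MP          # compiler writes main.d etc., listing every header actually included
          OBJS = main.o util.o
          app: $(OBJS)
          	$(CC) -o $@ $(OBJS)
          -include $(OBJS:.o=.d)      # the auto-generated "sensitivity list"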

          1. 1

            Which build tool will tell you which files, say, pylint accesses?

    12. 0

      doesn’t everyone else in this space just use containers?

      1. 1

        So our build runner is buildkitd, and it runs containerized but needs to run privileged. But I think the answer is no, everyone else doesn’t just let everyone run arbitrary containers on shared infra. AWS uses Firecracker for Lambda isolation, for instance.

        1. 1

          I could see privileged containers being the line drawn in the sand, but the reasoning regarding container breakouts was interesting.

          I guess I feel like container breakouts from unprivileged containers aren’t something that people typically worry about… perhaps as much as they should?

          I guess I need to try out the service, but a CI service that doesn’t support customer containers seems constrained to me. Maybe I just need to give that some more thought.

          1. 1

            Oh, so this is probably bad communication on my part, but we do allow customers to run their own containers. We just don’t run all the containers together on a shared instance, like a shared Kubernetes cluster or something. Instead each customer is on their own EC2 instance. Containers are fine for packaging, we just have another layer there.

            1. 2

              each customer is on their own EC2 instance

              ah interesting

              yea, the more i read about buildkit and earthfiles the more i was seeing the whole thing together, i think the article might assume a lot of knowledge about what you all already have in place, which might be fine depending on your intended audience

              to me your company is just known as “that company trying to figure out how to get ci pipelines to run the same locally as remotely” which is a very inciting prospect when you’re an engineer working on devops tooling and you’re trying to figure out how to get things to work the same w/ local dev/build as it does in a gitlabci runner, which we run for ourselves w/ k8s clusters. but i’m not exactly sure how gitlab.com provides theirs, which this article would be more analogous to… that’s why i started the thread

    13. 6

      One of the authors here.

      We built a service that executes arbitrary user-submitted code. It’s the thing you’re not supposed to build, but we had to do it, because it’s a cloud build service.

      Running arbitrary code means containers weren’t a good fit (container breakouts happen), so we are spinning EC2 instances up and down. This means we have actual infrastructure as code (i.e. not just piles of Terraform, but Go code running in a service that spins VMs up and down based on API calls).

      The service spins up and down EC2 instances based on user requests and executes user-submitted build scripts inside them. It’s not the standard web service we were used to building, so we thought we’d write it up and share it with anyone interested.

      One cool thing we learned was how quickly you can hibernate and wake up x86 EC2 instances. That ended up being a game-changer for us.
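
      For the curious, the rough shape of it with the AWS CLI (a sketch; hibernation must be enabled at launch, and the IDs here are made up):

      $ aws ec2 run-instances --image-id ami-... --hibernation-options Configured=true ...
      $ aws ec2 stop-instances --instance-ids i-0123456789abcdef0 --hibernate   # RAM is saved to the EBS root volume
      $ aws ec2 start-instances --instance-ids i-0123456789abcdef0              # resumes with memory contents restored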

      Corey and Brandon did the building; I’m mainly just the person who wrote things down, but hopefully people find this interesting.

      The next iteration of this service, using Firecracker VMs, is already under investigation, but this is working surprisingly well.

      1. 2

        One cool thing we learned was how quickly you can Hibernate and wake up x86 EC2 instances. That ended up being a game-changer for us.

        This was my main “huh, wow” moment reading it. Definitely going to be able to make use of that, I think; I had never realised it was possible.

    14. 6

      Host here. This is Ron’s classic story of refusing to let a cancelled project die. They snuck onto Apple Campus, squatted in offices, and worked tirelessly to complete their graphing calculator project. The result: It ended up on millions of devices!

      Oh, and did I mention the secret Apple tablet, and the golden masters, and the fire?

      Here is one of my favorite Ron quotes:

      Ron: Frequently people would come up to me and ask me, “Do you work here?”

      And I’d just tell them no.

      And they’d say, “Oh, that means you’re a contractor.”

      And I’d say, “Actually, no.”

      Then they’d say, “But then who’s paying you?”

      And I’d tell them no one.

      Then they’d ask, “How do you live?”

      I’d say, “I live simply.”

      And then they’d ask me, “What are you doing here?” And I’d give them a demo.

    15. 3

      Yeah, I did it for a year ’cause I was desperate for money. That was enough. I won’t do it again, I’d rather die.

      1. 1

        What was the job? Can you share more?

        1. 1

          Backend dev for a useless company that was slowly turning dark pattern-y due to VCs having other fish to fry.

          They employ a bunch of people, and some of those people even do pretty good programming work, but the whole thing would not exist without the central planners. The CEO is friends with people who hand out money, and the game they are playing is trying to be the interface users interact with for some class of purchasing decision. This, of course, is control over users, and in the end that’s all the money wants.

    16. 11

      I worked for a very large local startup on the data science team, and it was complete BS. Top management wanted to tell the investors that “we use machine learning for decision making”, and then later the team started also serving the marketing department for publicity stunts.

      In a team of 30 data scientists, I think maybe 2 or 3 were enough for all real needs.

      1. 8

        Oh, fake AI / ML for the investors. That is probably happening a lot right now, I’m guessing.

        1. 4

          The way I’ve seen it happening is that you take some open-source libraries and Huggingface models, stick their outputs in a database with no testing or validation, and call it “proprietary AI”.

        2. 2

          That and crypto.

      2. 4

        I’ve worked adjacent to one of these BS data science teams, too. Several years would go by without anything they did making it to production. Mostly we didn’t even know what they were doing. Occasionally we’d see some kind of crazy WebGL visualization that was in no way practical to implement, let alone understandable to users. But the presentation would come with fancy terms like “Louvain” and would impress people who didn’t know better.

        The worst part was they’d present to the whole company at a Friday call, and then Sales would be asking the real engineering team “when can we have it” and we’d have to explain how a canned demo with cherry-picked data didn’t just translate into a production system that was useful to customers.

        The guy who got paid to create graphics with no deliverables, no timeline, and no responsibility for getting them into production or maintaining them - it was a bullshit job, but I think he was enjoying it.

        1. 3

          TIL that Belgian city names are fancy. “Bruxelles”, “Namur”, “Gand” if needed for the next demo. “Sibret” is quite something if you want fancy :D

          1. 1

            They are fancy. Where I live in the US there’s a chain of waffle places where each one is named after a Belgian town.

        2. 3

          The job was fun at times because the whole team was very smart. But certainly not fulfilling.

          We had a pricing model. I wrote a linear model and it fitted the data (roughly). It had economic behavioral equations underneath, to avoid nonsense. The next version was a very complicated model based on embeddings derived from boosted trees. It required an immense infrastructure to version the model and datasets, and to serve and deploy it. The weights alone were 2 GB; the dataset was 40 MB. Basically the same performance as the linear model, but orders of magnitude more flashy and expensive.

          This was what work looked like week after week.

          The company is now laying off thousands of people.

          1. 2

            If your dataset is 40 MB then a 2+ GB model is gonna have a hard time telling you anything new!

            1. 2

              And it will also give nonsensical results by not incorporating spatial correlations explicitly; we could even demonstrate the mathematical atrocity, which was so easily avoidable with a little bit of linear algebra.

              I mean, it was clearly a spatial econometrics job, but who gives a fuck about a team member who is an economist that can code and understands the domain model.

              But guess how many data scientists are actually capable of following a standard econometrics conference paper.

    17. 14

      I would hope developers building SaaS that is mostly sold to companies that build SaaS that is mostly sold to companies that build SaaS are contemplating this.

      1. 18

        Most everything has a supply chain, and usually each item in the supply chain has its own supply chain. We don’t say that the companies that make the screws that are used in the valves that are used in the diggers that mine the sand that goes through a refinement system that goes into a fab to create a chip that goes into your laptop [0] are doing a bullshit job.

        SaaS has value, why is its supply chain necessarily bullshit?

        [0] Or whatever the actual supply chain for a CPU is

      2. 3

        Maybe crypto infra would be a bit like this as well. Your software is used, but maybe you are selling pitchforks to a pointless industry.

    18. 2

      I worked for a game studio where the publisher seemed to undermine the whole project at every turn. Games aren’t a pointless endeavour, but when you know it will be a sub-standard game and very few people will ever play it, that kind of qualifies.

      The reality is, as programmers we get to pick and choose a bit more than most, as we are in demand and there are not enough of us to go around. Ask someone that works at McDonald’s or someone who works for a failing advertising company and you might get some good examples. Think about people that stand in the street in front of fast food places holding a sign and wearing a mascot uniform. I never end up in a position where I am forced to accept such a job.

      1. 8

        I don’t think holding a sandwich board ad qualifies as “bullshit” under the definition in the book. The book distinguishes “shit jobs” from “bullshit jobs.” Being out in the sun all day is a shit job, but you really are advertising the store out there. The bullshit job would be working on hashtags for the marketing campaign for the store. No normal human being will ever care what the hashtag for the store is, but someone in marketing has to think it up, so they can say they’ve covered social media in the campaign.

        1. 5

          …you really are advertising the store out there.

          I don’t really consider advertising a fast food restaurant with a mascot to contribute to the world. I understand this is debatable and some other people might have a different view. But the restaurant could have a sign without wasting man-hours getting someone to hold it, and most fast food places have little benefit to society. Don’t get me wrong, I enjoy food and don’t mind if it is cheap and made quickly, but seriously, some of the crap they push is barely food. And don’t even get me started on the franchise system and how it ruins everything it touches. End rant.

      2. 4

        I’ve worked on Game Boy Advance games. Publishing was full of bullshit there. I’ve seen graphically-rich games get canned only because Nintendo would not let them use a large-enough cartridge. OTOH, half-assed rushed games got made because the studio bought the rights to make a movie tie-in game.

        1. 1

          Ah, movie tie-in games. Some of the very worst to ever actually ship. I was just watching a video about a game based on An American Tail, where you wander as Fievel and deal with janky physics / platforming.

          Conversely (slightly): I had played the original 7-Up tie-in game on the Sega Genesis, and it was considered decent.

          1. 3

            Chex Quest was a pretty fun DOOM mod given out for free in boxes of cereal. Way better than it had any right to be.

      3. 2

        The reality is as programmers we get to pick and choose a bit more than most as we are in demand and there is not enough of us to go around.

        Except the reason that we’re in demand is that a lot of the time, we’re creating pointless work for each other. Think of the amount of effort that goes into keeping up with breaking API / library versions, security issues, etc.

      4. 2

        I worked for a game studio where the publisher seemed to undermine the whole project at every turn. Games aren’t a pointless endeavour but when you know it will be a sub-standard game and very few people will ever play it that kind of qualifies.

        That’s close, I think.

        I’d love it if no dev ever had a BS job, but I suspect that’s not the case. Someone mentioned to me that a lot of working at Google is just moving data from one proto to another over and over again, and that seems close to BS, I suppose.

    19. 1

      When I run Docker, it’s running a Linux process on my Mac. That’s not just a matter of chroot.

      1. 4

        No. But what if I told you it was “just” a chroot in a Linux VM?

        1. 1

          Exactly my thinking. And no one has commented on my footnote on Mac-native ‘containers’, but macOS supports chroot, so it’s totally possible to have native Mac containers. They would simulate Linux prod less well, but would be super low overhead.
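
          Something like this minimal sketch is what I mean (the macroot layout is hypothetical; it needs root, and you’d have to copy in every dylib the binaries link against, which the dyld shared cache makes fiddly on recent macOS):

          $ mkdir -p macroot/bin macroot/usr/lib
          $ cp /bin/sh macroot/bin/     # plus the libraries it links against
          $ sudo chroot macroot /bin/sh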

          1. 3

            chroot is not sufficient for shared-kernel virtualization. It’s worth reading the Jails and Zones papers (or watching Bryan Cantrill’s excellent Papers We Love talk about them).

            In your examples, root does the chroot, but your chroot does not contain /etc/passwd and su, and so you cannot drop privileges. A root user inside a chroot can use the mknod or mount system calls to create or mount device nodes and then get complete read-write access to the raw device underlying the disk, and so on. Shared-kernel virtualisation needs to constrain root. There are also some non-filesystem namespaces, such as SysV IPC, network ports, and so on, that also need to be constrained (Bryan points out that Jails left this to future work; Zones then did it. Jails then gained VNET support, allowing a completely separate instance of the network stack for the jail, which improved performance by removing contention on some network stack structures).
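
            For example (a hedged sketch; the device numbers are the conventional ones for the first SCSI/SATA disk on Linux, and the file names are arbitrary), root inside the chroot can simply recreate the disk’s device node and read it raw:

            # inside the chroot, as root
            $ mknod disk0 b 8 0            # block device, major 8 minor 0: the whole first disk
            $ dd if=disk0 of=stolen.img    # raw read of the underlying disk, chroot notwithstanding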

            On XNU, the sandbox framework probably could be used to constrain some of these things.

            Note: I am not using the term ‘containers’ here, because containers are a packaging and distribution model and do not necessarily imply shared-kernel virtualisation; containers can also run in separate VMs or be deployed with no isolation at all.

            1. 2

              Maybe we are talking about different things. I was just suggesting that many people use a docker-compose on a Mac to start up a bunch of deps. This ends up involving a Linux VM that is hidden from sight. But you could do this with an image format that you start up with chroot, which contains the native Mac deps. It wouldn’t be a container exactly, but a distribution framework and a way to start up Mac-native versions of things. There are downsides to it, but also pluses, and it is possible.

              (There is also sandbox-exec, which might be useful for, you know, actual sandboxing.)

              1. 1

                Please re-read the last paragraph of my post. The problem with a lot of your article is that you are conflating shared-kernel virtualisation (a family of techniques for building isolated namespaces for processes on a single kernel) with containers (a packaging and distribution model that depends on some isolation model that provides an isolated namespace).

                Because the main use case for Docker on macOS is to provide a development environment for people deploying things on Linux servers, it uses a port of FreeBSD’s bhyve to run a Linux VM as the isolation mechanism. Docker and containerd on Windows can use Hyper-V in a similar way to run both Windows and Linux containers and can also isolate Windows containers with shared-kernel virtualisation.

                You could provide isolation on macOS with the sandbox framework, but this does not allow namespace isolation. Chroot provides a subset of this. Please read the two papers that I mentioned or watch Bryan’s talk about them to understand what is missing.

                Kernels like Mach and Zircon completely elide a global namespace from their core abstractions and so are trivial to build shared-kernel virtualisation on top of, because ‘global’ namespaces are all things that are introduced by a rendezvous server that is provided to new processes on process creation. On traditional UNIX kernels it requires extra indirection.

                1. 2

                  You could provide isolation on macOS with the sandbox framework, but this does not allow namespace isolation.

                  I understand this, and I’ve seen the talk. My article didn’t use namespaces and cgroups on purpose, both because that has been done before and because I was trying to give an intuition to people using the simplest starting block. The first sentence and the whole intro say it’s a simplification, but one I find useful.

                  Note: I am not using the term ‘containers’ here, because containers are a packaging and distribution model

                  Images and the registry standard are the distribution model, in my thinking; containers are a running instance of them. But anyhow, I don’t think the semantics of the terms are important here.

                  I’m certain that you know more about shared-kernel virtualization than me. I’m not questioning that. My article was just about “Hey, a process running on the same machine is a useful way for people unfamiliar with how containers work to think about them”. In my thinking, however runc is implemented matters not for that. The intended audience and level of rigor might just be very different than you were expecting, and I am sorry if it contains inaccuracies or is loose with terminology.

        2. 1

          The fact that you can seamlessly move from “basically chroot” to a full-blown VM without noticing is a qualitative difference, don’t you think?

          1. 1

            Was kind of tongue in cheek. Also: alias chrooot=docker run. Problem solved.

      2. 1

        When you run Docker on macOS, to my knowledge, you run a VM (xhyve, a port of FreeBSD’s bhyve to macOS) in which “the actual stuff” happens. Or did that change?

        1. 1

          I haven’t dug into this too closely because of the risk of barfing all over my keyboard when I find out how it actually works, but it would appear that when you run Docker on an M1 Mac, not only do you have xhyve, you also have QEMU to emulate x86-64 so that[*] all the random images around the net work on your machine. There’s probably bubble gum and duct tape in there too.

          [*] not all. Some of them die with segmentation faults, apparently because the CPU that QEMU is emulating is not the same variety of CPU that J Random Docker Image Publisher used to compile his image, and some instructions aren’t emulated.

    20. 5

      “It’s just processes!” That’s what I used to say a few years ago when people were comparing containers to VMs (all the reproducibility rage was on Vagrant back then (remember? ;))

      So this article is already getting a bit old, although it’s a great article that I would have loved to read three (or more) years ago!

      But the real reason I’m commenting is that, as already pointed out by others, containers are not “just chrooted processes” anymore. It’s all the rest that comes along (plus the need to increase server density). Actually it’s so much “all the rest” that the same cloud density can be achieved using VMs now (see https://github.com/firecracker-microvm/firecracker-containerd for example), and yet we still keep the container idea.

      1. 8

        Hey, thanks for reading!

        I’m the author and plead guilty to writing a title that is a bit of a stretch. I think if I had to rename the article it would be something like “How chroot helped me understand containers” because I get that namespaces and cgroups and unionfs exist, but it really was a breakthrough in understanding for me to think of containers as a chrooted process. My goal is to share that understanding and perhaps, in a future article, layer on other things.

        1. 5

          Nothing to be guilty of, the article was great, whatever the title. My point mainly is that containers are evolving, and it might be that “containers are just Linux processes” is soon to be not true anymore (at least not for everything). With firecracker-containerd they are VMs, and with a wasm shim they are Wasm module instances running in a Wasm VM, but the vast majority is still good old processes behind execve+flags/chroot syscalls. For how long is the question.

          And I think you achieved your goal of sharing good stuff :) thanks for writing!

          1. 3

            Firecracker is on my to-look-into list!

            But also, truly thanks for the compliment on the article. I was starting to dread the feedback on this one. Receiving critical feedback is not my strength.

      2. 4

        Container is a very overloaded term. It is used to describe both a packaging and deployment system, and an isolation mechanism. Most containers are run by containerd these days, and it has various pluggable back ends. You can use runc to run them isolated on Linux using namespaces, cgroups, and seccomp-bpf, or runj to run them isolated on FreeBSD using jails, but you can also deploy them in separate VMs on Hyper-V or various different KVM-based systems (including the Firecracker one that you link to). There’s even a back end for OpenBSD’s hypervisor.

        It’s a shame that ‘containers’ is often used to mean shared-kernel isolation, because the packaging, distribution, and deployment models that the term encompasses are far more interesting and influential parts of the ecosystem. FreeBSD lost a big chunk of the server market by missing this and thinking it was just about jails.