1. 16
  1. 12

    So we set out to make a simple and fast inner dev loop. The core shift that needed to be made was from the fully integrated environment of Onebox (many services), toward an isolated environment where only one service and its tests would run. This new isolated environment would be run on developer laptops . . . We decided to run service code natively on MacOS without containers or VMs . . .

    Yes! Yes! Amazing!

    Having developers run [separate processes for datastores, etc.] manually would be very cumbersome . . . we needed a tool that would orchestrate these checks and manage the necessary processes with declarative configuration. For this we decided to use Tilt.

    Agh! So close! 😇 A service should be runnable and testable in isolation without runtime dependencies. Requisite dependencies like datastores should have mock versions that can be enabled with, e.g., a -dev mode flag. As described in the article, it seems like they just replaced one local service orchestration tool (Onebox) with another (Tilt).

    1. 11

      I’d even argue that in corporate environments it’s ā€œbetterā€ to have the default work immediately on a development environment without any fiddling around, and then have to do a little dance if you want to shove the thing in production.

      Deploying from scratch to prod is gonna happen less often than onboarding new devs. At least in my day to day this holds.

      1. 1

        Requisite dependencies like datastores should have mock versions that can be enabled with, e.g., a -dev mode flag.

        How do you iteratively develop the code for production datastores then?

        1. 1

          Usually — hopefully! — you can abstract the concrete data store behind an interface that can be mocked. But if your service is tightly coupled to the DB, then you gotta think of them as a single thing, and yeah probably can’t avoid running an instance locally.
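          A minimal sketch of that abstraction (all names here — `UserStore`, `make_store`, the Postgres DSN — are hypothetical): the service talks to a small interface, and a dev-mode flag swaps in an in-memory fake instead of the real datastore.

```python
# Hypothetical sketch: hide the concrete datastore behind an interface so the
# service can run in isolation with an in-memory fake when a -dev flag is set.
from abc import ABC, abstractmethod


class UserStore(ABC):
    @abstractmethod
    def get(self, user_id): ...

    @abstractmethod
    def put(self, user_id, record): ...


class InMemoryUserStore(UserStore):
    """Fake used in the isolated dev loop; no external process needed."""

    def __init__(self):
        self._rows = {}

    def get(self, user_id):
        return self._rows.get(user_id)

    def put(self, user_id, record):
        self._rows[user_id] = record


class PostgresUserStore(UserStore):
    """Real implementation, sketched only; constructed outside dev mode."""

    def __init__(self, dsn):
        self._dsn = dsn  # a real connection would be opened here

    def get(self, user_id):
        raise NotImplementedError("real query goes here")

    def put(self, user_id, record):
        raise NotImplementedError("real write goes here")


def make_store(dev_mode: bool) -> UserStore:
    """The -dev flag picks the fake; production wiring picks the real store."""
    return InMemoryUserStore() if dev_mode else PostgresUserStore("postgres://...")


store = make_store(dev_mode=True)
store.put("u1", {"name": "ada"})
```

          With this shape, the inner dev loop and the tests exercise `InMemoryUserStore` with zero external processes; only production wiring ever constructs the real store.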

      2. 3

        That’s great. Developing in a VM or on a remote machine is — for someone who doesn’t use a terminal-based text editor — death by a thousand cuts. Being able to run code locally has always been important for me, whether that was PHP, Node, or Rust.

        One thing that I find funny: I’ve ended up being a maintainer of cargo-deb that builds Debian packages, and I develop it on macOS.

        1. 1

          Developing in a VM or on a remote machine is — for someone who doesn’t use a terminal-based text editor — death by a thousand cuts.

          VS Code and IntelliJ can work with remote projects and SDKs. AFAIK Emacs can be attached remotely using its protocol.

          1. 2

            That seems like only a small specialized solution. I presume it wouldn’t work with my terminal application (I’d rather not be limited to a toy terminal inside the text editor). I have a GUI git client that I like to use, and it’s usually not fast over NFS. Plus I need access to SSH keys, VPNs, which are usually fiddly to set up and pass through safely. I may need to browse data files or images my software generates, etc. It’s all solvable, but a hassle.

        2. 2

          I find it hard to believe that all their services are completely portable between their Linux servers and their macOS dev machines. But maybe I’m just too accustomed to C++ and programming to the syscall interface.

          1. 1

            I’m accustomed to C++ and programming to the syscall interface and it’s quite rare for me to have problems porting code between macOS and Linux. Between macOS / Linux / *BSD and Windows is another matter, but between two POSIX platforms you have to be doing something pretty unusual for it to matter. For example, the sandboxing frameworks and hypervisor interfaces are different and they spell futex differently so I often need a small platform layer, but it’s a tiny fraction of the code.

            1. 1

              Yeah, depends on what you’re doing for sure. In my work I directly depend on Linux-only APIs like epoll (though I’d prefer kqueue), memfd sealing, io_uring, etc., so portability isn’t something I either try for or could easily achieve at this point. POSIX compatibility simply isn’t worth it, since our product is only expected to ever run on Linux.

              1. 2

                epoll vs kqueue isn’t as big a deal as it used to be. There’s a libkqueue for Linux and a libepoll for everything else that implement one interface in terms of the other.
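                As a small illustration of that shim idea (not libkqueue/libepoll themselves): Python’s `select` module exposes `epoll` on Linux and `kqueue` on macOS/*BSD, so a thin wrapper can present one interface over both. The function name below is made up; only the Linux branch is exercised here.

```python
# Sketch of a tiny portability shim over epoll (Linux) vs kqueue (macOS/*BSD).
import os
import select


def wait_readable(fd, timeout=1.0):
    """Block until fd is readable, using whichever poller the platform has."""
    if hasattr(select, "epoll"):  # Linux
        ep = select.epoll()
        ep.register(fd, select.EPOLLIN)
        events = ep.poll(timeout)
        ep.close()
        return any(ev & select.EPOLLIN for _, ev in events)
    elif hasattr(select, "kqueue"):  # macOS / *BSD
        kq = select.kqueue()
        change = select.kevent(fd, select.KQ_FILTER_READ, select.KQ_EV_ADD)
        ready = kq.control([change], 1, timeout)
        kq.close()
        return bool(ready)
    raise OSError("no epoll or kqueue on this platform")


# A pipe with data is readable; an empty one times out.
r_full, w_full = os.pipe()
os.write(w_full, b"x")
r_empty, w_empty = os.pipe()
```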

                Memfd sealing is a mechanism created to introduce security vulnerabilities because it depends entirely on getting the error handling (which is tested only when under attack) correct. We proposed an alternative mechanism when it was introduced where you could request a snapshot mapping and explicitly pull updates. If everyone followed the rules, this is nice and fast (no CoW faults) but if someone doesn’t then it falls back to copying. No need for applications to handle the error condition because the attack is not observable.

                io_uring is starting to look very nice. Hopefully other kernels will pick it up soon, since it seems to be stabilising. It may be a bit late though. For anything high-performance, things like DPDK / SPDK seem to be the future and the intersection of ā€˜need very high performance’ and ā€˜are happy to have the kernel in the fast path’ is shrinking.

                I’m curious what the product is.

                1. 1

                  I don’t do networking, but my impression from reading Cloudflare blog posts etc. was that eBPF has largely removed the need for userspace network stacks in Linux.

                  Not sure I can talk about the product yet; it’s in closed beta but should be public beta relatively soon.