So we set out to make a simple and fast inner dev loop. The core shift that needed to be made was from the fully integrated environment of Onebox (many services), toward an isolated environment where only one service and its tests would run. This new isolated environment would be run on developer laptops . . . We decided to run service code natively on MacOS without containers or VMs . . .
Yes! Yes! Amazing!
Having developers run [separate processes for datastores, etc.] manually would be very cumbersome . . . we needed a tool that would orchestrate these checks and manage the necessary processes with declarative configuration. For this we decided to use Tilt.
Agh! So close! A service should be runnable and testable in isolation, without runtime dependencies. Required dependencies such as datastores should have mock versions, enabled with something like a -dev mode flag. As described in the article, it seems like they just replaced one local service orchestration tool (Onebox) with another (Tilt).
I'd even argue that in corporate environments it's "better" to have the default work immediately in a development environment without any fiddling around, and then have to do a little dance if you want to shove the thing into production.
Deploying from scratch to prod is gonna happen less often than onboarding new devs. At least in my day to day this holds.
How do I iteratively develop the code for production datastores, then?

Usually (hopefully!) you can abstract the concrete data store behind an interface that can be mocked. But if your service is tightly coupled to the DB, then you gotta think of them as a single thing, and yeah, you probably can't avoid running an instance locally.
That's great. Developing in a VM or on a remote machine is, for someone who doesn't use a terminal-based text editor, death by a thousand cuts. Being able to run code locally has always been important for me, whether that was PHP, Node, or Rust.
One thing that I find funny: I've ended up being a maintainer of cargo-deb, which builds Debian packages, and I develop it on macOS.
VS Code and IntelliJ can work with remote projects and SDKs. AFAIK Emacs can be attached remotely using its protocol.

That seems like only a small, specialized solution. I presume it wouldn't work with my terminal application (I'd rather not be limited to a toy terminal inside the text editor). I have a GUI git client that I like to use, and it's usually not fast over NFS. Plus I need access to SSH keys and VPNs, which are usually fiddly to set up and pass through safely. I may need to browse data files or images my software generates, etc. It's all solvable, but it's a hassle.
I find it hard to believe that all their services are completely portable between their Linux servers and their macOS dev machines. But maybe I'm just too accustomed to C++ and programming to the syscall interface.
I'm accustomed to C++ and programming to the syscall interface, and it's quite rare for me to have problems porting code between macOS and Linux. Between macOS / Linux / *BSD and Windows is another matter, but between two POSIX platforms you have to be doing something pretty unusual for it to matter. For example, the sandboxing frameworks and hypervisor interfaces are different, and they spell futex differently, so I often need a small platform layer, but it's a tiny fraction of the code.
Yeah, it depends on what you're doing, for sure. In my work I directly depend on Linux-only APIs like epoll (though I'd prefer kqueue), memfd sealing, io_uring, etc., so portability isn't something I either try for or could easily achieve at this point. POSIX compatibility simply isn't worth it, since our product is only ever expected to run on Linux.
epoll vs kqueue isn't as big a deal as it used to be. There's a libkqueue for Linux and a libepoll for everything else that implement one interface in terms of the other.
Memfd sealing is a mechanism created to introduce security vulnerabilities, because it depends entirely on getting the error handling (which is tested only when under attack) correct. We proposed an alternative mechanism when it was introduced, where you could request a snapshot mapping and explicitly pull updates. If everyone follows the rules, this is nice and fast (no CoW faults), but if someone doesn't, then it falls back to copying. There's no need for applications to handle the error condition, because the attack is not observable.
io_uring is starting to look very nice. Hopefully other kernels will pick it up soon, since it seems to be stabilising. It may be a bit late, though. For anything high-performance, things like DPDK / SPDK seem to be the future, and the intersection of "need very high performance" and "are happy to have the kernel in the fast path" is shrinking.
I don't do networking, but my impression from reading Cloudflare blog posts etc. was that eBPF has largely removed the need for userspace network stacks on Linux.
I'm curious what the product is.

Not sure I can talk about the product yet; it's in closed beta, but it should be in public beta relatively soon.