I was curious about apt-cacher-ng and dug in a little, and ended up writing up what I found along the way. This is the sort of post a (recently past) me would have liked to read, so I thought I’d share it here too in case there are other folks in a similar position.
apt-cacher-ng is a godsend when you’re building packages with sbuild - each build starts from a minimal chroot, so apt-cacher-ng keeps you from downloading the world from the mirrors on every build. Even on gigabit internet, it makes a huge difference just from the near-zero latency.
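If your builds don’t auto-detect the proxy, one common way to wire this up is to point APT inside the chroot at the cacher. This is a minimal sketch; 3142 is apt-cacher-ng’s default port, but the localhost address and the config file name are assumptions - adjust for your setup:

```shell
# Point APT at a local apt-cacher-ng instance (3142 is its default port).
# "localhost" and the "01acng-proxy" file name are assumptions - use the
# cacher's real address on your network.
echo 'Acquire::http::Proxy "http://localhost:3142";' \
    | sudo tee /etc/apt/apt.conf.d/01acng-proxy
```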
If you build with debootstrap, you can pass --include=foo to add package foo to the base image. On Ubuntu, the ubuntu-dev-tools package ships a script called mk-sbuild, which has a --debootstrap-include argument for that (it passes the value through to debootstrap).
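For illustration, a sketch of both invocations (the package names, release, and paths are placeholders, not anything from the original post):

```shell
# Plain debootstrap: bake eatmydata and ccache into the base image.
sudo debootstrap --include=eatmydata,ccache jammy /srv/chroot/jammy-amd64 \
    http://archive.ubuntu.com/ubuntu

# mk-sbuild (from ubuntu-dev-tools) forwards the list to debootstrap:
mk-sbuild --debootstrap-include=eatmydata,ccache jammy
```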
I found that including more packages can actually be slower if they’re likely to need upgrading (which happens a lot for me, since I develop these packages or work alongside them), but if you’re on a released series it should be fine.
I prefer my sbuild chroot templates to have as few packages as possible, so I can catch missing Build-Depends quicker. Installing the B-Ds is quick enough with apt-cacher-ng that the speed gain of baking them into the template isn’t really worth it to me.
Most binary packaging systems I have used include checksums for every package. I have idly wondered how useful it would be to calculate an IPFS hash as that checksum to enable fetching packages from a local cache, a remote cache, or a centralised server as is conventional.
NB: I work for Canonical. (lobsters hats are tricky since I’m not speaking in an official capacity)
This post is a bit of a grab bag of ideas and comments.
On the cacher side, there’s also squid-deb-proxy in addition to apt-cacher-ng. There’s a weird, still-unexplained interaction between Ubuntu’s apt and apt-cacher-ng where apt-cacher-ng stops answering requests (pipelining, maybe?); it doesn’t happen with squid-deb-proxy. apt-cacher-ng upstream says there’s no issue on their side, but given the state of the codebase and the fact that I definitely experience the problem, I’m doubtful. That brings me to one thing I like about squid-deb-proxy: it builds on squid, software dedicated to caching.
On the client side, you have at least two ways to detect a proxy dynamically: auto-apt-proxy, which pokes your gateway and a few other machines to see if a proxy is running there, and squid-deb-proxy-client’s apt-avahi-discover, which uses mDNS (through which squid-deb-proxy announces its presence).
I appreciate that I can simply install auto-apt-proxy and it’ll pick up my cacher - no need to touch files myself. One thing that has bitten me, however, is that in some multi-stage container creation setups, the cacher address is computed on the host and used as-is in the container, where networking can be different (especially if the cacher was on localhost).
I should also mention that squid-deb-proxy has some configuration choices that can be surprising - in particular, PPAs are not allowed by default. Fortunately that’s easy to change in /etc/squid-deb-proxy/mirror-dstdomain.acl.d/10-default (I’d like to get the default changed, but I need to remember to and find the time). It also seems to incur a ~30s shutdown penalty (my first guess would be a timeout to drain clients).
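As a sketch of that tweak, assuming the stock ACL layout where allowed mirror domains are listed one per line (the exact domain to add is my assumption, not from the post):

```shell
# Hypothetical: allow Launchpad PPAs through squid-deb-proxy by appending the
# domain to the destination-domain ACL, then restarting the proxy.
echo 'ppa.launchpad.net' \
    | sudo tee -a /etc/squid-deb-proxy/mirror-dstdomain.acl.d/10-default
sudo systemctl restart squid-deb-proxy
```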
Something I’ve found useful is using the Proxy-Auto-Detect directive in combination with a script that determines if my machine is on a network I control that has a proxy configured.
My example is here. When making requests, apt runs that script, which uses netcat with a short timeout to check whether http://proxy:3128 resolves and accepts connections.
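A minimal sketch of such a detect script, based only on the description above (the proxy host/port defaults and the function name are assumptions): APT runs the command and expects it to print the proxy URL on stdout, or DIRECT to mean no proxy.

```shell
#!/bin/sh
# Hypothetical detect script for APT's Acquire::http::Proxy-Auto-Detect.
# Prints the proxy URL if the host accepts connections, "DIRECT" otherwise.
detect_proxy() {
    host="${1:-proxy}"
    port="${2:-3128}"
    # -z: just probe the port without sending data; -w 1: give up after 1s.
    if nc -z -w 1 "$host" "$port" 2>/dev/null; then
        echo "http://$host:$port"
    else
        echo "DIRECT"
    fi
}
detect_proxy "$@"
```

It would be hooked up with something like Acquire::http::Proxy-Auto-Detect "/usr/local/bin/apt-proxy-detect"; in an apt.conf.d snippet (the script path is a placeholder).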
Good timing, I need this.
Very useful info, thanks! I’ll definitely look into auto-apt-proxy.