That package caching trick is super fragile: it will break on any package that has postinst scripts, fires dpkg triggers, and so on. It would be safer (though slower) to cache the downloaded .debs and install them every time.
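For GitHub Actions specifically, one way to do that is to point apt’s archive directory at a cached path so only the downloads are skipped while dpkg still runs normally. A minimal sketch (the `ci/packages.txt` list and cache path are made up, and the extra `-o` options are there because some runner images are configured to delete downloaded .debs):

```yaml
- name: Restore cached .debs
  uses: actions/cache@v4
  with:
    path: ~/apt-archives
    key: debs-${{ runner.os }}-${{ hashFiles('ci/packages.txt') }}

- name: Install packages (reusing cached .debs when present)
  run: |
    sudo apt-get update
    # apt requires the "partial" subdirectory to exist
    mkdir -p "$HOME/apt-archives/partial"
    # Dir::Cache::archives redirects where apt stores/looks for .deb files,
    # so a cache hit skips the downloads but still installs from scratch.
    sudo apt-get install -y \
      -o Dir::Cache::archives="$HOME/apt-archives" \
      -o APT::Keep-Downloaded-Packages=true \
      $(cat ci/packages.txt)
```

This trades download time for install time, which is usually the right trade on hosted runners with fast disks and comparatively slow mirrors.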
It’s a shame GitHub Actions / Azure Pipelines (they’re the same service with marginally different YAML configs) does not provide a way of snapshotting the CI VM. I have a small project where installing the dependencies accounts for around 75% of total CI time. I’d love to snapshot the VM (or, ideally, just the filesystem used for builds) at that point and start the next run from the snapshot. Or test incremental builds: snapshot after every successful build, apply the new patch to a copy of that snapshot, and build from there.
I’m aware of one company that built an in-house FreeBSD CI system using ZFS snapshots like this. Roughly, each successful build was snapshotted with the git hash as the ZFS snapshot name. The trigger for a new build walked up the history until it found a snapshot that existed in the history of the PR, then cloned that snapshot and updated the source tree to the current commit. A clean build took a couple of hours, but the normal incremental build time was under a minute, so the CI machines could spend most of their time running tests rather than rebuilding the same files. I believe they also had a nightly job that did a clean build, which caught the occasional issue this scheme introduced without requiring a complete rebuild on every PR.
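The core of that scheme is just “find the newest ancestor commit that has a snapshot”. A rough sketch of that lookup in Python; the function name is invented for illustration, and the actual zfs/git invocations are left as comments since they depend on the pool layout:

```python
def newest_snapshotted_ancestor(history, snapshotted):
    """Find the base build to clone.

    `history` is the PR's commit hashes, newest first (what
    `git rev-list HEAD` would print); `snapshotted` is the set of
    commit hashes that have a ZFS snapshot, i.e. a previous
    successful build.  Returns the first hit, or None if only a
    clean build is possible.
    """
    for commit in history:
        if commit in snapshotted:
            return commit
    return None


# The surrounding CI job would then do, roughly:
#   zfs clone pool/builds@<base> pool/pr-<head>   # copy-on-write, near-instant
#   git -C /pr-<head> checkout <head>             # update sources in the clone
#   make -j$(nproc)                               # incremental build
#   zfs snapshot pool/pr-<head>@<head>            # record the successful build
```

Because ZFS clones are copy-on-write, dozens of per-PR build trees can share the blocks of the base snapshot, so the storage cost stays proportional to what actually changed.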
For accelerating the build itself, a few big companies have in-house build systems that use cloud storage to cache all of the intermediate build results. I’d love to see something like this integrated into CMake: automatically cache every build output and skip rebuilding anything whose sources haven’t changed since the last cached result.
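A narrower version of this exists today as a bolt-on rather than a CMake built-in: compiler-launcher caches such as ccache or sccache (the latter can use cloud storage like S3 as its backing store, shared across CI machines) cache compile steps, though not links or arbitrary build outputs. Hooking one in is a couple of lines in the top-level CMakeLists.txt:

```cmake
# Wrap every compiler invocation in sccache; it hashes the preprocessed
# source plus flags and fetches the object file from the cache instead of
# recompiling when nothing relevant has changed.
find_program(SCCACHE_PROGRAM sccache)
if(SCCACHE_PROGRAM)
  set(CMAKE_C_COMPILER_LAUNCHER "${SCCACHE_PROGRAM}")
  set(CMAKE_CXX_COMPILER_LAUNCHER "${SCCACHE_PROGRAM}")
endif()
```

The cache backend is selected through environment variables (e.g. SCCACHE_BUCKET for S3), so the same configuration works locally with an on-disk cache and in CI with a shared cloud one.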