It’s funny to mention Docker caching, because I’ve found it to be pretty miserable. That’s mostly down to the underlying tooling, but since you can only really describe linear builds in a Dockerfile, it’s hard to do anything fancier without ending up using some other caching mechanism within the Dockerfile (or using some other tool to build your containers entirely).
I agree that it’s tough to describe/cache arbitrary DAGs. Luckily, it’s not impossible: here’s a proof of concept [0] and an idea that could be implemented [1].
[0] https://matt-rickard.com/building-a-new-dockerfile-frontend/
[1] https://matt-rickard.com/request-for-project-typescript-docker-construct/
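For what it’s worth, the “other caching mechanism within the Dockerfile” usually ends up being a BuildKit cache mount, which keeps a persistent cache across rebuilds even when the layer cache gets invalidated. A minimal sketch (hypothetical Dockerfile, assumes BuildKit is enabled; the apt example is just illustrative):

    # syntax=docker/dockerfile:1
    FROM debian:bookworm
    # The base image normally deletes downloaded .debs; keep them so the cache mount is useful.
    RUN rm -f /etc/apt/apt.conf.d/docker-clean
    # These cache mounts persist across builds, independent of layer-cache invalidation.
    RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
        --mount=type=cache,target=/var/lib/apt,sharing=locked \
        apt-get update && apt-get install -y --no-install-recommends build-essential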
Caching also gets invalidated pretty quickly because it includes all the file metadata like UID/GID, permissions and modification times. That makes it quite unlikely that layers can be shared with other machines.
Yeah…. no.
I have used Docker. I understand the reason for its existence, and why people find it useful. I understand why people like containers in general.
But some of the uses listed… it is like pounding in nails with a screwdriver. Yes, you can do it, but in nearly every case, there are better tools available.
That said, I do understand the pain of cross-compile. Some programming languages make that very easy, but many do not.
We used a Docker container for cross-compiling snmalloc (C++) in CI. It installed everything it needed, set up qemu user mode, and then built and ran the tests inside the container. I rewrote this to use ‘normal’ cross-compiling, which Debian/Ubuntu make really easy by providing all of our dependencies as packages that install into non-conflicting multiarch locations. The compiler and linker ran as native binaries; only the tests ran under qemu user mode. Our CI time halved.
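For anyone who hasn’t done it this way, a rough sketch of the same setup on an x86-64 Debian/Ubuntu host targeting aarch64 (file names are made up, and the snmalloc-specific build flags are omitted):

    # Native cross toolchain plus qemu user-mode emulation
    sudo apt-get install -y g++-aarch64-linux-gnu qemu-user

    # The compiler and linker run as native x86-64 binaries emitting aarch64 code
    aarch64-linux-gnu-g++ -O2 -o test_basic test_basic.cc

    # Only the test binary is emulated; -L points qemu at the target's libraries and dynamic linker
    qemu-aarch64 -L /usr/aarch64-linux-gnu ./test_basic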
I recently came across a project that used Docker in their install scripts. They pulled an image from Docker Hub with all the wanted artifacts, then copied the files to a mounted directory. They used several different images. Debugging that was horrible.
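For context, that pattern usually looks something like this (image name and paths are hypothetical):

    # Run the image just to copy its bundled artifacts onto the host
    docker run --rm -v "$PWD/out:/out" example/toolchain:latest \
        cp -r /opt/artifacts/. /out/

    # Or the create/cp variant, which avoids running anything in the container
    id=$(docker create example/toolchain:latest)
    docker cp "$id:/opt/artifacts" ./out
    docker rm "$id"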
I’ve worked on a team where some members used Macs and some used Windows, and I could not find a way to automate deployments to a satisfactory extent across all the different hardware/OS combos.
I used Docker to build a semi-automated deployment tool to replace an incredibly manual deployment process. It got us from every-deploy-breaks-something to a reasonably comfortable deploy process pretty quickly, with everyone using a common environment. I eventually replaced it with a proper CI/CD setup, though.