Not quite a counterpoint, but to what extent is the problem that people use giant frameworks for small projects?
I’ve spent a preposterous (comparatively) amount of time recently shuffling code around to reduce some of the dependencies for my blog. It’s not that much code! But the “user” module (for one user, me) depended on json, which I don’t use elsewhere, and another module for randomness, not used elsewhere, and another thing for hashing. Similarly, the front end monitor needed a posix module to call a single function, and a filesystem module for one other function…
There wasn’t much maintenance cost in keeping it going, but moving from one machine to another, or setting it up on my laptop to make a quick fix, turned into a never-ending rabbit hole. Hence, a bunch of work to rewrite many of the used features to eliminate the dependencies.
Sounds like an argument for using something like docker.
How so? Docker is a big dependency in itself, and it doesn’t solve the dependency problem; it just hides it behind something opaque. Docker is only going to exacerbate this problem when someone realizes they want to update their docker image.
Not to mention docker doesn’t run on OpenBSD, but “dockerize it” is definitely advice I put in the “do more work now, so you can do more work later” category. I was explicitly trying to make it easier to move to new platforms. Having to build “unsupported” 32-bit docker images, plus one for Linux, FreeBSD, etc. etc., is far more work. As I understand things, “use vagrant” is really the correct magic-bullet advice, but experience has taught me that reducing complexity works better than hiding it.
Having to build “unsupported” 32-bit docker images, plus one for linux, freebsd, etc. etc., is far more work.
Maybe I’m misunderstanding what you’re saying here but can’t you just build one image with docker and share it across your platforms (or just a Dockerfile)?
“use vagrant” is really the correct magic bullet advice
Overall, the two solutions to encapsulating dependencies are very similar in my mind: Vagrantfiles vs Dockerfiles. I’m not sure what vagrant buys you that you don’t get with docker (maybe it’s easier to run vagrant on OpenBSD?). I’d say that Docker certainly makes it faster to iterate on a Dockerfile, thanks to the docker image cache.
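To illustrate how similar the two recipes are, here’s a minimal Vagrantfile sketch. The box name and package are placeholders I’ve made up for illustration, not anything from this thread:

```ruby
# Hypothetical Vagrantfile: box name and provisioned package are assumptions.
Vagrant.configure("2") do |config|
  # A FreeBSD guest, since Vagrant VMs aren't limited to Linux the way Docker is
  config.vm.box = "generic/freebsd13"

  # Dependencies are declared right here, in one shippable file
  config.vm.provision "shell", inline: <<-SHELL
    pkg install -y python3
  SHELL
end
```

Like a Dockerfile, the value is that the provisioning steps are written down in one place rather than scattered across a hand-configured box.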
experience has taught me that reducing complexity works better than hiding it.
I’m not sure if “hiding” is the word I would choose here – maybe “encapsulating”. Dockerfiles/Vagrantfiles force you to make your dependencies explicit so the whole thing is easier to ship around.
Sorry, maybe there’s something I’m missing. It’s just that I’ve been in the situation you’ve described with your blog but once I started on docker/vagrant this kind of problem is a thing of the past for me.
Perhaps I don’t understand what docker does. If I build a docker image for Ubuntu amd64, isn’t it going to have a whole pile of .so library files built for that platform? They aren’t going to work on FreeBSD i386 or NetBSD arm. Is docker just a shell script that runs make in a bunch of directories?
So I did a little more reading, and it looks like you’re right that you can’t share an image across platforms as easily as I thought. Images are file-system snapshots, so they’re architecture-specific (no amd64 image on an i386 host). I’m also not sure it’s even possible to share things between FreeBSD and Linux; I just tried running a FreeBSD image from inside TinyCore Linux (via boot2docker) and it didn’t work. I think I’d need to be running on FreeBSD for that.
For the stuff I’m doing, I’m generally staying inside Linux, and I find docker to be interchangeable with vagrant as far as dependency management goes (and a little better, IMO). For the stuff you’re trying to do, I can see that docker probably isn’t the answer.
Every Docker image includes an entire Linux distribution. The binaries in it need to be compatible with the kernel of the host, which does imply the same architecture.
Sure, if you just use images and not Dockerfiles. With Dockerfiles the advantage is that all your dependencies are listed out (very similar to vagrant in this regard). So now you have a recipe for all the dependencies in your app that you can ship around and modify in the future as you see fit. Docker itself is a dependency, but now you really just have to manage one as opposed to many. It’s night and day compared to doing forensics on a box to figure out what you installed six months ago to get some simple app to work, IMO.
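As a sketch of what “all your dependencies listed out” looks like in practice — the base image and packages below are placeholders, not the blog’s actual dependencies:

```dockerfile
# Hypothetical Dockerfile: image and package names are assumptions.
FROM debian:bookworm-slim

# Every system dependency is declared here, so "what did I install
# six months ago?" is answered by reading this file.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 \
    && rm -rf /var/lib/apt/lists/*

COPY . /app
WORKDIR /app
CMD ["python3", "serve.py"]
```

The recipe, not the built image, is the shippable artifact; the image it produces is still tied to the build host’s architecture and kernel family.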
This is essentially saying: do not upgrade anything; keep running your app on old libraries and old software riddled with vulnerabilities.
This is not good advice. Really, it’s not.
If you don’t want to maintain your project, then stop maintaining it but let someone else take care of it. Do not let it die.
Moreover, if you don’t want it to take too much of your time, then use a PaaS. At least your app will run on up-to-date software without you doing anything.
I think what we are disagreeing on is the size and value of the personal project. I currently have 15 personal projects of various vintages. I would be crippled from working on new things if I continued to maintain them. They are not valuable enough to give to a new maintainer but I don’t think they should be killed.
What this suggests are realistic options for retaining the value of these projects without letting them become time sinks.
PaaSes are great, but not in the long term, because they deprecate portions of their systems, which forces you to move. In the worst cases they force API changes (GAE master/slave).