This person seems to really be squeezing everything out of Jenkins, but I see some of the same issues at a smaller scale. We only do fairly basic Jenkinsfiles with some calls to Docker and whatever build tool.
Configuration UI is quite messy, with lots of hidden settings. (Does anyone even touch the XML?) Not being able to run and debug a Jenkinsfile locally stinks. Breakages can be fun and have sometimes required us to dive into the alien world of Java.
I kinda love tinkering with it sometimes, but I feel it doesn’t really benefit the team. Maybe the only reason we haven’t switched yet is the time investment.
Edit: Oh, and I really like the idea of just using an S3 bucket for artifacts. That seems like such an obvious solution, I feel silly for not having thought of it.
I’m always uncomfortable with Jenkins.

It should be part of a continuous deployment pipeline from source control to build to test to production… except it doesn’t really have config source control, or a test and deployment pipeline, for instances of itself.
I.e., it’s trivial for its administrators to screw it up and leave the rest of the team swearing until they fix it.
I see a continuous deployment pipeline more as a database for gathering information on the quality of binary artifacts… and at the heart of it should be a good database like Postgres.
This is a good summary of Jenkins’ issues. To be frank though, it’s best in class insofar as I can tell. Admittedly, that’s partly because the plugin system is huuuuuuuuuuge. It’s also because it is… very well understood online. The rough edges are Known.
I’m keeping my eye on https://github.com/tektoncd/pipeline and look forward to seeing open source k8s solutions grow up and really take their place.
I had also been checking out Tekton. What are your thoughts?

I see in general a divergence between enterprise needs and small-outfit needs. Kubernetes is much more effective as you scale up - its initialization cost in knowledge and tooling is heavy, not to mention the plain cloud costs. Tekton is the shape of the future there, even if it doesn’t come directly out of the Tekton code base (there are similar proprietary tools). I anticipate CRDs will be the shape of the k8s future; containerization is, I think, going to allow straightforward container-in-container tooling.
Docker itself is probably going to wither in another 2-5 years; the k8s project can create a kube-container tool that does everything required for kube work on the client side, and most of the advanced Docker features can be reduced to “what does Kubernetes need?”.
Jenkins is probably the long-term home of the small team and the non-containerized world; it’s rooted in the “I have an SVN repository, a build server, and a few always-on build workers” setup from circa 2004, and has struggled mightily in the cloud world. A wholesale migration to a modern design is probably impossible at this point.
A long-lived Jetty server with EARs and WARs hot-swapping in and out is still a viable deployment pattern, and I suspect it’s going to re-emerge, polished, as the enterprise world wanders off into tooling that is more and more infeasible for small shops to run.
I wanted to like modern Jenkins and the Jenkinsfile, but I couldn’t get it to do what I wanted. I’m also finding that every alternative assumes you use Linux, and especially Docker.
I’ve got a fairly hokey old Buildbot setup that spins up EC2 instances for testing builds in parallel across FreeBSD, Debian, and CentOS, plus a permanent SmartOS zone running buildbot-worker. Jenkins looked like it’d let me do something similar, but I couldn’t get it to do parallel builds, or it would fail all builds if just one worker failed a single step, so you couldn’t see the state of all the builders. It was pretty frustrating.
All the alternatives - GitLab CI, Drone, Circle, Concourse - look great but all assume Docker.
I’ve used Jenkins casually in previous workplaces, but never in extreme setups. I disliked its use of a Web UI: typing shell scripts into text boxes seemed very wrong to me (no version control, etc.), as did storing those scripts in a database (harder to access and execute than files on disk).
These days the only CI system I interact with is an installation of Laminar I have running on my own laptop, which I came across when it was posted on lobste.rs. It does exactly what I need it to: read its config from disk, run some scripts (stored on disk) when triggered, display results (in a read-only Web UI). I treat it like cron, except that builds are triggered by a command (in my case post-receive hooks) rather than on a schedule.
I’ve installed it via a simple NixOS module I hacked together, I generate its config files using Nix (which lets me bake all dependencies into the generated scripts, so they can be run standalone) and keep those Nix files in git.
My point is not to claim that Laminar is suitable for huge, mission-critical deployments. My point is that, to me, Jenkins seems over-engineered (plugin architecture, VCS monitoring, extensive Web UI, etc.) whilst simultaneously amateurish (scripts in text boxes are hostile to external tools like version control/editors/linters/etc., Web UIs are hostile to automation, etc.). Yes there are plugins to overcome some of these issues, but that’s piling even more complexity on top. Laminar is a nice comparison since it provides so little, whilst simultaneously avoiding many of these problems: it just reads and executes files on disk, and provides a CLI.
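The post-receive trigger mentioned above is essentially a one-line hook file. A minimal sketch, assuming a Laminar job named my-project (made up here; it must match a job script in Laminar’s jobs directory), using laminarc, Laminar’s client tool:

```shell
#!/bin/sh
# git post-receive hook: lives in <repo>.git/hooks/post-receive, chmod +x.
# Queue the Laminar job after every push; "my-project" is a hypothetical
# job name corresponding to a script in Laminar's cfg/jobs directory.
laminarc queue my-project
```

Because the hook just shells out to a CLI, the same trigger works from cron, CI, or anything else that can run a command.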
Now I can run a job on my local machine in the EXACT same environment that the build system would use.
Trick of the trade that others should know about:
Put your Jenkins slave on a box with Nix. Make a script that looks like this and toss it in the root of your project (bear with me if there’s a typo here). Name it something like nixify and chmod it executable.
#!/bin/sh
# Re-exec the rest of the build inside a pure nix-shell when Nix is
# available; otherwise fall back to running the script directly.
if command -v nix-shell >/dev/null 2>&1; then
    exec nix-shell --pure --run "$SHELL $*"
else
    echo "This build might work better with Nix." >&2
    exec "$SHELL" "$@"
fi
Then make your Jenkins scripts look like this:
stage('Build') {
    sh '''#!nixify
        echo "In a nix shell!"
        make
    '''
}
(The magic: Jenkins writes each sh step out as a script with a default #!/bin/sh shebang, but if you supply your own shebang line, Jenkins uses it instead of writing out its own.)
Stick your dependencies in a default.nix in the root of your project and you’re set. (You can use this with things like FHS envs too; just mind the lifetime of things like the Gradle daemon, since the FHS env goes away after you return.) Boom, now you can build your code anywhere you can install Nix, even without Jenkins.
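A hedged sketch of what that default.nix might look like: whatever you list in buildInputs is all that nix-shell --pure puts on the PATH. The tool names here (gnumake, gcc) are standard nixpkgs attributes chosen for illustration; swap in your project’s actual dependencies.

```nix
# Minimal default.nix sketch for the nixify trick above.
# nix-shell --pure gives the build exactly these tools and nothing else.
with import <nixpkgs> {};
mkShell {
  buildInputs = [ gnumake gcc ];
}
```

Pinning nixpkgs (e.g. via a fetchTarball of a specific revision instead of <nixpkgs>) tightens reproducibility further, at the cost of updating the pin yourself.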
Using Nix instead of Docker is a good idea.

The deeper question here is if the CI should be burdened with the environment. You could argue that the build system should ensure reproducible builds independent of the environment.
The deeper question here is if the CI should be burdened with the environment
I’d argue that it shouldn’t be. The CI’s purpose should just be to kick off the build and collect artifacts and results. Your environment should be branch-specific, and the branch config should just live in the repo.
Regarding reproducibility: nix-shell isn’t perfect, but in practice it’s been pretty damn close. (For instance, I did recently have some really amazing “broken on the build server” issues due to an errant script that concatenated something with an environment variable I’d forgotten to unset - but only when it was running from CI. That was my fault for forgetting to unset it in my default.nix. Locale issues running Nix on a non-NixOS machine didn’t seem to be my fault, though.)
This.
Also, swap in whatever scripting language your platform prefers for the bash scripts.
Bash? Jenkins supports it.
Batch? Jenkins supports it.
PowerShell? Jenkins supports it.
Let your shell scripts do all the work, and make Jenkins call them using a Jenkinsfile and scripted (not Declarative) syntax.
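A minimal sketch of that shape, using scripted syntax. The script paths here are made up; the point is that Jenkins only orchestrates and archives, while all real logic lives in scripts versioned in the repo:

```groovy
// Scripted (not Declarative) pipeline: each step just delegates to a
// script checked into the repository.
node {
    checkout scm
    stage('Build') { sh './ci/build.sh' }
    stage('Test')  { sh './ci/test.sh' }
    // On a Windows agent you'd call Batch or PowerShell the same way:
    // stage('Package') { bat 'ci\\package.bat' }
    archiveArtifacts artifacts: 'dist/**'
}
```

This keeps the Jenkinsfile nearly inert, so the scripts themselves can be run, tested, and linted locally without Jenkins in the loop.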
Behind paywall. Can’t read it.
I just let my Medium membership lapse since I realized I wasn’t nearly getting $60/year worth out of it. This is the first paywall I hit.
I don’t notice a paywall. How does it work?
I think you get something like 5 free reads a month and then you get a “We notice you like reading, upgrade?” paywall.
Wow. Does any of that money go to the authors of the blog articles?
I doubt it.
https://outline.com/https://itnext.io/jenkins-is-getting-old-2c98b3422f79
Link without paywall.