1. 10
    1. 3

      Incidentally, I’m looking at https://www.lambda.cd/ right now, which seems to be in the same ballpark as Drone, but probably with even fewer assumptions about what build steps do.

      1. 3

        Using Clojure for the build script is neat, I like it, but like many others, LambdaCD falls short on some of my key requirements:

        • Builds should be isolated, preferably running in a container.
        • It should support triggering builds on pull requests, too.

        LambdaCD doesn’t appear to do either out of the box. A quick look suggests that one can build both on top of it, but I’d rather not, when there are other options which do what I need out of the box. Still, there are some good ideas there; I’ll be keeping an eye on it. Thanks for mentioning it!

        1. 2

          It does the triggering thing, although right now it’s a separate plugin (which they apparently want to ship with the main library at some point): https://github.com/flosell/lambdacd-git#using-web--or-post-commit-hooks-instead-of-polling

          Can you explain more about isolation? My simplistic view on it was that one could simply shell out to docker build ., and generally rely on docker itself to manage images. Am I missing some gotcha here?

          1. 3

            Yeah, that notification thing looks good enough for implementing pull requests, though the pipeline will need to be set up in every case to support that scenario, since the pipeline is responsible for pulling the source too. Thus, it must be ready to pull it from a branch, or a different ref, or tag, or whatever. Having to implement this for every single repo is not something I’d like to do. Yes, I could implement a function to do that, and remember to use it in every pipeline, but… this is something I’d expect the CI system to do for me.

            Looking further into LambdaCD, this part of the HOWTOs suggests that while pipelines are code, they do not live in the same repository as the code to build. This means that when I add a new project, I need to touch two repos now. If I have a project with more than one branch, and I want to run different pipelines on those branches, this makes things much more complicated. For example, I have a master and a debian/master branch: on master I do the normal CI stuff, on debian/master I test the packaging only, or perhaps in addition; or I’m implementing a new feature which requires a new dependency, so I want that branch to have that dependency installed too - both of these would be cumbersome. If, as in the case of Drone or Travis, the pipeline is part of the source repo, neither of these is a problem: I simply adjust .drone.yml / .travis.yml, and let the system deal with it.
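
            To illustrate - this is a made-up sketch with a placeholder image name and commands, not one of my actual files - a debian/master branch could carry a stripped-down .drone.yml of its own, while master keeps the full CI pipeline:

            pipeline:
              packaging:
                image: some/debian-image   # placeholder image with the Debian packaging toolchain
                commands:
                  - dpkg-buildpackage -us -uc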

            Additionally, if someone else implements a feature, and submits a pull request, I want them to be able to modify the pipeline, to add new dependencies, or whatever is needed for the feature. But I do not want to give them access to all of my pipelines, let alone my host system. I don’t think this can be achieved with LambdaCD’s architecture.

            Can you explain more about isolation? My simplistic view on it was that one could simply shell out to docker build ., and generally rely on docker itself to manage images. Am I missing some gotcha here?

            It’s not that easy. First, I don’t want to run docker build .. I don’t need to build a new image for every project I have. I want each of my stages to execute in a container, by default, with no other option. I do not want the pipeline to have access to the host system under normal conditions. To better highlight what goes on under the hood, let’s look at an example. See this Drone control file, which is reasonably simple on one hand, yet achieves quite a few things.

            The dco and signature-check stages execute in parallel, in two different containers, using two different images (with the sources mounted as a volume); the latter runs only on tag events. Under the hood, Drone pulls down the images if they’re not available locally yet, runs them in Docker with the sources mounted, and fails the build if either of them fails.
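
            Roughly - this is a simplified sketch with placeholder image names and commands, not a verbatim quote of the file - that part looks like this in Drone 0.8-style syntax, where steps sharing a group run in parallel:

            pipeline:
              dco:
                image: some/dco-checker          # placeholder
                group: checks                    # same group: runs in parallel with signature-check
                commands:
                  - check-dco

              signature-check:
                image: some/signature-checker    # placeholder
                group: checks
                commands:
                  - check-signatures
                when:
                  event: tag                     # only run on tag events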

            The bootstrap and tls-certificate stages run in parallel again, using the same image, but in two separate containers. This way, if either stage changes anything outside of the shared source directory, it won’t conflict with the other; for example, each can install a different set of packages in parallel.

            The stable-build and unstable-build stages also run in parallel, and so on.

            There’s also a riemann service running in the background, which is used for some of the test cases.
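
            Background services like that are just another section of the same control file; Drone starts the container before the pipeline, and the steps can then reach it by name. A minimal sketch (placeholder image name):

            services:
              riemann:
                image: some/riemann-image   # placeholder; reachable from the build containers as "riemann"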

            Shelling out to docker is easy, indeed. Orchestrating all of the above - far from it. It could be implemented in LambdaCD too, with wrapper functions/macros/whatever, but this is something I don’t want to deal with, something which Drone does for me out of the box, and it’s awesome.

            1. 1

              Thank you very much for such a detailed response! I’ll give it some thorough thought, because you seem to approach the problem from a completely different angle than I do, so maybe I need to rethink everything.

              For example, instead of having every project define its own pipeline, I’d rather have a single pipeline defined in my CD system which would work for all projects. So it will have logic along the lines of “if it’s the master branch, deploy to production; if it’s a PR, just run tests”. For it to work, all projects will have to follow conventions on how they get built and how they get tested.

              This is something I implemented successfully in the past with Jenkins, but now I’m looking for something more palatable.

              1. 2

                So it will have logic along the lines of “if it’s the master branch, deploy to production; if it’s a PR, just run tests”.

                Yeah, that makes sense, and might be easier with a global pipeline. Perfectly doable with pipelines defined in the projects themselves too, even if that’s a bit more verbose:

                pipeline:
                  tests:
                    image: some/image
                    commands:
                      - make tests
                
                  deploy_production:
                    image: some/image
                    commands:
                      - deploy
                    when:
                      event: [push, tag]
                      branch: master
                

                With Drone, this will run tests for everything: PRs, pushes to any branch, and tags. The deploy_production stage will only run for the master branch, and only on push and tag events (this excludes PRs, pushes to any other branch, and tags that aren’t on master). Granted, with a single pipeline, this can be abstracted away, which is nicer. But per-project, in-source pipelines grant me the flexibility to change the pipeline along with the rest of the project. When adding new dependencies, or new testing steps, this is incredibly convenient, in my experience.

                Mind you, there’s no solution that fits all use cases. If you mostly have uniform projects, a global pipeline has many benefits over per-project ones. For a company with well controlled, possibly internal repos, likewise. My case is neither of those: I have a bunch of random projects, each with its own nuances, and there’s very little in common between them. For me, a per-project pipeline is more useful.

                1. 1

                  If you mostly have uniform projects, a global pipeline has many benefits over per-project ones. For a company with well controlled, possibly internal repos, likewise. My case is neither of those: I have a bunch of random projects, each with their own nuances, there’s very little in common.

                  This sums it up nicely!