It was interesting to see what kinds of plans each service offered. It seemed like the bigger or more established they were, the less they offered, which makes sense, but if you’re working on a side project, or just getting up and running with a new application, you can get quite a bit without having to pay a dime.
Curious what you think I missed or what other services I should have covered.
Nice article, but the visualizations are kinda miserable to actually understand. What’s wrong with just graphs?
I was thinking I’d prefer seeing names, not just icons.
I think the criticism was that it’s not visually obvious in your graphics that 8GB RAM is twice as much as 4GB. The main use of visualization is to visually represent things (quantities in your case), but your visualization fails at that because it requires us to read all of its text. I hope I didn’t misrepresent what icefox meant.
P.S. Charts (plots?) is a better term than graphs.
I was adding general feedback, not strictly replying only to what icefox brought up. Should I have made a separate comment?
Same here, a bar graph with annotations would be more meaningful and useful than a random icon for the reader to decipher.
I like the idea of GitLab and even bought into it at my last company with an Ultimate subscription for the features they offered, but I probably should’ve gone with something I was familiar with (GitHub) because of the stability issues of the platform itself. Everything felt a little half-baked as they tried to give you everything, just none of it particularly well.
Bringing it back to the topic, I’m pretty sure GitLab had CI before GitHub had Actions. Not that this means much, I guess; you’d just use a third-party service with GitHub.
Competition is good, but I think I prefer the CI format of GitLab even though I’ve been a user on GitHub for longer. The smaller self-hosted git apps, afaik, are simple to run but don’t have the feature set. GitLab is quite a thing to install and manage, last I did, so it’s not without trade-offs. I’m impressed with the GitLab team’s iteration speed. You can open an issue if there’s a problem, because it is open source. GitHub itself is not open source at all, and this is usually a huge sticking point in any other tool thread.
But whatever, not trying to argue. We can’t really measure or describe software, so it’s just chit-chat. 🌻
I empathize with people wanting an open source tool, but GitLab is not approachable, and it’s hard to engage them even when you’re a paying customer asking for features. There was also the stability problem they were having, and they took everyone off of features and deliverables to work on improving platform stability. I don’t know if that ever had an effect, but they did it.
Its CI does predate Actions, but before Actions you’d just plug in Travis or Circle anyway (and, per the article, I may choose to do so in the future for a non-GitLab, non-GitHub offering).
Half-baked? Can you even rerun a single job in github already? If anything is half baked, it’s actions.
Yes, you can.
Wow, since yesterday. My point still stands then :)
“Half baked” is a good way to put it. I felt that even with the little time I spent with it. I know GitLab has its fans, and I’m sure it’s great under the right circumstances, but if you want to host your code and run some CI all in the same place, GitHub is just way ahead of GitLab.
The thing that bugs me is that a lot of people think about moving off GitHub (which is great; they totally should; monopolies are bad, etc) but then they go look at GitLab and assume that because it’s the biggest “github alternative” that it’s also the best. Then they see this half-baked stuff and then go back to GitHub not realizing that they are missing out on much better alternatives like Gitea and Sourcehut.
But Gitea isn’t hosted. You’ll have to host it yourself. I think it’s great to have that option, but it’s very different from simply moving from GitHub to GitLab.
Sure; I guess in that context https://codeberg.org or one of the other hosted gitea sites would be a better comparison.
Disclaimer: I work for CircleCI.
I wonder if the reason GitHub ran the benchmarks faster is because they don’t run the brainfuck benchmark as part of the build job, like their CircleCI implementation does for some reason? (Edit: Oh, I misread: CircleCI is already faster in this case, it was the Earthly checks where GitHub Actions won out.)
(I also wonder if the CircleCI solution could be sped up further by using the special support for building Docker images, rather than the machine executor – but that’s pure speculation on my part.)
Do you know the cause of disk corruption on CircleCI? I am worried whether switching the Docker version here just incidentally fixed the issue, since it may also have invalidated some cache keys.
Is it related to how caches above 500 MB are not checked for corruption?
These are the reasons why I am trying to switch to GitHub Actions. For any SaaS option, reliability should be part of the package. I don’t want to have to worry about whether I exceed the 500 MB mark. I’d prefer to just be billed more and keep getting the corruption checks. I cannot fathom how the docs seem to implicitly waive guarantees based on an arbitrary limit like 500 MB.
No. The corruption check you refer to here is only relevant for caches you explicitly create with save_cache and restore with restore_cache, intended for caching dependencies. This cache is stored as a tar archive, and I would expect any corruption to cause the unpack to fail, and thus fail the cache restore. Your code should then proceed as if there were no cache hit at all, and re-generate the dependencies.
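For reference, the save/restore pattern being described looks roughly like this in a CircleCI config (the image name, cache key, and paths are illustrative, not taken from the thread):

```yaml
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/node:18.17  # illustrative image
    steps:
      - checkout
      # Try to restore a previously saved dependency cache.
      - restore_cache:
          keys:
            - deps-v1-{{ checksum "package-lock.json" }}
            - deps-v1-  # prefix fallback if the exact key misses
      - run: npm ci
      # Save the dependencies, keyed on the lockfile checksum.
      - save_cache:
          key: deps-v1-{{ checksum "package-lock.json" }}
          paths:
            - node_modules
```

If the restored tar is corrupt and fails to unpack, the build simply proceeds as if the key had missed.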
If I read the code that implements it correctly, the corruption check for caches under 500 MB appears to exist solely so we can abort early, without even attempting to unpack the tar archive, if the cache is deemed corrupted. In light of that, the note in the docs regarding the corruption check you’re referring to may be misleading, which is an issue I’ll raise internally.
I doubt it. It looks (though I haven’t verified) like the package is simply broken in Docker 17, but fixed in Docker 20. A lot could happen in 3 major versions. I’m not a Docker expert, but I’m sure our support team would be happy to help you if you contact them.
As for why version 17.* is still the default, as you asked in the linked SO post, I expect Hyrum’s Law applies: people don’t like it when defaults change without their say-so. We support a few different versions of Docker, and pinning one that supports the operations you want to perform would seem to be the natural path forward.
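If I remember the config syntax right, pinning a version looks something like this (the version number is just an example, not a recommendation):

```yaml
steps:
  # Request a specific remote Docker version rather than the default.
  - setup_remote_docker:
      version: 20.10.14
```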
One of the things I honestly love about GitHub Actions over most of the others is that you actually can run your own actions on your own machines. Yeah, it might defeat the purpose if you don’t have enough builds to exceed their free minutes, but if you want something well built and you want to self-host the runners for one reason or another, it’s an amazing option now.
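Pointing a workflow at a self-hosted runner is just a matter of the `runs-on` labels; a minimal sketch (the job name and label set are illustrative):

```yaml
# .github/workflows/ci.yml
name: CI
on: [push]
jobs:
  build:
    # Target a registered self-hosted runner instead of a GitHub-hosted VM.
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      - run: make test
```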
just for the record, you can do the same with GitLab, and Azure DevOps. I’m sure there are others too.
GitLab is open source, so I’m not shocked. Azure DevOps is an interesting one, although admittedly I have almost never touched Azure. I believe sourcehat has something as well. I’m just pretty happy with how GitHub Actions are structured, and I use GitHub at work already, so it’s nice that they allow you to use your own. I should have clarified that this is mostly in contrast to CircleCI and TravisCI, which is not shocking considering hosted CI is their core product.
You mean sourcehut :P ? Yeah, I’m 99.99% sure they also have that.
yes, your comment makes more sense that way haha
Sourcehut, in fact, does not have self-hosted runners. The architecture as it is right now isn’t really fit to just plop them in. You can however fairly easily run your own instance of just the build service.
good to know. Can you plug that build service in, say, a “hosted sourcehut”?
Ehhh, somewhat. I’m not particularly happy with it though, you basically need to set up your own push webhooks, and I don’t think there’s anything that would automate that. I want to rework how cross-service automation works in sourcehut, but I haven’t found the time for that yet.
Oops, yes I did mean sourcehut. I’ve been jokingly calling it “sr.ht” -> sir hat for too long apparently lol.
Buildkite is another one that hardly anyone seems to know about. I used it at my last job and was quite happy with it.
I don’t use CircleCI at my current job, but did in my last job and I found it to be a great value. You get so much for a surprisingly small amount of money.
And what about self-hosted solutions? On most projects, I’ve met Jenkins. It worked quite well. However, it highly depended on the knowledge of the staff that managed it.
Jenkins is absolutely miserable once you start trying to make sense of the plugin ecosystem. It’s also eternally locked on Groovy 2.4 since they can’t figure out how to upgrade without breaking everything.
The absolutely huge difference between Jenkins and modern CI systems is that pipelines are now configured as structured text files. After working with Hudson & Jenkins for years this was a complete game changer. It means that pipelines are trivially portable to other installations, easily copied and customised between jobs, and specify the exact steps to execute in enough detail that even porting them to another CI system is not all that difficult (I ported several of my own projects from GitHub to GitLab a few years ago with little work). It does mean that the developers need to be able to edit those files reliably, but GitLab, for example, has a configuration validator you can use to verify that at least the basic structure is right.
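To illustrate the point about pipelines as structured text, a minimal `.gitlab-ci.yml` might look like this (the stage name and image are illustrative); GitLab’s built-in CI Lint tool can check the basic structure before you commit:

```yaml
stages:
  - test

unit-tests:
  stage: test
  image: alpine:3.19  # illustrative image
  script:
    - echo "run the test suite here"
```

Because the whole pipeline lives in one file in the repository, copying it to another project, or porting it to another CI system, is mostly a matter of translating these keys.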
There are two parts to a CI system: the part that notices that something has happened and decides a pipeline should run, and the runner that actually executes the pipeline’s jobs.
GitHub somewhat decouples the first part: there is a generic ‘something has happened with this project’ eventing mechanism on top of which the specific GitHub Actions ‘something has happened that triggered this pipeline to run on a runner in this set’ action happens.
The runner is open source but not very portable. There’s a more portable third-party reimplementation of the protocol that you can use for self-hosted runners on other platforms. I’m using this for FreeBSD CI with GitHub Actions in a few places. I believe most of the other solutions have a similar notion of a self-hosted runner.
Jenkins is the source of so much suffering in my life that I’d like to pretend that I’d never encountered it and, if that fails, to at least be happy that I haven’t had to use it for a few years.
My company basically decided they could compete with the Jenkins Pipelines implementation using shared libraries that read YAML, and it’s just the worst of all worlds. If the shared library doesn’t have exactly what you need, you’re spending days editing Groovy and running builds to troubleshoot anyway, but now with this extra indirection layer you’re forced to use as well.
Given this task, I (as in me) would probably investigate Drone CI and kick it around. I’d be interested in self-hosting and the plugin ecosystem, beyond just seeing how it works and what its authors’ viewpoints are (as usual).
Woodpecker also exists, which was forked from the last OSS version of Drone (which is now open core). I’m not sure how the current version of Drone and Woodpecker compare, though.
I’ve been a very happy Drone user for the past couple of years. Easy to set up, easy to maintain, works like a charm.
I believe Tekton is the most exciting project, and from what I’ve heard it works great in Kubernetes environments.
If you are working on open source C/C++ projects, the build2 project offers an unlimited, fully-managed, push-style CI service that covers all the major platforms and compilers: https://ci.cppget.org/?build-configs
You will need to switch your project’s build system and package manager to build2, though.
I’d also add Cirrus CI to the list. They’re one of the few that offers free CI for FreeBSD.
I, for one, somehow feel that the winner here is Gitlab. They manage to provide a CI that works well for my small/private projects while committing the absolute minimum resources to this free lunch giveaway. Hope they continue to provide free CI for many years to come.