It’s a valid opinion. I don’t share it, though. It seems to be based on:
As long as I can get syntax highlighting in my chosen editor, where I spend the vast majority of my time reading Go code, it’s not that big a deal that it’s missing from the playground, where code snippets tend to be short enough that the benefit of syntax highlighting is marginal at best.
As for the fact that Go expressly makes it hard to write complicated code, that does seem to result in Go code being much easier to read, as opposed to write. Given that almost all programmers working on teams will spend the bulk of their time reading other people’s code, as opposed to writing their own, this is totally worth it.
I really like this setup. Unique and different compared to what I typically hear people running. My setup so far: pygopherd as the gopher server, Apache for normal HTTP (so that Let’s Encrypt is easier to set up), and Hugo for the site.
What is the hardware context? Physical machine at home? Cloud provider? Shared hosting? Rented root server? I would assume the last one.
I started with a VM at vultr.com (2 core, 4G RAM), but then switched to a root server at Hetzner. Performance was good on both systems, I switched because GitLab recommends at least 4 GB dedicated RAM (although it worked fine, I personally like to fulfill those recommendations).
And I have a similar setup at home (with less potent hardware). There’s currently one container running (a Minio server) that takes my restic backups.
That machine could easily take my other containers if I’d have to migrate for some reason.
This last 12 months of actions and reactions by the developers of Caddy make me wonder why anyone would still use it.
In May, when LE had an outage, Caddy servers with valid certificates in the renewal window would refuse to start. This was not an oversight; it was intended behaviour, and it took a lot of complaints before they relented and adjusted the configuration. From memory, it will still refuse to start if a certificate is very close to expiry and LE is down.
Later in the year they started injecting ads into response headers for the downloaded “free” binary before, again, relenting under a wave of backlash.
Whenever the main developer is involved in a discussion where there is criticism, or particularly a comparison to alternative open source tools, his responses make it seem like he thinks requesting/renewing a certificate via Let’s Encrypt is some kind of secret sauce that he alone provides to the world.
There’s also that whole “separation of concerns” thing, but that’s not specific to the last 12 months.
You could just set the certs explicitly in the caddyfile to work around that issue. And I guarantee you I could code, build, and deploy a hotfix removing that behavior in 15 minutes or less if necessary. But I can’t say the same for Apache or nginx. Actually I’d probably just shoot myself if I had to hotfix Apache.
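For illustration, the workaround might look roughly like this in a Caddyfile (the domain and file paths are made up; note that pointing `tls` at static files also turns off Caddy’s automatic renewal for that site):

```
example.com {
    # Explicit certificate and key paths: Caddy serves these files
    # instead of obtaining/renewing a cert from Let's Encrypt.
    tls /etc/ssl/certs/example.com.crt /etc/ssl/private/example.com.key
}
```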
As for separation of concerns, I think a web server that handles all aspects of being a web server is a great idea. Certs in particular cause loads of trouble for amateur users.
In no other TLS terminating program do you need to “deploy a hotfix” for fucking stupid behaviour.
The problem with caddy is two fold: it tries to be “magic” and the “wizard” in control thinks he knows better than anyone else.
In no other TLS terminating program do you need to “deploy a hotfix” for fucking stupid behaviour.
Sorry, are you joking? You’ve NEVER heard of anyone having to deploy a config hotfix to Apache or nginx for stupid behavior? Hah, good one.
First sentence of my comment:
You could just set the certs explicitly in the caddyfile to work around that issue.
Literally the next sentence of your comment:
I guarantee you I could code, build, and deploy a hotfix removing that behavior in 15 minutes or less if necessary
Changing the config to statically reference a certificate file isn’t a long term solution, because it will then turn off Caddy’s renewal of said certificate. So, what, you either keep manually adjusting the config whenever Caddy won’t start, or you have to modify the source and re-build the whole program? All to work around fucking stupid behaviour that was intentional.
Yes, I could do either one. The point of the second sentence was not only is a config change easy, Caddy is so easy to work with I could do a code change instead if I wanted. Or roll out a config change first and a code change later. Or use an external LE client. I’ve had worse problems with other servers that were a lot harder to fix, and bitching about this one like it’s the end of the world sounds really amateur to me. And yes, the many problems in other servers are “fucking stupid behaviors that are intentional,” and still aren’t fixed.
Besides, how would you even hit this problem? Why are all of your frontend web servers restarting at the same time? If you only have one, why is it restarting unattended? I can’t imagine any skilled team having a second of downtime because of this.
You shouldn’t make any unattended changes to production. If you’re restarting manually, you can just fix the issue with a config change trivially in < 1 minute. If you’re a company making an infrastructure change, you better have some real process that would catch any problem, not just this one.
You can fix it “trivially” if you know how to fix it (which is highly unlikely, given that the use-case for Caddy is “you don’t have to worry about certificates”) and if you’re restarting manually.
What if the software crashes and is restarted by your init?
What if the server has to restart due to a failure of some kind?
It’s a fucking stupid concept and it was intentional. That should tell anyone all they need to know about how this project is run.
That should tell anyone all they need to know about how this project is run.
I think you’re too willing to make black and white judgements, but ok.
A skilled team wouldn’t find certbot and haproxy/similar “too hard to configure” and turn to Caddy.
I don’t know the exact situations that triggered the problem because I don’t use the fucking thing, but enough people hit it that the GitHub issue was like a townhall meeting.
No, they wouldn’t. But they might use Caddy if they didn’t want to spend the time doing so. Time is a finite resource.
Yep, it died.
To answer the grandparent, I’m going to be really blunt: Caddy has made some poor decisions that I disagree with and that make me disinclined to trust them. But so has Mozilla, as we’ve discussed recently. And, very recently, so has Apple, with the battery situation. Caddy, at least, was very straightforward and transparent in what they were doing, whereas these other companies were not. And in the modern context, where most open-source projects are sponsored by major companies, transparency in what’s going on is close to the best I can ask for. Toss in that building Caddy without that header at any point required, let’s be honest, the bare minimum of effort, whereas e.g. building Firefox without the Mr. Robot plugin did not, means I don’t personally have any trouble continuing to use them.
I didn’t know that either, good to know. And I mostly agree. As I see it, Matt is trying to make a living out of a great piece of software. I respect his attempt, although I’m really glad he dropped the proprietary headers.
Congrats on getting your server up and written about!
The thing that strikes me as kinda odd–maybe I’m just showing my age–is that you seem to have not one, not two, but many webservers:
Like, I can’t help but wonder if this is really an efficient use of resources. This sort of thing is why I view container-based solutions for ops with tremendous skepticism.
Congrats on getting your server up and written about!
Thank you very much!
Apache in the Nextcloud (I think?)
There is a version (tag) of the nextcloud image that does not use a webserver and only exposes the php-fpm port. I’m using that image.
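Assuming that image, the wiring looks roughly like this: the fpm-only container exposes php-fpm on port 9000, and the front proxy speaks FastCGI to it directly. A sketch in v1-era Caddyfile syntax (hostname and paths are illustrative, not my actual config):

```
cloud.example.com {
    # A view of the Nextcloud files shared with the container
    root /var/www/html
    # Hand PHP requests to the fpm-only container over FastCGI
    fastcgi / nextcloud:9000 php
}
```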
Thank you for reading my post so thoroughly - you’re mostly right. Except for the Nextcloud service, every other container has a dedicated web server built in. This takes some additional memory, but I don’t think it’s relevant CPU-wise (perceived, not measured).
Nevertheless: yes, it comes with some overhead. And this surely is not a solution for everyone. But in my case, I’m very happy to be able to isolate all the services with containerization; the pleasure of having easy updates and clean isolation far outweighs the (IMO) slight overhead in computation. Although: With some extra effort I’d be able to remove the web-servers in most of the containers.
Although: With some extra effort I’d be able to remove the web-servers in most of the containers.
I’m curious - how would you do this and still keep gitlab isolated? You’d still be running it with thin/puma/unicorn rather than spawning with passenger, right?
I’d configure gitlab to not use the integrated nginx server and configure caddy to serve gitlab accordingly :). I’ve not figured out the required caddy settings, but that’s on my agenda.
All other daemons that are required for running gitlab you’ve mentioned would still run inside the docker container. Thus gitlab would be isolated, but without the nginx server.
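As a rough sketch of what I have in mind (I haven’t verified the exact keys against the Omnibus docs yet, and hostnames/ports are illustrative): disable the bundled nginx in /etc/gitlab/gitlab.rb and expose Workhorse over TCP, then proxy to it from Caddy (v1 syntax):

```
# /etc/gitlab/gitlab.rb
nginx['enable'] = false
gitlab_workhorse['listen_network'] = "tcp"
gitlab_workhorse['listen_addr'] = "0.0.0.0:8181"
```

```
# Caddyfile
git.example.com {
    proxy / gitlab:8181 {
        transparent   # pass Host and X-Forwarded-* headers through
    }
}
```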
This is unfortunate, but also rather necessary for isolation. Often, apps depend on specific webserver settings (especially in the PHP world). If you’re going to pull those settings outside the container, that means you have to be aware of any changes during an upgrade.
For what it’s worth, in our case at least, there’s only 2 webservers in the request path, and our apps that need their own webserver always use nginx. These instances of nginx have all caching and buffering disabled, and are at the very bottom in memory usage on the system.
I’m not sure of the processing overhead per request of an extra webserver, but we’ve currently not hit any issues. The idea is to cache as much as possible at the front proxy, and the remainder that goes through the stack is heavier any way. My gut says the overhead is probably small compared to the rest of the app logic in PHP.
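For concreteness, a stripped-down sketch of one of those inner nginx configs (paths and ports are illustrative, not our actual setup):

```
server {
    listen 8080;
    root /var/www/app;

    location ~ \.php$ {
        # Keep this inner tier dumb: no buffering, no caching.
        # The front proxy does all the caching.
        fastcgi_buffering off;
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
    }
}
```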
Great setup, thank you for sharing it. I like the transparency of it. I’m currently using dokku, but it keeps evolving in ways I don’t understand very well.
Great to hear! I’m going to write more detailed notes and instructions soon(ish), maybe those will inspire you to try it out.
This all seems sensible and nice. :)
One tiny nitpick: where you used the word “abstract” here, I’d have used the word “isolate”.
I’ve been 20% happier since I discovered that. Wish VSCode’s Vim emulation allowed me to do the same.
Just needs a little WASD to complete the experience. Double your productivity by navigating two panes simultaneously.
The CCC did an awesome job here. If you read German, I highly recommend reading their full report.
The vendors developing this software have been exposed as highly incompetent.
Arbitrary server filesystem access via broken PHP scripts, incredibly stupid username/password combinations in production (“guest:test”, “test01:test01”, “test02:test02”), plaintext FTP being used to transfer sensitive voting data, a shared account on this FTP server allowing any user to modify vote results across the entire country, unsigned software updates, silly custom symmetric “crypto” where a PKI is needed… the works.
Fortunately, since 2009, recounts based on paper ballots are mandated if there’s any doubt.
I wept tears of joy reading the full report, although I’m fully aware this is a serious matter.
[Page 8]: “After encrypting with the provided tool SmartEditor.exe these are the login credentials […]”
So they “encrypt” their config files containing passwords, and then provide the tool to decrypt them when publishing the config files.
Is that company using OpenBSD for any of their products? They seem to be working on some innovative phone features, but it’s not clear (to me) what the underlying OS is.
Would be pretty amazing though, if it was based on OpenBSD.
No idea. But note that Android borrows big chunks of OpenBSD libc for bionic, for instance; it’s entirely possible to be grateful to OpenBSD without using full OpenBSD.
I can say it feels good to pay for a year of premium Phoronix after using ad-blockers for some time now. He definitely earned it. And the site is even faster now.
There is also this book by Verhoeven for all who want to understand steel and what’s going on when hitting it with a hammer while it’s hot.
Edit: There’s also a German translation called “Stahl-Metallurgie für Einsteiger: Komplizierte Zusammenhänge verständlich erklärt” which is highly regarded.