I have evaluated Caddy, and I was not impressed.
Overall, I set up acme + nginx in at most two hours the next day. It was simple, there are no open or gray-zone legal questions, and it has a known security track record. I spent more time evaluating (and starting to dislike) Caddy than I did setting up the replacement.
I don’t care for QUIC, and the only certificate issues I’ve had with nginx were at a job where it was initially configured for horizontally scaled front-end SSL termination (not by me, though I probably would have botched it too, tbh) and the cert distribution was implemented improperly at first. It was only in test mode at the time.
Websocket proxying was flaky
I’m curious how you came to this conclusion. I use Caddy to serve websockets over SSL on 15 websites on the same machine and never seen an issue.
It’s very simple: I was irregularly seeing errors in the proxied web app; it complained about connection issues. I opened the browser’s developer console and watched the network logs. It looked as if the browser was sometimes opening HTTP/2 connections, and the websocket upgrade failed because of that. If I remember correctly, the Caddy logs showed the requests but didn’t help me get any further with debugging the problem.
After trying to enforce HTTP/1.1 according to the docs, the problem still persisted, but at that point I had already had enough of Caddy, so I deleted it and decided to use proven technology instead. Btw, the web app was a Java 8 Jetty-based web app.
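For reference, the usual nginx pattern for proxying websockets looks like this (a sketch, not the commenter’s actual config; the backend address and the upgrade-header mapping are assumptions):

```nginx
# Map the client's Upgrade header to a Connection header value, so that
# plain requests and websocket upgrade requests both proxy correctly.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # e.g. the Jetty backend
        proxy_http_version 1.1;             # websockets require HTTP/1.1
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```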
edited: added some details which I remembered after posting.
there are a few nginx options in that example config that aren’t needed anymore, or are deprecated, mostly around the add_header directives. Public-Key-Pins, for example, is deprecated now.
A good example for nginx security would be more like, using the Mozilla SSL generator tool.
Also, for Caddy I believe you still need to add the Strict-Transport-Security header and specifically opt in to TLS 1.3. (I could be wrong, as I don’t use Caddy anymore.)
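If memory serves, adding that header in a Caddy 1 Caddyfile would look something like this (a sketch from the v1 docs as I remember them; verify against the current documentation before relying on it):

```caddyfile
example.com {
    tls you@example.com
    header / Strict-Transport-Security "max-age=31536000; includeSubDomains"
}
```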
Mozilla’s SSL generator tool makes it super easy to configure nginx, and you can use something like ssllabs.com to test it out. I don’t really share the author’s difficult experience dealing with this.
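The generator’s output for nginx looks roughly like this (a trimmed sketch of the intermediate profile; exact cipher lists and paths vary with the generator version, so regenerate rather than copy):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Intermediate-compatibility profile (sketch): modern TLS only.
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;

    add_header Strict-Transport-Security "max-age=63072000" always;
}
```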
Let’s Encrypt, maybe?
Let’s Encrypt’s certbot will automatically handle your nginx config if you choose to verify that way (I just do DNS verification, so can’t comment more about it)
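For DNS verification, that’s roughly the following (a sketch; the manual DNS-01 challenge is interactive and prompts for a TXT record, while DNS-provider plugins can automate it):

```shell
# Issue a cert via the DNS-01 challenge; certbot prompts you to
# create a _acme-challenge TXT record for each domain.
certbot certonly --manual --preferred-challenges dns \
    -d example.com -d www.example.com
```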
I personally prefer to stick with nginx after having seen how the Caddy dev handled this security issue. But that’s just me.
His responses when an LE outage meant that no one could restart (or start, if stopped) Caddy despite having valid certs convinced me it’s not a viable tool.
“Automagic TLS Certificates” which really means “an ACME client you don’t have direct control over” is not really a feature I’d see as valuable for anyone with the slightest bit of operator/admin experience.
That’s barely a “security vulnerability”, and there was a lot of accusatory debate, including on unrelated matters like the EULA, the kind of thing that gives us security people a bad rep with open source maintainers.
I really only skimmed it, but what didn’t you like about it? They seemed pretty open to discussion and it seems to have been resolved?
It was more of the developer’s attitude towards the security issue itself—they straight up dismissed it.
If I understand this correctly it’s rather a leakage of public certificates (e.g. for other subdomains) available on the same server.
The author cites Zstandard compression as a reason for using Caddy. However, no web browser supports it. (Test tool.) He doesn’t mention Brotli which is supported just about everywhere. I feel like I’m missing something here.
I also got the impression that not much research has gone into this, especially from that part. I’m pretty intrigued by Caddy, but I was surprised when the article abruptly ended as I expected it to go deeper into the reasoning and experience/result of the switch.
I wish Caddy would figure itself out and either continue making the 1x version available so we can use a code base that’s documented or finish thoroughly documenting 2, because the way things stand right now creates some unfortunate obstacles to adoption.
I recognize Caddy is open source, and I am not looking a gift horse in the mouth, merely wishing that I could make use of it more easily right now and hoping things change sooner rather than later :)
I’m enjoying h2o so far. My thing with choosing between static “precached” pages and the dynamic server was quite easy to do with the mruby scripting.
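For anyone curious, that kind of static-precached-vs-dynamic decision can be sketched in h2o’s config with an mruby handler (hypothetical paths and backend, not the commenter’s actual setup; status 399 tells h2o to delegate to the next handler):

```yaml
hosts:
  "example.com:443":
    paths:
      "/":
        mruby.handler: |
          Proc.new do |env|
            # Serve the precached page if one exists, otherwise
            # return 399 so h2o falls through to the proxy below.
            path = "/var/www/cache#{env["PATH_INFO"]}.html"
            if File.exist?(path)
              [200, {"content-type" => "text/html"}, [File.read(path)]]
            else
              [399, {}, []]
            end
          end
        proxy.reverse.url: "http://127.0.0.1:8080/"
```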
I mean the perfect frontend server would be something like h2o but in Rust, with rustls for TLS, with something other than mruby for scripting (WASM?), and with support for both KTLS+sendfile (offload stuff onto the kernel) and netmap (bypass kernel; especially good for QUIC where you don’t need a userspace TCP stack; via libpnet)… but I don’t have time to work on that :D
This author shows that for one website their nginx config needed two files, each with 24+ lines, much of which had to be generated with other tools. The author doesn’t mention that nginx then requires the website to be enabled by symlinking it into the magic /etc/nginx/sites-enabled/ directory.
In contrast, the author shows their Caddy config is only one file for two websites, with fewer than 24 lines of config.
This was what prompted me to switch to Caddy from nginx four years ago. I have about forty websites at any given time running on my machine. I found the Caddyfile blocks within a single config file refreshing coming from nginx. My entire config file for all my websites is just 342 lines (many server blocks are just 7 lines of config). For me it was great not having to wrangle a hundred nginx config files and type ln -s dozens of times.
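A Caddy 1 Caddyfile with several such compact blocks looks something like this (hypothetical domains and paths; the point is that many sites fit in one readable file):

```caddyfile
site-one.example.com {
    root /var/www/site-one
    gzip
    tls admin@example.com
}

site-two.example.com {
    root /var/www/site-two
    gzip
    tls admin@example.com
}
```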
You can have everything in one file for nginx too.
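Indeed, nothing stops you from putting every server block in a single nginx.conf (a sketch with assumed paths and plain HTTP for brevity):

```nginx
http {
    server {
        listen 80;
        server_name site-one.example.com;
        root /var/www/site-one;
    }

    server {
        listen 80;
        server_name site-two.example.com;
        root /var/www/site-two;
    }
}
```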
The compact, well-documented, and easy-to-read config is also the main reason I use Caddy.
Unlike other commenters, I also found it trivial to compile my own Caddy for commercial use.
It’s also fairly easy to automate/infrastructure-as-code the compilation of custom Caddy builds. Here’s a personal FreeBSD Port that does so. As written it only supports the add-ons that I use, but it would be easy to extend. It also predates FreeBSD Ports’ support for Go modules.
I thought sites-enabled was just a Debian thing, not Nginx itself?
I moved my static website from an Apache host where I wasn’t the sysadmin to a VPS where I was. I found Nginx a bit hard to get my head around, but with the help of some googling (Digital Ocean’s docs are very good) I got it to work.
If I were to implement TLS (which I’m not currently interested in) I’m concerned to read that Nginx doesn’t play well with Let’s Encrypt (at least according to this author). But that’s not really a knock against Nginx in my book, rather that the tooling from LE is lacking.
I found LE setup with nginx super easy. certbot supports nginx out-of-the-box.
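With the nginx plugin it’s a single command that obtains the cert, edits the relevant server blocks, and sets up renewal (assumed domain names):

```shell
sudo certbot --nginx -d example.com -d www.example.com
```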
That’s good to know.
I don’t know why the author suggests that Let’s Encrypt is hard with nginx. As icyphox mentioned, certbot handles nginx. I have managed to confuse certbot’s autorenewal but a close re-reading of the docs solved that.
OpenBSD’s acme-client (which has a portable version) is also incredibly trivial to use.
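A minimal /etc/acme-client.conf on OpenBSD looks roughly like this (a sketch from memory with assumed paths; check acme-client.conf(5) for the authoritative syntax):

```
authority letsencrypt {
	api url "https://acme-v02.api.letsencrypt.org/directory"
	account key "/etc/acme/letsencrypt-privkey.pem"
}

domain example.com {
	domain key "/etc/ssl/private/example.com.key"
	domain full chain certificate "/etc/ssl/example.com.fullchain.pem"
	sign with letsencrypt
}
```

After that, running `acme-client example.com` (typically from cron) issues and renews the cert.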
Which is probably (I haven’t tested) good for folks using H2O or other web servers that don’t have direct support for LE.
which has a portable version
Which one should be used? The original OpenBSD author no longer maintains their portable variant since acme-client was upstreamed.
Ah, I’m sorry, I didn’t realise that! I’ve just been using the OpenBSD version, so I don’t really have any experience with the current ports. It looks like there are a couple.
I’m currently using Caddy 1 with a pretty complex rewrite/redirect configuration (to keep URLs working), but I want to migrate to Caddy 2 after it exits beta. I just hope the documentation gets a few more examples.