I personally really like sslh: it’s a protocol multiplexer that accepts all connections and acts as a reverse proxy by picking an upstream server based on the protocol that each client seems to expect. For example, if a client connects and immediately sends traffic that looks like an HTTP request, sslh will let an upstream HTTP server deal with the client. If the client does nothing, after a configurable timeout sslh will proxy the connection over to an SSH server.
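The sniffing logic is simple enough to sketch in a few lines (this is just an illustration of the idea, not sslh's actual code):

```python
from typing import Optional

def pick_upstream(first_bytes: Optional[bytes]) -> str:
    """Pick an upstream from the first bytes a client sends.

    first_bytes is None when the client stayed silent until the
    timeout -- sslh treats that as an SSH client waiting for the
    server's banner.
    """
    if first_bytes is None:
        return "ssh"   # silent client: the probe timed out
    if first_bytes.startswith(b"SSH-"):
        return "ssh"   # eager SSH client banner, e.g. b"SSH-2.0-OpenSSH_9.6"
    if first_bytes.split(b" ", 1)[0] in (b"GET", b"HEAD", b"POST", b"PUT"):
        return "http"  # looks like an HTTP request line
    return "http"      # anything else goes to the default upstream
```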
I use sslh at work to expose Prometheus metrics and an SSH server on a single port of a Docker container. Just like that *snaps fingers* I’ve gained SSH access to all of my containers that Prometheus also scrapes.
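The sslh config for that kind of setup is tiny. Something roughly like this (port numbers and backend addresses are made up for illustration):

```
# /etc/sslh.cfg -- sslh uses libconfig syntax; ports here are hypothetical
timeout: 2;   # seconds of client silence before falling back to SSH

listen:
(
    { host: "0.0.0.0"; port: "9100"; }    # the one exposed container port
);

protocols:
(
    { name: "ssh";  host: "localhost"; port: "22";   },  # sidecar sshd
    { name: "http"; host: "localhost"; port: "9101"; }   # metrics exporter
);
```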
Ah, that’s a pretty cool tool. I hadn’t seen anything quite so generalized before.
Why do you use a container for more than one process?
I set up supervisord as the entry point in a base image, and then have derived images add their config files into the directory from which supervisord picks up program configs. I’ve tried conjuring up a similar setup using OpenRC and the SysV init that comes with BusyBox, but it never turned out as smooth as my supervisord setup, so I’m rolling with that.
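The skeleton looks roughly like this (the Alpine base, paths, and package names are illustrative guesses, not my exact files):

```dockerfile
# Dockerfile for the base image -- a sketch
FROM alpine:3.19
RUN apk add --no-cache supervisor
COPY supervisord.conf /etc/supervisord.conf
ENTRYPOINT ["supervisord", "-c", "/etc/supervisord.conf"]
```

```ini
; supervisord.conf in the base image: run in the foreground and pick up
; whatever program configs derived images drop into conf.d
[supervisord]
nodaemon=true

[include]
files = /etc/supervisor/conf.d/*.conf

; a derived image then just COPYs e.g. /etc/supervisor/conf.d/sshd.conf:
;   [program:sshd]
;   command=/usr/sbin/sshd -D
```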
Edit: you asked “why?” and I answered the “how?” - well done! As to why: one of my projects at work ships as a Docker container that runs on machines that I have no access to. For ease of debugging, tailing logs, and generally poking around, nothing beats having good old shell access. Running sshd inside my container alongside the main process nicely sidesteps the need to provision shell access to the Docker host, which is organizationally unpalatable when done on a systematic basis. Simply hiding behind Prom’s metrics port is way easier.
The how is just as interesting, as I have tried it with systemd (which flatly refuses to run in Docker), so I now use supervisord as well, for mostly the same purpose. However, I thought that a container was meant for one process, so I perceived myself to be doing containers “wrong”. I treat them as lightweight VMs for trusted software where actual KVM would be way too much.
What kind of project is it?
You can also use something like https://github.com/Yelp/dumb-init which works very well too. It’s probably more lightweight than supervisord, though.
The nice part of supervisord is that it also works everywhere else, so you only have to learn one tool.
I use dumb-init in containers that only have a single process in them, while supervisord allows me to ship multiple independent but related processes in a single container.
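The single-process case is barely more than two Dockerfile lines (image and command are placeholders):

```dockerfile
# Dockerfile sketch for the single-process case
FROM alpine:3.19
RUN apk add --no-cache dumb-init
# dumb-init becomes PID 1, reaps zombies, and forwards signals to the child
ENTRYPOINT ["dumb-init", "--"]
CMD ["my-daemon", "--foreground"]   # placeholder for the actual process
```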
I’ve come to accept that anything goes inside a Docker container that would have previously gone into a complete Linux system. After all, Docker is just convenience machinery atop Linux namespaces, which is exactly in line with how I’m using Docker: to isolate & virtualize a complete system.
The project I’m currently working on is a log ingestion daemon for a very temperamental legacy application that emits Valuable Business Data™️ in a variety of mostly textual formats. The daemon acts as a bridge between this legacy application and a streaming data pipeline by tailing files and transforming them into streams of events. Most of the difficulty with this daemon has to do with really weird stuff that the legacy application does, so convenient introspection via a sidecar sshd has proven invaluable.
“One process per container” is unjustified dogma left over from the early days of Docker. I like the notion of one *service* per container, where any given application is composed of multiple smaller services. When a service depends on two or more processes that are tightly coupled and it would never make sense to handle them separately, they should go in the same container.
And when the service is in fact only one process, there’s still the issue that it probably doesn’t behave anything like init, which can be a problem, especially if it does any forking.
Nginx supports this natively: https://raymii.org/s/tutorials/nginx_1.15.2_ssl_preread_protocol_multiplex_https_and_ssh_on_the_same_port.html
No need for sslh or other multiplexers.
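The stream config from the linked article has roughly this shape (backend addresses are examples; needs nginx >= 1.15.2 with the stream and stream_ssl_preread modules):

```nginx
stream {
    upstream ssh {
        server 127.0.0.1:22;
    }
    upstream web {
        server 127.0.0.1:8443;
    }

    # Anything that opens with a TLS ClientHello goes to the web backend;
    # clients that don't (like SSH) fall through to sshd.
    map $ssl_preread_protocol $upstream {
        default    ssh;
        "TLSv1.2"  web;
        "TLSv1.3"  web;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $upstream;
    }
}
```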
I like the sentence “But it’s possible to run two protocols over the same port with some smarts in the endpoint.”
I’m gonna use “adding some smarts” to describe my programming from now on, no matter how complex.
git commit -m “add smarts to make it work better”
has to be “some smarts”
This reminds me of Corkscrew, a tool for tunneling SSH through HTTP(S) proxies. I needed it in a similar environment where the only egress allowed was through an HTTP proxy.
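Its usage is a one-liner in ssh_config (proxy host and port here are placeholders):

```
# ~/.ssh/config -- proxy address is a placeholder
Host behind-proxy
    HostName ssh.example.com
    # corkscrew issues an HTTP CONNECT request through the proxy on 3128
    ProxyCommand corkscrew proxy.example.com 3128 %h %p
```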
Great article! I’ve been using a similar procedure, but rather in a more primitive manner, by using socat and the ProxyCommand ssh config value.
The solution proposed by the author is much more versatile and can “hide in plain sight”, in case ISPs actually care to check whether the service you’re accessing speaks HTTP.
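For reference, the socat variant looks something like this (host names and the proxy port are placeholders):

```
# ~/.ssh/config -- socat speaks HTTP CONNECT to the proxy for us
Host tunneled
    HostName ssh.example.com
    ProxyCommand socat - PROXY:proxy.example.com:%h:%p,proxyport=3128
```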
Stupid question: What makes this superior to a VPN ?
Most VPNs use exotic protocols. HTTPS is not one.
It’s a good question. Keep in mind that the use case is very specialized: you’re on a network that only allows outgoing traffic on a handful of ports. That likely means the ports VPNs use are blocked. HTTPS (port 443) is so popular that it can’t be blocked, so tunneling other protocols over 443 might be the only way to get them out.
TLS ALPN is the elegant way to do this.
How does TLS ALPN help? SSH is already a secure transport, it doesn’t need TLS.
If you tunnel SSH over TLS, you can switch on ALPN to figure out which protocol is going to be spoken over the connection and handle it appropriately. He’s doing the same thing without ALPN, just sniffing the connection instead, which works but is less reliable.
Furthermore, SSH is usually filtered at the protocol level whereas TLS is not.
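To sketch the ALPN variant: nginx’s stream module can key on the advertised ALPN protocols instead of sniffing. This is my own sketch, not from the article; “ssh” is a private ALPN token, and since the demuxer forwards raw TLS bytes, the SSH backend needs its own TLS termination (e.g. stunnel in front of sshd):

```nginx
stream {
    # $ssl_preread_alpn_protocols is the comma-separated ALPN list
    # from the ClientHello (nginx >= 1.13.10)
    map $ssl_preread_alpn_protocols $upstream {
        ~\bssh\b  127.0.0.1:2222;   # TLS-wrapped sshd (e.g. behind stunnel)
        default   127.0.0.1:8443;   # regular HTTPS backend
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $upstream;
    }
}
```

On the client side, something like `ProxyCommand openssl s_client -quiet -alpn ssh -connect example.com:443` would present that ALPN token during the handshake.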
I don’t see what’s unreliable about it. It’s 100% reliable.
Tunneling SSH over TLS sounds awful. That’s needless double encryption.
It’s not needless if SSH is DPI filtered.
I did misinterpret the blogpost though, I thought he was doing SSH over TLS but it’s just SSH over HTTP.
“But it’s possible to run two protocols over the same port with some smarts in the endpoint.”
This had simply never occurred to me before. But then seeing the proof-of-concept, it immediately clicked.
Thanks for sharing this.