Is it just me, or should public key authentication and disabling password logins be near the top of that list, rather than an “advanced topic” that will be covered in future?
Author here. You are right. I figured it would be too much content to add it there, so I’ve created a separate blog post for it. You can find it here.
Thank you for your feedback!
definitely, and it’s not that difficult, you just have to check your keys work before turning off password logins…
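For anyone who wants the concrete steps, a minimal sequence looks roughly like this (a sketch; the user and host are placeholders, and the test in step 3 is the important part):

    # 1. generate a key on the client
    ssh-keygen -t ed25519
    # 2. install it on the server
    ssh-copy-id user@example.com
    # 3. confirm key-only login works, without falling back to a password
    ssh -o PasswordAuthentication=no -o PreferredAuthentications=publickey user@example.com
    # 4. only then, in /etc/ssh/sshd_config on the server:
    #      PasswordAuthentication no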
Important trick: before you touch the sshd config, open a second SSH session in another terminal and leave it connected while you make and test your changes.
This way if you mess up the changes you’re making in terminal 1, the other one is still there.
Also, SIGHUP sshd, or tell it to reload its config. Don’t stop it.
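Something like this, assuming systemd (the unit is usually “ssh” on Debian/Ubuntu and “sshd” on RHEL-likes):

    # validate the new config, then reload without dropping existing sessions
    sshd -t && systemctl reload sshd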
If you have session multiplexing configured in your ssh client, this will still let you in even if new logins would no longer work, as it re-uses your existing session to spawn a new one. To check this safely, either disable session multiplexing first, delete the multiplexing socket file, or log in from a different system.
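With OpenSSH’s ControlMaster multiplexing, you can force a genuinely fresh login, or inspect and shut down the existing master, like this (host and user are placeholders):

    # open a new connection that bypasses any multiplexed master
    ssh -o ControlPath=none -o ControlMaster=no user@example.com
    # or check / stop the existing master for that host
    ssh -O check user@example.com
    ssh -O exit user@example.com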
But I agree: the first thing to do is allow logins with public keys only and disable passwords, and largely ignore the other recommendations, as they don’t add anything useful and mostly just add frustration and problems.
Something in favor of using non-default SSH ports that is often overlooked: it won’t stop a determined attacker, but that’s not the point. The point is that it significantly cuts down on noise in your logs created by automated low-effort intrusion attempts.
The problem with a non-default port is that it’s either a low number (in which case it shows up on a port scan) or it’s an unprivileged port. If you pick a port above 1024, any user on the system can bind to the same port if they win a race against the ssh daemon. This means that someone who compromises an unprivileged user has a path to privilege escalation by impersonating the ssh daemon.
I’m also not wild about disabling X11 forwarding on the server. It doesn’t improve server security; it only helps prevent the server from attacking the client, so it should be the client’s responsibility. The ssh command doesn’t enable it by default, so you are only protecting clients that have explicitly requested it, and only against an attacker who already has the privileges to re-enable the setting if they actually wanted to launch that attack. You add inconvenience in exchange for no security.
At the cost of slightly increasing setup complexity, the port race problem can be mitigated by having sshd bind to port 22 as normal, but redirecting your chosen high port to port 22 locally via firewall rules.
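For example, with iptables (a sketch; 2222 stands in for whatever high port you advertise, while sshd itself keeps listening on 22):

    # rewrite incoming traffic on the advertised high port to the real sshd on 22
    iptables -t nat -A PREROUTING -p tcp --dport 2222 -j REDIRECT --to-ports 22
    # you would typically also filter direct external access to port 22 separately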
Yup, that’s a better solution (and one I’ve used in the past).
You can also put a higher value in /proc/sys/net/ipv4/ip_unprivileged_port_start and make 1987 privileged as well.
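For example (a sketch; 1987 is the port from the parent comment and 2000 is just an arbitrary cutoff above it):

    # everything below 2000 now requires root to bind, so 1987 becomes privileged
    sysctl -w net.ipv4.ip_unprivileged_port_start=2000
    # to persist across reboots, put the same setting in a file under /etc/sysctl.d/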
I don’t understand why noise in logs is so concerning? It’s not a security issue for them to fail to get in…
It’s not terribly concerning, but spotting relevant log entries is easier when you have fewer irrelevant entries to filter out.
It is a mild annoyance that instead of genuine log messages, >99% of your logs are to do with automated scanners. It can even make logs roll more quickly in some configurations.
Same thing happens for web servers listening on ipv4 but there you can’t change port. Instead where possible I only listen on ipv6 and have the CDN proxy ipv4 to ipv6.
Fail2ban is useful for this too.
The point is that it significantly cuts down on noise in your logs created by automated low-effort intrusion attempts.
I’ve been making the same point for years. Unfortunately, over the last several months, I’ve noticed that the high-numbered SSH port I commonly use has been getting numerous brute force attempts. I wonder if it is listed on shodan or one of those types of services, and that’s where the attention is coming from?
I also noticed one of my IPv6-only hosts getting some loving on its high-numbered SSH port from a Hurricane Electric IP address. I was about to send an abuse report to abuse@he.net, but when I looked at whois, I found that the address was assigned to “The Shadowserver Foundation”, https://shadowserver.org. My server’s upstream is HE, so I’m assuming HE is asking these folks to scan their address space.
The noise issue can be resolved fairly simply by installing something like fail2ban, which has the added benefit of working for a whole bunch of other authenticated services, some of which it isn’t practical to change ports for.
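A minimal jail is enough for the SSH case (a sketch of /etc/fail2ban/jail.local; exact defaults vary by distro):

    [sshd]
    enabled  = true
    maxretry = 5
    findtime = 10m
    bantime  = 1h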
Firewalls can also throttle an attack on their own by rate-limiting repeated connection attempts, without the need for additional tools (though unlike fail2ban they only see connections, not failed logins).
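For example, with the iptables “recent” module (a sketch: a source IP that opens more than 3 new SSH connections within 60 seconds gets dropped):

    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --name ssh --set
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --name ssh --update --seconds 60 --hitcount 4 -j DROP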
I don’t know why anyone would open sshd to the internet these days. I use tailscale to connect back home.
Good point. Along that note though… why would anyone use some external company to configure wireguard for them? I could understand something more complicated to set up, like openvpn… but wireguard is dead simple, and relying on a possibly fly-by-night “tech” company to do it seems like a bad idea. Am I wrong?
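For reference, “dead simple” means roughly this much config on the home end (a sketch; keys, addresses, and port are placeholders, with a mirror-image config on the laptop):

    # /etc/wireguard/wg0.conf
    [Interface]
    PrivateKey = <home-private-key>
    Address    = 10.8.0.1/24
    ListenPort = 51820

    [Peer]
    PublicKey  = <laptop-public-key>
    AllowedIPs = 10.8.0.2/32

Bring it up with wg-quick up wg0.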
Because tailscale is simpler and has more features built in.
I don’t have to open firewall ports and/or set up port mapping, configure dynamic name resolution when the IPv4 or IPv6 address changes (in my case it’s Starlink, so I would have to use an online server anyway), etc.
I sometimes run endlessh on IPv4 port 22 to distract attackers and report brute force attempts to AbuseIPDB.
How is disabling SFTP beneficial for security, other than the very specific case of ForceCommand? If someone can use SFTP, they usually already have shell access.
Also - relying on a potentially compromised server not to let you fuck yourself over with X11 forwarding might not be the best idea either.
Author here: I’ve just created another post on how to use public key authentication
ssh - How to use public key authentication on Linux
Appreciate your feedback! Going to add some notes later.
Accepting a restricted cipher set, ed25519 and newer key types only, and IPv6- or VPN-only connections makes a massive difference in reducing log spam. But putting spiped in front (https://www.tarsnap.com/spiped.html) is the clear winner, and for most *nix setups it’s transparent with a simple config.
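On the sshd side, that kind of restriction is only a handful of lines (a sketch; directive names assume a fairly recent OpenSSH, where older versions use PubkeyAcceptedKeyTypes instead of PubkeyAcceptedAlgorithms):

    # /etc/ssh/sshd_config
    PubkeyAcceptedAlgorithms ssh-ed25519,sk-ssh-ed25519@openssh.com
    HostKeyAlgorithms ssh-ed25519
    KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
    # only if you really want IPv6-only:
    AddressFamily inet6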
Not sure why one would pick spiped over wireguard today, tbh. (the choice between spiped and ipsec is/was a different matter).
I love the spirit but it feels like the workflow of “SSH into servers from your laptop” is increasingly becoming an antipattern. From EC2 Instance Connect to Kubernetes clusters where SSHing into individual nodes is very discouraged to outright not allowed, it seems like the pattern of “leave an SSH port open to the internet” is beginning to be a relic of the past.
I went from “managing SSH keys is a very important part of my job” to “anybody attempting to SSH into a server should probably be stopped”.
Definitely agree when we’re talking about managing fleets of “crops” (to use the recent analogy posted here), but there’s going to be a continued need for secure remote login on your “houseplants”. Speaking generally, we as an industry rely on levels of abstraction to keep the crops running, whether that’s a cloud provider being able to cycle instances quickly, a dedicated hardware management team, or automation that removes the human element entirely. I want remote access to my personal k8s cluster because I don’t have any of that abstraction and any automation I build is also my responsibility to fix. That doesn’t necessarily mean a public SSH port, but having some properly secured remote login is a necessity.
As someone who is apparently a relic without realizing it, what’s the alternative? rsh over VPN?