Because I do! :)
I usually have a single multiplexed connection open to my server, which connects me to chat. I also check my mail using some scripts that communicate over stdin/stdout to a process run via ssh. And I send email via a localhost port forwarded to the server.
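That multiplexed setup can be sketched as a `~/.ssh/config` fragment (the host alias, socket path, and ports here are illustrative, not my exact config):

```
# One master connection; subsequent ssh/scp to "myserver" reuse it
Host myserver
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 10m
    # localhost port forwarded to the server's SMTP submission port
    LocalForward 2525 localhost:587
```

(The `~/.ssh/sockets` directory has to exist before the first connection.)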
And then you go and use HTTPS to post on lobste.rs, like a sucker!
Protocol design is nearly dead. HTTP killed it.
Many, many services force all communication over HTTP(S) no matter how ill-suited it is. I suspect some engineers have no idea what a socket even is, because HTTP has effectively become the substrate.
http isn’t optimized for interactive use.
And yet it seems to do a pretty good job in that role.
It would be interesting and instructive to look at a wire dump of an interaction with an HTTPS based chat server and sshchat.
Many web apps don’t actually need to feel as though they respond immediately to remote changes of state. The protocol choice doesn’t really matter for these.
A lot of apps that do have that requirement, meet it with web sockets. Web sockets aren’t “really” http(s), in the sense that opening one puts the web server in a different mode that’s standardized elsewhere.
When an app “fakes” web sockets by polling with ordinary http requests, this tends to be very noticeable; the protocol could handle much more frequent requests, but in practice polling tends to happen every five to ten seconds to reduce server load. That’s totally fine for things like receiving email, but playing chess that way is extremely frustrating. :)
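The naive polling approach is essentially this (URL and interval are hypothetical):

```
# Poll a hypothetical endpoint every 5 seconds; worst-case notification
# latency is roughly the polling interval, regardless of network speed.
while sleep 5; do
    curl -s https://chess.example.com/api/moves
done
```

A web socket (or a long-lived SSH channel) pushes the move the moment it happens instead.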
So, I think there’s an unstated context here that’s worth pointing out: using ssh at the application layer rather than as general systems network plumbing.
It’s plenty common for many of us to ssh into a box and, say, run irc, or maybe redirect a port so gtalk or slack or whatever will work, but using ssh as the network protocol the application itself presents to the user is something I’d love to see more of.
I’d argue that Saltstack uses ssh as the communications layer and directly presents it to the user. I was wary of the design at first but quickly embraced it and realized what a good idea it is (assuming you enforce key verification and things like that, same as HTTPS verification).
Really? I thought salt used zeromq with a shared key that was rotated (generated from a public/private keypair).
They do use zeromq and now that I think about it I think you’re right. I could be totally off base here. Perhaps I’m thinking of early Saltstack versions? But those had zeromq as well… Hmm I’m getting old.
Maybe you were thinking of ansible, which uses ssh/paramiko?
SaltStack’s default transport is ZeroMQ. However, it also supports SSH:
I use it for everything :) I replaced a full-blown OpenVPN solution with plain OpenSSH tunneling (without even using OpenSSH’s built-in VPN support). The end result is a MUCH simpler setup & instructions to access crucial services, plus better control of which box/machine sees whom. It’s nice that an account can be pretty much locked down to a single command/option set via authorized_keys. I even hit a bug in Ansible because of it :)
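That lock-down looks roughly like this as an authorized_keys entry (the command path and key material are placeholders):

```
# This key may only run the forced command; no forwarding, no PTY
command="/usr/local/bin/status-report",no-port-forwarding,no-pty,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... user@laptop
```

Whatever command the client asks for, sshd runs the forced one instead.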
I recently had a routing issue that made it impossible for me to access my own home server (which happens to run far away from me). I routed all my traffic by proxying it via an SSH connection through a different machine. I use OpenSSH as an ad hoc SOCKS proxy pretty often, whenever I need to :)
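The ad hoc SOCKS proxy is a one-liner (the hostname is illustrative):

```
# Dynamic SOCKS proxy on local port 1080, tunneled through jumphost;
# -N opens no remote shell, just the tunnel
ssh -D 1080 -N jumphost
```

Then point the browser or system proxy at socks5://localhost:1080.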
Roughly. I prefer to bind everything to localhost and forward a port via OpenSSH. I trust OpenSSH more than OpenVPN or built in security features of applications I deploy (like the Jenkins web interface).
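A local forward to a localhost-bound service like that Jenkins interface would be sketched as (host and ports assumed):

```
# Local port 8080 -> localhost:8080 on the server, where Jenkins listens;
# Jenkins itself never needs to be exposed beyond the server's loopback
ssh -L 8080:localhost:8080 -N myserver
```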
Hey everyone, thanks for dropping by the main ssh-chat server (chat.shazow.net). That one is running a fairly old version of the software (/uptime -> 13579h33m20.784837542s).
I just deployed the latest release on an east-coast server here, please come help test it:
Somebody was mean and “fuzzed” it by shoving /dev/urandom into it.
Damnit lobsters, this is why we can’t have nice things. >:|
People are very welcome to fuzz their own servers. Binary releases are here: https://github.com/shazow/ssh-chat/releases
Admittedly, SSH is missing some pieces. It’s lacking a notion of virtual hosts, or being able to serve different endpoints on different hostnames from a single IP address.
With horrible web apps, you can let the user log in with their email address. I can’t think of a way to do that with standard OpenSSH.
I think I said this before: Most of the world’s web forms, interactive PDFs and web control panels could easily be reproduced in a reasonable cli/ncurses interface.
Your point is supported by the many mainframe and terminal apps running the backends of world commerce. Well, the ones with the better user interfaces. They’re quite intuitive. Especially login screens: you see “Username: ”, type your username, and press Enter. Then you see “Password: ” and do the same. Then you get a menu with numbered items, or a page explaining your options, usually with an item for Go Back and an escape key. Straight-forward stuff. Extra benefit: runs FAST.
Yep, as someone who has used some of these (REXX, I think?) TUIs… I’ll just say they end up being vastly easier to use than any web app.
Bonus, they don’t hog a core of a cpu doing client side… whatever.
What are you talking about? You can totally use email addresses, then handle registration just like you would in a web app.
ssh -l "firstname.lastname@example.org" $HOST
Hm. And map email addresses to UNIX usernames?
$ doas useradd -m firstname.lastname@example.org
useradd: `firstname.lastname@example.org' is not a valid login name
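One hypothetical workaround is to derive a valid login name from the email address at registration time and keep the real address elsewhere; a minimal sketch (this mapping scheme is my assumption, not standard practice, and collisions would need handling):

```shell
# Replace characters that aren't valid in a login name with '_',
# then truncate to a conservative length limit.
email="firstname.lastname@example.org"
login=$(printf '%s' "$email" | tr -c 'a-z0-9' '_' | cut -c1-32)
echo "$login"   # firstname_lastname_example_org
```

The resulting name passes useradd's validation, at the cost of losing the one-to-one mapping unless you record it.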
If you’re building the equivalent of a web app, I’d recommend against using UNIX accounts for each user.
Aw, but why? I like the separation the OS does for me. Are large /etc/passwd files such a huge problem?
True, but you’re basically locking yourself into a preforking, process-per-connection model.
4 billion uids may seem like a lot, but you may burn through them faster than you think.
If your app depends on sharing data between users, you’ll blow out the gid space even faster. You’re probably going to end up with a bunch of setuid helpers and it’s going to be hilarious when somebody figures out how to abuse that to send lolcats to the www user.
Just generally, kernel separation sounds cool, but the corollary is that your users are living in the same namespace as your application.
This is exactly what I do with LDAP logins for multi-domain user accounts. You wouldn’t be using UNIX usernames for a large-scale application, I’d expect. And if you did, you’d probably just use usernames and tie the email to some other metadata property, like the GECOS field, during registration.
So someone chatting me has to guess which computer I’ll be at to receive it? (We do all use separate keypairs for each computer, right?)
HTTPS has a client authentication layer too that I assume carries over into HTTP/2. But most sites prefer not to use client certificates. If you can fix that then you’ll get all the functionality you would with SSH, and with much less migration effort.
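For what it’s worth, talking to a site that does accept client certificates is already simple on the client side (file names here are placeholders):

```
# Present a TLS client certificate; the server decides whether to accept it
curl --cert client.pem --key client-key.pem https://example.com/
```

The hard part has always been issuance and the browser UX, not the protocol.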
(We do all use separate keypairs for each computer, right?)
What? God, no. We use separate key pairs for each administrative domain: One for work, one for personal stuff.