I’m looking for papers, free software and even critical reviews of existing applicative protocols that natively include encryption features.
Most mainstream internet applicative protocols (HTTPS, FTPS, SMTPS and so on…) had Transport Layer Security added as an afterthought.
While designing the core protocol of Jehanne, I realized I was assuming I would do the same, without much reflection. After all, this is a clear example of separation of concerns.
However, given the history of the Internet, I wonder whether applicative protocols that natively support end-to-end encryption wouldn’t be better, so I’m looking for prior art: I’d like valid reasons to adopt or dismiss this idea.
Notes:
tcpcrypt and Secure Spread come to mind.
Truth be told, you’re more likely to screw things up trying to modernize one of these. There are just so many failure modes, attacks, and complex interactions to consider. Better if you have something that’s a black box with clear usage on top of a strong implementation. That’s what high-assurance security, esp. the U.S. military, did with link encryptors and VPNs. I recommend just tunneling stuff through a solid implementation of app- or IP-level crypto. The OpenBSD team’s LibreSSL, Amazon’s s2n, and Donenfeld’s WireGuard are good examples.
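To make the “tunnel through a solid implementation” point concrete, here is a minimal Python sketch using the standard library’s ssl module as the black box. The host and port are placeholders; this is an illustration of the layering, not a hardened client.

```python
import socket
import ssl

def open_tunneled_connection(host: str, port: int) -> ssl.SSLSocket:
    """Layer a plaintext application protocol over a vetted TLS stack.

    The application never touches crypto primitives; it just reads
    and writes bytes on the wrapped socket.
    """
    # create_default_context() enables certificate verification,
    # hostname checking, and sane protocol/cipher defaults.
    ctx = ssl.create_default_context()
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)
```

The application protocol itself stays crypto-free; everything security-critical is delegated to the library’s defaults.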
That’s my insight too.
But, while redesigning the internet (that is the true purpose of my little os :-D), I was wondering if requiring end-to-end encryption for authenticated connections wouldn’t substantially improve network security.
Tunneling is battle tested, but requires its own configurations, its own handshakes… its own complexity.
So while not caring about encryption would simplify my protocol (though probably not that much, actually), I wonder which is simpler:
What do you think about that? Does Option 1 increase or decrease the attack surface? Would it be worth the effort?
(sorry if I ask you to confirm and elaborate a bit, but actually… I highly value your opinion, so I’d like to really understand it)
It’s less complexity for you, though, since most of it’s already implemented. You just use it the way the experts that built it tell you to. Crypto primitives also require more domain knowledge to properly compose than crypto protocols that are intended to be used like lego blocks hiding the domain knowledge on the inside. If you’re implementing internals, you might also need to do things like covert or side channel analysis. That’s its own skill set that most coders don’t have. The time you spend picking up all of that just to reinvent the wheel is time you could’ve put into innovative aspects of your OS. That would be the bigger tradeoff to consider than just the security aspect.
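As a tiny illustration of the domain knowledge those lego blocks hide, here is a hedged Python sketch (the key and messages are made up): even something as small as comparing a MAC tag is a trap when you compose primitives yourself.

```python
import hashlib
import hmac

KEY = b"example-shared-key"  # illustrative placeholder, not a real key

def sign(message: bytes) -> bytes:
    # HMAC-SHA256 tag over the message.
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # A naive `tag == sign(message)` comparison can leak timing
    # information; compare_digest is the constant-time comparison
    # a protocol composer has to know to reach for.
    return hmac.compare_digest(tag, sign(message))
```

A high-level protocol library makes this choice for you; with raw primitives, every such detail is on you.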
“I was wondering if requiring end-to-end encryption for authenticated connections wouldn’t substantially improve network security.”
I think it would. It was even mandatory in high-assurance security via the Red Book of TCSEC. It was usually a separate device from easily-compromised hosts, with guard servers being common, though DiamondTek and Boeing used PCI cards. The user authenticated to it through an unspoofable trusted path; it recorded their security clearance and didn’t let them even see anything too confidential. It checked all incoming packets for correct fields and labeled outgoing traffic with the user’s name or level, both for tracing and so guards could stop classified stuff from hitting Internet pipes. Some devices suppressed covert channels, some did rate limiting for DoS mitigation, and some could inspect or restore hosts.
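A toy Python sketch of just the egress-guard piece described above; the label names, their ordering, and the boundary level are all invented for illustration.

```python
# Clearance levels in increasing order; invented for this sketch.
LEVELS = ["unclassified", "confidential", "secret", "top secret"]

def may_egress(packet_label: str, boundary: str = "unclassified") -> bool:
    """Guard rule: traffic labeled above the boundary level
    must not leave for the public Internet."""
    return LEVELS.index(packet_label) <= LEVELS.index(boundary)
```

The real devices did far more (field validation, covert-channel suppression, rate limiting), but the label comparison is the core of the guard idea.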
Boeing combined all those techniques in their OASIS proposal for a high-assurance-secure pub-sub system. It’s how I learned about Spread. I recommended similar techniques be made mandatory via regulations in modems/routers supplied by ISPs, to solve the DDoS problem at the source on top of other problems. The E language solved the composition problem with capability-based security at the language level. Ethos is another modern take on the problem, focusing on abstractions that are secure-by-default in use. Such work inspired Cap’n Proto, from the guy who made Protocol Buffers.
Wow… Thanks! :-)
You’re welcome and good luck!
What nickpsecurity said. Also, (Open)SSH is an example of an application (applicative?) protocol that natively includes encryption. There are also some applications that wrap individual connections - e.g. stunnel (OpenSSL) or Colin Percival’s spiped (custom). Also, consider certain Kerberized applications.
But overall, you’d need a reason to not use SSL/TLS; I can think of a few reasons not to, but defaulting to “use what everyone uses” is generally a good idea.
Please, can you elaborate? Which reasons?
Any argument pro or against will improve my informed decision.
For larger systems, read http://www.daemonology.net/blog/2011-07-04-spiped-secure-pipe-daemon.html and http://www.daemonology.net/blog/2012-08-30-protecting-sshd-using-spiped.html - basically, TLS is frighteningly complex, with all that entails. Also note that spiped has a different keying model, which can be another reason to choose something that is not TLS. (You can usually twist certificate-based authentication to fit whatever you need, though.)
For small embedded systems, you may simply not have the space to include a TLS library, or may not have the space to include a good TLS library.
That said, don’t roll your own if any of this is news to you.
Thanks a lot!
The netcode.io spec talks about crypto, but its approach seems strange to me: using public-key crypto for messages instead of doing a key exchange.
Valve also recently published their netcode library. It’s not documented, but it does link to the QUIC crypto spec, which might be worth reading.