I’m looking for books and papers about network protocol design (ideally suited for self-study) to inform the design of the network file system protocol for my research/toy OS.
I searched the web, but while I’ve found a lot of material about networking, very few texts discuss the design trade-offs and lessons learned from protocols like NFS, 9P2000, FTP or HTTP (or even SMTP, which, while unrelated, seems to be the most distributed system so far):
To my untrained eye, they all seem interesting, but I cannot afford all of them. However, as far as I can tell from the tables of contents available online, none of them talk about security and encryption (nor about lessons learned from previously designed protocols).
Any further book/textbook/paper suggestion is welcome.
If you really want to look at the fundamental science and history of the major network protocols (and maybe get a glimpse of why there’s so little innovation in this area), I highly recommend John Day’s “Patterns in Network Architecture: A Return to Fundamentals”, Prentice Hall. It’s out of print, but not too hard to find, and most of the material is online as well, on the Pouzin Society website. Be aware that it’s not exactly the most practical book, since Day’s a bit of an iconoclast, but he’s also an old-timer who has experienced much of this first-hand. Very interesting read. I think you’d enjoy it.
His critique of IPv6 is refreshing and interesting. Do you know whether other experts share his opinion? I have the impression that the general consensus is that IPv6 is a good thing.
It seems very very interesting, thanks!
In general, I would say this sort of material is not the kind of content that makes it into books. You’ll probably have more luck by:
Well, actually, this is what I’ve done so far.
I was looking for something more solid, to “verify” and improve my current design and fix the obvious issues that my ignorance hides from me. Indeed, I found Sharp’s idea of using CSP to formally define and verify protocol designs very interesting (my first thought was “I guess that nobody ever tried to verify HTTP! :-D”, but actually I don’t know whether any of the mainstream protocols I cited have been formally verified).
Nevertheless, please share any specific paper (or insightful rant) that belongs to these categories.
If you are interested in formal verification of network protocols, I’d definitely recommend checking out the Tamarin Prover. It has been used to formally verify protocols like TLS 1.3, DNP3 (used in SCADA systems), etc. David Wong did a video introduction if you prefer a guided tour.
One of the authors of Tamarin also co-authored a book on the formal verification of network protocols. It’s focused more on the theory than on engineering, and gives a reasonably comprehensive overview of the area.
Bias disclosure: I’m doing a PhD under the supervision of one of the authors.
TLA+ can also be used to verify the protocol purely at the theoretical level, right?
It helps to go where prior efforts went, to leverage whatever they can teach you. TLA+ is great for protocols, but most of the work around it is model checking. That’s only partial verification, up to a certain number of interactions or states. Other tools are more often used for full verification. What you can do is start with TLA+ to see the benefits without investing tons of time, before the bigger investment of learning formal proof. For the same reason, one can run TLA+ over a protocol design first, to get a quick check on some properties before proving them. We have enough data by now that I recommend specification-based testing, model checking, and fuzzing of anything before proving it. Those quickly catch lots of problems, with proof catching the deepest ones, where the state space would simply never have let you reach them with the other methods.
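To make the “partial verification up to a certain number of states” point concrete, here is a toy, hand-rolled exhaustive state exploration of a two-party version handshake in C, roughly the kind of thing TLC (the TLA+ model checker) automates over a real specification. The handshake, the state names and the invariant are all invented for illustration; they are not taken from 9P2000 or any real protocol.

    /* A toy, hand-rolled "model check" of a made-up version handshake.
     * Every reachable global state is enumerated and an invariant is
     * checked on each one; TLC automates this over a TLA+ spec. */
    #include <stdio.h>
    #include <string.h>

    enum { C_IDLE, C_SENT, C_READY };                 /* client states */
    enum { S_IDLE, S_REPLIED };                       /* server states */
    enum { MSG_NONE, MSG_TVERSION, MSG_RVERSION };    /* channel contents */

    typedef struct { int c, s, c2s, s2c; } State;     /* one global state */

    static State seen[256];
    static int nseen, violations;

    static int visited(State x) {
        for (int i = 0; i < nseen; i++)
            if (memcmp(&seen[i], &x, sizeof x) == 0) return 1;
        return 0;
    }

    /* Invariant: the client only becomes READY after the server replied. */
    static void check(State x) {
        if (x.c == C_READY && x.s != S_REPLIED) violations++;
    }

    static void explore(State x) {
        if (visited(x)) return;
        seen[nseen++] = x;
        check(x);

        State n;
        if (x.c == C_IDLE && x.c2s == MSG_NONE) {      /* client sends Tversion */
            n = x; n.c = C_SENT; n.c2s = MSG_TVERSION; explore(n);
        }
        if (x.s == S_IDLE && x.c2s == MSG_TVERSION) {  /* server replies */
            n = x; n.s = S_REPLIED; n.c2s = MSG_NONE; n.s2c = MSG_RVERSION; explore(n);
        }
        if (x.c == C_SENT && x.s2c == MSG_RVERSION) {  /* client consumes reply */
            n = x; n.c = C_READY; n.s2c = MSG_NONE; explore(n);
        }
    }

    int main(void) {
        State init = { C_IDLE, S_IDLE, MSG_NONE, MSG_NONE };
        explore(init);
        printf("explored %d states, %d invariant violations\n", nseen, violations);
        return 0;
    }

The value of a real model checker is that it does this for millions of states and interleavings and hands you a counterexample trace when the invariant fails; the sketch just shows the shape of the check.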
The sad truth is that all I’m currently able to do is integrate the protocol into my kernel and write a server for it.
Unfortunately, while writing a server is relatively simple, integrating the protocol into the kernel is a pretty complex task, probably the most complex refactoring I’ve faced so far, because while the 9P2000 protocol handling is pretty well isolated in a Plan 9 kernel, the underlying model is strongly tied into… everything.
One could write an alternative test client that is not a kernel (and that’s probably what I will actually do), but… at some point I’ll have to integrate it into Jehanne’s kernel anyway, and I’m a bit afraid of what issues a test client could hide.
That’s why I’d like to somehow “check” the design before diving into such a C-refactoring nightmare…
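For what it’s worth, the kind of standalone test client I have in mind could start out very small. Here is a rough sketch, assuming a 9P2000 server reachable over TCP on 127.0.0.1:564 and the documented Tversion wire layout (size[4] type[1] tag[2] msize[4] version[s], little-endian, Tversion = 100, Rversion = 101, NOTAG = 0xFFFF); anything beyond that is invented and would need adapting to the actual dialect.

    /* Rough sketch of a minimal standalone 9P2000 test client: connect over
     * TCP and attempt version negotiation.  The wire layout follows the
     * published 9P2000 documentation; everything else is illustrative. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    enum { Tversion = 100, Rversion = 101, NOTAG = 0xFFFF };

    static void put2(uint8_t *p, uint16_t v) { p[0] = v; p[1] = v >> 8; }
    static void put4(uint8_t *p, uint32_t v) {
        p[0] = v; p[1] = v >> 8; p[2] = v >> 16; p[3] = v >> 24;
    }

    int main(void) {
        const char *version = "9P2000";
        uint16_t vlen = strlen(version);
        uint32_t size = 4 + 1 + 2 + 4 + 2 + vlen;
        uint8_t msg[64], reply[128];

        /* build Tversion: size[4] type[1] tag[2] msize[4] version[s] */
        put4(msg, size);
        msg[4] = Tversion;
        put2(msg + 5, NOTAG);
        put4(msg + 7, 8192);              /* proposed maximum message size */
        put2(msg + 11, vlen);
        memcpy(msg + 13, version, vlen);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(564) };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }
        if (write(fd, msg, size) != (ssize_t)size) { perror("write"); return 1; }

        ssize_t n = read(fd, reply, sizeof reply);
        if (n >= 5 && reply[4] == Rversion)
            printf("server negotiated a version (%zd reply bytes)\n", n);
        else
            printf("unexpected or missing reply (%zd bytes)\n", n);
        close(fd);
        return 0;
    }

From there the same harness could grow to cover Tattach, Twalk and friends, and be pointed first at a known-good server and then at mine, so disagreements show up before the kernel work starts.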
Look into state machine modeling and Design-by-Contract. Put pre-condition and optional post-condition checks on each of the modules. You can do that on protocols, too. Feed them data generated specifically to exercise those checks, plus data from fuzzers. Any errors are going to be inputs causing bad states or interactions. That’s the easy approach. DbC is mostly assertions, by the way, if you don’t want to learn state-based modelling.
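A minimal sketch of what that looks like in C, using plain assert() as the contract checks on a made-up message handler. The states, message names and invariants are invented for illustration; the point is only that fuzzer-generated input plus cheap assertions pinpoints bad states quickly.

    /* Design-by-Contract as plain assertions on a toy protocol handler.
     * The states, messages and transition rules are illustrative only. */
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    enum conn_state { ST_IDLE, ST_NEGOTIATED, ST_ATTACHED };
    enum msg_type   { MSG_VERSION, MSG_ATTACH, MSG_CLUNK, MSG_MAX };

    struct conn {
        enum conn_state state;
        unsigned        open_fids;
    };

    static void handle_msg(struct conn *c, enum msg_type t)
    {
        assert(c != NULL);                    /* pre-condition */
        assert(t < MSG_MAX);                  /* pre-condition */
        enum conn_state before = c->state;

        switch (t) {
        case MSG_VERSION:
            if (c->state == ST_IDLE)
                c->state = ST_NEGOTIATED;
            break;
        case MSG_ATTACH:
            if (c->state == ST_NEGOTIATED) {
                c->state = ST_ATTACHED;
                c->open_fids++;
            }
            break;
        case MSG_CLUNK:
            if (c->state == ST_ATTACHED && c->open_fids > 0)
                c->open_fids--;
            break;
        default:
            break;
        }

        /* post-condition: attach is only reachable after version negotiation */
        assert(c->state != ST_ATTACHED || before != ST_IDLE);
        /* post-condition: fids are only ever held while attached */
        assert(c->open_fids == 0 || c->state == ST_ATTACHED);
    }

    int main(void)
    {
        /* stand-in for a fuzzer: hammer the handler with random sequences */
        struct conn c = { ST_IDLE, 0 };
        for (int i = 0; i < 1000000; i++)
            handle_msg(&c, (enum msg_type)(rand() % MSG_MAX));
        printf("no contract violations\n");
        return 0;
    }

In a real kernel the assertions could compile away in release builds, so the checks cost nothing once the protocol handling has been beaten on enough.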
I like Marshall Rose’s On the Design of Application Protocols, and his specs for BEEP (here, and here). Higher/application-level protocol, but still.
The very first line just conquered my heart!
Thanks!
Spencer Dawkins published an early draft at IETF 101, “Path Aware Networking: A Bestiary of Roads Not Taken”, which tries to capture some of the mistakes protocol designers have made over the years. It is probably worth a read, and worth following as more accounts are added.
https://tools.ietf.org/html/draft-dawkins-panrg-what-not-to-do-00
Not what you’re asking, but modern systems seem to be using remote block storage plus SQL servers (etc.) for shared data. Are you sure you want NFS?
Read Dan Luu (like, everything, but particularly his blog post on disaggregated storage.)
Jehanne was born as a response to the whole mainstream “architecture”: its official goal is to replace everything, from dynamic linking up to WebAssembly. I see most “modern systems” as a tower of patches, each addressing the problems introduced by the previous ones, even though the real hardware constraints originally addressed at the base disappeared decades ago.
Actually, I want a simplified and enhanced 9P2000. But yes, I think the file abstraction (once properly defined) is all we need to subsume everything we have today and start building better tools.
Wow! This is a great blog!
But I can’t find anything about “disaggregated storage”, any direct link?
Sorry for the slow response. I like files too; not sure it’s the best use of one’s time to try to boil that ocean, but it should at least be educational.
I meant to point you specifically at https://danluu.com/infinite-disk/, but I was on my phone on the train at the time.
You’ll likely also be interested in https://danluu.com/file-consistency/ and https://danluu.com/filesystem-errors/, although those are not so much fundamental issues as “actually handling errors might be a good idea”.
Thanks, great reads.
I think so, actually. But the point is that we need to boil that ocean, one way or another…
Jehanne is my attempt; it’s probably wrong (if fun), it will fail… and whatever.
But my goal is to show that it’s a road worth exploring, full of low-hanging fruit left there just because everybody is looking the other way. I want to stimulate critical thinking, I want to spread a taste for simplicity, I want people to realize that we are just 70 years into computing, and that everything can change.
Interconnections, by Radia Perlman, goes into some detail about how lower-level network protocols are designed, including the trade-offs and the human issues involved in designing them. You could probably get a sense of trade-offs that would also apply to higher-level protocol design.
Network Algorithmics has a lot of useful advice on how to design fast systems.