I like the idea, although the “Disclosure: Full/Partial/None” field is too open to interpretation to be useful, I think.
Another issue is that whilst the RFC requires the PGP key to be served over HTTPS, it doesn’t specify that security.txt itself be served over HTTPS. This makes the HTTPS requirement for the key irrelevant, as an attacker could MITM the security.txt response and point it at a different key location.
HTTPS is useless if the attacker can MITM the server itself, since the attacker can just obtain a Let’s Encrypt certificate for the domain.
If you are worried about an attacker MITMing you, then you should try fetching the file from multiple network locations and comparing the results.
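The multi-location check above can be sketched in a few lines: fetch security.txt from several vantage points (home connection, VPN, a remote box), then compare content hashes. The fetch itself is elided here; the responses below are hypothetical byte payloads standing in for what each vantage point returned.

```python
import hashlib

def consistent_across_vantage_points(responses):
    """Given the raw bytes of security.txt as fetched from several network
    vantage points, return True only if every copy is identical.
    A mismatch suggests one path is being tampered with (e.g. a MITM
    rewriting the PGP key location)."""
    digests = {hashlib.sha256(body).hexdigest() for body in responses}
    return len(digests) == 1

# Hypothetical responses from three vantage points:
home = b"Contact: security@example.com\nEncryption: https://example.com/key.asc\n"
vpn  = b"Contact: security@example.com\nEncryption: https://example.com/key.asc\n"
cafe = b"Contact: security@example.com\nEncryption: https://evil.example/key.asc\n"

print(consistent_across_vantage_points([home, vpn]))        # True
print(consistent_across_vantage_points([home, vpn, cafe]))  # False: MITM on one path
```

Of course this only raises the bar: an attacker who can MITM all your paths (or the server’s) still wins, which is the point made above.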
SEO shitbags rank with email spammers as the absolute lowest pigshit dirtfuck dregs of humanity.
Is this really the standard of article we want to see here?
The author seems pretty ill informed as well:
If people don’t want to see my site with random trash inserted into it, they can choose not to access it through broken and/or compromised networks.
Earlier you recommended letsencrypt, and now suddenly you want me to pick a competent certificate authority
The author seems pretty ill informed as well:
Reposting the “ill informed” opinions without refutation or explanation doesn’t really have much value.
Is this really the standard of article we want to see here?
You’re new…maybe wait a bit and contribute more before hand-wringing. :)
Reposting the “ill informed” opinions without refutation or explanation doesn’t really have much value.
I included two quotes from the article to explain my point and I think they speak for themselves. Misquoting someone has negative value.
In general, I would say this sort of content is not the kind that makes it into books. You’ll probably have more luck by:
Well, actually this is what I did so far.
I was looking for something more solid to “verify” and improve my current design, and to fix the obvious issues that my ignorance hides from me. Indeed, I found Sharp’s idea of using CSP to formally define and verify protocol designs very interesting (my first thought was “I guess that nobody ever tried to verify HTTP! :-D”, but actually I don’t know whether any of the mainstream protocols I cited have been formally verified).
Nevertheless, please share any specific paper (or insightful rant) that belongs to these categories.
If you are interested in formal verification of network protocols, I’d definitely recommend checking out the Tamarin Prover. It’s been used to formally verify protocols like TLSv1.3, DNP3 (used in SCADA systems), etc. David Wong did a video introduction if you prefer a guided tour.
One of the authors of Tamarin also co-authored a book on the formal verification of network protocols. It’s focused more on the theory than on engineering, and gives a reasonably comprehensive overview of the area.
Bias disclosure: I’m doing a PhD under the supervision of one of the authors.
TLA+ can also be used to verify the purely theoretical protocol, right?
It helps to go where prior efforts went, to leverage whatever they can teach you. TLA+ is great for protocols, but most work with it is model-checking. That’s only partial verification, up to a certain number of interactions or states; other tools are used more often for full verification. What you can do is start with TLA+ to see the benefits without investing tons of time, before the bigger investment of learning formal proof. For the same reason, one can run TLA+ on a protocol design first to get a quick check on some properties before proving it.

We have enough data in by now that I recommend specification-based testing, model-checking, and fuzzing of anything before proving it. Those quickly catch lots of problems, with proof catching the deepest ones, where the state space would never have let you reach them with the other methods.
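To make the model-checking idea concrete without any TLA+ background: explicit-state model checking is just exhaustive exploration of every reachable state, checking an invariant at each step. Here is a minimal sketch over a hypothetical two-party handshake (the protocol and its states are invented purely for illustration):

```python
from collections import deque

# Toy handshake modeled as a labelled transition system (hypothetical).
# Each state is a (client, server) pair; each entry lists the messages
# that may be sent from that state and the state they lead to.
TRANSITIONS = {
    ("idle", "idle"):               [("syn", ("syn_sent", "syn_rcvd"))],
    ("syn_sent", "syn_rcvd"):       [("ack", ("established", "established"))],
    ("established", "established"): [("data", ("established", "established")),
                                     ("fin",  ("closed", "closed"))],
}

def check(invariant):
    """Breadth-first exploration of every reachable state; returns the
    first (state, message) pair violating the invariant, or None."""
    start = ("idle", "idle")
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for msg, nxt in TRANSITIONS.get(state, []):
            if not invariant(state, msg):
                return (state, msg)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Invariant: 'data' is only ever sent once both sides are established.
violation = check(lambda s, m: m != "data" or s == ("established", "established"))
print(violation)  # None -> the property holds in this tiny model
```

This is exactly the “partial verification” caveat above: the check is only as good as the state space you enumerate, which is why tools like TLC bound the exploration and full proof goes further.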
The sad truth is that all I’m currently able to do, is to integrate the protocol in my kernel and write some server for it.
Unfortunately, while writing a server is relatively simple, integrating the protocol into the kernel is a pretty complex task, probably the most complex refactoring I’ve faced so far, because while the 9P2000 protocol handling is pretty well isolated in a Plan 9 kernel, the underlying model is strongly tied into… everything.
One could write an alternative test client that is not a kernel (and that’s probably what I will do actually), but… at some point I’ll have to integrate it into the Jehanne’s kernel anyway, and I’m a bit afraid of what issues a testing client could hide.
That’s why I’d like to somehow “check” the design before diving into such a C-refactoring nightmare…
Look into state machine modeling and Design-by-Contract. Put pre-condition and optional post-condition checks on each of the modules. You can do that on protocols, too. Feed them data generated specifically to exercise those checks, plus input from fuzzers. Any errors are going to be inputs causing bad state or interactions. That’s the easy approach. DbC is mostly assertions, btw, if you don’t want to learn state machine modeling.
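A minimal sketch of the DbC-plus-fuzzing combination described above, using a hypothetical length-prefixed wire format (the format and function names are invented for illustration; in Jehanne this would be C asserts around the 9P message codecs):

```python
import random

def parse_length_prefixed(buf: bytes) -> bytes:
    """Decode a 1-byte length-prefixed payload (hypothetical wire format).
    Pre- and post-conditions are plain assertions, Design-by-Contract style."""
    # Pre-condition: frame must at least contain its length byte.
    assert len(buf) >= 1, "frame too short"
    n = buf[0]
    # Pre-condition: declared length must fit in the buffer.
    assert len(buf) >= 1 + n, "declared length exceeds frame"
    payload = buf[1:1 + n]
    # Post-condition: we return exactly what the header promised.
    assert len(payload) == n
    return payload

# Dumb fuzzer: throw random frames at the module. Every AssertionError
# pinpoints an input the contracts reject -- or a contract that's wrong.
random.seed(0)
rejected = 0
for _ in range(1000):
    frame = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        parse_length_prefixed(frame)
    except AssertionError:
        rejected += 1
print(rejected > 0)  # True: the fuzzer does trip the pre-conditions
```

The payoff is that a contract violation fails loudly at the module boundary where the bad state entered, instead of corrupting state deep inside the kernel integration.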