Threads for fazz

  1. 3

    Whether this bites you comes down to whether you have thought-out, complete rules for parsing the protocol’s messages. If you don’t, plain text protocols are a sure way to shoot yourself in the foot with a majestic railgun, much worse than binary protocols (which tend to break down really fast when poorly designed and start causing problems long before they hit any production systems).

    See also: LANGSEC (I feel obliged to link it here, as you mention neither “parser” nor “grammar” in your post).

    1. 2

      This is exactly the kind of problem where Weeks of Debugging Can Save You Hours of TLA+ and I would even paraphrase it as “Weeks of Sprinkling Mutexes Around the Code Base Can Save You Hours of TLA+”.

      In general, the problem described in the article is a “lost update” or TOCTOU (time-of-check to time-of-use) problem, and it should be solved not by adding mutexes, but by redesigning your data structures and the operations on them in such a way that they are consistent in the face of all possible event reorderings. Adding mutexes to business logic code should always be considered a dangerous and suboptimal approach.

      In the current case, the first step would be to replace “saving” the full balance with incrementing it (not unlike the SQL example given in the article itself).
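      As a sketch of that fix (using Python’s sqlite3 here for illustration; the article’s actual storage layer may differ), the read-modify-write version loses updates under interleaving, while pushing the increment into the UPDATE statement makes it atomic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")

def deposit_lost_update(amount):
    # Race-prone: two interleaved deposits can both read 100 and both
    # write back 100 + amount, silently losing one of the updates.
    (balance,) = conn.execute(
        "SELECT balance FROM accounts WHERE id = 1").fetchone()
    conn.execute(
        "UPDATE accounts SET balance = ? WHERE id = 1", (balance + amount,))

def deposit_atomic(amount):
    # Safe: the database applies the increment atomically, so any
    # ordering of concurrent deposits yields the same final balance.
    conn.execute(
        "UPDATE accounts SET balance = balance + ? WHERE id = 1", (amount,))

deposit_atomic(50)
```

      No mutex in sight: the consistency comes from reshaping the operation itself, which is exactly the point.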

      1. 4

        I remember hearing that, despite ATMs/banks being the “canonical” example of race conditions being Very Bad(TM), actual ATMs are totally susceptible to race conditions, and there are just batch reconciliation processes to handle things instead (and, I imagine, an expectation that stuff “higher in the stack”, like laws, security cameras, etc., will handle issues).

        I have no idea if this is true, but I choose to believe it.

        1. 3

          It used to be true and I know of real stories of good thugs hunting down bad thugs after exploiting the issue, but that was back in the early nineties.

          It is much less of a problem in the era of ubiquitous connectivity, but of course there is some slack in the name of usability, and that is handled “higher in the stack”, indeed.

        1. 2

          I saw Estonia brought up a few times.

          As someone who has dealt with this stuff for 20+ years in various settings, including the Estonian national ID, let me chime in and shatter your dreams a bit.

          TL;DR: forget about it. And while you are at it, forget about everything that involves X.509 PKI.

          TLS-CCA (client certificate authentication) is an extremely complex topic. How complex? See for yourself how hard it is to set up a correct server-side configuration in a PKI setting. And then ask yourself: can I be sure that my configuration is correct after all that dance? The inability to obtain that assurance is a prime sign of a poor security solution.
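          To give a taste of that dance, here is a minimal sketch of just the nginx side of it (paths, the verification depth and the header name are placeholders; a real PKI deployment has many more knobs than this):

```nginx
server {
    listen 443 ssl;
    ssl_certificate      /etc/nginx/server.crt;
    ssl_certificate_key  /etc/nginx/server.key;

    # Client certificate verification: every one of these has to be
    # right, and mistakes tend to fail open or break only for clients
    # issued under some part of the CA hierarchy.
    ssl_client_certificate /etc/nginx/ca-chain.pem;  # CAs advertised to clients
    ssl_verify_client on;     # "optional" silently admits unauthenticated clients
    ssl_verify_depth 3;       # must match the actual depth of the hierarchy
    ssl_crl /etc/nginx/ca.crl;  # revocation: you fetch and refresh CRLs yourself

    location / {
        proxy_pass http://backend;
        # The application still has to validate the forwarded identity itself.
        proxy_set_header X-Client-DN $ssl_client_s_dn;
    }
}
```

          And this is before you get to renewing the CA chain, distributing client certificates, or debugging why one browser on one OS stopped sending them after an update.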

          This is one of those instances where you should remember that trusting Wikipedia on complex issues is a mistake.

          Estonia is moving away from TLS-CCA, fast. Not from the ID card or anything else known as “e-Estonia”, but specifically from TLS-CCA, because it has so many problems, especially with compatibility and stability. Changes between TLS 1.2 and 1.3 broke some use cases for good, for example. Every single major OS X/macOS update in the last 10-15 years has broken it. Multiply those problems by those of X.509 PKI and you get a horrible, unmanageable monster.

          Furthermore, TLS-CCA has certain rare cryptographic properties (more below) that make it unsuitable for today’s environment. It is not compatible with TLS inspection (firewalls, antiviruses). For those in the EU, it is not compatible with the most common integration modes mandated by PSD2. Never use it for end users, and if you really want something there beyond passwords and OTPs, go for WebAuthn (albeit that has its own set of problems).

          Now, that said, do not forget that today’s TLS is a joke. It protects you from very little, especially in the web browser. The fault is not the protocol itself, which is perfectly fine in its latest versions, but how it is used and how the necessary trust is disseminated.

          Should your security model actually say that you need a TLS channel that is not susceptible to the various forms of widespread interception, and should you be able to set up a system which you fully control (that means all the endpoints and your own small PKI), then go for it; but even in those cases you may instead need something like the Signal protocol on top of the TLS.

          Otherwise, it is a bad idea.

          For end users, go for WebAuthn or TOTP or whatever suits your setup.
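          For a sense of why TOTP is the low-effort option of those two, the whole mechanism fits in a few lines. This is a sketch of RFC 6238 (TOTP) over RFC 4226 (HOTP); a real deployment still needs secret provisioning, rate limiting and a clock-drift window on top:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian 64-bit counter, then
    # "dynamic truncation" down to the requested number of digits.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP where the counter is the number of 30-second
    # steps since the Unix epoch.
    t = int(time.time()) if for_time is None else int(for_time)
    return hotp(secret, t // step, digits)
```

          The codes this produces match the published RFC 6238 SHA-1 test vectors, which is a nice property for something you can read in one sitting.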

          For apps, devices and back-to-back connections, go for OAuth and maybe TLS server public key pinning.
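          A sketch of the pinning half of that (in Python; note that for brevity this pins the whole certificate’s SHA-256 fingerprint, whereas pinning the SubjectPublicKeyInfo is what survives reissuing a certificate under the same key; the host and pin are placeholders you would record out of band):

```python
import hashlib
import socket
import ssl

def cert_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    # Compare the SHA-256 fingerprint of the server's DER certificate
    # to a pin obtained out of band (e.g. recorded at deployment time).
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex.lower()

def connect_pinned(host: str, pinned_sha256_hex: str, port: int = 443):
    # Standard chain/hostname verification still runs; the pin is an
    # extra check on top of it, not a replacement for it.
    ctx = ssl.create_default_context()
    sock = socket.create_connection((host, port))
    tls = ctx.wrap_socket(sock, server_hostname=host)
    der = tls.getpeercert(binary_form=True)
    if not cert_matches_pin(der, pinned_sha256_hex):
        tls.close()
        raise ssl.SSLError("certificate does not match the pinned fingerprint")
    return tls
```

          This only makes sense back-to-back, where you control both ends and can rotate the pin together with the certificate; for public endpoints it recreates all the operational pain that killed HPKP.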