1. 66
  1.  

    1. 28

      I think the reality is that “don’t roll your own crypto” was probably good for getting people to stop rolling their own caesar ciphers and calling it AES but has been extremely insufficient as practical advice otherwise. Developers have to “roll their own crypto” by some definition sometimes. The article points this out and I think this is the key:

      Designing your own cryptography protocol on top of standard cryptography libraries? This is way more novel than you think it is.

      Most developers think of crypto as a local property that can be wrapped by a protocol, with all of the safety encapsulated, but it isn’t. For example, they don’t think about what happens after decryption, like when the data is deserialized - deserializing isn’t crypto, therefore (devs assume) it’s not a security concern. But as many know, deserialization is extremely sensitive if the data was previously encrypted, even under GCM if you need to care about auth tag collisions.
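
      To make that concrete, here’s a minimal sketch (Ruby and JSON chosen purely for illustration): even after authenticated decryption succeeds, the plaintext is still input crossing a trust boundary, so deserialize it with a data-only format.

            require 'json'

            # Pretend this string is the output of an authenticated decrypt().
            decrypted_plaintext = '{"user":"alice","role":"admin"}'

            # JSON.parse builds only plain data structures. Marshal.load, by
            # contrast, can instantiate arbitrary Ruby objects and is a classic
            # deserialization RCE vector -- never feed it decrypted input.
            payload = JSON.parse(decrypted_plaintext)
            puts payload["user"]  # => "alice"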

      I tend to see two major issues:

      1. Using a library that sucks, like OpenSSL, which does insane things like defaulting to a null IV if you don’t set one

      2. Protocol issues where crypto is treated as a black box and everything about the values going in/coming out is treated as not-crypto-related

      Comms just need to change. Devs like practical information. They don’t like “this is weak because it doesn’t X”; they like “if it doesn’t X, an attacker can do Y, which would undermine Z”. Devs also think of things as a binary “can decrypt it” vs “can’t decrypt it”, with no room for “reduces the cost of decryption” etc., and that needs to change too.

      I’m not a cryptographer. I’m largely uncomfortable writing code that does crypto things so I defer to libraries or colleagues where possible, but I have had to do a few things before and it’s been interesting communicating why X is unsafe to developers.

      1. 18

        One way I like to try to communicate this is that generalist developers often have a blind spot about crypto code because you can’t test it in the same way.

        Ciphertext looks like binary nonsense? Job done, it must be encrypted.

        If you ask a generalist developer how confident they would be writing a client for a communication protocol without ever running or testing the code, knowing it has to work in production, they get a better idea of the challenge of successfully implementing a cryptosystem.

        (Also the word ‘crypto’ on its own is unhelpful, because it is used to mean both the low-level algorithms and the “whole cryptosystem”)

        1. 12

          I agree entirely re: devs seeing “it’s a blob” as “must be working”. And it’s hard to know if it’s a good blob or a bad blob.

          I think they’d have an easier time testing it if they knew about expected properties. For example, here is a test I have for some loose wrapper I wrote around some cryptography (99% of the code is just providing safe APIs that ensure things like random IVs, a Secret class that ensures it’s not accidentally logged, etc). I’m cutting 99% of the test out, but…

                # Flip the last bit of the ciphertext: authenticated decryption
                # must reject the tampered blob instead of returning garbage.
                tampered_ciphertext =
                  encrypted_data.ciphertext[0...-1] + (encrypted_data.ciphertext[-1].ord ^ 1).chr
                expect { decrypt(tampered_ciphertext) }.to raise_error(Crypto::Errors::CipherError)
          

          Basically “does this property hold?” tests for each expected property of this code. Similarly, I have properties like “encrypting the same plaintext twice leads to two different ciphertexts”. And many of these tests have a 1000.times.each wrapper with randomized inputs to ensure the properties hold up beyond coincidence.
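
          For instance, the non-determinism property could be sketched like this (same RSpec style as the tamper test above; encrypt returning an object with a ciphertext accessor is assumed from my wrapper’s API):

                require 'securerandom'

                it 'never produces the same ciphertext twice for one plaintext' do
                  # Random IVs should make encryption non-deterministic; check
                  # many random inputs so the property holds beyond coincidence.
                  1000.times do
                    plaintext = SecureRandom.bytes(rand(1..4096))
                    expect(encrypt(plaintext).ciphertext).not_to eq(encrypt(plaintext).ciphertext)
                  end
                end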

          To write these tests you have to know what properties you want though. I’m hesitant to comment on the project that spurred this, but one thing that the author was unaware of was that encryption is really more of a read-protection; it’s AEAD that provides write-protection (and strengthens read-protection on top of that!). I think that most developers think of encrypted values as having a sort of tamper-proof property, even though that’s not the case at all.

          These sorts of properties are things that devs can actually understand and test for, imo. What they tend to have a harder time with is knowing which properties to care about and how much, in my experience. Developers have a very hard time determining risk, which is where a security pal can be super helpful.

          1. 4

            I think they’d have an easier time testing it if they knew about expected properties.

            I’ll add that this is true of pretty much anything. Cryptographic code has higher stakes and is easier to screw up than “ordinary” code, so it needs it more; but personally, whenever I’m doing something even remotely tricky, I use property based tests to validate it. No way I can trust it until I do.

            Examples of things I wrote that required property based tests to find all the bugs: ring buffers, multiple-writers-single-reader message queues, parsers.

          2. 4

            generalist developers often have a blind spot about crypto code because you can’t test it in the same way.

            Unless you have test vectors, or a reference implementation to compare to. Then you mostly can test it in the same way. Gotta generate lots of tests of course, your regular unit tests obviously won’t cut it. But it remains a matter of mundane correctness — only the possibility of errors and the stakes are higher.
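
            For example, a known-answer test sketched in Ruby (the SHA-256(“abc”) digest is the published FIPS 180 example vector; Ruby’s Digest stands in here for “my implementation under test”):

                  require 'digest'

                  # Known-answer test: FIPS 180 publishes SHA-256("abc") explicitly.
                  expected = 'ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad'
                  actual = Digest::SHA256.hexdigest('abc')
                  raise "implementation broken: #{actual}" unless actual == expected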

            The one thing that escapes ordinary tests even if you have a reference is side channels. That stuff tends to require knowledge that the side channel might be a problem in the first place (rule of thumb: without physical access you only care about timings; with physical access you also care about energy consumption and EMI), and knowledge of how to cut all flow of information from secrets to the side channel — which may require intimate platform knowledge.

            The minute you do something that doesn’t have a reference to compare to, however, good luck.

            1. 7

              You can test a crypto algorithm (e.g. did I implement AES correctly) with test vectors, you can’t test a cryptosystem (e.g. am I at risk of nonce re-use which will completely invalidate my system).
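
              Nonce re-use is a good example because it passes every test vector yet destroys confidentiality. A sketch of why (assuming Ruby’s OpenSSL bindings, purely for illustration): with a repeated nonce, GCM’s CTR keystream repeats, so XORing two ciphertexts cancels the keystream and knowing either plaintext reveals the other.

                    require 'openssl'

                    def gcm_encrypt(key, nonce, plaintext)
                      c = OpenSSL::Cipher.new('aes-256-gcm').encrypt
                      c.key = key
                      c.iv = nonce
                      c.update(plaintext) + c.final
                    end

                    key   = OpenSSL::Random.random_bytes(32)
                    nonce = OpenSSL::Random.random_bytes(12)  # reused below: the bug

                    c1 = gcm_encrypt(key, nonce, 'attack at dawn!!')
                    c2 = gcm_encrypt(key, nonce, 'retreat at noon!')

                    # c1 XOR c2 == p1 XOR p2, no key required.
                    xor = c1.bytes.zip(c2.bytes).map { |a, b| a ^ b }
                    puts xor.zip('attack at dawn!!'.bytes).map { |x, p| (x ^ p).chr }.join
                    # => "retreat at noon!"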

              1. 2

                Ah, those pesky nonces. I agree, those need to be proven correct in some way (“I’m using random nonces from a trusted random source”), though tests can in some cases increase confidence. For instance, if you’re using a counter that is not transmitted over the network, you can verify that whenever you encrypt with the wrong nonce (that is, anything but the previous one + 1), decryption fails. It’s not enough, but it helps.

              2. 4

                See my example above. I could run test vectors against libhydrogen and they’d all be fine. My code is even fine if my threat model is defence against a passive adversary. If my threat model is an active adversary in control of the MQTT server, it is not.

                1. 4

                  Encryption isn’t much of a defence against a passive observer when most of the interesting information is the existence of the message and its sender :-)

              3. 3

                To give a concrete example, I just extended our IoT lightbulb demo to use end-to-end encryption, using libhydrogen’s secret box abstraction (libhydrogen is from the same people as libsodium and is a smaller version for embedded devices with fewer cyphers). The key is randomly generated by the function exposed from the library and communicated out of band (the phone scans a QR code to pair with the device). Messages are relayed via an MQTT server; libhydrogen manages authenticated decryption and will fail if messages are encrypted with the wrong key.

                Nice and secure, right?

                Well, it depends on the threat model. The demo wants to be able to have multiple phones controlling the light. At the same time, if the device loses network, it will miss MQTT packets. This means that there isn’t any kind of protection against replays. The MQTT server can retransmit any message that it’s seen before to control the light.

                Is that a problem? You’re protected against passive snooping of the server, but not against an active adversary. Up to you to decide whether that matters. It is possible to protect against replays but it’s more engineering work (it now requires at least some loose synchronisation, whereas previously the controllers were unidirectional).

                1. 2

                  That sounds like a home-rolled protocol with issues, which is exactly what the article is discussing ;-).

                  That doesn’t seem ideal code to have out in the wild. Maybe you would consider writing something more generically safe? I’d expect you to use either an interactive protocol (challenge-response-ish - requires some volatile state) or to keep some per-phone state (separate keys or per-phone nonce - requires some non-volatile state).

                  1. 4

                    A realistic deployment would not use an untrusted MQTT server, it would use one that was either provided by the device vendor or run by the user, so the extra crypto is defence in depth in case an attacker somehow manages to snoop those messages (possible if the server is misconfigured). If the server is so broken that an untrusted party can send arbitrary messages, you already have a complete denial-of-service attack on the system.

                    To be honest, I probably wouldn’t bother with the E2EE for a real use case because sensible ACLs on the server will do a better job and the server has to be in the TCB for availability anyway (even with all of the encryption in the world, it can still drop all messages). The demo is mostly about how you can pick up existing libraries (libhydrogen, a QR Code lib, the LCD drivers) and run them with least privilege. The crypto isn’t the focus of the example. It just serves to remind you that you need to think about threat models when you deploy something like this.

                    If I wanted to fix it, then the device would publish a monotonically increasing 64-bit counter every ten seconds if it had received any messages in the preceding 20 seconds. It would use this as the context parameter on the secret box and would try both contexts for decryption. You would be vulnerable to replays only if an attacker sent a message that you’d sent in the previous 20 seconds, which is easy to spot (it will happen only while you’re controlling the lightbulb).
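
                    The receiver-side acceptance logic would look roughly like this sketch (plain Ruby just to show the window check; decrypt_with_context is a hypothetical stand-in for opening a secret box with the counter folded into the context):

                          # Hypothetical stand-in for authenticated decryption with the
                          # counter folded into the context; returns nil on failure.
                          def decrypt_with_context(message, ctr)
                            message[:ctx] == ctr ? message[:body] : nil
                          end

                          def try_decrypt(message, current_counter)
                            # Accept the current or immediately previous counter, so a
                            # replayed message stays valid for at most ~20 seconds.
                            [current_counter, current_counter - 1].each do |ctr|
                              plaintext = decrypt_with_context(message, ctr)
                              return plaintext if plaintext
                            end
                            nil  # stale or forged: reject
                          end

                          puts try_decrypt({ ctx: 41, body: 'toggle' }, 42)          # => "toggle"
                          puts try_decrypt({ ctx: 40, body: 'toggle' }, 42).inspect  # => nil (too old)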

                    I mostly didn’t because writing Android apps (the controller runs on Android) is so much harder than writing CHERIoT device firmware and I didn’t want to touch it more than I had to.

                    1. 5

                      You probably know this, but it bears repeating: any “sample” code will end up in production use eventually, including sample keys etc, regardless of warnings and such in the documentation. So be extra careful when providing such examples. It’s almost better to not have any examples at all :(

                      1. 2

                        If you have built a system so broken that the second layer of defence in depth having limitations is a problem, nothing I do can save you.

                      2. 1

                        If you wanted to fix this, I’d consider going for (wrapping every message for simplicity):

                        • client sends “request-challenge”
                        • light sends “challenge:
                        • client sends “command:

                        Using an incrementing 64-bit counter as a nonce is perfectly fine, but requires nonvolatile state; if you can assume that you have a decent RNG, I’d just pick a 256-bit random value for the nonce (128 bits is also almost certainly enough to avoid collisions given that the lightbulb is slow, but 256 bits is enough that you don’t need to think about it.)

                        Of course, you can also skip the request-challenge step if you assume clocks are sufficiently-synchronized. Or if the light can send a challenge and leave the challenge enqueued on the MQTT server (I must admit that I’m far from an expert on MQTT…)

                        [EDITed typo: challenged -> challenge in last line.]
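
                        Roughly, the one-shot challenge bookkeeping on the light’s side could be sketched like this (Ruby for illustration; SecureRandom stands in for the “decent RNG” assumption):

                              require 'securerandom'

                              outstanding = {}

                              # Hand out a fresh 256-bit challenge per request-challenge.
                              def issue_challenge(outstanding)
                                c = SecureRandom.hex(32)  # 256 bits: collisions are a non-issue
                                outstanding[c] = true
                                c
                              end

                              # Accept a command only if it echoes a challenge we issued,
                              # and consume it so a replayed command is rejected.
                              def accept_command?(outstanding, challenge)
                                !outstanding.delete(challenge).nil?
                              end

                              c = issue_challenge(outstanding)
                              puts accept_command?(outstanding, c)  # => true
                              puts accept_command?(outstanding, c)  # => false (replay)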

                        1. 2

                          That approach would be hard to make work with MQTT. And using a 128-bit nonce would require using a different set of cyphers.

                          1. 1

                            Okay; thanks for entertaining my questions, in any case!

                            1. 1

                              huh, I wonder why Denis decided to design libhydrogen’s secretbox like that, since the underlying primitives use 128-bit nonces.

                    2. 1

                      Ciphertext looks like binary nonsense? Job done, it must be encrypted.

                      Like “encrypting” in Base64? Maybe some government or police officials think like that. But it sounds too dumb even for junior developers. I am not saying they do not exist, but it is not common (at least in my neighborhood).

                      If you ask a generalist developer how confident they would be writing a client for a communication protocol…

                      In many companies, expectations are too high, budgets too low and deadlines too deadly; the pace of development is so fast that developers are not confident about anything. No time to study how things work, no time to test thoroughly; just make it somehow work and skip to the next task. This is a hazardous environment that invites bugs – not only crypto ones but various overflows, injections, omitted checks, improper use of frameworks or libraries, logic mistakes etc., with equally serious impacts (private data leaks, DoS, data integrity breaches etc.). Senior developers are better able to push back on management requests or even refuse such work.

                      1. 3

                        Like “encrypting” in Base64? Maybe some government or police officials think like that. But it sounds too dumb even for junior developers. I am not saying they do not exist, but it is not common (at least in my neighborhood).

                        I had an applicant to a senior devops engineer role tell me that it was really important to make sure all your Kubernetes secrets are encrypted with base64.

                        1. 3

                          If it’s related to k8s then it’s probably in a YAML file and it may contain special characters, so base64 at least improves correctness, which may be important enough. /s

                          1. 1

                            Have quantum computers cracked rot13 yet?

                          2. 2

                            Like “encrypting” in Base64? Maybe some government or police officials think like that. But it sounds too dumb even for junior developers. I am not saying they do not exist, but it is not common (at least in my neighborhood).

                            Behold: https://www.cryptofails.com/post/87697461507/46esab-high-quality-cryptography-direct-from

                      2. 19

                        As I have said before, a non-patronising way to understand “don’t roll your own crypto” is: don’t make it your own; make it a collaborative project with expert review. That way the newbie collaborators get opportunities to learn from their mistakes before inflicting them on anyone else.

                        1. 16

                          I tried really hard to be positive in that thread, but I think two things are missing from this discussion that most people who have actually audited crypto code have to deal with: constantly having to pin down someone else’s threat model, which is never made explicit, and developers not understanding that you don’t control how other people use your crypto code.

                          One hard part about cryptography in hobby projects like this is that the developers don’t seem to fully grasp how much effort most professional cryptographers put into their work, doing things like defining conditions of risk. It was called out in the thread by the author:

                          It’s not an attack mode (MITM) that we actually care about (other problems with that anyway), but it does seem silly not to use authenticated mode when it’s easy.

                          But in reality, that threat model is never defined and the exact use case isn’t well defined enough to make such strong claims. I’ve seen this a lot. I’m no cryptographer, but I audited and broke a shocking number of home-grown mechanisms during my pentester years. People will say “here’s my generic secret sharing library” and then when people start pointing out potential footguns the goal posts get moved or you are suddenly the person who has the burden of proof and you are being told you need to teach someone else decades of historical cryptography when they made the tool.

                          I’ve also seen lots of developers use things you could not possibly imagine, in ways that baffle all logic. You don’t really get to control how others use your tool, and if those potential footguns aren’t spelled out, I think you mostly aren’t acting in a way that attempts to understand cryptography.

                          Unrelated: it’s super fun to see my little offhanded comment be picked up by a real practitioner who identifies the same things and then goes into real detail about them.

                          1. 10

                            you are suddenly the person who has the burden of proof

                            Something like that happened to me once. I was tasked with implementing TPM provisioning precisely because of my experience with cryptography. I had a procedure to follow, and among the goals was securing the communications between the TPM and the computer that provisioned it. Which I understood to mean that they did not want to trust the internal network of the factory, and wanted a direct secure link between the computer doing the provisioning and the TPM being provisioned.

                            But I quickly noticed a step was missing from that procedure: comparing the certificate given by the TPM with the root certificate of the manufacturer. Skipping that step meant that a MitM could just provide a different certificate, and if you don’t compare it with a local copy, they can make you believe you’ve provisioned the TPM when actually you did not. I was very surprised when my tech lead didn’t believe me. I remember the conversation being quite frustrating, each of us thinking the other was talking nonsense. I temporarily gave up.
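
                            The missing step is essentially chain verification against a pinned root. Sketched with Ruby’s OpenSSL bindings just to show the shape (the file names are hypothetical, and the real code talked to a TPM rather than PEM files on disk):

                                  require 'openssl'

                                  # Pin the manufacturer's root locally; never trust whatever
                                  # root the device itself presents.
                                  root = OpenSSL::X509::Certificate.new(File.read('manufacturer_root.pem'))
                                  ek   = OpenSSL::X509::Certificate.new(File.read('tpm_endorsement.pem'))

                                  store = OpenSSL::X509::Store.new
                                  store.add_cert(root)

                                  # A fake (software) TPM presenting its own certificate fails here.
                                  abort 'untrusted TPM, refusing to provision' unless store.verify(ek)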

                            Some time later I got the code needed to connect to the TPM, and tested it with a fake TPM (software TPM, so not the manufacturer’s). And as I predicted, the connection was a success! Which it shouldn’t be, because in the configuration file we had specified the actual root certificate of the manufacturer, just like we would in production. We’re supposed to notice this is a fake TPM, and abort the hell out.

                            Even with running code to prove it, my tech lead still didn’t believe me. So in a meeting with the architects and the security specialist, I went over his head and described the problem to the security specialist. Thankfully the security specialist knew what he was talking about, and asked that we add the necessary comparison. Which we did (though we did add a “test mode” flag to skip that comparison so we could do preliminary tests with the software TPM).

                            That day I understood the legitimacy of writing, and sometimes even running, actual exploits. Without them, people just don’t believe you. Heck, sometimes the only way they’ll believe you is when you steal their money and tell the world about it.

                            1. 8

                              This is something I deeply empathize with. I reproduced mortifying bugs on a pretty regular basis doing pentesting, and it was a near-constant battle to get people to believe me, especially when I was younger and not as good at writing or articulating the problems. I started putting a lot more stock in real exploits because people really couldn’t argue with them… and that’s why I’m a full-time exploit developer now and a CNA disclosing vulnerabilities :) I do full-time CVE reproduction and signature writing now because a working exploit is functionally impossible to argue with, and it’s very easy to show people where things go wrong in a way they can’t sweep under the rug.

                            2. 7

                              People will say “here’s my generic secret sharing library” and then when people start pointing out potential footguns the goal posts get moved or you are suddenly the person who has the burden of proof and you are being told you need to teach someone else decades of historical cryptography when they made the tool.

                              Straight to my quote bible, thank you.

                              What’s most frustrating is that, as in the case in question, the answers to those demands for instruction are literally a Google search away: “rsa attacks” or “rsa short message”. But no, it’s the irrational confidence of “I read the OpenSSL API and took a class about cryptography.” The arbitrary line is their current body of knowledge.

                              Meanwhile, I’ve never heard anyone assert that reading the Rails API and taking a class in distributed systems is enough to be qualified to write a secure web service.

                            3. 6

                              s/ Writing Encryption Code//

                              1. 3

                                This. Cryptography is quite the exception in software, in that it’s pretty much the only domain where people are crying left and right not to do it. But this indeed applies to pretty much everything. And yet, we don’t hear nearly as much outcry when it’s about “merely” processing untrusted input — though we are getting increasingly serious about using memory safe languages for that.

                                1. 9

                                  Security is unusual in that correctness is mostly about what you do in erroneous cases. For most software, if there is a bug of the form ‘a user does this weird and stupid thing and the software doesn’t handle it correctly’, it’s safe to make that low priority and maybe document that users shouldn’t do the stupid thing. In security, you replace ‘user’ with ‘attacker’ in the above and now your code is broken and it’s a high-priority fix.

                                  1. 2

                                    Oh, I see. Since the first time I learned to properly test my code was when working on Monocypher, I kinda was blind to the difference. Now I think I get it: for casual stuff, we care about the happy path. For security stuff, the error paths are just as important, if not more.

                                    I can see how this affects tests: when testing the happy path, you just seek to confirm your theory that your software probably kinda works. When testing the error paths, it’s more about trying to disprove that theory. Both approaches are about correctness, but they’re very different approaches. I just happen to systematically use the second one, except for the most casual stuff.

                                    I need to update this.

                                    1. 5

                                      Now I think I get it: for casual stuff, we care about the happy path. For

                                      Not just casual stuff. Imagine you ship, say, an office suite. In the word processor, if you select the correct three fonts, which are not system fonts on any supported platform, in adjacent text and then mash the keys really quickly while the third one is selected, it crashes. Is this a high-priority bug? Probably not: most users will not hit the first condition and so hitting both in a row is really unlikely.

                                      In a security context, that crash may be a symptom of a data corruption that can lead to arbitrary-code execution. Now it’s something you need to care about.

                                      This is the biggest issue I see when people start to think about security. It’s not just about being correct, it’s about being correct in the presence of an intelligent adaptive adversary. That’s a very different mindset because most people do not look at a system and immediately think ‘I could break this if I did these four things in a row and these two concurrently’. Those that do either end up in security or law (on one side or the other).

                                      1. 1

                                        Ok, I see what you mean: in an adversarial context, what should have been an unlikely glitch can quickly transform into an easily exploitable vulnerability leading to remote code execution: both the likelihood and stakes are drastically raised, sometimes to the point of transforming something negligible into something critical.

                                        It’s not just about being correct, it’s about being correct in the presence of an intelligent adaptive adversary.

                                        That’s where we differ, I think. Personally, I think of correctness irrespective of context. It’s simpler that way: correct software satisfies all requirements, which by my definition include all security requirements. Vulnerable software fails to satisfy at least one security requirement, and is therefore incorrect. The software doesn’t become “more incorrect” when I add an intelligent adversary into the mix.

                                        But that’s because I don’t think of correctness in terms of probability of occurrence. The only exception to that rule I allow myself is for stuff like cryptographic hash collisions, which I know are not impossible, but are improbable enough that I can ignore this “bug”. That may be too black&white an attitude.

                                        That said, I hate maintaining existing software, and prioritising bugs just hurts me.

                                        1. 5

                                          Personally, I think of correctness irrespective of context.

                                          Nontrivial software is almost never correct. Even formally verified software just guarantees that the bugs are present in the specification as well as the implementation. Most software does not have a formal specification to define correctness, let alone proofs of correctness.

                                          In the absence of such a specification, you have to prioritise the kinds of bugs you want to try to eliminate by construction and the ones that you want to ensure are low probability by careful testing. That prioritisation is very different if you assume the person providing the inputs to your program is incentivised to make it work correctly or break it.

                                          1. 1

                                            You make too good a case for me to disagree. But then we have a problem: the second your program is processing untrusted inputs it’s a security context, and priorities shift accordingly. Thing is, we process untrusted input everywhere. Anything networked, anything that reads external documents or multimedia content… That is way too much software to ever hope to be secure.

                                            I’m guessing the only viable solution is to move as much software as we can out of a security context. An image reader for instance can guarantee the absence of remote code execution if it is implemented in a memory safe language (now the security requirements are on the compiler). One could properly parse & validate data before passing it to the rest of the program, which should severely limit (eliminate if we’re lucky) the possibility of attack if the parser is correct.

                                            Though wasn’t it you who said to me, that once we have a trusted enclave everyone wants to be in the enclave? Not that we should allow it, but I sense conflicting incentives.

                                            1. 4

                                              One could properly parse & validate data before passing it to the rest of the program, which should severely limit (eliminate if we’re lucky) the possibility of attack if the parser is correct.

                                              You should read about Qubes OS’ trusted image system.

                                              1. 4

                                                But then we have a problem: the second your program is processing untrusted inputs it’s a security context, and priorities shift accordingly

                                                Absolutely. There are three things that help:

                                                • Some programs simply do not process untrusted inputs. Unfortunately, it’s very common for programs to be designed to process only trusted inputs and then to discover that some inputs are untrusted. This is how we got MS Office Macro viruses.
                                                • Some programs run sandboxed and so a complete compromise doesn’t matter too much.
                                                • Most programs that process untrusted data do so only from a small subset of their inputs. For example, if you open an Office document, anything in that is untrusted, but anything coming from the user is probably fine to trust.

                                                An image reader for instance can guarantee the absence of remote code execution if it is implemented in a memory safe language (now the security requirements are on the compiler).

                                                Or if it sandboxes the decoder. If you run libpng, libjpeg, and so on in a sandbox where the input is a file in a complex format and the output is an uncompressed bitmap then an attacker who provides a malicious file that gets arbitrary code execution in the image library can generate an arbitrary image as output. Conveniently, that’s exactly the same as an attacker who just provides an image that doesn’t rely on any exploits. This is exactly the kind of thing that Capsicum and CHERI were designed to support.

                                                Though wasn’t it you who said to me, that once we have a trusted enclave everyone wants to be in the enclave?

                                                Sounds like something I say regularly. I think this is different because you’re encouraging people to put things in the sandbox not the secure world, and it’s possible to have a lot of mutually distrusting sandboxes.

                                                Most of what we’ve done in CHERIoT has been around making building software like this easy, rather than just possible.

                                                1. 1

                                                  it’s possible to have a lot of mutually distrusting sandboxes.

                                                  Got it.

                                      2. 1

                                        Users should not enter names like Johnny'; DROP TABLE users; … into a form, or id=123%20OR%201%3D1 into an HTTP GET parameter. You can put it in the manual and users may follow this rule… but you can replace „user“ with „attacker“ everywhere.

                                  2. 6

                                    I have the feeling that a fundamental shift is needed in the advice CS gives to newcomers: from “don’t do that” to “you could do that, but it is very dangerous, and there are many subtle things you may miss. But still, if you insist, go forward”. Because the culture of never reinventing the wheel has generated a lot of ignorance about many topics (which you only learn by making mistakes, not by being afraid to even touch them), and sometimes an incredible pile of useless dependencies (because it’s not just cryptography that’s forbidden).

                                    1. 4

                                      Yeah. The authors of Signal, which Soatok is very fond of, have also rolled their own crypto, up to and including protocol design, as application developers. And not as an educational exercise: it’s running in production.

                                      The difference is definitely that those are Real Cryptographers™. So is the real lesson “don’t do stuff you’re not good at yourself when it’s critical for you to succeed at it”?

                                    2. 4

                                      Fundamentally, I believe the core problem here is a lack of available and trustworthy cryptography tools for developers to use.

                                      This is key. Embedded developers, for instance, still sometimes lack the necessary tools. Heck, not even two months ago I had to implement a protocol (SSCPv2) from specs: the only “approved” alternative for the use case (OSDP) was broken beyond repair, and the only SSCPv2 library we could find was some kind of Windows .NET thing that I’m not sure we could have ported to our tiny Linux machine. So I wrote one from the spec in C — not AES or SHA-256, though; I borrowed those from elsewhere. The AES was BearSSL’s bitslice implementation.

                                      (Now, to the folks who designed SSCPv2: why such an old-school design with AES-CBC and MAC-then-encrypt? Why the CRC-32 checksum in addition to the authentication tag? Why didn’t they authenticate the unencrypted headers? Why didn’t they just use AES-GCM or ChaPoly? Well, at least it didn’t look broken, so I guess that’s a win, compared to OSDP…)

                                      1. 3

                                        Does anyone have a resource for best practices when it comes to crypto? The article lists many DON’Ts but, having never worked on anything crypto-related, I didn’t get much out of the article except that crypto is even harder than I probably think it is.

                                        1. 11

                                          tqbf is basically the only person who actually writes about how you should do it: https://www.latacora.com/blog/2018/04/03/cryptographic-right-answers/

                                          it’s frustrating to me from the outside that the advice I read is “be scared, you’re always doing it wrong, and also we don’t tell you how to do it right”

                                          1. 3

                                            it’s frustrating to me from the outside that the advice I read is “be scared, you’re always doing it wrong, and also we don’t tell you how to do it right”

                                            There are two reasons for this. First, the advice itself funnels you into the simplest solutions possible, and never suggests you implement anything. Since the right answer is never “make your own”, it’s easy to deduce that we should be scared and never do anything (beyond using a turnkey off-the-shelf solution).

                                            Second, Thomas Ptacek himself has historically been pretty big on “don’t roll your own”, sometimes unreasonably so — pushing the Pareto envelope of cryptographic libraries earned me a few unjustified takes of his. Though I suspect there’s a meta-cognitive aspect to this: by showing this is possible, I may have encouraged more idiots to try it unprepared.

                                            1. 3

                                              A lot of these people have no idea how to do it right. Not just clickbait authors but even legit pentesters etc.; being a red team expert doesn’t automatically make you a blue team expert.

                                              1. 5

                                                Completing https://cryptopals.com/ is a pretty good qualification course.

                                                1. 2

                                                  fwiw I did the initial call-out in that thread, and this is where I learned enough to do real attacks; imo you have to know how to attack to defend.

                                                2. 3

                                                  If you want (or need, as still happens sometimes in embedded settings) to implement your own crypto, I wrote this a few years back.

                                                3. 3

                                                  Author of the original post being bashed here. The original post is here: https://mill.plainopen.com/how-we-share-secrets-at-a-fully-remote-startup. I am describing what I do, why, and how, and the reasons for this choice of approach. All criticism is about using this approach to solve different problems.

                                                  It’s like saying “It’s a terrible idea to use wood when better fire-resistant materials exist” and ignoring the fact that the topic is “wood to burn in a fireplace”.

                                                  Yes, many people doing cryptography will do it wrong, through lack of background understanding, or knowledge of particular tools, or research of latest developments. The post recognizes this. I am not immune to that. But I did do research. Yes, I learned some new things from the feedback here and here, but the original choices I made are still reasonable too. Convinced I am wrong? Decrypt this and get a bounty: https://github.com/dsagal/plainopen-mill-blog/discussions/2#discussioncomment-12028272. I am not moving the goal posts. That is the whole use case.

                                                  To anyone thinking about cryptography themselves, think also about the risk of installing and trusting a tool because you read a recommendation of it in an internet discussion.

                                                  1. 9

                                                    This is probably going to come across as aggressive, so please understand I’m asking sincerely and not out of any malice, but I’m asking these things having been in these situations before and never seeming to get straight answers. I didn’t make these choices, but you did. I do not believe that you have described why you made your choices of cryptographic primitives at all. You describe why you built a new tool in spite of the common refrains, but you do not actually justify your selection anywhere:

                                                    Why did you choose RSA over EdDSA? Why didn’t you choose primitives that account for post-quantum cryptography? Are you familiar with the concept of “harvest now, decrypt later” and what that means for current vs future algorithm selection? Have you established any documentation of potential footguns for users of your library? Have you established a basic threat model that you can share with users about why and where you should use it? Have you done any attempts to formally model your protocol with something like Tamarin? How do you handle key distribution and management? Do you have a way to handle revocation internally? What happens when you encrypt a VM disk image and put it in S3 with this, does that break the RSA assumptions?

                                                    I find it interesting that you didn’t actually respond to @Soatok or their points, and instead single out the “goal post” moving statement, which I alone brought up (it is mentioned by no one but me). At this point you have had 3 other people take the time to show you that the OAEP padding was an issue, the AEAD construction was an issue, or that direct use of RSA was an issue. Is this not moving the goal posts? Didn’t you add a bounty after people in this thread showed you how to break it? Why didn’t this happen before?

                                                    1. 1

                                                      I wish we could only talk about future secrecy, because that’s an actual problem with the specific use case I talk about. But so much else is, in fact, totally different from what I’m talking about, and it’s a distraction. (And I did respond to @Soatok, and listened and reacted to many others, and made a PR to improve it, though that is perhaps misguided because it may make it seem like I am encouraging others to use it, when nowhere do I say to others “you should use it”.)

                                                      I summarized my learnings here: https://github.com/dsagal/plainopen-mill-blog/discussions/2#discussioncomment-12029907.

                                                    2. 6

                                                      To anyone thinking about cryptography themselves, think also about the risk of installing and trusting a tool because you read a recommendation of it in an internet discussion.

                                                      There ARE fields in which it’s hard to tell who the real experts are, but I don’t think cryptography is one of them, is it? So install and trust a tool recommended by one of the real experts.

                                                      1. 1

                                                        There ARE fields in which it’s hard to tell who the real experts are, but I don’t think cryptography is one of them, is it

                                                        How do you tell who is a real expert? Are there any in this thread? Can you ever tell online, or does it have to be IRL?

                                                        1. 2

                                                        Cryptography is an academic discipline, and in any academic discipline it’s really easy to at least make a shortlist of experts. A lot of people would say it’s as simple as that. I don’t think it’s NECESSARILY as simple as that - there are academic disciplines in which the experts are often wrong, such as many parts of economics - but in the specific case of cryptography I think it probably is.

                                                      2. 9

                                                        I lightly suspect you’ll interpret the world’s disinterest in this bounty as your being right.

                                                        1. 2

                                                          Being right about what exactly?

                                                          1. 6

                                                            […] but the original choices I made are still reasonable too. Convinced I am wrong?

                                                            1. 9

                                                              Additionally:

                                                              Author of the original post being bashed here.

                                                              That’s not what this author was doing. This blog post is focused on how even people who think they’re following the “don’t roll your own” rule end up breaking it without realizing, and on bashing the cryptography community for not providing better tooling to prevent this from happening in the first place. He also seems to be more self-flagellating than bashing you or your code.

                                                              1. 2

                                                                The post is definitely a bash, despite saying it isn’t. Soatok had a very simple option: don’t link to the project and don’t use harsh language. They chose to do neither. Their disclaimer of “I totally am not bashing” does nothing tbh.

                                                                1. 8

                                                                  Walk me through the logic on this one, because I’m not seeing it.

                                                                  He cited an example of someone who agrees with cryptography and security experts about not rolling their own, even though they ultimately didn’t follow the spirit of that advice. He then went on to discuss other examples from his professional career that he can’t cite (presumably because the NDAs haven’t yet expired), and then explained why this advice probably isn’t clear enough to be actionable, before suggesting that cryptography nerds (which includes people like me) are at fault for these sorts of misunderstandings, having failed to meet people where they are.

                                                                  Just because he cites a specific example doesn’t mean he’s necessarily bashing the author or their work. The disclaimer isn’t why I disagree with you on this characterization; he also gives them credit where credit was due. They published their source code and transparently explained their reasoning. It isn’t “harsh language” to identify flaws in said reasoning and discuss them. Hell, he was harsher to people that work in cryptography (and especially himself) than he was to @dsagal. If anyone should feel “bashed”, it’s the people that work in cryptography and contribute nothing in the way of meeting developers where they are. And perhaps we deserve a few bashes now and again.

                                                                  1. 1

                                                                    I think the article may have been edited tbh

                                                                    1. 3

                                                                      I’m pretty sure those disclaimers were in the original version I read when I got the Signal group chat notification.

                                                                      1. 1

                                                                        Some certainly were; I don’t think all were, though. At one point I had addressed it explicitly in my original comment but decided to remove it early on because I felt it would be a distraction - I had quoted a few places where I felt that the wording itself was insulting. Perhaps I’m mistaken; I don’t feel like trying to find an archive snapshot.

                                                                      2. 1

                                                                        Oh, probably.

                                                          2. 4

                                                            To anyone thinking about cryptography themselves, think also about the risk of installing and trusting a tool because you read a recommendation of it in an internet discussion.

                                                            I’m personally of the opinion that merely choosing a cryptographic tool requires more cryptographic knowledge than implementing an already chosen one from specs. It’s not necessarily harder, but there is indeed an art to deciding which internet discussions to trust.

                                                            1. 5

                                                              Bounties of this sort are typically impossible to solve because they target an unrealistically narrow threat model. Even significant problems like IND-CCA failures aren’t in scope for such a “decryption bounty”. Cryptographers have learned this, and no longer even try.

                                                              1. 2

                                                                Ignoring the realistic threat model and substituting a different one in order to write a criticism is kinda this author’s MO, so don’t feel too attacked.

                                                                I mean, I agree with the overall sentiment of “why did you build something new for this problem at all”, but yeah, one can’t take every article personally.

                                                                1. [Comment removed by author]

                                                                2. 3

                                                                  This was the source of the MtGox hack back in the 2011/2012 timeframe. Dude wrote his own implementation of SSL in PHP or something. Whoops.

                                                                  1. 2

                                                                    How do cryptography experts go about designing new protocols?

                                                                    Presumably they’re getting reviews from other experts, but are there lists of known DOs and DON’Ts for various cryptographic schemes that they check against (and is this what FIPS is)? Alternatively, are all these weaknesses and attacks just sort of out there in the world of research papers, and the experts just have to keep on top of them as much as possible and know when they apply?

                                                                    1. 9

                                                                      How do cryptography experts go about designing new protocols?

                                                                      With extreme amounts of humility and a very clear scope of “why am I designing anything novel in the first place?”

                                                                      A lot of time is spent on requirements. Once you think you have those hammered out, you write a prototype and a threat model for your prototype. Then you share each component (requirements, prototype, threat model) with different peers to review each.

                                                                      What happens next is a gradient of novelty. On one end, you have “open specification and prototype is good enough”. On the other, you have “academic paper + formal verification of software correctness” levels of assurance, usually combined with multiple audits from independent pentesting firms that specialize in cryptography.

                                                                      Sometimes, you don’t need to go as far. You should always try to attain the highest level of assurance you can, though.

                                                                      1. 2

                                                                        Thank you (and the sibling comments), this is very clear and helpful

                                                                        With extreme amounts of humility and a very clear scope of “why am I designing anything novel in the first place?”

                                                                        Something I see in the other thread is that sometimes you have some requirements but don’t really know what existing solution fits them. I think articles like this one from this same author help with that.

                                                                        Even personally within the last year I went down the rabbit hole of wanting a protocol that satisfies some specific properties and started drafting something myself that I thought would work.

                                                                        I’ve heard the “don’t roll your own” rule enough times to know I wouldn’t be able to get it right, but I still got as far as presenting it to people at work before finding that somebody more qualified had already designed it.

                                                                      2. 2

                                                                        How do cryptography experts go about designing new protocols?

                                                                        Going by my knowledge of how it works in other fields, it’s a two-step process. First, think through a new protocol intuitively. Second, formally define a threat model and prove that the protocol is safe against that threat model. Knowing a bunch of specific attacks would be helpful for the first step, but not for the second.

                                                                        1. [Comment removed by author]

                                                                        2. 2

                                                                          “Hell Is Overconfident Developers Writing ~~Encryption~~ Any Code”

                                                                          Though encryption code is extra bad.

                                                                          1. 2

                                                                            Are bugs in the cryptography part of an application more „hell“ than other bugs like buffer overflows, SQL and other injections, privilege escalation, etc.? Why?

                                                                            (all of them are hard to find, all of them can cause private data leaks, DoS, identity theft and other damages)