1. 5

    Reading this so many years later it’s kinda remarkable, particularly as this was six years before the Snowden leaks.

    The Dual EC scandal only really got wider attention after Snowden, when many more details around deployment became known (particularly the RSA Inc and Juniper stories). But as this post shows, the key facts around the broken/backdoored cryptography were known and publicly discussed long before that.

    1. 3

      What I find interesting is that a lot of the criticism seems to be underspecification and variations of the format.

      It’s obvious that this is a problem, but is it an unfixable one? There is an older RFC, but it’s still underspecified. It seems to me a way forward to improve things would be:

      a) Write up a new RFC that pins down all the inconsistencies to something sane, ideally whatever a large number of applications already do, and try to get as many of them on board promising to support it.

      b) Name that something easy to remember, like “CSV, 2021 spec” or something.

      c) All applications provide at least an option to use “CSV, 2021 spec” and ideally move to that being the default.

      Please note that this wouldn’t mean “CSV, 2021 spec” can only be used to exchange data with other applications supporting it. Given that we try to spec what most applications already do, unless you have weird edge cases it probably already works in most cases with existing CSV-supporting applications.
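
      To make the underspecification concrete, here’s a small sketch using only Python’s stdlib csv module; the example strings are invented. The same line means different things to a quote-aware parser and to naive split-on-comma code, and even two spec-following writers disagree on how to escape a quote:

```python
import csv
import io

# The same physical line means different things to a quote-aware
# parser and to a naive split-on-comma parser.
line = 'a,"b,c",d\n'
quoted = next(csv.reader(io.StringIO(line)))  # respects RFC 4180-style quoting
naive = line.strip().split(",")               # what ad-hoc code often does

print(quoted)  # ['a', 'b,c', 'd']
print(naive)   # ['a', '"b', 'c"', 'd']

# Writers disagree too: escaping a quote by doubling vs. backslash.
row = ['he said "hi"', "x"]
doubled, escaped = io.StringIO(), io.StringIO()
csv.writer(doubled).writerow(row)
csv.writer(escaped, doublequote=False, escapechar="\\").writerow(row)
print(doubled.getvalue())  # "he said ""hi""",x
print(escaped.getvalue())  # "he said \"hi\"",x
```

      A “CSV, 2021 spec” would essentially have to pick one of each of these behaviours and get writers and readers to agree on it.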

      FWIW I think there are similar inconsistencies in JSON parsers, and pretty much the same should probably be done there.

      1. 4

        This reminds me of https://xkcd.com/927/ :)

        I think you underestimate how many non-technical people produce datasets. None of these people will ever have heard of RFCs or whatever other TLA you throw at them.

        1. 1

          I know that comic, but I think I made clear that I absolutely did not want to do this.

          My proposal would be to spec what is as close to existing solutions as possible and would likely work in most situations right from the start.

          1. 1

            I understand, but I would not even know where such a spec would start and where it would end. Would it include date-times? How about timezone offsets? What about number formatting? Character encodings? This all gets very complicated very quickly.

        2. 2

          The primary reason why CSV is such a ubiquitous format is that anyone can understand it quickly, and therefore nobody is coding against any spec. The RFC that exists was merely retrospective.

          The very thing that makes CSV so common is also the reason why drafting a new spec will be unlikely to gain traction.

          1. 1

            Nobody has stepped up to the plate so far. Feel free to start! Note that I’m not volunteering, but I encourage you to!

          1. 3

            Repology is a way to check a bunch of Linux distributions’ version of glibc included in their respective repositories: https://repology.org/project/glibc/versions

            There doesn’t seem to be a single major distro that’s upgraded to 2.34 yet in a stable release. It’s hard to rapidly release such an integral library, so we might be waiting a while before the rebuilds are finished everywhere.

            1. 4

              This is not how distros work, at least not most of them.

              They usually ship the version of a library that was stable when they made their last stable release and then backport important fixes.

            1. 0

              Yes, STARTTLS can be downgraded, but how do most clients react when you simply block ports 465, 993 and 995? The client will often try to connect on port 25, 110 or 143 instead.

              Those might be closed server-side since you don’t want to allow clients to connect insecurely, but an attacker determined enough to inject data into a STARTTLS session can just as easily set up a stunnel MITM on the insecure ports.
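
              The safe client behaviour here can be sketched as a tiny policy function (purely illustrative Python, not any real client’s logic): treat the implicit-TLS port as the only acceptable option and fail closed instead of falling back to a plaintext port.

```python
# Illustrative client policy: if the implicit-TLS port is unreachable,
# refuse to connect rather than downgrade to a STARTTLS/plaintext port.
IMPLICIT_TLS = {"smtp": 465, "imap": 993, "pop3": 995}
PLAINTEXT = {"smtp": 25, "imap": 143, "pop3": 110}

def pick_port(protocol, reachable_ports):
    """Return the port to use, or None to fail closed."""
    tls_port = IMPLICIT_TLS[protocol]
    if tls_port in reachable_ports:
        return tls_port
    # A vulnerable client would return PLAINTEXT[protocol] here and then
    # try STARTTLS (or nothing); failing closed defeats the port-blocking
    # variant of the attack described above.
    return None
```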

              1. 1

                I haven’t seen such behavior during our tests and I would definitely consider it a security vulnerability.

                Can you name a specific client that will connect through plaintext ports if TLS ports are blocked?

              1. 11

                STARTTLS always struck me as a terrible idea. TLS everywhere should be the goal. Great work.

                1. 6

                  Perhaps this is partially the result of a new generation of security researchers gaining prominence, but progressive insight from the infosec industry has produced a lot of U-turns. STARTTLS was obviously the way forward, until it wasn’t, and now it’s always been a stupid idea. Never roll your own crypto, use reliable implementations like OpenSSL! Oh wait, it turns out OpenSSL is a train wreck, ha ha why did people ever use this crap?

                  As someone who is not in the infosec community but needs to take their advice seriously, it makes me a bit more wary about these kinds of edicts.

                  Getting rid of STARTTLS will be a multi-year project for some ISPs, first fixing all clients until they push implicit TLS (and handle the case when a server doesn’t offer implicit TLS yet), then moving all the email servers forward.

                  Introducing STARTTLS had no big up-front costs …

                  1. 9

                    Regarding OpenSSL I think you got some bad messaging. The message is not “don’t use OpenSSL”. The real message was “all crypto libraries are train wrecks and need more funding and security auditing”. But luckily OpenSSL has improved a lot, and you should still use a well-tested implementation and not roll your own crypto and OpenSSL is not the worst choice.

                    Regarding STARTTLS I think what we’re seeing here is that there was a time when crypto standards valued flexibility over everything else. We also see this in TLS itself where TLS 1.2 was like “we offer the insecure option and the secure option, you choose”, while TLS 1.3 was all about “we’re gonna remove the insecure options”. The idea that has gained a lot of traction is that complexity breeds insecurity and should be avoided, but that wasn’t a popular idea 20-30 years ago when many of these standards were written.

                    1. 2

                      The message is not “don’t use OpenSSL”. The real message was “all crypto libraries are train wrecks and need more funding and security auditing”. But luckily OpenSSL has improved a lot, and you should still use a well-tested implementation and not roll your own crypto and OpenSSL is not the worst choice.

                      100%

                      I prefer libsodium over OpenSSL where possible, but some organizations can only use NIST-approved algos.

                  2. 3

                    Agreed. It always felt like a band-aid as opposed to a well thought out option. Good stuff @hanno.

                    1. 3

                      Your hindsight may be 20/20 but STARTTLS was born in an era where almost nothing on the Internet was encrypted. At that time, 99% of websites only used HTTPS on pages that accepted credit card numbers. (It was considered not worth the administrative and computing burden to encrypt a whole site that was open to the public to view anyway.)

                      STARTTLS was a clever hack to allow opportunistic encryption of mail over the wire. When it was introduced, getting the various implementations and deployments of SMTP servers (either open source or commercial) even to work together in an RFC-compliant manner was an uphill battle on its own. STARTTLS allowed mail administrators to encrypt the SMTP exchange where they could while (mostly) not breaking existing clients and servers, nor requiring the coordination of large ISPs and universities around the world to upgrade their systems and open new firewall ports.

                      Some encryption was better than no encryption, and that’s still true today.

                      That being said, I run my own mail server and I only allow users to send outgoing mail on port 465 (TLS). But for mail coming in from the Internet, I still have to allow plaintext SMTP (and hence STARTTLS support) on port 25, or my users and I would miss a lot of messages. I look forward to the day that I can shut off port 25 altogether, if it ever happens.

                      1. 2

                        Your hindsight may be 20/20 but STARTTLS was born in an era where almost nothing on the Internet was encrypted.

                        I largely got involved with computer security/cryptography in the late 2000s, when we suspected a lot of the things Snowden revealed to be true, so “encrypt every packet securely” was my guiding principle. I recognize that wasn’t always a goal for the early Internet, but I was too young to be heavily involved then.

                        Some encryption was better than no encryption, and that’s still true today.

                        Defense against passive attackers has value, but in the face of active attackers, opportunistic encryption is merely security theater.

                        I look forward to the day that I can shut off port 25 altogether, if it ever happens.

                        Hear hear!

                        1. 2

                          Defense against passive attackers has value, but in the face of active attackers, opportunistic encryption is merely security theater.

                          That’s not quite true: it still provides an audit trail. The goal of STARTTLS, as I understand it, is to avoid trying to connect to a TLS port, potentially having to wait for some arbitrary timeout if a firewall somewhere is set to drop packets rather than reject connections, and then retrying on the unencrypted path. Instead, you connect to the port that you know will be there and then try to do the encryption.

                          At this point, a passive attacker can’t do anything; an active attacker can strip out the server’s notification that STARTTLS is available and leave the connection in plaintext mode. But this kind of injection is tamper-evident. The sender (at least for mail servers doing relaying) will typically log whether a particular message was sent with or without STARTTLS, which lets you detect which messages were potentially leaked / tampered with at a later date. You can also often enforce policies that say things like ‘if STARTTLS has ever been supported by this server, refuse if it isn’t this time’.

                          Now that TLS support is pretty much table stakes, it is probably worth revisiting this and defaulting to connecting on the TLS port. This is especially true now that most mail servers use some kind of asynchronous programming model, so trying to connect on port 465 and waiting for a timeout doesn’t tie up too many resources.

                          It’s not clear what the failure mode should be, though. If an attacker can tamper with port 25 traffic, they can also trivially drop everything destined for port 465, so trying 465 and retrying on 25 if that fails is no better than STARTTLS (actually worse: rewriting packets is harder than dropping them, since one can be done by inspecting the header while the other requires deep-packet inspection). Is there a DNS record that can tell connecting mail servers not to try port 25? Just turning off port 25 doesn’t help, because an attacker doing DPI can intercept packets for port 25 and forward them over a TLS connection that it establishes to 465.
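
                          The ‘refuse if STARTTLS has ever been supported’ policy mentioned above amounts to a tiny trust-on-first-use cache. A sketch in Python (illustrative only, not any real MTA’s API):

```python
# Trust-on-first-use: once a peer has offered STARTTLS, remember it and
# refuse plaintext delivery to that peer from then on.
seen_starttls = {}  # peer domain -> True once STARTTLS was ever offered

def delivery_allowed(domain, offers_starttls):
    if offers_starttls:
        seen_starttls[domain] = True
        return True  # message goes out over TLS
    # Plaintext is only tolerated for peers that never offered STARTTLS;
    # a stripped STARTTLS advertisement now shows up as a policy violation
    # instead of a silent downgrade.
    return not seen_starttls.get(domain, False)
```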

                    1. 22

                      Does anyone, anywhere ever get taught how to design a file format? It seems a giant blind spot that people seldom talk about, unless like this person they end up needing to parse or emit a particularly hairy one.

                      A while ago I discovered RIFF and was just like “why are we not using this everywhere?”

                      1. 19

                        In university it came as a natural side effect of OS design (file systems and IPC) and network communication (device-independent exchange). It’s enough for a start; then go by cautionary tales and try to figure out which ones apply in your particular context. You’ll be hard pressed to find universal truths here to be ‘taught’. Overgeneralise and you create a poor file system (zip); overspecialise and it’s not a format, it’s code.

                        The latter might be a bit surprising, but take ZSTD in dictionary mode. You can noticeably increase information density by training it on case-specific data. The resulting dictionary needs to go somewhere, as it is not necessarily part of the bitstream, yet the decoding stage needs to know about it. Do you preshare it and embed it in the application, or put it in the bitstream? Both can be argued; both have far-reaching consequences.
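
                        zlib’s preset-dictionary feature shows the same trade-off in miniature: the dictionary is not carried in the bitstream, so the decoder must obtain it out of band or the data is unreadable. A quick stdlib Python sketch (the sample strings are made up):

```python
import zlib

# The dictionary is preshared; the compressed stream only signals that
# one is needed, it does not contain it.
dictionary = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"
payload = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n"

comp = zlib.compressobj(zdict=dictionary)
blob = comp.compress(payload) + comp.flush()

# Decoding works only if the decoder was given the same dictionary:
decomp = zlib.decompressobj(zdict=dictionary)
assert decomp.decompress(blob) == payload

# Decoding without the dictionary raises zlib.error: the far-reaching
# consequence is that losing the dictionary means losing the data.
```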

                        The master level for file formats, if you need something to study, is I’d say media container formats, e.g. MKV. You have multiple data streams of different sizes; some are relevant and some are to be skipped, and it is the consumer that decides. Seeking is often important, and the reference frames may be at highly variable offsets. There are streaming/timing components, as your spinning-disk media with a 30 GB file has considerable seek times and rarely enough bandwidth and caches. The files are shared in contexts that easily introduce partial corruption that accumulates over time; a bit flip in a subtitle stream shouldn’t make the entire file unplayable, and so on.

                        RIFF, as an example, is a TLV (tag-length-value) format. These are fraught with dangers. It is also the one that everyone comes up with, and it has many, many names. I won’t spoil or go into it all here; part of the value is the journey. Follow RIFF to EXIF to XMP and see how the rationale expands to “make sense” when you suddenly have a JPEG image with a Base64-encoded, XML-indexed JPEG image inside of it as part of a metadata block. Look at presentations by Ange Albertini (e.g. Funky File Formats: https://www.youtube.com/watch?v=hdCs6bPM4is ), Meredith Patterson (Science of Insecurity: https://www.youtube.com/watch?v=3kEfedtQVOY ) and Travis Goodspeed (Packets in Packets: https://www.youtube.com/watch?v=euMHlV6MNqs ).
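
                        For a taste of the TLV dangers mentioned, here’s a minimal RIFF-style chunk walker in Python, with the one check that naive implementations tend to forget: a length field that points past the end of the input. The chunk names are invented for the example.

```python
import struct

def walk_chunks(data):
    """Parse little-endian [4-byte tag][4-byte length][value] chunks,
    padded to even offsets as RIFF does."""
    chunks, off = [], 0
    while off + 8 <= len(data):
        tag, length = struct.unpack_from("<4sI", data, off)
        off += 8
        if off + length > len(data):
            # A lying length field: bail out instead of reading out of bounds.
            raise ValueError("truncated or malicious chunk")
        chunks.append((tag, data[off:off + length]))
        off += length + (length & 1)  # RIFF pads chunks to even sizes
    return chunks

blob = (b"fmt " + struct.pack("<I", 4) + b"\x01\x02\x03\x04"
        + b"data" + struct.pack("<I", 2) + b"hi")
print(walk_chunks(blob))  # [(b'fmt ', b'\x01\x02\x03\x04'), (b'data', b'hi')]
```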

                        1. 18

                          Being a self-taught programmer, I think the study of file formats is underrated. I only learned file format parsing to help me write post-exploitation malware that targets the ELF file format. I also used to hack on ClamAV for a job, and there I learned better how to parse arbitrary file formats in a defensive way–such that malware cannot target the AV itself.

                          I’m in the process of writing a proprietary file format right this very moment for ${DAYJOB}. The prior version of the file format was incredibly poorly designed, rigid, and impossible to extend in the future. I’m grateful for the lessons ELF and ClamAV taught me, otherwise I’d likely end up making the same mistakes.

                          1. 15

                            There’s a field of IT security called “langsec” that’s basically trying to tell people how to design file formats that are easier to write secure parsers for. But it’s not widely known and as far as I can tell usually not considered when designing new formats.

                            I think this talk gives a good introduction: https://www.youtube.com/watch?v=3kEfedtQVOY

                            1. 10

                              The laziest answer is don’t bother and let SQLite be your on-disk file format. Then you also get interop with any geek wanting to mess about with your data, basically for free.

                              It’s certainly not ideal in some situations, but it’s probably a good sane default for most situations.

                              sqlite links about it: https://sqlite.org/affcase1.html and https://sqlite.org/fasterthanfs.html

                              That said, I agree it would be great to have nice docs about various tradeoffs in designing file formats. So far the best we seem to have are gotcha posts like this one.

                              1. 3

                                Or CBOR, flatbuffers/capnproto/etc. - just any existing solid serialization format, if you’re storing just “regular” data. Things like multimedia come with special requirements that might make reusing these formats difficult.

                                1. 2

                                  There are three use cases that make designing a file format difficult:

                                  • Save on one platform / architecture, load on another (portability).
                                  • Save on one version of your program, load on a newer one (backwards compatibility).
                                  • Save on one version of your program, load and modify on an older one (forwards compatibility).

                                  Of these, SQLite completely fixes the portability problem by defining platform- and architecture-agnostic data types. It transforms the other two from file format design problems into schema design problems.

                                  Backwards compatibility is fairly simple to preserve in both cases: read the file / database and write out the new version. It may be slightly easier to provide a schema migration query in SQLite than to maintain the old reader and the new writer for a custom file format, but you’re also likely to end up with a more complex schema for a SQLite-based format than something custom.

                                  It can help a bit with forwards compatibility. This is normally implemented in custom formats by storing the version of the creator and requiring unknown record types to be preserved, so that a new version of the program can detect a file that contains records saved by a program that didn’t know what they meant and fix up any changes. It may be possible for foreign key constraints and similar in SQLite to avoid some of this, but it remains a non-trivial problem.
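
                                  As a sketch of the “schema design problem” framing: SQLite’s user_version pragma gives you the creator-version record for free, and migrations become queries. The schema below is invented purely for illustration.

```python
import sqlite3

def open_store(path=":memory:"):
    """Open a (hypothetical) document store, migrating old files forward."""
    db = sqlite3.connect(path)
    (version,) = db.execute("PRAGMA user_version").fetchone()
    if version < 1:  # brand new file
        db.execute("CREATE TABLE doc (key TEXT PRIMARY KEY, value TEXT)")
    if version < 2:  # v1 files lacked a modification time column
        db.execute("ALTER TABLE doc ADD COLUMN mtime REAL")
    db.execute("PRAGMA user_version = 2")
    return db
```

                                  The same pattern in a custom binary format would mean keeping a reader for every historical version alive.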

                                2. 10

                                  Excellent point — same thing goes for network protocols, though they’re less common.

                                  I learned a lot from RFC 3117, “On The Design Of Application Protocols” when I read it about 20 years ago. It’s part of the specs for an obsolete protocol called BEEP, but it’s pretty high level and goes into depth on topics like how to frame variable length records, which is relevant to file formats as well. Overall it’s one of the best-written RFCs I’ve seen, and I highly recommend it.

                                  1. 5

                                    IFF, the inspiration for RIFF, was used everywhere on the Amiga, more or less.

                                    1. 2

                                      Its ubiquity also had the advantage that you could open iffparse.library to walk through any IFF-based format instead of writing your own (buggy) format parser.

                                    2. 4

                                      I had the same question when I learned about the structure of ASN.1, which I’ve probably only seen used to store cryptographic data in certificates (maybe there are other uses, but I haven’t seen any), but which can probably be used anywhere, really (it’s also a TLV structure).

                                      1. 7

                                        ASN.1 is used heavily in telecommunications and is used for SNMP and LDAP. Having implemented more of SNMP than I care to remember and worked with some low level telecoms stuff, ASN.1 gives me nightmares. I know the reasons for it but it’s definitely more complicated than it seems…
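
                                        The innermost TLV idea is small even though full ASN.1/BER is anything but. A hand-rolled DER INTEGER decoder in Python, short-form lengths only, with everything else deliberately out of scope:

```python
def parse_der_integer(data):
    """Decode one DER INTEGER TLV; returns (value, remaining bytes)."""
    if data[0] != 0x02:
        raise ValueError("expected INTEGER tag (0x02)")
    length = data[1]
    if length >= 0x80:
        raise ValueError("long-form length: out of scope for this sketch")
    value = int.from_bytes(data[2:2 + length], "big", signed=True)
    return value, data[2 + length:]

print(parse_der_integer(b"\x02\x02\x01\x00"))  # (256, b'')
print(parse_der_integer(b"\x02\x01\xff"))      # (-1, b'')
```

                                        The complexity (and the nightmares) come from everything this skips: long-form and indefinite lengths, constructed types, and the BER/CER/DER encoding-rule split.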

                                      2. 3

                                        I don’t think “how to design a file format” is often taught but I’ve been taught many examples of file and packet formats with critiques about what parts were good or bad.

                                        RIFF itself may not be common but its ideas are; PNG most notably. Also BEEP/BXXP, a now dead 2000-era packet format. But these days human readable delimited formats like JSON and XML are more in fashion.

                                        The reality is no product succeeds or fails on the quality of its data formats. Their fate is determined by other forces, and then whatever formats they use are what we are stuck with.

                                      1. 3

                                        I wouldn’t say public CDNs are completely obsolete. What this article does not take into consideration is the positive impact of geographic locality (i.e. reduced RTT and packet loss probability) on transport-layer performance. If you want to avoid page load times on the order of seconds (e.g. several MB worth of JavaScript over a transatlantic connection), either rely on a public CDN or run your own content delivery on EC2 et al. Of course this involves more work and potentially money.

                                        1. 2

                                          This would only apply if whatever you’re fetching from the CDN is really huge. For any reasonably small file the transport performance is irrelevant compared to the extra handshake overhead.

                                          1. 1

                                            It does apply for smallish file sizes (on the order of a few megabytes). It mainly depends on how far you have progressed the congestion window of the connection. Even with an initial window of 10 MSS, it would take several RTTs to transfer the first megabyte.
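
                                            The back-of-envelope version, assuming a 1460-byte MSS, an initial window of 10 segments, and the window doubling every RTT (i.e. idealized slow start, ignoring loss and pacing):

```python
MSS = 1460        # bytes per segment
INIT_WINDOW = 10  # segments, per RFC 6928

def rtts_to_deliver(nbytes):
    """Round trips until nbytes have been sent under idealized slow start."""
    sent, cwnd, rtts = 0, INIT_WINDOW * MSS, 0
    while sent < nbytes:
        sent += cwnd
        cwnd *= 2  # window doubles each RTT during slow start
        rtts += 1
    return rtts

print(rtts_to_deliver(1_000_000))  # 7 round trips for the first megabyte
```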

                                            1. 3

                                              There’s a benefit if you use a single CDN for everything, but if you add a CDN only for some URLs, it’s most likely to be net negative.

                                              Even though CDNs have low latency, connecting to a CDN in addition to the other host only adds more latency, never decreases it.

                                              It’s unlikely to help with download speed either. When you host your main site off-CDN, then users will pay the cost of TCP slow start anyway. Subsequent requests will have an already warmed-up connection to use, and just going with it is likely to be faster than setting up a brand new connection and suffering TCP slow start all over again from a CDN.

                                              1. 1

                                                That is definitely interesting. I never realized how expensive TLS handshakes really are. I had always assumed the issue was the number of RTTs required for the crypto handshake, not the computational part.

                                                I wonder if this is going to change with QUIC’s ability to perform 0-RTT connection setup.

                                                1. 1

                                                  No, the CPU cost of TLS is not that big. For clients the cost is mainly in round trips for DNS, the TCP/IP handshake and the TLS handshake, and then in TCP starting with a small window size.

                                                  Secondary problem is that HTTP/2 prioritization works only within a single connection, so when you mix 3rd party domains you don’t have much control over which resources are going to load first.

                                                  QUIC 0-RTT may indeed help, reducing the additional cost to just an extra DNS lookup. It won’t solve the prioritization problem, though.

                                        1. 1

                                          http://de interestingly ends up on some advertising page. How does one take over such a page?

                                          1. 3

                                            It doesn’t resolve for me. Maybe you have some catchall DNS?

                                            1. 1

                                              Have you tried adding a dot?

                                              1. 5

                                                There is no A record for de. You are behind a split-horizon resolver, most likely your ISP’s, injecting advertisements.

                                                1. 2

                                                  Creepy. That would explain why I got different results using my cell network.

                                            2. 1

                                              No luck for me either. I wonder if there’s something funky with my DNS. dig A de spits out

                                              de.			7155	IN	SOA	f.nic.de. its.denic.de. 1623166346 7200 7200 3600000 7200
                                              
                                              1. 3

                                                That’s just the authority record (see the SOA instead of A) telling you which nameserver is authoritative for the query. There aren’t any A records listed on de as far as I can see.

                                            1. 2

                                              Couple of notes:

                                              If this is ongoing and not broadly patched yet, is it responsible to reveal this much detail about the vulnerabilities? (Cynical take: IoT devices are generally insecure as hell anyway; the knowledge that these vulnerabilities exist doesn’t make much difference to an attacker.)

                                              Why the hell doesn’t calloc perform an overflow check? It literally has two jobs…

                                              (For that matter, why aren’t they showing calloc’s original source code?)
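
                                              For reference, the overflow in question is the nmemb * size multiplication. Modeled in Python with a 32-bit size_t (the classic unchecked-calloc pattern, not the actual vendor code):

```python
# Model a 32-bit size_t: the product of two values that each fit in
# size_t can wrap, so an unchecked calloc allocates far too little.
SIZE_MAX = 2**32 - 1

nmemb, size = 0x10000, 0x10001
wrapped = (nmemb * size) & SIZE_MAX  # what a naive C multiply computes

print(nmemb * size)  # 4295032832 bytes actually needed
print(wrapped)       # 65536 bytes actually allocated

def calloc_would_overflow(nmemb, size, size_max=SIZE_MAX):
    """The check patched allocators perform before multiplying."""
    return size != 0 and nmemb > size_max // size
```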

                                              1. 2

                                                IoT devices are never broadly patched. That’s… part of the business model.

                                                There’s little a responsible security researcher can do here.

                                                1. 1

                                                  “Never” is an exaggeration. I own multiple IoT devices that receive firmware updates, such as WeMo smart outlets and Ecobee thermostats. (And my Eero router, if you count routers as IoT.)

                                                  I’m aware of one RTOS vendor (forgotten the name) whose top selling point is its superior support for secure OTA firmware updates.

                                                2. 1

                                                  I haven’t fully read the article, so maybe I have missed some information.

                                                  As far as I can see there are patches available from the affected vendors, and some of them already have patches for their devices. At this point there is no reason to hide the details of the vulnerabilities: there are tools for analyzing binary patches and finding the bugs, so it’s quite easy to understand them and create exploits for other devices anyway.

                                                  On the defending side there is usually little information about the security issues, which makes planning and executing an update (if available) harder. Disabling every affected device until it’s updated might be a good idea from a security perspective, but your management will not like it, so you have to do risk management. For that, more information about the bugs is better.

                                                1. 3

                                                  I think you’re looking for Signed HTTP Exchanges: https://developers.google.com/web/updates/2018/11/signed-exchanges

                                                    But it’s controversial; Mozilla had some reservations. I haven’t dug into the details of that discussion though.

                                                  1. 4

                                                      Signed HTTP Exchanges (SHE) is much, much more. My understanding is that this is an intent to bake something like AMP into web standards. What I find most worrisome here is that it allows one origin to act on behalf of another origin without the browser ever checking the actual source. Essentially, this means amp.foo.example could - for all intents and purposes of web security and the same-origin policy - speak for my-totally-other-site.example. This also removes the confidentiality you could have with the origin server and inserts a middleman, which you wouldn’t have if you talked to the origin server directly. Mozilla openly considers Signed HTTP Exchanges harmful.

                                                    That being said, a solution for bundling that supports integrity and versioned assets would be very much welcomed though!

                                                  1. 3

                                                    Attribution is trivial and left as an exercise to the reader.

                                                    Now I’m curious. Can you share the binary if you’re leaving that as an exercise to the reader?

                                                    1. 1

                                                      I think that was a joke. Attribution is usually the exact opposite of trivial.

                                                      1. 6

                                                            Apparently not. Over at the orange site: https://news.ycombinator.com/item?id=26302565

                                                        tl;dr it looks pretty much like an exploit from Immunity.

                                                        1. 1

                                                          Yeah, I thought maybe the author was saying that someone was claiming credit if you looked at the strings, and for some reason didn’t want to call attention to whom.

                                                          It works as a joke, too.

                                                      1. 7

                                                              This is pretty sad. As mentioned in the post, libtls is a real advance. In fact, I think libtls is more important than LibreSSL.

                                                        1. 5

                                                          libretls might be a possible solution. I first ran across it in FreeBSD ports (link) pulled in as a dependency for net/openntpd.

                                                          1. 2

                                                            I don’t have an opinion on the libtls API. But if their plan was to make that successful, they did it in the worst possible way.

                                                            Like: “Hey do you want to use our TLS library with a fancy new API?” “Well, maybe, let’s have a look.” “Actually you can only use it if you replace your system’s OpenSSL with a fork that is not exactly 100% compatible, but if you patch 100 packages slightly it may just work.”

                                                                  I remember that I found OpenNTPD’s https fencing feature interesting, but it relied on libtls, so it was a non-starter to practically use on any mainstream system. (And yes, I know about libretls, but AFAIR it didn’t exist back then, and I did a quick check - it’s not packaged in e.g. Debian or Ubuntu, so a non-starter as well.)

                                                            1. 1

                                                              libtls might be doable with libressl-portable in a particular set of config options.

                                                              I do want to keep seeing libtls and hope it gets wider adoption.

                                                              1. 1

                                                                Maybe libtls could be re-implemented using the OpenSSL API? Best case that API subset would even be supported by LibreSSL for those who want to continue using it.

                                                                1. 1

                                                                  Of course that’s already been done

                                                              1. 4

                                                                My years running Courier taught me this. I’m surprised that there are still ~~6,000~~ 600 domains in the top million that are incorrectly configured without realizing it.

                                                                Edit: mfw

                                                                1. 3

                                                                  600 :-)

                                                                1. 7

                                                                  A rather interesting cluster of crackpot cryptography could be observed at the beginning of the NIST post-quantum cryptography competition.

                                                                  What made this situation somewhat unusual is that the competition was open to everyone, and ultimately there needed to be some judgement on why to exclude algorithms, so people analyzed them. There were plenty of completely broken algorithms early in the competition; in all likelihood no one would have looked at them otherwise.

                                                                  1. 2

                                                                    I didn’t really pay attention until Round 2 so I missed a lot of this context. Have you ever written anything about it? I’d love to read more if you have. :)

                                                                    1. 3

                                                                      No, unfortunately not. It was briefly mentioned in a talk by Dan Bernstein and Tanja Lange at CCC, AFAIR.

                                                                      I remember Lorenz Panny did a lot of the breaking, e.g.: https://twitter.com/yx7__/status/945283780851400704

                                                                      One of the authors of one of the algs had at some point in the past contacted me and wanted to tell me something along the lines of “all other cryptographers are idiots and my algorithm is the only pqcrypto that works”. His algorithm sounded weird, I ignored it, then later saw it again as one of the “easily broken” ones.

                                                                  1. 5

                                                                    What is the actual path forward for fixing the problem? Bringing Rust/LLVM support to all of those platforms? I can understand the maintainers’ reasoning that C is inherently insecure, but not being able to use the package for the foreseeable future isn’t really an option either. Well, it might spark some innovation :D

                                                                    1. 14

                                                                      Speaking in realistic terms, and fully acknowledging that it is in some ways a sad state of affairs: most of those platforms are dying, and continuing to keep them alive is effectively volunteering to keep porting advancements from the rest of the world onto the platform. If you want to use third-party packages on an AIX box, you sort of just have to expect that you’ll have to share the labor of keeping that third-party package working on AIX. The maintainers are unlikely to be thinking about you, and for good reason.

                                                                      For users of Alpine Linux, the choice to use a distro that is sufficiently outside the mainstream means you are also effectively committing to help port advancements from the rest of the world onto the platform if you want to consistently use them.

                                                                      For both categories you can avoid that implicit commitment by moving to more current and mainstream platforms.

                                                                      1. 12

                                                                        Exactly. And as a long term implication, if Rust is here to stay, the inevitable fate of those platforms is “Gain Rust support or die”. Maintaining a C implementation of everything in the name of backward compatibility is only delaying the inevitable, and is ultimately a waste of time.

                                                                      2. 6

                                                                        I see it like this: Rust is not going away. This won’t be the last project introducing Rust support.

                                                                        Either your platform supports Rust or the world will stop supporting your platform.

                                                                        1. 5

                                                                          This definitely seems to be true. I helped drive Rust support for a relatively niche platform (illumos), and while it took a while to get it squared away, everybody I spoke with was friendly and helpful along the way. We’re a Tier 2 platform now, and lots of Rust-based software just works. We had a similar experience with Go, another critical runtime in 2021, which also required bootstrapping since it is no longer written in C.

                                                                        2. 8
                                                                          1. The package maintainers agree not to break shit, or
                                                                          2. Someone from among those affected volunteers to maintain a fork.
                                                                          1. 5

                                                                            I mean, you can always pin your dependency to the version before this one. No way that could come back and bite you </sarcasm>

                                                                            1. 2

                                                                              I think the GCC frontend for Rust, which recently got funded, will solve this problem.

                                                                              1. 3

                                                                                Rust crates tend to rely on recently added features and/or library functions. Given that GCC releases are far less frequent, I think there will be a lot of friction when using the GCC frontend.

                                                                                1. 4

                                                                                  Maintaining compatibility with a 6/9/12 month old rust compiler is a much smaller ask of the maintainers than maintaining a C library indefinitely.

                                                                            1. 4

                                                                              The end of the article mentions:

                                                                              They mention that the free plan is for projects that “aren’t business-critical”.

                                                                              And the author talks about how his business was affected. While I agree with the article that Cloudflare should be more transparent and explicit about what limits exist, maybe a better title would be: “never use Cloudflare CDN for your business”; otherwise it’s clickbait.

                                                                              1. 2

                                                                                It seems to me that everything that happened to the author could just as well happen to a non-business private project.

                                                                                1. 2

                                                                                  I think the point is that the argument is disingenuous, in the sense that a good number of non-business private projects would probably never reach the limits, but would benefit from the CDN services offered.

                                                                                  Just because one user had a bad experience, it doesn’t mean that you shouldn’t ever use the offer from Cloudflare.

                                                                              1. 5

                                                                                Betteridge’s law strikes again.

                                                                                One of the key features of a blockchain, which the author tries to handwave away, is that every link in the chain is verifiable and unalterable. The author tries to claim that because each commit carries a reference to its parent, it’s a “chain of blocks”, but it’s not so much a chain as just an order. You can easily edit the history of a git repo: reparent, delete, squash, and perform many other operations that modify the entire chain. It was kinda made that way.

                                                                                1. 12

                                                                                  The technical properties of git’s data structure and common blockchain data structures are relatively similar.

                                                                                  You can also fork a bitcoin block chain and pretend that your fork is the canonical one. The special bit about block chains is that there’s some mechanism for building agreement about the HEAD pointer. Among other things, there’s no designated mover of that pointer (as in a maintainer in a git-using project), but an algorithm that decides which among competing proposals to take.

                                                                                  1. 16

                                                                                    They are technically similar because both a blockchain and a git repo are examples of a merkle tree. As you point out though the real difference is in the consensus mechanism. Git’s consensus mechanism is purely social and mostly manual. Bitcoin’s consensus mechanism is proof of work and mostly automated.
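
                                                                                    The shared structure can be sketched in a few lines: records are chained by embedding the hash of the parent, so altering any record changes every hash after it. A toy illustration in Python (not git’s or Bitcoin’s actual on-disk format):

```python
import hashlib

def link(parent_hash: str, payload: str) -> str:
    """Hash a record together with its parent's hash (toy chain format)."""
    return hashlib.sha256(f"{parent_hash}\n{payload}".encode()).hexdigest()

# Build a three-record chain, like commits or blocks.
h0 = link("", "genesis")
h1 = link(h0, "second record")
h2 = link(h1, "third record")

# Rewriting an early record changes every hash after it --
# which is exactly what rewriting git history does to commit IDs.
h1_forged = link(h0, "second record, edited")
h2_forged = link(h1_forged, "third record")
assert h2_forged != h2
```

                                                                                    What git lacks is an automated rule for choosing between the original chain and the rewritten one; that consensus step is the part that makes a blockchain a blockchain.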

                                                                                    1. 2

                                                                                      Please stop referring to “Proof of _” as a consensus mechanism. It is an anti-Sybil mechanism; the consensus mechanism is called “longest chain” or “Nakamoto consensus” - you can use a different anti-Sybil mechanism with the same consensus mechanism (but you may lose some of the properties of bitcoin).

                                                                                      The point is that there are various different combinations available of these two components and conflating them detracts from people’s ability to understand what is going on.

                                                                                      1. 2

                                                                                        You are right. I was mixing definitions there. Thanks for pointing it out. The main point still stands though: the primary distinction between a blockchain and git is the consensus mechanism, not the underlying merkle tree data structure that they both share.

                                                                                      2. 1

                                                                                        Mandatory blockchain != bitcoin. Key industrial efforts listed in https://wiki.hyperledger.org/ are mostly not proof-of-work in any way (the proper term for this is permissioned blockchain, which is where industrial applications are going).

                                                                                        1. 2

                                                                                          You are correct. I don’t disagree at all. I used bitcoin as an example because it’s well known. There are lots of different blockchains with different types of consensus mechanisms.

                                                                                    2. 2

                                                                                      You can make a new history but it will always be distinct from the original one.

                                                                                      I think what you’re really after is the fact that there is no one to witness that things like the author and the date of a commit are genuine – that is, it’s not just that I can edit the history, I can forge a history.

                                                                                      1. 1

                                                                                        Technically you haven’t really made the old commits disappear. They are all still there, just not easily viewed without using the reflog. All you are really doing is creating a new branch point and moving the branch pointer to the head of that new branch when you do those operations. But to the average user it appears that you have edited history.

                                                                                        1. 1

                                                                                          what was all that hullabaloo about git moving away from SHA-1 due to vulnerabilities? why were they using a cryptographic hash function in the first place?

                                                                                          what you said makes sense, but it seems to suggest this SHA-1 thing was a bit of bikeshedding or theater

                                                                                          1. 2

                                                                                            Git uses a cryptographic hash function because it wants to be able to assume that collisions never occur, and the cost of doing so isn’t too large. A collision was demonstrated in SHA-1 in 2017.
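
                                                                                            For context, git names every object by hashing a short header plus the content, which is why a chosen collision would let two different objects masquerade under one ID. A minimal sketch of blob hashing (this mirrors what `git hash-object` computes for a blob):

```python
import hashlib

def git_blob_hash(data: bytes) -> str:
    """Git blob ID: SHA-1 over 'blob <size>\\0' followed by the content."""
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# Matches what `git hash-object` prints for a file containing "hello\n".
print(git_blob_hash(b"hello\n"))
# ce013625030ba8dba906f756967f9e9ca394464a
```

                                                                                            A collision attack would let an attacker substitute one blob or commit for another with the same ID, which matters more for signed tags and commits than for everyday use.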

                                                                                            1. 3

                                                                                              SHA-1 still prevents accidental collisions. Was Git really designed to be robust against bad actors?

                                                                                              1. 2

                                                                                                ¯\_(ツ)_/¯

                                                                                                1. 1

                                                                                                  The problem is that it was never properly defined what properties people expect from Git.

                                                                                                  You can find pieces of the official Git documentation and public claims by Linus Torvalds that are seemingly in contradiction to each other. And the whole pgp signing part does not seem to be very well thought through.

                                                                                              2. 2

                                                                                                Because you can sign git commits, and hash collisions ruin that.

                                                                                                1. 1

                                                                                                  ah that makes some sense

                                                                                            1. 37

                                                                                              Hello, I am here to derail the Rust discussion before it gets started. The culprit behind sudo’s vast repertoire of vulnerabilities, and more broadly of bugs in general, is attributable almost entirely to one thing: its runaway complexity.

                                                                                              We have another tool which does something very similar to sudo which we can compare with: doas. The portable version clocks in at about 500 lines of code, its man pages are a combined 157 lines long, and it has had two CVEs (only one of which Rust would have prevented), or approximately one every 30 months.
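
                                                                                              For a sense of scale, an entire doas policy commonly fits in a couple of lines; something like the following (rule syntax per doas.conf(5); the group and command names here are just examples):

```
# Members of wheel may run commands as root; remember the
# password briefly between invocations.
permit persist :wheel

# One user may run a single command without a password.
permit nopass alice as root cmd /sbin/reboot
```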

                                                                                              sudo is about 120,000 lines of code (100x more), and it’s had 140 CVEs, or about one every 2 months since the CVE database came into being 21 years ago. Its man pages are about 10,000 lines and include the following:

                                                                                              $ man sudoers | grep -C1 despair
                                                                                              The sudoers file grammar will be described below in Extended Backus-Naur
                                                                                              Form (EBNF).  Don't despair if you are unfamiliar with EBNF; it is fairly
                                                                                              simple, and the definitions below are annotated.
                                                                                              

                                                                                              If you want programs to be more secure, stable, and reliable, the key metric to address is complexity. Rewriting it in Rust is not the main concern.

                                                                                              1. 45

                                                                                                it’s had 140 CVEs

                                                                                                Did you even look at that list? Most of those are not sudo vulnerabilities but issues in sudo configurations distros ship with. The actual list is more like 39, and a number of them are “disputed” and most are low-impact. I didn’t do a full detailed analysis of the issues, but the implication that it’s had “140 security problems” is simply false.

                                                                                                sudo is about 120,000 lines of code

                                                                                                More like 60k if you exclude the regress (tests) and lib directories, and 15k if you exclude the plugins (although the sudoers plugin, which most people use, is 40k lines). Either way, it’s at most half of the claimed 120k.

                                                                                                Its man pages are about 10,000 lines and include the following:

                                                                                                12k, but this also includes various technical documentation (like the plugin API); the main documentation in sudo(1) is 741 lines, and sudoers(5) is 3,255 lines. Well under half of 10,000.

                                                                                                We have another tool which does something very similar to sudo which we can compare with: doas.

                                                                                                Except that it only has 10% of the features, or less. This is good if you don’t use them, and bad if you do. But I already commented on this at HN so no need to repeat that here.

                                                                                                1. 12

                                                                                                  You’re right that these numbers are a back-of-the-napkin analysis. But even your more detailed analysis shows that the situation is much graver with sudo. I am going to include plugins, because if they ship, they’re a liability. And their docs, because they felt the need to write them. You can’t just shove the complexity you don’t use and/or like under the rug. Heartbleed brought the internet to its knees because of a vulnerability in a feature no one uses.

                                                                                                  And yes, doas has 10% of the features by count - but it has 99% of the features by utility. If you need something in the 1%, what right do you have to shove it into my system? Go make your own tool! Your little feature which is incredibly useful to you is incredibly non-useful to everyone else, which means fewer eyes on it, and it’s a security liability to 99% of systems as such. Not every feature idea is meritorious. Scope management is important.

                                                                                                  1. 9

                                                                                                    it has 99% of the features by utility

                                                                                                    Citation needed.

                                                                                                    what right do you have to shove it into my system?

                                                                                                    Nobody is shoving anything into your system. The sudo maintainers have the right to decide to include features, and they’ve been exercising that right. You have the right to skip sudo and write your own - and you’ve been exercising that right too.

                                                                                                    Go make your own tool!

                                                                                                    You’re asking people to undergo the burden of forking or re-writing all of the common functionality of an existing tool just so they can add their one feature. This imposes a great cost on them. Meanwhile, including that code or feature into an existing tool imposes only a small (or much smaller) cost, if done correctly - the incremental cost of adding a new feature to an existing system.

                                                                                                    The key phrase here is “if done correctly”. The consensus seems to be that sudo is suffering from poor engineering practices - few or no tests, including with the patch that (ostensibly) fixes this bug. If your software engineering practices are bad, then simpler programs will have fewer bugs only because there’s less code to have bugs in. This is not a virtue. Large, complex programs can be built to be (relatively) safe by employing tests, memory checkers, good design practices, good architecture (which also reduces accidental complexity), code reviews, and technologies that help mitigate errors (whether that be a memory-safe GC-less language like Rust or a memory-safe GC’ed language like Python). Most features can (and should) be partitioned off from the rest of the design, either through compile-time flags or runtime architecture, which prevents them from incurring security or performance penalties.

                                                                                                    Software is meant to serve the needs of users. Users have varied use-cases. Distinct use-cases require more code to implement, and thereby incur complexity (although, depending on how good of an engineer one is, additional accidental complexity above the base essential complexity may be added). If you want to serve the majority of your users, you must incur some complexity. If you want to still serve them, then start by removing the accidental complexity. If you want to remove the essential complexity, then you are no longer serving your users.

                                                                                                    The sudo project is probably designed to serve the needs of the vast majority of the Linux user-base, and it succeeds at that, for the most part. doas very intentionally does not serve the needs of the vast majority of the linux user-base. Don’t condemn a project for trying to serve more users than you are.

                                                                                                    Not every feature idea is meritorious.

                                                                                                    Serving users is meritorious - or do you disagree?

                                                                                                    1. 6

                                                                                                      Heartbleed brought the internet to its knees because of a vulnerability in a feature no one uses.

                                                                                                      Yes, but the difference is that these are features people actually use, which wasn’t the case with Heartbleed. Like I mentioned, I think doas is great – I’ve been using it for years and never really used (or liked) sudo because I felt it was far too complex for my needs; before doas I just used su. But I can’t deny that for a lot of other people (mainly organisations, the biggest use-case for sudo in the first place) these features are actually useful.

                                                                                                      Go make your own tool! Your little feature which is incredibly useful to you is incredibly non-useful to everyone else

                                                                                                      A lot of these things aren’t “little” features, and many interact with other features. What if I want doas + 3 flags from sudo + LDAP + auditing? There are many combinations possible, and writing a separate tool for every one of them isn’t really realistic. All of this also requires maintenance, and reliable, consistent long-term maintainers are kind of rare.

                                                                                                      Scope management is important.

                                                                                                      Yes, I’m usually pretty explicit about which use cases I want to solve and which I don’t want to solve. But “solving all the use cases” is also a valid scope. Is this a trade-off? Sure. But everything here is.

                                                                                                      The real problem isn’t so much sudo; but rather that sudo is the de-facto default in almost all Linux distros (often installed by default, too). Ideally, the default should be the simplest tool which solves most of the common use cases (i.e. doas), and people with more complex use cases can install sudo if they need it. I don’t know why there aren’t more distros using doas by default (probably just inertia?)

                                                                                                      1. 0

                                                                                                        What if I want doas + 3 flags from sudo + LDAP + auditing?

                                                                                                        Tough shit? I want a pony, and a tuba, and barbie doll…

                                                                                                        But “solving all the use cases” is also a valid scope.

                                                                                                        My entire thesis is that it’s not a valid scope. This fallacy leads to severe and present problems like the one we’re discussing today. You’re begging the question here.

                                                                                                        1. 4

                                                                                                          Tough shit? I want a pony, and a tuba, and barbie doll…

                                                                                                          This is an extremely user-hostile attitude to have (and don’t try claiming that telling users with not-even-very-obscure use-cases to write their own tools isn’t user-hostile).

                                                                                                          I’ve noticed that some programmers are engineers that try to build tools to solve problems for users, and some are artists that build programs that are beautiful or clever, or just because they can. You appear to be one of the latter, with your goal being crafting simple, beautiful systems. This is fine. However, this is not the mindset that allows you to build either successful systems (in a marketshare sense) or ones that are useful for many people other than yourself, for previously-discussed reasons. The sudo maintainers are trying to build software for people to use. Sure, there’s more than one way to do that (integration vs composition), but there are ways to do both poorly, and claiming the moral high ground for choosing simplicity (composition) is not only poor form but also kind of bad optics when you haven’t even begun to demonstrate that it’s a better design strategy.

                                                                                                          My entire thesis is that it’s not a valid scope.

                                                                                                          A thesis which you have not adequately defended. Your statements have amounted to “This bug is due to sudo’s complexity which is driven by the target scope/number of features that it has”, while both failing to provide any substantial evidence that this is the case (e.g. showing that sudo’s bugs are due to feature-driven essential complexity alone, and not use of a memory-unsafe language, poor software engineering practices (which could lead to either accidental complexity or directly to bugs themselves), or simple chance/statistics) and not actually providing any defense for the thesis as stated. Assume that @arp242 didn’t mean “all” the usecases, but instead “the vast majority” of them - say, enough that it works for 99.9% of users. Why is this “invalid”, exactly? It’s easy for me to imagine the argument being “this is a bad idea”, but I can’t imagine why you would think that it’s logically incoherent.

                                                                                                          Finally, you have repeatedly conflated “complexity” and “features”. Your entire argument is, again, invalid if you can’t show that sudo’s complexity is purely (or even mostly) essential complexity, as opposed to accidental complexity coming from being careless etc.

                                                                                                    2. 9

                                                                                                      I don’t think “users (distros) make a lot of configuration mistakes” is a good defence when arguing if complexity is the issue.

                                                                                                      But I do agree about feature set. And I feel like arguing against complexity for safety is wrong (like ddevault was doing), because systems inevitably grow complex. We should still be able to build safe, complex systems. (Hence why I’m a proponent of language innovation and ditching C.)

                                                                                                      1. 11

                                                                                                        I don’t think “users (distros) make a lot of configuration mistakes” is a good defence when arguing if complexity is the issue.

                                                                                                        It’s silly stuff like (ALL : ALL) NOPASSWD: ALL. “Can run sudo without a password” seems like a common theme: some shell injection is found in the web UI and because the config is really naïve (which is definitely not the sudo default) it’s escalated to root.
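
                                                                                                        To make that concrete, the pattern being criticized looks like this in sudoers syntax (a generic illustration, not any particular distro’s shipped configuration):

                                                                                                        ```
                                                                                                        # Naïve: members of %www may run anything, as anyone, with no password.
                                                                                                        # Any shell injection in a web UI running as this group is instantly root.
                                                                                                        %www ALL=(ALL : ALL) NOPASSWD: ALL

                                                                                                        # Narrower: a single command as root only - far less to escalate through.
                                                                                                        %www ALL=(root) NOPASSWD: /usr/sbin/service myapp restart
                                                                                                        ```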

                                                                                                        Others aren’t directly related to sudo configuration as such; for example this one has a Perl script which is run with sudo that can be exploited to run arbitrary shell commands. This is also a common theme: some script is run with sudo, but the script has some vulnerability and is now escalated to root as it’s run with sudo.

                                                                                                        I didn’t check all of the issues, but almost all that I checked are one of the above; I don’t really see any where the vulnerability is caused directly by the complexity of sudo or its configuration; it’s just that running anything as root is tricky: setuid returns 432 results, three times that of sudo, and I don’t think that anyone can argue that setuid is complex or that setuid implementations have been riddled with security bugs.

                                                                                                        Others just mention sudo in passing, by the way; this one is really about an unrelated remote exec vulnerability, and just mentions “If QCMAP_CLI can be run via sudo or setuid, this also allows elevating privileges to root”. And this one isn’t even about sudo at all, but about a “sudo mode” plugin for TYPO3, presumably to allow TYPO3 users some admin capabilities without giving away the admin password. And who knows why this one is even returned in a search for “sudo”, as it’s not mentioned anywhere.

                                                                                                        1. 3

                                                                                                          it’s just that running anything as root is tricky: setuid returns 432 results, three times that of sudo

                                                                                                          This is comparing apples to oranges. setuid affects many programs, so obviously it would have more results than a single program would. If you’re going to attack my numbers then at least run the same logic over your own.

                                                                                                          1. 2

                                                                                                            It is comparing apples to apples, because many of the CVEs are about other programs’ improper sudo usage, similar to improper/insecure setuid usage.

                                                                                                            1. 2

                                                                                                              Well, whatever we’re comparing, it’s not making much sense.

                                                                                                              1. If sudo is hard to use and that leads to security problems through its misusage, that’s sudo’s fault. Or do you think that the footguns in C are not C’s fault, either? I thought you liked Rust for that very reason. For this reason the original CVE count stands.
                                                                                                              2. But fine, let’s move on on the presumption that the original CVE count is not appropriate to use here, and instead reference your list of 39 Ubuntu vulnerabilities. 39 > 2, Q.E.D. At this point we are comparing programs to programs.
                                                                                                              3. You now want to compare this with 432 setuid results. You are comparing programs with APIs. Apples to oranges.

                                                                                                              But, if you’re trying to bring this back and compare it with my 140 CVE number, it’s still pretty damning for sudo. setuid is an essential and basic feature of Unix, which cannot be made any smaller than it already is without sacrificing its essential nature. It’s required for thousands of programs to carry out their basic premise, including both sudo and doas! sudo, on the other hand, can be made much simpler and still address its most common use-cases, as demonstrated by doas’s evident utility. It also has a much smaller exposure: one non-standard tool written in the ’80s and shunted along the timeline of Unix history ever since, compared to a standardized Unix feature introduced by DMR himself in the early ’70s. And setuid somehow has only 4× the number of footgun incidents? sudo could do a hell of a lot better, and it can do so by trimming the fat - a lot of it.

                                                                                                              1. 3

                                                                                                                If sudo is hard to use and that leads to security problems through its misusage, that’s sudo’s fault.

                                                                                                                It’s not because it’s hard to use, it’s just that its usage can escalate other more (relatively) benign security problems, just like setuid can. This is my point, as a reply to stephank’s comment. This is inherent to running anything as root, with setuid, sudo, or doas, and why we have capabilities on Linux now. I bet that if doas would be the default instead of sudo we’d have a bunch of CVEs about improper doas usage now, because people do stupid things like allowing anyone to run anything without password and then write a shitty web UI in front of that. That particular problem is not doas’s (or sudo’s) fault, just as cutting myself with the kitchen knife isn’t the knife’s fault.

                                                                                                                reference your list of 39 Ubuntu vulnerabilities. 39 > 2, Q.E.D.

                                                                                                                Yes, sudo has had more issues in total; I never said it doesn’t. It’s just a lot lower than what you said, and quite a number are very low-impact, so I just disputed the implication that sudo is a security nightmare waiting to happen: its track record isn’t all that bad. As always, more features come with more (security) bugs, but use cases do need solving somehow. As I mentioned, it’s a trade-off.

                                                                                                                sudo, on the other hand, can be made much simpler and still address its most common use-cases, as demonstrated by doas’s evident utility

                                                                                                                We already agreed on this yesterday on HN, which I repeated here as well; all I’m adding is “but sudo is still useful, as it solves many more use cases” and “sudo isn’t that bad”.

                                                                                                                Interesting thing to note: sudo was removed from OpenBSD by millert@openbsd.org, who is also the sudo maintainer. I think he’ll agree that “sudo is too complex for it to be the default”, which we already agree on, but not that sudo is “too complex to exist”, which is where we don’t agree.

                                                                                                                Could sudo be simpler or better architected to contain its complexity? Maybe. I haven’t looked at the source or use cases in-depth, and I’m not really qualified to make this judgement.

                                                                                                        2. 5

                                                                                                          I think arguing against complexity is one of the core principles of UNIX philosophy, and it’s gotten us quite far on the operating system front.

                                                                                                          If sudo valued simplicity, this particular vulnerability would not have been possible to trigger: why have a separate sudoedit in the first place, when it just implies the -e flag? This statement is a guarantee.
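
                                                                                                          For the record, the overflow behind this CVE (CVE-2021-3156) sat in an argument-unescaping loop that the sudoedit code path made reachable. The following is a greatly simplified sketch of the flaw - my own reconstruction for illustration, not sudo’s actual source - showing how a trailing backslash walks the cursor past the terminating NUL:

                                                                                                          ```c
                                                                                                          #include <assert.h>
                                                                                                          #include <ctype.h>
                                                                                                          #include <stdio.h>
                                                                                                          #include <string.h>

                                                                                                          /* Simplified sketch of a CVE-2021-3156-style unescaping loop (a
                                                                                                           * reconstruction, not sudo's code). A trailing '\' makes the cursor
                                                                                                           * skip the terminating NUL, so the loop keeps consuming whatever
                                                                                                           * happens to sit after the string in memory. */
                                                                                                          size_t buggy_unescape_len(const char *from) {
                                                                                                              size_t consumed = 0;
                                                                                                              while (*from) {
                                                                                                                  if (from[0] == '\\' && !isspace((unsigned char)from[1])) {
                                                                                                                      from++;            /* skip the backslash... even over a NUL */
                                                                                                                      consumed++;
                                                                                                                  }
                                                                                                                  from++;
                                                                                                                  consumed++;
                                                                                                              }
                                                                                                              return consumed;
                                                                                                          }

                                                                                                          int main(void) {
                                                                                                              /* Simulate the bytes that follow the argument on the heap with
                                                                                                               * data placed after an embedded NUL, so this demo itself stays
                                                                                                               * in bounds. */
                                                                                                              char arg[] = "evil\\\0SECRET";
                                                                                                              size_t visible = strlen(arg);               /* 5 bytes: "evil\" */
                                                                                                              size_t consumed = buggy_unescape_len(arg);
                                                                                                              printf("visible=%zu consumed=%zu\n", visible, consumed);  /* 5 vs 12 */
                                                                                                              assert(consumed > visible);  /* the loop read past the terminator */
                                                                                                              return 0;
                                                                                                          }
                                                                                                          ```

                                                                                                          In the real bug the over-read fed an out-of-bounds heap write; the point here is only that the loop’s exit condition can be stepped over entirely.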

                                                                                                          Had it ditched C, there would be no guarantee that this issue wouldn’t have happened.

                                                                                                        3. 2

                                                                                                          Did you even look at that list? Most of those are not sudo vulnerabilities but issues in sudo configurations distros ship with.

                                                                                                          If even the distros can’t understand the configuration well enough to get it right, what hope do I have?

                                                                                                        4. 16

                                                                                                          OK maybe here’s a more specific discussion point:

                                                                                                          There can be logic bugs in basically any language, of course. However, the following classes of bugs tend to be steps in major exploits:

                                                                                                          • Bounds checking issues on arrays
                                                                                                          • Messing around with C strings at an extremely low level

                                                                                                          It is hard to deny that, in a universe where nobody ever messed up those two points, there would be far fewer nasty exploits in the world, in systems software in particular.
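
                                                                                                          Both classes reduce to a few lines of C (a generic sketch, not code from sudo): a bounded copy truncates, where the classic unbounded one scribbles past the buffer.

                                                                                                          ```c
                                                                                                          #include <assert.h>
                                                                                                          #include <stdio.h>
                                                                                                          #include <string.h>

                                                                                                          /* Sketch of the two bug classes above. snprintf() is bounded and
                                                                                                           * always NUL-terminates; the strcpy() pattern it replaces is the
                                                                                                           * classic out-of-bounds write. */
                                                                                                          static void copy_bounded(char *dst, size_t dstsz, const char *src) {
                                                                                                              snprintf(dst, dstsz, "%s", src);   /* truncates instead of overflowing */
                                                                                                          }

                                                                                                          int main(void) {
                                                                                                              char buf[8];
                                                                                                              copy_bounded(buf, sizeof buf, "far longer than eight bytes");
                                                                                                              assert(strlen(buf) == sizeof buf - 1);  /* 7 chars + NUL, no overflow */
                                                                                                              /* strcpy(buf, "far longer than eight bytes");  <- the unsafe pattern */
                                                                                                              return 0;
                                                                                                          }
                                                                                                          ```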

                                                                                                          Many other toolchains have decided to make the above two issues almost non-existent through various techniques. A bunch of old C code doesn’t handle this. Is there not something that can be done here to get the same productivity and safety advantages found in almost every other toolchain for tools that form the foundation of operating computers? Including a new C standard or something?

                                                                                                          I can have a bunch of spaghetti code in Python, but turning that spaghetti into “oh wow argv contents ran over some other variables and messed up the internal state machine” is a uniquely C problem, but if everyone else can find solutions, I feel like C could as well (including introducing new mechanisms to the language. We are not bound by what is printed in some 40-year-old books, and #ifdef is a thing).

                                                                                                          EDIT: forgot to mention this, I do think that sudo is a bit special given that its default job is to take argv contents and run them. I kinda agree that sudo is a bit special in terms of exploitability. But hey, the logic bugs by themselves weren’t enough to trigger the bug. When you have a multi-step exploit, anything on the path getting stopped is sufficient, right?

                                                                                                          1. 14

                                                                                                            +1. Lost in the noise of “but not all CVEs…” is the simple fact that this CVE comes from an embarrassing C string fuckup that would be impossible, or at least caught by static analysis, or at very least caught at runtime, in most other languages. If “RWIIR” is flame bait, then how about “RWIIP” or at least “RWIIC++”?

                                                                                                            1. 1

                                                                                                              I be confused… what does the P in RWIIP mean?

                                                                                                              1. 3

                                                                                                                Pascal?

                                                                                                                1. 1

                                                                                                                  Python? Perl? Prolog? PL/I?

                                                                                                                2. 2

                                                                                                                  Probably Python, given the content of the comment by @rtpg. Python is also memory-safe, while it’s unclear to me whether Pascal is (a quick search reveals that at least FreePascal is not memory-safe).

                                                                                                                  Were it not for the relative (accidental, non-feature-providing) complexity of Python to C, I would support RWIIP. Perhaps Lua would be a better choice - it has a tiny memory and disk footprint while also being memory-safe.

                                                                                                                  1. 2

                                                                                                                    Probably Python, given the content of the comment by @rtpg. Python is also memory-safe, while it’s unclear to me whether Pascal is (a quick search reveals that at least FreePascal is not memory-safe).

                                                                                                                    That’s possibly it.

                                                                                                                    Perhaps Lua would be a better choice - it has a tiny memory and disk footprint while also being memory-safe.

                                                                                                                    Not to mention that Lua – even when used without LuaJIT – is simply blazingly fast compared to other scripting languages (Python, Perl, &c)!

                                                                                                                    For instance, see this benchmark I did some time ago: https://0x0.st/--3s.txt. I had implemented Ackermann’s function in various languages (the “./ack” file is the one in C) to get a rough idea of their execution speed, and lo and behold, Lua turned out to be second only to the C implementation.
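
                                                                                                                    For reference, Ackermann’s function in C looks like the following (a sketch of the usual benchmark formulation; the linked file’s exact code isn’t reproduced here):

                                                                                                                    ```c
                                                                                                                    #include <assert.h>
                                                                                                                    #include <stdio.h>

                                                                                                                    /* Ackermann's function: tiny to write, brutally recursive to run,
                                                                                                                     * which is why it shows up in call-overhead benchmarks like the
                                                                                                                     * one linked above. */
                                                                                                                    unsigned long ack(unsigned long m, unsigned long n) {
                                                                                                                        if (m == 0) return n + 1;
                                                                                                                        if (n == 0) return ack(m - 1, 1);
                                                                                                                        return ack(m - 1, ack(m, n - 1));
                                                                                                                    }

                                                                                                                    int main(void) {
                                                                                                                        assert(ack(2, 3) == 9);
                                                                                                                        assert(ack(3, 3) == 61);
                                                                                                                        printf("ack(3, 6) = %lu\n", ack(3, 6));  /* 509 */
                                                                                                                        return 0;
                                                                                                                    }
                                                                                                                    ```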

                                                                                                            2. 15

                                                                                                              I agree that rewriting things in Rust is not always the answer, and I also agree that simpler software makes for more secure software. However, I think it is disingenuous to compare the overall CVE count for the two programs. Would you agree that sudo is much more widely installed than doas (and therefore is a larger target for security researchers)? Additionally, most of the 140 CVEs linked were filed before October 2015, which is when doas was released. Finally, some of the linked CVEs aren’t even related to code vulnerabilities in sudo, such as the six Quest DR Series Disk Backup CVEs (example).

                                                                                                              1. 4

                                                                                                                I would agree that sudo has a bigger target painted on its back, but it’s also important to acknowledge that it has a much bigger back - 100× bigger. However, I think the comparison is fair. doas is the default in OpenBSD and very common in NetBSD and FreeBSD systems as well, which are at the heart of a lot of high-value operations. I think it’s over the threshold where we can consider it a high-value target for exploitation. We can also consider the kinds of vulnerabilities which have occurred internally within each project, without comparing their quantity to one another, to characterize the sorts of vulnerabilities which are common to each project, and ascertain something interesting while still accounting for differences in prominence. Finally, there’s also a bias in the other direction: doas is a much simpler tool, shipped by a team famed for its security prowess. Might this not dissuade it as a target for security researchers just as much?

                                                                                                                Bonus: if for some reason we believed that doas was likely to be vulnerable, we could conduct a thorough audit on its 500-some lines of code in an hour or two. What would the same process look like for sudo?

                                                                                                                1. -1

                                                                                                                  but it’s also important to acknowledge that it has a much bigger back - 100× bigger.

                                                                                                                  Sorry, but I’ve missed the mass of users protesting in the streets over tools that have 100× the code compared to other tools providing similar functionality.

                                                                                                                  1. 10

                                                                                                                    What?

                                                                                                              2. 10

                                                                                                                So you’re saying that 50% of the CVEs in doas would have been prevented by writing it in Rust? Seems like a good reason to write it in Rust.

                                                                                                                1. 11

                                                                                                                  Another missing point is that Rust is only one of many memory safe languages. Sudo doesn’t need to be particularly performant or free of garbage collection pauses. It could be written in your favorite GCed language like Go, Java, Scheme, Haskell, etc. Literally any memory safe language would be better than C for something security-critical like sudo, whether we are trying to build a featureful complex version like sudo or a simpler one like doas.

                                                                                                                  1. 2

                                                                                                                    Indeed. And you know, Unix in some ways has been doing this for years anyway with Perl, Python, and shell scripts.

                                                                                                                    1. 2

                                                                                                                      I’m not a security expert, so I’d be happy to be corrected, but if I remember correctly, using secrets safely in a garbage-collected language is not trivial. Once you’ve finished working with some secret, you don’t necessarily know how long it will remain in memory before it’s garbage collected, or whether it will be securely deleted or just ‘deallocated’ and left in RAM for the next program to read. There are ways around this, such as falling back to manual memory control for sensitive data, but as I say, it’s not trivial.

                                                                                                                      1. 2

                                                                                                                        That is true, but you could also do the secrets handling in a small library written in C or Rust and FFI with that, leaving the rest of your bog-standard logic not beholden to the issues that habitually plague every non-trivial C codebase.

                                                                                                                        1. 2

                                                                                                                          Agreed.

                                                                                                                          Besides these capabilities, ideally a language would also have ways of expressing important security properties of code. For example, ways to specify that a certain piece of data is secret and ensure that it can’t escape and is properly overwritten when going out of scope instead of simply being dropped, and ways to specify a requirement for certain code to use constant time to prevent timing side channels. Some languages are starting to include things like these.

                                                                                                                          Meanwhile when you try to write code with these invariants in, say, C, the compiler might optimize these desired constraints away (overwriting secrets is a dead store that can be eliminated, the password checker can abort early when the Nth character of the hash is wrong, etc) because there is no way to actually express those invariants in the language. So I understand that some of these security-critical things are written in inline assembly to prevent these problems.
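
                                                                                                                          For the constant-time case, the usual trick looks like this in C (illustrative only - as noted above, a sufficiently clever compiler can still undo it, which is why real implementations often reach for inline assembly or compiler barriers):

                                                                                                                          ```c
                                                                                                                          #include <assert.h>
                                                                                                                          #include <stddef.h>

                                                                                                                          /* Constant-time equality check: the loop touches every byte no
                                                                                                                           * matter where the first mismatch is, so execution time does not
                                                                                                                           * reveal how long the matching prefix was (unlike an early-exit
                                                                                                                           * memcmp). */
                                                                                                                          int ct_equal(const unsigned char *a, const unsigned char *b, size_t n) {
                                                                                                                              unsigned char diff = 0;
                                                                                                                              for (size_t i = 0; i < n; i++)
                                                                                                                                  diff |= a[i] ^ b[i];   /* accumulate, never branch on data */
                                                                                                                              return diff == 0;
                                                                                                                          }

                                                                                                                          int main(void) {
                                                                                                                              assert(ct_equal((const unsigned char *)"hunter2",
                                                                                                                                              (const unsigned char *)"hunter2", 7));
                                                                                                                              assert(!ct_equal((const unsigned char *)"hunter2",
                                                                                                                                               (const unsigned char *)"hunter3", 7));
                                                                                                                              return 0;
                                                                                                                          }
                                                                                                                          ```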

                                                                                                                          1. 1

                                                                                                                            overwriting secrets is a dead store that can be eliminated

                                                                                                                            I believe that explicit_bzero(3) largely solves this particular issue in C.
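
                                                                                                                            Where explicit_bzero(3) isn’t available, a common portable stand-in calls memset through a volatile function pointer, which keeps the compiler from proving the store dead and deleting it (a sketch of that workaround, not a universal guarantee):

                                                                                                                            ```c
                                                                                                                            #include <assert.h>
                                                                                                                            #include <string.h>

                                                                                                                            /* Portable approximation of explicit_bzero(3): calling memset
                                                                                                                             * through a volatile function pointer prevents the compiler from
                                                                                                                             * treating the final wipe of a secret as a removable dead store. */
                                                                                                                            static void *(*const volatile memset_v)(void *, int, size_t) = memset;

                                                                                                                            void secure_zero(void *p, size_t n) {
                                                                                                                                memset_v(p, 0, n);
                                                                                                                            }

                                                                                                                            int main(void) {
                                                                                                                                char secret[16];
                                                                                                                                memcpy(secret, "hunter2", 8);        /* pretend this is sensitive */
                                                                                                                                secure_zero(secret, sizeof secret);  /* wipe before the buffer dies */
                                                                                                                                for (size_t i = 0; i < sizeof secret; i++)
                                                                                                                                    assert(secret[i] == 0);
                                                                                                                                return 0;
                                                                                                                            }
                                                                                                                            ```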

                                                                                                                            1. 1

                                                                                                                              Ah, yes, thanks!

                                                                                                                              It looks like it was added to glibc in 2017. I’m not sure if I haven’t looked at this since then, if the resources I was reading were just not up to date, or if I just forgot about this function.

                                                                                                                  2. 8

                                                                                                                    I do think high complexity is the source of many problems in sudo and that doas is a great alternative to avoid many of those issues.

                                                                                                                    I also think sudo will continue being used by many people regardless. If somebody is willing to write an implementation in Rust which might be just as complex but ensures some level of safety, I don’t see why that wouldn’t be an appropriate solution to reducing the attack surface. I certainly don’t see why we should avoid discussing Rust just because an alternative to sudo exists.

                                                                                                                    1. 2

                                                                                                                      Talking about Rust as an alternative is missing the forest for the memes. Rust is a viral language (in the sense of internet virality), and a brain worm that makes us all want to talk about it. But in actual fact, C is not the main reason why anything is broken - complexity is. We could get much more robust and reliable software if we focused on complexity, but instead everyone wants to talk about fucking Rust. Rust has its own share of problems, chief among them its astronomical complexity. Rust is not a moral imperative, and not even the best way of solving these problems, but it does have a viral meme status which means that anyone who sees through its bullshit has to proactively fend off the mob.

                                                                                                                      1. 32

                                                                                                                        But in actual fact, C is not the main reason why anything is broken - complexity is.

                                                                                                                        Offering opinions as facts. The irony of going on to talk about seeing through bullshit.

                                                                                                                        1. 21

                                                                                                                          I don’t understand why you hate Rust so much but it seems as irrational as people’s love for it. Rust’s main value proposition is that it allows you to write more complex software that has fewer bugs, and your point is that this is irrelevant because the software should just be less complex. Well I have news for you, software is not going to lose any of its complexity. That’s because we want software to do stuff, the less stuff it does the less useful it becomes, or you have to replace one tool with two tools. The ecosystem hasn’t actually become less complex when you do that, you’re just dividing the code base into two chunks that don’t really do what you want. I don’t know why you hate Rust so much to warrant posting anywhere the discussion might come up, but I would suggest if you truly cannot stand it that you use some of your non-complex software to filter out related keywords in your web browser.

                                                                                                                          1. 4

                                                                                                                            Agree with what you’ve written, but just to pick at a theme that’s bothering me on this thread…

                                                                                                                            I don’t understand why you hate Rust so much but it seems as irrational as people’s love for it.

                                                                                                                            This is obviously very subjective, and everything below is anecdotal, but I don’t agree with this equivalence.

                                                                                                                            In my own experience, everyone I’ve met who “loves” or is at least excited about rust seems to feel so for pretty rational reasons: they find the tech interesting (borrow checking, safety, ML-inspired type system), or they enjoy the community (excellent documentation, lots of development, lots of online community). Or maybe it’s their first foray into open source, and they find that gratifying for a number of reasons. I’ve learned from some of these people, and appreciate the passion for what they’re doing. Not to say they don’t exist, but I haven’t really seen anyone “irrationally” enjoy rust - what would that mean? I’ve seen floating around a certain spiteful narrative of the rust developer as some sort of zealous online persona that engages in magical thinking around the things rust can do for them, but I haven’t really seen this type of less-than-critical advocacy any more for rust than I have seen for other technologies.

                                                                                                                            On the other hand I’ve definitely seen solid critiques of rust in terms of certain algorithms being tricky to express within the constraints of the borrow checker, and I’ve also seen solid pushback against some of the guarantees that didn’t hold up in specific cases, and to me that all obviously falls well within the bounds of “rational”. But I do see a fair amount of emotionally charged language leveled against not just rust (i.e. “bullshit” above) but the rust community as well (“the mob”), and I don’t understand what that’s aiming to accomplish.

                                                                                                                            1. 3

                                                                                                                              I agree with you, and I apologize if it came across that I think rust lovers are irrational - I for one am a huge rust proselytizer. I intended the irrationality I mentioned to be the perceived irrationality DD attributes to the rust community.

                                                                                                                              1. 2

                                                                                                                                Definitely no apology needed, and to be clear I think the rust bashing was coming from elsewhere, I just felt like calling it to light on a less charged comment.

                                                                                                                              2. 1

                                                                                                                                I think the criticism isn’t so much that people are irrational in their fondness of Rust, but rather that there are some people who are overly zealous in their proselytizing, as well as a certain disdain for everyone who is not yet using Rust.

                                                                                                                                Here’s an example comment from the HN thread on this:

                                                                                                                                Another question is who wants to maintain four decades old GNU C soup? It was written at a different time, with different best practices.

                                                                                                                                In some point someone will rewrite all GNU/UNIX user land in modern Rust or similar and save the day. Until this happens these kind of incidents will happen yearly.

                                                                                                                                There are a lot of things to say about this comment (it’s entirely false IMO), but above all it’s not exactly a nice comment. And why Rust? Why not Go? Or Python? Or Zig? Or something else?

                                                                                                                                Here’s another one:

                                                                                                                                Rust is modernized C. You are looking for something that already exists. If C programmers would be looking for tools to help catch bugs like this and a better culture of testing and accountability they would be using Rust.

                                                                                                                                The disdain is palpable in this one, and “Rust is modernized C” really misses the mark IMO; Rust takes a vastly different approach. You can consider this a good or a bad thing, but it’s really not the only approach to memory-safe programming languages.


                                                                                                                                Of course this is not representative of the entire community; there are plenty of Rust people that I like and who have considerably more nuanced views – which are also expressed in that HN thread – but these comments are frequent enough to leave a somewhat unpleasant taste.

                                                                                                                              3. 2

                                                                                                                                While I don’t approve of the deliberately inflammatory form of the comments, and don’t agree with the general claim that all complexity can be eliminated, I personally agree that, in this particular case, simplicity > Rust.

                                                                                                                                As a thought experiment: world 1 uses sudo-rs as the default implementation of sudo, while world 2 uses doas, roughly 500 lines of C. I do think that world 2 would be generally more secure. Sure, it’ll have more segfaults, but fewer logic bugs.

                                                                                                                                I also think that the vast majority of world 2 populace wouldn’t notice the absence of advanced sudo features. To be clear, the small fraction that needs those features would have to install sudo, and they’ll use the less tested implementation, so they will be less secure. But that would be more than offset by improved security of all the rest.

                                                                                                                                Adding a feature to a program always has a cost for those who don’t use that feature. If the feature is obscure, it might be more beneficial overall to have a simple version used by 90% of the people, and a complex one for the remaining 10%. The 10% would be significantly worse off compared to the unified program; the 90% would be slightly better off. But 90% >> 10%.

                                                                                                                                1. 2

                                                                                                                                  Rust’s main value proposition is that it allows you to write more complex software that has fewer bugs

                                                                                                                                  I argue that it’s actually that it allows you to write fast software with fewer bugs. I’m not entirely convinced that Rust allows you to manage complexity better than, say, Common Lisp.

                                                                                                                                  That’s because we want software to do stuff, the less stuff it does the less useful it becomes

                                                                                                                                  Exactly. Software is written for people to use. (Technically, only some software; other software, such as demoscene productions, is written for the beauty of it or for the enjoyment of the programmer. But in this discussion we only care about the former.)

                                                                                                                                  The ecosystem hasn’t actually become less complex when you do that

                                                                                                                                  Even worse - it becomes more complex. Now that you have two tools, you have two userbases, two websites, two source repositories, two APIs, two sets of file formats, two packages, and more. If the designs of the tools begin to differ substantially, you have significantly more ecosystem complexity.

                                                                                                                                  1. 2

                                                                                                                                    You’re right about Rust’s value proposition; I should have added performance to that sentence. Or I should have just said “managed language”, because, as another commenter pointed out, Rust is almost irrelevant to this whole conversation when it comes to preventing these types of CVEs.

                                                                                                                                  2. 1

                                                                                                                                    The other issue is that it is a huge violation of principle of least privilege. Those other features are fine, but do they really need to be running as root?

                                                                                                                              4. 7

                                                                                                                                Just to add to that: In addition to having already far too much complexity, it seems the sudo developers have a tendency to add even more features: https://computingforgeeks.com/better-secure-new-sudo-release/

                                                                                                                                Plugins, integrated log server, TLS support… none of that are things I’d want in a tool that should be simple and is installed as suid root.

                                                                                                                                (Though I don’t think reducing complexity and memory safety are necessarily opposed solutions. You could easily imagine a sudo-alike tool that is written in rust and does not come with unnecessary complexity.)

                                                                                                                                1. 4

                                                                                                                                  What’s wrong with EBNF and how is it related to security? I guess you think EBNF is something the user shouldn’t need to concern themselves with?

                                                                                                                                  1. 6

                                                                                                                                    There’s nothing wrong with EBNF, but there is something wrong with relying on it to explain an end-user-facing domain-specific configuration file format for a single application. It speaks to the greater underlying complexity, which is the point I’m making here. Also, if you ever have to warn your users not to despair when reading your docs, you should probably course correct instead.

                                                                                                                                    1. 2

                                                                                                                                      The point you made in your original comment is that sudo has too many features (disguised as a point about complexity). The manpage snippet you’re referring to has nothing to do with features; it’s a mix of (1) a poorly written manpage and (2) a bad choice of configuration file format, which adds accidental complexity without adding any features.

                                                                                                                                    2. 1

                                                                                                                                      EBNF as a concept aside; the sudoers manpage is terrible.

                                                                                                                                    3. 3

                                                                                                                                      Hello, I am here to derail the Rust discussion before it gets started.

                                                                                                                                      I am not sure what you are trying to say, so let me guess. Is it that, due to runaway complexity:

                                                                                                                                      • UNIX is inherently insecure and it cannot be made secure by any means
                                                                                                                                      • sudo is inherently insecure and it cannot be made secure by any means

                                                                                                                                      Something else maybe?

                                                                                                                                      1. 4

                                                                                                                                        Technically I agree with both, though my arguments for the former are most decidedly off-topic.

                                                                                                                                        1. 5

                                                                                                                                          Taking Drew’s statement at face value: There’s about to be another protracted, pointless argument about rewriting things in rust, and he’d prefer to talk about something more practically useful?

                                                                                                                                          1. 7

                                                                                                                                            I don’t understand why you would care about preventing a protracted, pointless argument on the internet. Seems to me like trying to nail jello to a tree.

                                                                                                                                        2. 3

                                                                                                                                          This is a great opportunity to promote doas. I use it everywhere these days, and though I don’t consider myself any sort of Unix philosophy purist, it’s a good example of “do one thing well”. I’ll call out Ted Unangst for making great software. Another example is signify. Compared to other signing solutions, there is much less complexity, much less attack surface, and a far shallower learning curve.
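                                                                                                                                          For a sense of scale, a typical doas.conf expresses a whole policy in a line or two (a hypothetical example; the group name and command path here are illustrative assumptions, not anything prescribed by doas):

```
# Let members of the wheel group run commands as root, caching
# credentials briefly (persist), similar to sudo's timestamp:
permit persist :wheel as root

# Allow one specific command without a password prompt:
permit nopass alice as root cmd /usr/sbin/service
```

                                                                                                                                          Compare that with sudoers, where the same policy passes through aliases, Defaults lines, and tag syntax.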

                                                                                                                                          I’m also a fan of tinyssh. It has almost no knobs to twiddle, making it hard to misconfigure. This is what I want in security-critical software.

                                                                                                                                          Relevant link: Features Are Faults.

                                                                                                                                          All of the above is orthogonal to choice of implementation language. You might have gotten a better response in the thread by praising doas and leaving iron oxide out of the discussion. ‘Tis better to draw flies with honey than with vinegar. Instead, you stirred up the hornets’ nest by preemptively attacking Rust.

                                                                                                                                          PS. I’m a fan of your work, especially Sourcehut. I’m not starting from a place of hostility.

                                                                                                                                          1. 3

                                                                                                                                            If you want programs to be more secure, stable, and reliable, the key metric to address is complexity. Rewriting it in Rust is not the main concern.

                                                                                                                                            Why can’t we have the best of both worlds? Essentially a program copying the simplicity of doas, but written in Rust.

                                                                                                                                            1. 2

                                                                                                                                              Note that both sudo and doas originated in OpenBSD. :)

                                                                                                                                              1. 9

                                                                                                                                                Got a source for the former? I’m pretty sure sudo well pre-dates OpenBSD.

                                                                                                                                                Sudo was first conceived and implemented by Bob Coggeshall and Cliff Spencer around 1980 at the Department of Computer Science at SUNY/Buffalo. It ran on a VAX-11/750 running 4.1BSD. An updated version, credited to Phil Betchel, Cliff Spencer, Gretchen Phillips, John LoVerso and Don Gworek, was posted to the net.sources Usenet newsgroup in December of 1985.

                                                                                                                                                The current maintainer is also an OpenBSD contributor, but he started maintaining sudo in the early 90s, before OpenBSD forked from NetBSD. I don’t know when he started contributing to OpenBSD.

                                                                                                                                                So I don’t think it’s fair to say that sudo originated in OpenBSD :)

                                                                                                                                                1. 1

                                                                                                                                                  Ah, looks like I was incorrect. I misinterpreted OpenBSD’s innovations page. Thanks for the clarification!

                                                                                                                                            1. 4

                                                                                                                                              I honestly feel sorry for companies like WinRAR. It’s not their fault at all that they get targeted by malware authors like they do.

                                                                                                                                              1. 5

                                                                                                                                It’s the unrelenting “no exceptions possible” bureaucracy that always gets to me in cases like this. I find it really soul-crushing, especially when it’s so damn obvious and has such a large impact on people and companies.

                                                                                                                                                1. 2

                                                                                                                                  It seems they get targeted by antivirus software authors. But I guess you can count antivirus software as a subcategory of malware.

                                                                                                                                                1. 16

                                                                                                                                                  In here, we see another case of somebody bashing PGP while tacitly claiming that x509 is not a clusterfuck of similar or worse complexity.

                                                                                                                                                  I’d also like to have a more honest read on how a mechanism to provide ephemeral key exchange and host authentication can be used with the same goal as PGP, which is closer to end-to-end encryption of an email (granted they aren’t using something akin to keycloak). The desired goals of an “ideal vulnerability” reporting mechanism would be good to know, in order to see why PGP is an issue now, and why an HTTPS form is any better in terms of vulnerability information management (both at rest and in transit).

                                                                                                                                                  1. 22

                                                                                                                                                    In here, we see another case of somebody bashing PGP while tacitly claiming that x509 is not a clusterfuck of similar or worse complexity.

                                                                                                                                                    Let’s not confuse the PGP message format with the PGP encryption system. Both PGP and x509 encodings are a genuine clusterfuck; you’ll get no dispute from me there. But TLS 1.3 is dramatically harder to mess up than PGP, has good modern defaults, can be enforced on communication before any content is sent, and offers forward secrecy. PGP-encrypted email offers none of these benefits.

                                                                                                                                                    1. 6

                                                                                                                                                      But TLS 1.3 is dramatically harder to mess up than PGP,

                                                                                                                                      With a user-facing tool that has removed all the footguns? I agree.

                                                                                                                                                      has good modern defaults,

                                                                                                                                                      If you take care to, say, curate your list of ciphers often and check the ones vetted by a third party (say, by checking https://cipherlist.eu/), then sure. Otherwise I’m not sure I agree (hell, TLS has a null cipher).

                                                                                                                                                      can be enforced on communication before any content is sent

                                                                                                                                                      There’s a reason why there’s active research trying to plug privacy holes such as SNI. There’s so much surface to the whole stack that I would not be comfortable making this claim.

                                                                                                                                                      offers forward secrecy

                                                                                                                                                      I agree, although I don’t think it would provide non-repudiation (at least without adding signed exchanges, which I think it’s still a draft) and without mutual TLS authentication, which can be achieved with PGP quite easily.

                                                                                                                                                      1. 1

                                                                                                                                                        take care to, say, curate your list of ciphers often and check the ones vetted by a third party

                                                                                                                                                        There are no bad ciphers in 1.3, it’s a small list, so you could just kill the earlier TLS versions :)

                                                                                                                                                        Also, popular web servers already come with reasonable default cipher lists for 1.2. Biased towards more compatibility but not including NULL, MD5 or any other disaster.
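                                                                                                                                        For what it’s worth, this is straightforward to enforce from application code too; a minimal sketch using Python’s stdlib ssl module (assuming Python 3.7+, which has ssl.TLSVersion):

```python
import ssl

# Start from the library's vetted defaults rather than hand-curating
# a cipher list, then refuse anything older than TLS 1.3 outright.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# With 1.3 enforced, only the small fixed set of AEAD suites
# (AES-GCM, ChaCha20-Poly1305) can be negotiated; NULL/MD5-era
# suites are simply not part of that protocol version.
```

                                                                                                                                        Pass ctx to wrap_socket() or your HTTP client as usual; handshakes with pre-1.3 peers then fail up front instead of silently downgrading.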

                                                                                                                                                        I don’t think it would provide non-repudiation

                                                                                                                                                        How often do you really need it? It’s useful for official documents and stuff, but who needs it on a contact form?

                                                                                                                                                      2. 3

                                                                                                                                                        I want to say that it only provides DNS based verification but then again, how are you going to get the right PGP key?

                                                                                                                                                        1. 3

                                                                                                                                          PGP does not have only one trust model, and that is a good part of it: you choose, according to the various sources of trust available (TOFU through autocrypt, having also seen the key on the website, having received the key IRL, signed messages proving it is the right one for Mr Doe…).

                                                                                                                                          Hopefully browsers and various TLS clients could mainstream such a model, and let YOU choose what you consider safe, rather than deferring to what (highly) paid certificate authorities decide.

                                                                                                                                                          1. 2

                                                                                                                                                            I agree that there is more flexibility and that you could get the fingerprint from the website and have the same security.

                                                                                                                                                            Unfortunately, for example the last method doesn’t work. You can sign anybody’s messages. Doesn’t prove your key is theirs.

                                                                                                                                                            The mantra “flexibility is an enemy of security” may apply.

                                                                                                                                                            1. 1

                                                                                                                                                              I meant content whose exclusive disclosure is in a signed message, such as “you remember that time at the bridge, I told you the boat was blue, you told me you are colorblind”.

                                                                                                                                                              [EDIT: I realize that I had in mind that these messages would be sent through another secure transport, until external facts about the identity of the person at the other end of the pipe gets good enough. This brings us to the threat model of autocrypt (aiming working through email-only) : passive attacker, along with the aim of helping the crypto bonds to build-up: considering “everyone does the PGP dance NOW” not working well enough]

                                                                                                                                                              1. 1

                                                                                                                                                                Unfortunately, for example the last method doesn’t work. You can sign anybody’s messages. Doesn’t prove your key is theirs.

                                                                                                                                                                I can publish your comment on my HTTPS protected blog. Doesn’t prove your comment is mine.

                                                                                                                                                                1. 2

                                                                                                                                                                  Not sure if this is a joke but: A) You sign my mail. Op takes this as proof that your key is mine. B) You put your key on my website..wait no you can’t..I put my key on your webs- uh…you put my key on your website and now I can read your email…

                                                                                                                                                                  Ok, those two things don’t match.

                                                                                                                                                        2. 9

                                                                                                                                                          I’d claim I’m familiar with both the PGP ecosystem and TLS/X.509. I disagree with your claim that they’re a similar clusterfuck.

                                                                                                                                          I’m not saying X.509 is without problems. But TLS/X.509 gets one thing right that PGP doesn’t: it’s mostly transparent to the user and doesn’t expect the user to understand cryptographic concepts.

                                                                                                                                                          Also the TLS community has improved a lot over the past decade. X.509 is nowhere near the clusterfuck it was in 2010. There are rules in place, there are mitigations for existing issues, there’s real enforcement for persistent violation of rules (ask Symantec). I see an ecosystem that has its issues, but is improving on the one side (TLS/X.509) and an ecosystem that is in denial about its issues and which is not handling security issues very professionally (efail…).

                                                                                                                                                          1. 3

                                                                                                                                            Very true, but the transparency part is a bit fishy, because TLS included an answer to “how do I get the key” (nowadays basically DNS plus timing), while PGP was trying to give people more options.

                                                                                                                                                            I mean we could do the same for PGP, but whether that fits your security requirements is a question that needs answering… but by whom? TLS says CA/DNS; PGP says “you get to make that decision”.

                                                                                                                                                            Unfortunately the latter also means “your problem” and often “idk/idc” and failed solutions like WoT.

                                                                                                                                                            How could we do the same? We could do some validation in the form of: we send you an email, encrypted to what you claim is your public key, at what you claim is your mail address, and you have to return the decrypted challenge. Seems fairly similar to DNS validation for HTTPS.
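                                                                                                                                                            That challenge flow can be sketched roughly like this. Note this is only a model of the protocol logic, not a real implementation: the “key pair” is a shared random string, “encryption” is XOR with it (standing in for actual OpenPGP public-key encryption), and the email transport isn’t modeled at all.

```python
import secrets

# Stand-ins for real OpenPGP operations. A real implementation would use
# actual public-key encryption via an OpenPGP library; these placeholders
# only model WHO can decrypt WHAT.
def make_keypair():
    k = secrets.token_bytes(32)
    return k, k  # (public, private) stand-in: both halves are the same secret

def encrypt(public_key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(msg, public_key))

def decrypt(private_key: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, private_key))

def issue_challenge(claimed_public_key: bytes):
    """Verifier side: encrypt a random challenge to the claimed key and
    mail the ciphertext to the claimed address (mailing not modeled)."""
    challenge = secrets.token_bytes(32)
    return challenge, encrypt(claimed_public_key, challenge)

def respond(private_key: bytes, ciphertext: bytes) -> bytes:
    """Key-holder side: decrypt the challenge and send it back."""
    return decrypt(private_key, ciphertext)

def verify(expected: bytes, response: bytes) -> bool:
    """Verifier side: the response proves possession of the private key."""
    return secrets.compare_digest(expected, response)

pub, priv = make_keypair()
challenge, ct = issue_challenge(pub)
assert verify(challenge, respond(priv, ct))        # right key: validation passes
_, other_priv = make_keypair()
assert not verify(challenge, respond(other_priv, ct))  # wrong key: validation fails
```

                                                                                                                                                            Only someone holding the private key matching the claimed public key can return the correct challenge, which is the same proof-of-control idea as DNS/HTTP validation in ACME.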

                                                                                                                                                            While we’re at it… add some key transparency to it for accountability. Fix the WoT a bit by adding some DoS protection. Remove the old and broken crypto from the standard. And the streaming mode, which screws up integrity protection and which is for entirely different use-cases anyway. Oh, and make all the mehish or shittyish tools better.

                                                                                                                                                            That should do nicely.

                                                                                                                                                            Edit: except, of course, as Hanno said: “an ecosystem that is in denial about its issues and which is not handling security issues very professionally”…that gets in the way a lot

                                                                                                                                                            1. 2

                                                                                                                                                              I’d wager this is mostly a user-facing tooling issue, rather than anything else. Do you believe that a more mature tooling ecosystem would make PGP more salvageable for, say, vulnerability disclosure emails instead of a google web form?

                                                                                                                                                              If anything, I’m more convinced that the failure of PGP was to trust GnuPG as its only implementation worthy of blessing. How different would it be if we had funded alternative, industry-backed implementations after EFAIL, in the same way we delivered many TLS implementations after Heartbleed?

                                                                                                                                                              Similarly, there is a reason why there’s active research on fuzzing TLS implementations for their different behaviors (think frankencerts). Mostly, this is due to the fact that reasoning about x509 is impossible without reading through stacks and stacks of RFCs, extensions and whatnot.
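                                                                                                                                                              The frankencerts idea is essentially differential testing: feed the same input to several independent implementations and flag any disagreement as a potential bug or spec ambiguity. A minimal sketch of that harness, with two toy certificate-expiry checkers standing in for real x509 implementations (the checkers and their boundary disagreement are invented for illustration):

```python
import datetime

# Two toy "certificate expiry" checkers standing in for independent x509
# implementations. They disagree on the exact expiry instant: impl_a treats
# notAfter as inclusive, impl_b as exclusive -- exactly the kind of subtle
# divergence differential testing is designed to surface.
def impl_a(not_after: datetime.datetime, now: datetime.datetime) -> bool:
    return now <= not_after   # still valid at the boundary instant

def impl_b(not_after: datetime.datetime, now: datetime.datetime) -> bool:
    return now < not_after    # already expired at the boundary instant

def differential_test(cases):
    """Run both implementations on each input; collect disagreements."""
    disagreements = []
    for not_after, now in cases:
        if impl_a(not_after, now) != impl_b(not_after, now):
            disagreements.append((not_after, now))
    return disagreements

expiry = datetime.datetime(2021, 1, 1, 0, 0, 0)
cases = [
    (expiry, expiry - datetime.timedelta(seconds=1)),  # clearly still valid
    (expiry, expiry),                                  # the boundary instant
    (expiry, expiry + datetime.timedelta(seconds=1)),  # clearly expired
]
# Only the boundary case shows up as a disagreement.
print(differential_test(cases))
```

                                                                                                                                                              Frankencerts applied the same principle at scale, generating mutated certificates and diffing the accept/reject decisions of real TLS libraries against each other.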

                                                                                                                                                              1. 0

                                                                                                                                                                I use Thunderbird with Enigmail. I made a key at some point and by now I just send and receive as I normally do. Mails are encrypted when they can be encrypted, and the UI is very clear on this. Mails are always signed. I get a nice green bar over mails I receive that are encrypted.

                                                                                                                                                                I can’t say I agree with your statement that GPG is not transparent to the user, nor that it expects the user to understand cryptographic concepts.

                                                                                                                                                                As for the rules in the TLS/X.509 ecosystem, you should ask Mozilla if there’s real enforcement for Let’s Encrypt.

                                                                                                                                                              2. 4

                                                                                                                                                                The internal complexity of x509 is a bit different from the user-facing complexity of PGP. I don’t need to think about or deal with most of that as an end-user or even as a programmer.

                                                                                                                                                                With PGP… well… There are about 100 things you can do wrong, starting with “oops, I bricked my terminal because gpg outputs binary data by default”, and it gets worse from there. I wrote a Go email-sending library a while ago and wanted to add PGP signing support. Thus far, I have not succeeded in getting the damn thing to actually work. Meanwhile, I have managed to get a somewhat complex non-standard ACME/x509 generation scheme to work.

                                                                                                                                                                1. 3

                                                                                                                                                                  There have been a lot of vulns in x509 parsers, though. They are really hard to get right.

                                                                                                                                                                  1. 1

                                                                                                                                                                    I’m very far removed from an expert on any of this, so I don’t really have an opinion on the matter as such. All I know is that as a regular programmer and “power user” I usually manage to do whatever I want to do with x509 just fine without too much trouble, but that using or implementing PGP is generally hard and frustrating to the point where I just stopped trying.

                                                                                                                                                                  2. 1

                                                                                                                                                                    You are thinking of GnuPG. I agree GnuPG is a usability nightmare. I don’t think PGP (RFC 4880) makes many claims about user interaction (in the same way that the many x509-related RFCs say little about how users deal with tooling).

                                                                                                                                                                  3. 1

                                                                                                                                                                    Would you say PGP has a chance to be upgraded? I think there is a growing consensus that PGP’s crypto needs some fixing, and GPG’s implementation as well, but I am no crypto person.

                                                                                                                                                                    1. 2

                                                                                                                                                                      Would you say PGP has a chance to be upgraded?

                                                                                                                                                                      I think there’s space for this, although open source (and standards in general) are also political to some extent. If the community doesn’t want to invest in improving PGP but rather replace it with $NEXTBIGTHING, then there is very little you can do. There’s also something to be said that 1) it’s easier when communities are more open to change and 2) it’s harder when big names at google, you-name-it are constantly bashing it.

                                                                                                                                                                      1. 2

                                                                                                                                                                        Can you clarify where “big names at Cloudflare” are bashing PGP? I’m confused.

                                                                                                                                                                        1. 1

                                                                                                                                                                          Can you clarify where “big names at Cloudflare” are bashing PGP? I’m confused.

                                                                                                                                                                          I actually can’t, I don’t think this was made in any official capacity. I’ll amend my comment, sorry.