Threads for nadim

    1. 4

      Legitimate, well-written findings! I like how Soatok casually drops actual security audits as blog posts.

      1. 8

        Too many monsters.

        1. 5

          Agreed, it looks like Ultra-violence or Nightmare mode.

          1. 3

            Ironically I think this makes it significantly worse at being a captcha; humans will find it much more difficult, but for a computer the difficulty level shouldn’t make much of a difference.

            1. 7

              lol just type “iddqd” (god mode) and “idkfa” (unlimited ammo, all weapons unlocked + all keys etc). The codes are burned into my memory and I was pleased to see it work here.

              1. 2

                I passed it on the second attempt. The trick is to not run forwards. The monsters then come clustered together and it’s easy to just keep shooting.

                A simple ML model could also reach the same conclusion, but that’s not why it’s a bad CAPTCHA: it’s deterministic. You could just record the keystrokes of someone who passes and play them back, and you’ll pass 100% of the time.

                That didn’t mean it wasn’t fun, of course.
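
                To illustrate the replay point, here’s a minimal Go sketch. Everything in it is hypothetical: KeyEvent and the input-injection callback are assumptions, not any real Doom or browser API.

                ```go
                // Package replay sketches the replay attack on a deterministic challenge:
                // record one successful session, then play it back with the same timing.
                package replay

                import "time"

                // KeyEvent is one recorded input: how long to wait, then which key to press.
                type KeyEvent struct {
                    Delay time.Duration // time since the previous event
                    Code  string        // e.g. "ArrowLeft", or "Ctrl" to fire
                }

                // Replay feeds a recorded session back at its original timing.
                // send is whatever injects input into the game (assumed to exist).
                func Replay(session []KeyEvent, send func(code string)) {
                    for _, ev := range session {
                        time.Sleep(ev.Delay)
                        send(ev.Code)
                    }
                }
                ```

                Because the spawns and monster behavior are identical every run, the same inputs land the same shots every time.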

              2. 1

                It has indeed been set to nightmare difficulty. The “How it works” link on the page and the description on the repository confirm this.

              3. 4

                Move a little to the left of the starting position (don’t leave the hall), fire your pistol to alert monsters, and then pistol-snipe the zombies and sergeants that wander your way. If you fire continuously at whatever you see, you’ll prove your humanity before you die or run out of ammo. Just don’t go out into the open. I’m not sure whether it’s Nightmare or UV Fast, but either way it makes staying clear of fireballs a challenge.

                1. 2

                  Memories returning: punch the air to alert monsters instead; it saves a bullet. It still counts as “firing a weapon” and alerts anyone within earshot, even though it makes no real sense :)

              4. 13

                Soatok is one of my favorite applied cryptography bloggers, and his blog posts are always a joy to read. I frequently learn a lot from his blog and feel good about reading it due to his humor and sharp writing.

                That being said, I should respectfully point out that in this instance, the tone doesn’t match the findings. While the findings are legitimate, they are more or less known, and I can share that they’ve been spotted earlier by other auditors but deemed to be of no serious practical impact.

                I see no serious immediate concern arising from these issues. At most, they are “would be nice to fix”es down the line.

                I think we need to be able to criticize cryptography engineering without feeling like we need to end every single post with something along the lines of “This is fucking clownshoes.” regardless of the actual real-world criticality of the findings.

                Martin R. Albrecht, Sofía Celi, Benjamin Dowling and Daniel Jones’s work on practical exploits in Matrix, from last year, is what can actually be cited if you want to talk about serious vulnerabilities in Matrix: https://eprint.iacr.org/2023/485

                1. 6

                  This is common with Soatok in my experience. They love to talk about hypothetical attacks that don’t apply to the way anyone practically uses a system, and then use that to build up fear-mongering about the system itself.

                  1. 2

                    I think that’s kind of the curse of popularity. Soatok has written great articles in the past, and he probably pressures himself to keep up this performance.

                    1. 4

                      Don’t they pretty much say in recent articles that they feel compelled by external queries to keep commenting on such crypto issues, but really don’t want to? I’d not come across them until recently, but what I’ve read gives off an “I’ve had enough of this shit, but if you insist on knowing what I think…” vibe, genuine annoyance. It seems wrong to criticise the tone when those disclaimers are pretty much explicit.

                2. 17

                  The last paragraph starts with “Google often makes privacy-invasive technology”, which is kinda the point people are making. Just because this one instance, at the current time, on the face of it, isn’t privacy-invasive doesn’t outweigh the tens of times they have violated user trust. Which is also why the comparisons to Apple’s on-device scanning are weak - people trust Apple to do on-device scanning and keep it on-device. People just do not trust Google to do the same because they have repeatedly demonstrated they cannot be trusted.

                  1. 11

                    Agreed. Yes, there may be a ‘slippery slope fallacy’ involved — but Google are well-known for their repeated construction of slopes (with very low friction). At some point it isn’t committing a fallacy, it’s pointing out a trend, and I’d argue we passed that point close to a decade ago.

                    1. 8

                      Sure, we can have a generalized mistrust of Google, but such generalized thinking is not appropriate in the context of privacy academics critiquing specific proposed features! In this type of discourse, we expect critique to be reasoned and justified. If criticism of new features from experts and academics devolves into “well I just don’t trust Google”, we’re not moving towards a good place in discourse.

                      1. 23

                        What makes the slippery slope a fallacy is absence of precedent: it’s based on nothing but potential, so it makes general statements based on a hypothetical. IMHO none of the critiques you’ve mentioned are just facile slippery slope arguments in this context.

                        Drawing generalized conclusions from precedents is not just appropriate in academia, it’s literally one of the things that entire academic fields, from international relations to historical linguistics, are built on. Pointing out that a company whose modus operandi is to ignore and/or gradually dismantle safeguards from initially benign products/services is likely to ignore and/or gradually dismantle safeguards from this particular benign service is a very valid point.

                        Green, Olejnik and Veale’s critiques offer examples about how that might go, which is obviously hypothetical, but I for one didn’t read any of them in terms of “today you’re letting them protect you from scammers, but five years from now who’s to say that they won’t be doing THIS”. I read them in terms of “based on how Google Ads/Chrome/Gmail/Blogger went, I see no reason why this will go any different, and here are some examples of what that might entail.”

                        The absence of a detailed literature survey of precedents is IMHO excusable for what is effectively a zero-day post. Detailed examples of these things have been available in literature for… at least five years (this is the oldest reference I have in my notes and it has dozens of relevant examples). I think at least Matthew Green kind of expected that most of the people reading his post are “in the know”, which when you’re posting a quick reaction on Twitter as opposed to publishing an academic paper is… probably understandable to some degree.

                        More generally, I think our industry would benefit a lot from separating its projections for a particular technology from those of a particular company and its prospective use of that technology. Alphabet’s entire business model is based on getting people to funnel content through their stuff so they can analyse it. If they stop doing that, Alphabet goes bankrupt. The fact that they’re doing scanning non-locally is an artefact of their strategy – being first at doing something well has been a big deal for them, historically, and right now this is the best way to do it well on a wide range of heterogeneous hardware.

                          1. 4

                            It wasn’t hard to write it, it’s a comment about a very good post :-). It doesn’t help that TechCrunch’s article is kind of barebones – when you’re summarizing “zero-day” posts, you’re supposed to fill in the context blanks for your readers, that’s the whole point of presenting specialised quick-takes to a non-specialised audience. Otherwise the only people who’ll get it are people who are already familiar with the context (and have likely already read these takes because they follow their authors on X/Mastodon/Substack/whatever). All the things that I called “excusable” or “understandable” are excusable or understandable in their original context (their authors’ accounts, that is), but IMHO not in a secondary source.

                            But even if you take out the TechCrunch hop, I think the quality of public discourse in privacy circles could be a lot better. It would benefit a lot from moving to a platform that allows real discourse instead of trivial snark like Twitter, but that’s not all there is to it. Too much of it is made up of a couple of good points that are wrapped in tone-deaf activism and, occasionally, gratuitous ludditism. The concerns you point out about Green, Olejnik and Veale’s examples are very relevant.

                            Just one example to illustrate my point: Green’s idea is not completely out there, but he’s a cryptographer, so of course he’s filling the blanks between what we have right now and what we might have with things a cryptographer writing in 2024 might find cool.

                            The general idea, though, isn’t flawed: if AI inference scanning is accepted as a reliable shield from illicit behaviour, network operators requiring the stamp of approval from a third-party AI inference operator (whether in the form of a zero-knowledge proof or anything else that a cryptographer finds less fascinating) is a logical step, and any company that’s the “go to” AI inference operator for content can effectively control the distribution of content through the channels that require it.

                            Google already knows how to do this sort of stuff; they’re already exerting a great deal of influence over global email via Gmail and their email delivery requirements. However, email is built on largely open standards and is under a lot of public technical oversight. Google entered it late, and their means to control it are limited by the fact that free message exchange is basically built into the protocols, which require only very accessible technical means.

                            Whereas when it comes to inference scanning, Google is (among the) first at the table, there are only a handful of companies with the resources to do it, and many of them are already operating major content steering (Google) or content delivery networks (Facebook, Microsoft, Amazon to some degree). Hell, Google could implement it right now as an extension to DMARC or via DKIM. And the “stamp of approval thing” can happen through entirely commercial channels, with zero involvement or legal oversight from institutions that guarantee free access to information or fair commercial competition for other channels, and no option of legal recourse for consumers. The exact technical implementation isn’t the issue, it’s how it can be achieved and for what competitive goals, and whether there are any legal barriers to their fulfillment. If there aren’t, those competitive goals are fair game for any board of directors.

                        1. 10

                          Let’s dig a bit deeper then: will this locally-scanning AI write the transcript of each call to a text file? Will it be uploaded to Google to “more precisely target ads”? When and how can we detect it? Or, after 1.5 years, it gets uncovered and they say “we are really sorry”? Do we allow history to repeat itself again?

                          1. 3

                            Going in the opposite direction risks a baseless luddite reaction to any technological advancement by default.

                            1. 11

                              But until we can be sure we’d detect it, the moment it happens, if Google changes this to indeed send data to their servers, aren’t we better off saying no to the offer as a whole?

                              1. 13

                                I think this is the most important failing of computing today. Since machines are general purpose computers on which anything can be done, and corporations have repeatedly shown that they really do anything they can get away with and cannot be trusted, it’s a very compelling argument for open source software (with binary reproducibility, because how else would you know if you’re really running the released source code?).

                                For the “ordinary” user this is still a bit of a problem because they can’t verify that the code doesn’t do anything malicious. Maybe there’s room for a kind of certification? A bit like we have for organic food or the CE marking on electronic devices.

                                On the other hand, more and more laws are coming into effect now which allow governments to punish corporations for privacy infringements and even for inadequate security. Sure took long enough!

                              2. 7

                                Baseless against any? Most probably, but this is what $bigco and their dark patterns trained us to believe. This is now just a self-defense mechanism.

                                Baseless against Google? Not at all. Google has shamelessly demonstrated multiple times how their products exploit privacy (not to mention that “targeting ads” is the tip of the iceberg of what they can do with our data).

                                1. 6

                                  You say that like it’s a bad thing, but historical luddites (not the modern caricature) actually had the right idea when it came to resisting oppressive power dynamics. Unfortunately they were beaten with overwhelming state-backed violence.

                            2. 2

                              “people trust Apple to do on-device scanning and keep it on-device” seems like misplaced trust then :P

                            3. 14

                              Why does this bug have its own website, and why does the website mimic the aesthetic of (often overhyped) security disclosures?

                              It reads like the main priority here is to grab attention in a way that is not exactly clearly warranted by the material.

                              Wouldn’t it be more productive and correct to reach out to TLS vendors to coordinate better handling of large ClientHello packets?

                              By this logic, I could make a website called “Wayland.fail” talking about how when I use Nvidia drivers with Wayland, dual monitor support doesn’t work, and proceed to generalize this into an “advisory” about dual monitor support on Linux desktop environments in general. Wouldn’t it be better to coordinate a fix, or, if you really want to rant, just write a blog post?

                              1. 7

                                Wouldn’t it be more productive and correct to reach out to TLS vendors to coordinate better handling of large ClientHello packets?

                                When you do this (identify and contact the maintainers of a buggy implementation) it seems it would be helpful to have a URL to direct them to with all the details in one place.

                                I could make a website called “Wayland.fail” talking about how when I use Nvidia drivers with Wayland, dual monitor support doesn’t work, and proceed to generalize this into an “advisory” about dual monitor support on Linux desktop environments in general.

                                You could, but this only makes sense when there’s more than one vendor affected. In the case of the ClientHello bug, there are likely to be scores of different small-scale TLS implementations with this bug. For GPU vendors there’s only a handful.

                                1. 2

                                  Why, indeed.

                                  1. 1

                                    If we created a .fail website for everything, even Wayland, we would need sites for everything. (Especially MacOS, which also fails with too many monitors or high-resolution monitors.)

                                  2. 2

                                    I don’t know if it’s one of the things that makes Betterbird better, but I have a Cozi family calendar, a couple of fastmail calendars, and a local calendar, and they all work great in Betterbird.

                                    1. 1

                                      Hadn’t heard of Betterbird, will take a look.

                                      1. 6

                                        Betterbird is largely just a rebranded Thunderbird so I guess the point also applies to Lightning (Thunderbird’s built-in calendar solution). FWIW I’m using Lightning with etesync and it’s working fine.

                                        1. 5

                                          My thoughts on Thunderbird are mentioned explicitly in the post.

                                          1. 1

                                            Yep, I mean I wouldn’t expect anything better in that regard from Betterbird.

                                    2. 57

                                      Evan Boehs is constructing a timeline of events that led to the implementation and discovery of the backdoor: https://boehs.org/node/everything-i-know-about-the-xz-backdoor

                                        1. 19

                                          oh hey! that means I get to thank you for it. it was super useful!

                                          1. 7

                                            Does no one know what the backdoor does yet? Apart from how it embeds itself ofc

                                              1. 14

                                                interesting point:

                                                The real issue with a lot of small, foundational OSS libraries is just that there isn’t enough to do. They were written decades ago by a single person — and beyond bugfixes, they are not really supposed to change much. You don’t do major facelifts of zlib or giflib every year; even if you wave some cash around, it’s hard to build a sustainable community around watching paint dry.

                                                1. 4

                                                  Based on the mailing list and open issues on giflib, the maintainer is in need of support. He says he has suffered from health issues, which caused delays in rolling out fixes to a couple of memory-leak issues found by the community.

                                                  The maintainer has a Patreon link on the project page, if anyone comes across this comment and is so inclined.

                                                  https://giflib.sourceforge.net/

                                                  It does concern me that there is no organized support by well capitalized and funded organizations for this and many other projects. Patreon is not a long-term solution. I don’t know much about the maintainer, but he should be able to get some benefit for his work, even after he retires. We can’t just abandon these critical contributors because they grow old and sick. Add it to Easter intentions.

                                                  1. 11

                                                    I don’t know much about the maintainer…

                                                    esr is https://en.wikipedia.org/wiki/Eric_S._Raymond. In 2015, he suggested that the “unexpected success of [his] Patreon page” meant it was reasonable to consider him a potential candidate for most famous programmer in the world. In 1999, after VA Linux Systems’ IPO, he called himself “absurdly rich” and stated that “anyone who bugs me for a handout, no matter how noble the cause and how much I agree with it, will go on my permanent shit list”.

                                                    Even if you discount all of the above, there is enough well documented evidence of his racism and misogyny that I don’t feel the need to cite any of it here.

                                                    1. 3

                                                      We don’t have the full picture of his financial situation, but I stand by my principle that maintainers of widely-used open-source projects deserve support. The giflib maintainer had to deal with critical security issues while battling stomach cancer, all without proper infrastructure or compensation. It’s a common struggle for open-source maintainers. The worker deserves his wages.

                                                      Research shows he received 150k shares of VA Linux over 20 years ago as a board member. The stock surged 7-fold after the IPO but crashed shortly after, like many others. As a director, he had a lock-up period preventing him from selling shares immediately. Within a year, the stock fell below $9 per share, and a few years later, it was $2. An average software engineer likely made more last year than he did from VA Linux. Unless he’s bragging about a luxurious lifestyle, his participation doesn’t necessarily mean he’s wealthy. I’d rather see him supporting open source than an MBA pushing dubious licensing schemes or privacy-invading startups.

                                                      Despite the VA Linux situation, he’s expressed admirable intentions like buying rainforest for conservation and remaining humble enough to crash on a friend’s daybed when the local user group can’t afford a hotel. He continues maintaining giflib for free. I don’t believe he’s a bad person.

                                                      My point is that critical open-source maintainers shouldn’t need to rely on personal Patreon accounts for project support, just as Microsoft or Google engineers are paid for their work. It’s reasonable to expect community (especially by well-funded organizations) backing for such essential contributions.

                                                      1. 7

                                                        My point isn’t that esr is wealthy or that he doesn’t need money, but that he isn’t a random “critical open-source maintainer”. Most people on this site and elsewhere on the Internet would know him for editing the Jargon File in the 1990s, founding the Open Source Initiative, or his history of making inflammatory comments, not for maintaining giflib.

                                                        In your original comment, you wrote that you “don’t know much about” him. In your response, you wrote that you “don’t believe he’s a bad person”. Your opinion of him might change after you read his writing that is homophobic, racist, or apologetic about Epstein.

                                                        I assume your framing of this as “giflib maintainer needs money” and not “known bigot Eric S. Raymond needs money” is the result of ignorance of his history, but I think the bigotry is important context here.

                                                        1. 2

                                                          I have 2 Ubuntu systems within ssh reach, both run-of-the-mill machines, and neither has giflib installed. Just because the maintainer says it’s a critical piece of software, doesn’t mean it actually is.

                                                          Anyway, begging for scraps via Patreon isn’t viable long-term. For example, I would never donate personally to ESR because I happen to despise him, but I might be interested in paying into a fund that doles out money to maintainers based on some sort of objective criteria[1]. This is similar to how people trying to fund their health care via fundraisers is less effective than simply having a universal health insurance that pays out to everyone based on need.


                                                          [1] please don’t ask me how such a fund should be implemented.

                                                          1. 1

                                                            I understand that giflib is used in all sorts of servers and mobile apps for working with GIFs. Buffer overflow issues are potential security vulnerabilities.

                                                          1. 1

                                                              It was: it has changed hands several times over the last 12 years. That’s how we got the “adware controversy”, for example.

                                                2. 2

                                                  I do like this. Thank you for sharing. I have some questions:

                                                  1. From a portability standpoint, where are the notebook files stored on first run in, say, Windows?
                                                    2. More to that point, it seems on first run the server is auto-launched. Is there more on how to config or build the server end of this?
                                                    3. Is there a list of any useful hotkeys? I notice q is quit and CTRL+S to save, TAB… most seem obvious.

                                                    Also, it seems that there is no indicator that a notebook needs to be saved (or has been modified)… that cue would help.

                                                  1. 2

                                                    From a portability standpoint, where are the notebook files stored on first run in, say, Windows?

                                                    Right now, notebooks are never stored locally, but always fetched from the server. When you open Enclave, it establishes a live session with the server, and you save your notebook directly onto the server after encrypting it.

                                                      More to that point, it seems on first run the server is auto-launched. Is there more on how to config or build the server end of this?

                                                    No, that’s not right. Running Enclave as specified in the README connects to the default server at enclave.sh. Feel free to look into the server package in cmd/enclave-server and internal/server to figure out how to run your own server. I’ll add documentation on this later.

                                                      Is there a list of any useful hotkeys? I notice q is quit and CTRL+S to save, TAB… most seem obvious.

                                                    Those are the ones so far! Although I’m planning to add an actual list into the UI soon.

                                                      Also, it seems that there is no indicator that a notebook needs to be saved (or has been modified)… that cue would help.

                                                    I opened an issue.

                                                  2. 3

                                                    Interesting! From the README I gather that this is basically about storing an encrypted blob on the server, and that the data format (64 text strings of up to 64KB) is just an application-layer concern, with the 64k limit to deter misuse from overloading the server. Correct?

                                                    Why Scrypt and not, say, Argon2? I don’t know either algorithm well, I’m just asking because libSodium and Monocypher implement the latter.

                                                    The protocol seems simple enough that an RPC protocol is overkill. It sounds like it could be done in HTTP with a few dozen lines of server-side code (and you’d get caching for free using conditional GETs.)

                                                    1. 2

                                                      Interesting! From the README I gather that this is basically about storing an encrypted blob on the server, and that the data format (64 text strings of up to 64KB) is just an application-layer concern, with the 64k limit to deter misuse from overloading the server. Correct?

                                                      Yeah, more or less! There’s some other funky stuff like the decoy notebook logic but yes.

                                                      Why Scrypt and not, say, Argon2? I don’t know either algorithm well, I’m just asking because libSodium and Monocypher implement the latter.

                                                      Largely because of this result: https://eprint.iacr.org/2016/989
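
                                                          For the curious, the overall client-side shape is “derive a key from the passphrase with scrypt, then seal the notebook with an AEAD before upload”. Here’s a minimal Go sketch of that flow; the scrypt cost parameters and the ChaCha20-Poly1305 choice are illustrative assumptions, not necessarily exactly what Enclave ships:

                                                          ```go
                                                          // Package enclavesketch shows the general derive-then-seal flow:
                                                          // a key is derived from the passphrase with scrypt, and the notebook
                                                          // blob is encrypted client-side before it ever reaches the server.
                                                          package enclavesketch

                                                          import (
                                                              "crypto/rand"

                                                              "golang.org/x/crypto/chacha20poly1305"
                                                              "golang.org/x/crypto/scrypt"
                                                          )

                                                          // SealNotebook derives a key from the passphrase and encrypts the blob.
                                                          func SealNotebook(passphrase, salt, notebook []byte) ([]byte, error) {
                                                              // Illustrative scrypt cost parameters (N, r, p).
                                                              key, err := scrypt.Key(passphrase, salt, 1<<15, 8, 1, chacha20poly1305.KeySize)
                                                              if err != nil {
                                                                  return nil, err
                                                              }
                                                              aead, err := chacha20poly1305.New(key)
                                                              if err != nil {
                                                                  return nil, err
                                                              }
                                                              nonce := make([]byte, aead.NonceSize())
                                                              if _, err := rand.Read(nonce); err != nil {
                                                                  return nil, err
                                                              }
                                                              // Prepend the nonce so the stored ciphertext is self-contained.
                                                              return aead.Seal(nonce, nonce, notebook, nil), nil
                                                          }
                                                          ```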

                                                      The protocol seems simple enough that an RPC protocol is overkill. It sounds like it could be done in HTTP with a few dozen lines of server-side code (and you’d get caching for free using conditional GETs.)

                                                          You’re right. I chose gRPC primarily because it plays well with protobufs, which I use all over the place, and also because it’s very fast. But I could switch to REST.
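
                                                          To give a rough idea of how small the HTTP version would be, here’s a sketch of a blob handler with an ETag, so conditional GETs (and therefore caching) come for free. Storage and routing are toy assumptions, not real Enclave code:

                                                          ```go
                                                          // A toy HTTP alternative to the RPC protocol: GET an encrypted blob,
                                                          // with ETag/If-None-Match handling for conditional requests.
                                                          package main

                                                          import (
                                                              "crypto/sha256"
                                                              "encoding/hex"
                                                              "log"
                                                              "net/http"
                                                          )

                                                          var blobs = map[string][]byte{} // notebook ID -> encrypted blob (toy storage)

                                                          func handleNotebook(w http.ResponseWriter, r *http.Request) {
                                                              blob, ok := blobs[r.URL.Path]
                                                              if !ok {
                                                                  http.NotFound(w, r)
                                                                  return
                                                              }
                                                              sum := sha256.Sum256(blob)
                                                              etag := `"` + hex.EncodeToString(sum[:]) + `"`
                                                              w.Header().Set("ETag", etag)
                                                              if r.Header.Get("If-None-Match") == etag {
                                                                  w.WriteHeader(http.StatusNotModified) // client cache is still valid
                                                                  return
                                                              }
                                                              w.Write(blob)
                                                          }

                                                          func main() {
                                                              http.HandleFunc("/", handleNotebook)
                                                              log.Fatal(http.ListenAndServe(":8080", nil))
                                                          }
                                                          ```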

                                                      1. 1

                                                        My experience with gRPC was that, the one time I tried it out, just checking out the repo and building took up nearly a gigabyte of disk. Yuck.

                                                          1. 2

                                                            No idea. I just know of the above result and have previous experience with Scrypt, so I went with Scrypt.

                                                      2. 3

                                                        This is an awfully complex solution that, IMO, doesn’t create a more trustworthy environment.

                                                        1. like @kline already mentioned: don’t use URL shorteners. They reduce transparency and web reliability.
                                                        2. It’s not even clear what the use case is. People in low-trust environments who for some reason trust URL shorteners? See (1.)
                                                        3. The whole point of URL shorteners (or if you insist, content linkers) is that they’re lossy. You’ll never know what content they contain without retrieving the content, which might be malicious.
                                                        1. 4

                                                          Okay… I’ll be honest: I was expecting better comments. I don’t mean that as a jab! I’m sincerely surprised by the kneejerk reaction here.

                                                          I’ll answer point 2 first then points 1 and 3 together:

                                                          Regarding 2: The paper points out that the use case is basically anything that takes a short identifier and turns it into a longer thing with a global integrity view. That’s very much not just URL shorteners. URL shorteners are a quick useful demo, but you can apply this to all kinds of things! Mission-critical files. Documents. Text. You name it. The service gives you one short identifier, and then commits a zero-knowledge proof to a smart contract such that any person using the short identifier to retrieve the full payload gets a global authenticity guarantee.

                                                          Regarding 1 and 3: Your qualm with URL shorteners seems to be that they can redirect to malicious URLs. Again, DuckyZip isn’t just about URL shorteners, but if you want to focus on that demo use case, this is actually something that DuckyZip can help solve: before redirecting to any URL, you can obtain not only the full URL and vet it, but also a discrete zero-knowledge proof that it’s the right URL to begin with.

                                                          If you don’t like URL shorteners then by all means, don’t use them – DuckyZip is a low-level protocol with much broader use cases. Less knee-jerking would be appreciated.
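
                                                          To make the shape of the flow concrete, here’s a deliberately simplified Go sketch. The real protocol commits a zero-knowledge proof to a smart contract; a plain hash commitment stands in for it below, and all the function names are assumptions:

                                                          ```go
                                                          // Package duckysketch shows the client-side vetting flow in outline:
                                                          // resolve the short ID, check the payload against a globally visible
                                                          // commitment, and only then act on (e.g. redirect to) the payload.
                                                          package duckysketch

                                                          import (
                                                              "bytes"
                                                              "crypto/sha256"
                                                              "errors"
                                                          )

                                                          // VetShortID resolves a short identifier and verifies the payload.
                                                          // resolve and lookupCommitment abstract the shortener and the ledger.
                                                          func VetShortID(
                                                              id string,
                                                              resolve func(id string) ([]byte, error),
                                                              lookupCommitment func(id string) ([32]byte, error),
                                                          ) ([]byte, error) {
                                                              payload, err := resolve(id)
                                                              if err != nil {
                                                                  return nil, err
                                                              }
                                                              want, err := lookupCommitment(id)
                                                              if err != nil {
                                                                  return nil, err
                                                              }
                                                              got := sha256.Sum256(payload)
                                                              if !bytes.Equal(got[:], want[:]) {
                                                                  return nil, errors.New("payload does not match published commitment")
                                                              }
                                                              return payload, nil // safe to vet/redirect now
                                                          }
                                                          ```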

                                                          1. 3

                                                            Less knee-jerking would be appreciated.

                                                            More useful examples would be appreciated.

                                                            1. 3

                                                              Your qualm with URL shorteners seems to be that they can redirect to malicious URLs

                                                              The problem with URL shorteners is that they stop operating eventually, because there’s no reason to operate one. Organization-specific shorteners like https://dolp.in/ have much better longevity.

                                                            2. 2

                                                              The whole point of URL shorteners (or if you insist, content linkers) is that they’re lossy. You’ll never know what content they contain without retrieving the content, which might be malicious.

                                                              And, in particular, they support updates. You can keep the stable short URL and redirect it to a new canonical URL when things move.

                                                              1. 1

                                                                That you’ll never know which content they contain without retrieving/parsing/executing is an intrinsic part of how the web treats a URL as a link regardless of another runtime translation layer/virtualisation/indirection.

                                                              You have no guarantees that you will retrieve the same exact contents on the next request or from the same provider, and if you share a ‘direct’ link with someone else they will quite likely still get a different version. My ‘link’ sharing among friends is more often than not print-to-PDF first for anything that isn’t video now, for this reason.

                                                              Even in a world where the URL would carry all state used as input to the content provider, you’d still fight low-level tricks like hosts mapped to round-robin DNS as well as high-level ones from other tamper-happy intermediates – how many pages that rely on CDNs actually use SRI etc.[1]?

                                                              As such the shortener doesn’t fundamentally change anything - the weakest part of the link will set the bar. If anything, you could use your own shortening service layered on this to provide further guarantees. Having one sanctioned by archive.org that >also< syncs the Wayback Machine >and< provides a signed Firefox Pocket style offline-friendly version would improve things at the expense of yet another round of copyright and adtech cries - the sweetest-tasting of tears.

                                                              [1] Kerschbaumer, Christoph (2016). “Enforcing Content Security by Default within Web Browsers.” 2016 IEEE Cybersecurity Development (SecDev), Boston, MA, USA, pp. 101–106. doi:10.1109/SecDev.2016.033

                                                                1. 1

                                                                  I believe there to be a fundamental difference between domains that may redirect users anywhere and domains that one can inspect, recognize, and vet in advance. I also consider link transparency to be a fundamental building block of the web’s trust model.

                                                              2. 6

                                                                What is it with cute animal drawings and exceptionally accessible and pedagogical cryptography explainers?!

                                                                1. 23

                                                                  This post is a welcome change from the derogatory yelling that usually surrounds these topics: “Don’t use RSA!” — often ignoring that those who are using RSA often have unfortunate constraints (legacy, etc.) or very good reasons imposed by corner cases outside of their control.

                                                                  One particularly illustrative example of the hard-headedness that I’m referring to is an incredibly abrasive and elitist post from 2019, which was originally titled, simply, “Fuck RSA” and which focused more on just signaling the authors’ doubtlessly impressive knowledge of RSA’s shortcomings instead of recognizing that some developers using RSA aren’t blubbering fools, but are simply stuck with it for some reason or another.

                                                                    The API suggested by Soatok’s post, on the other hand, is sufficiently agnostic and provides a helpful grounder for all sorts of developers who could be reading the post. This useful framework is surrounded by exactly the sort of considerations that non-specialist engineers should be primed to think about! It takes a thoughtful mind to be truly pedagogical.

                                                                  This sort of anti-elitist, non-judgmental, well-written and accessible focus on providing standard engineering solutions is exactly what applied cryptography needs more of.

                                                                  Another author who writes like this is Vitalik Buterin. His explainers of ZK math are always a joy to read, largely because you feel like he’s genuinely interested in explaining valuable concepts to you in a simple and honestly accessible way, and that by doing so, he solidifies his own knowledge in his mind. Here’s one example.

                                                                  1. 3

                                                                    elitist post from 2019, which was originally titled

                                                                      It’s linked in the first sentence…

                                                                  2. 11

                                                                    So much content posted to twitter where it will doubtless be lost forever, or behind a login gate at some point as they chase the last profits from their fleeing audience when the next hip thing takes over :(

                                                                    Edit for usefulness so I’m not part of the problem: The post linked from the tweet.

                                                                    1. 5

                                                                      The actual content is in the Linux kernel’s git commit logs, which will certainly not be lost forever (unless, I guess, something really extreme happens).

                                                                      https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git/log/drivers/char/random.c

                                                                      1. 1

                                                                        Great work.

                                                                          I agree that it would make more sense to link directly to the commits. Mailing list posts are also better, but in this case the Lobste.rs headline already provides sufficient editorial context. Linking to tweets (which appears to be more and more popular) seems to compromise the visibility of the work in favor of self-promotion, a point humorously reflected by how Lobste.rs’ extract of the post is simply “Trending now”: https://imgur.com/a/Pduk7iq

                                                                        I hope I’m not misunderstood — this is the latest in an array of excellent contributions and I’ve myself retweeted OP.

                                                                    2. 15

                                                                      Please consider signing the open letter against these changes: https://appleprivacyletter.com/

                                                                      1. 10

                                                                        Are you going to post an open letter for Microsoft, Google, DropBox, Facebook, Twitter, and all the other companies who have used the exact same database for this exact purpose for the last decade?

                                                                        1. 8

                                                                          Which provider has previously used this list against images that aren’t stored on their infrastructure?

                                                                          1. 4

                                                                            Images sent via iMessage are stored on Apple’s infrastructure.

                                                                            1. 1

                                                                              I think the question had implied “stored in plain text”. iMessage doesn’t do that.

                                                                              1. 6

                                                                                Right. So, every other provider has direct access to your photos, and scans for CSAM with their direct access. Apple, rather than give up their E2E messaging, has devised a privacy-preserving scheme to perform these scans directly on client devices.

                                                                                I really don’t understand how Apple is the bad guy here.

                                                                                1. 4

                                                                                    Other providers that scan cleartext images are off the hook, because they’ve never had an E2E privacy guarantee.

                                                                                    [smart guy meme]: You can’t have an encryption backdoor if you don’t have encryption.

                                                                                  Apple’s E2E used to be a strong guarantee, but this scanning is a hole in it. Countries that have secret courts, gag orders, and national security letters can easily demand that Apple slip in a few more hashes. It’s not possible for anyone else to verify what these hashes actually match and where they came from. This is effectively an encryption backdoor.

                                                                            2. 3

                                                                              If I understood what I read, although the private set intersection is done on device, it’s only done for photos that are synced with iCloud Photo Library.

                                                                              1. 2

                                                                                  Apologies to all in this thread. Like many, I originally misunderstood what Apple was doing. This post was based on that misunderstanding, and now I’m not sure what to do about it. Disowning feels like the opposite of acknowledging my mistake, but now I have 8 votes based on being a dumbass 🙁

                                                                                1. 2

                                                                                  iCloud Photos are stored on Apple infrastructure.

                                                                              2. 4

                                                                                This page gets the scope of scanning wrong in the second paragraph, so I’m not sure it’s well researched.

                                                                                1. 3

                                                                                  how so? can you explain?

                                                                                  “Apple’s proposed technology works by continuously monitoring all photos stored or shared on a user’s iPhone, iPad or Mac, and notifying the authorities if a certain number of objectionable photos is detected.”

                                                                                  seems like an appropriate high-level description of what is being done, how is it wrong?

                                                                                  1. 7

                                                                                  I may be wrong but, from what I understood, a team of reviewers is notified to manually check the photos once a certain number of objectionable photos is detected, not the authorities… If (and only if) the team of reviewers agrees that the hashes match, they notify the authorities.

                                                                                    This is a detail but this introduces a manual verification before notifying the authorities, which is important.

                                                                                    From MacRumors:

                                                                                    Apple’s method works by identifying a known CSAM photo on device and then flagging it when it’s uploaded to ‌iCloud Photos‌ with an attached voucher. After a certain number of vouchers (aka flagged photos) have been uploaded to ‌iCloud Photos‌, Apple can interpret the vouchers and does a manual review. If CSAM content is found, the user account is disabled and the National Center for Missing and Exploited Children is notified.

                                                                                    Link to the resource: https://www.macrumors.com/2021/08/05/apple-csam-detection-disabled-icloud-photos/

                                                                                    1. 1

                                                                                      Second paragraph of the AP article

                                                                                    The tool designed to detect known images of child sexual abuse, called “neuralMatch,” will scan images before they are uploaded to iCloud

                                                                                      This resource from Apple also states that only images uploaded to iCloud are scanned.

                                                                                      1. 2

                                                                                      The quote you cite appears nowhere on the page.

                                                                                        1. 1

                                                                                          You replied to my comment linking to an open letter, you didn’t post a top-level comment.

                                                                                    2. 1

                                                                                      Apple’s proposed technology works by continuously monitoring photos saved or shared on the user’s iPhone, iPad, or Mac.

                                                                                      Only photos uploaded to iCloud Photos are matched against known hashes.

                                                                                  2. 4

                                                                                    Or just don’t buy an Apple device. Do you really think a trillion dollar company cares about digital signatures?

                                                                                    1. 6

                                                                                      I think this is a good statement of intent though.

                                                                                  I just bought an iPhone 12 and would otherwise be unlikely to be noticed as a lost sale until the iPhone 14 or so, since most people don’t upgrade after a single minor version.

                                                                                      Giving them warning that they have lost me as a customer because of this is a good signal for them. If they choose not to listen then that’s fine, they made a choice.

                                                                                  Also, the more noise we make as a community, the more this topic gains attention from those not in the industry.

                                                                                      1. 4

                                                                                    I didn’t mean to make some sort of “statement” to Apple. I find that idea laughable. What I meant is that if you are really concerned about your privacy to the point where scanning for illegal images is “threaten[ing] to undermine fundamental privacy protections” (which I think is reasonable), then why buy Apple in the first place? This isn’t the first time they have violated their users’ privacy, and it certainly won’t be the last.

                                                                                        1. 6

                                                                                          What’s your proposed alternative?

                                                                                      I think Apple taking a stance on privacy, often posturing about it a lot, does create a lot of goodwill, and generally those who prefer to maintain privacy have been buying their products (myself included). You can argue that it’s folly, but the alternatives are akin to growing your own vegetables on a plot of land in the middle of nowhere connected to no grid (a-la rooted Android phones with F-Droid) or Google-owned devices, which have a significantly worse privacy track record.

                                                                                          1. 3

                                                                                            You oughta update your intel about the “alternative” smartphone space. Things have come a long way from “growing your own vegetables on a plot of land in the middle of nowhere connected to no grid.” The big two user-friendly options are CalyxOS and LineageOS with microG. If you don’t feel like installing an OS yourself, the Calyx Institute, the 501(c)(3) nonprofit which develops CalyxOS, even offers the Pixel 4a with CalyxOS preinstalled for about $600.

                                                                                            I’m running LineageOS on a OnePlus 6T, and everything works, even banking apps. The experience is somewhere between “nearly identical” and “somewhat improved” relative to that of the operating system which came with the phone. I think the local optimum between privacy-friendliness and user-friendliness in the smartphone world is more obvious than ever, and iOS sure ain’t it these days.

                                                                                          2. 2

                                                                                        It does seem folly to make a statement by not buying something, but consider this: when you vote, there are myriad ways that politicians have to dilute your impact (not going to enumerate them here, but it’s easy to do). By comparison, when you make an economic choice, every dollar is counted in full, one way or another. So if you vote, and you should, then there’s every reason to vote with your pocketbook as well.

                                                                                    2. 1

                                                                                      I’m surprised that the author didn’t think that winget deserved more than a passing mention! To me it was one of the most interesting announcements.

                                                                                      1. 1

                                                                                        I don’t get the whole “We are excited to announce the release of Windows Package Manager 1.0!” when it appears to still be a preview that you need to be running Windows Insider to use unless you want to manually install it?

                                                                                      2. 3

                                                                                        I am confused how the presented scheme is anything close to tracing. The first step is

                                                                                        The plaintext that is to be traced is submitted along with RF, NF and context.

                                                                                        But NF is a 256-bit random nonce that no one other than the sender and recipient have access to. You may be able to guess a plaintext, but there’s no way you can guess that.

                                                                                        Additionally, it seems to me that if you have access to an oracle that can say if a given ciphertext is equal to some plaintext, you have broken ciphertext indistinguishability, a property that is very important to confidentiality (“Indistinguishability is an important property for maintaining the confidentiality of encrypted communications.”)
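
                                                                                        To spell out why, here’s a Go sketch of the standard indistinguishability game with such an oracle plugged in. The oracle type is an abstraction of the capability being described, not any real API:

                                                                                        ```go
                                                                                        // Package indsketch shows that a plaintext-equality oracle trivially wins
                                                                                        // the indistinguishability game: one query tells you which plaintext the
                                                                                        // challenge ciphertext encrypts.
                                                                                        package indsketch

                                                                                        // EqualityOracle reports whether ciphertext c encrypts plaintext m
                                                                                        // (the capability a tracing scheme of this shape would expose).
                                                                                        type EqualityOracle func(c, m []byte) bool

                                                                                        // Distinguish plays the IND game: c encrypts either m0 or m1, and a
                                                                                        // single oracle query reveals which, so the adversary always wins.
                                                                                        func Distinguish(c, m0, m1 []byte, oracle EqualityOracle) int {
                                                                                            if oracle(c, m0) {
                                                                                                return 0
                                                                                            }
                                                                                            return 1
                                                                                        }
                                                                                        ```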

                                                                                        1. 1

                                                                                          There would be a step where the reveal of this nonce would be compelled, similarly to how message franking implements such a step in its current form. The idea is that you can just substitute the rationale for this step from “abuse reporting” to “message tracing”.

                                                                                          1. 2

                                                                                            How is compelling the reveal of the nonce any different from compelling the reveal of the plaintext? They’re stored next to each other and the only parties that have the nonce are the same parties that have the plaintext. The difference between “abuse reporting” and “message tracing” is which party is performing the action, and that makes all the difference.

                                                                                            1. 2

                                                                                              As far as I understand, the nonce serves to validate the initial HMAC, which acts as a pre-commitment to the authenticity of the message within its original context.
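
                                                                                              A Go sketch of that pre-commitment shape as I read it (this is the general franking idea, not the exact scheme from the post; all names are assumptions):

                                                                                              ```go
                                                                                              // Package frankingsketch: the sender binds the plaintext to its context
                                                                                              // with an HMAC keyed by the nonce NF; revealing NF later lets anyone
                                                                                              // verify the commitment against the claimed plaintext and context.
                                                                                              package frankingsketch

                                                                                              import (
                                                                                                  "crypto/hmac"
                                                                                                  "crypto/sha256"
                                                                                              )

                                                                                              // Commit is computed at send time and travels with the ciphertext.
                                                                                              func Commit(nf, plaintext, context []byte) []byte {
                                                                                                  mac := hmac.New(sha256.New, nf)
                                                                                                  mac.Write(plaintext)
                                                                                                  mac.Write(context)
                                                                                                  return mac.Sum(nil)
                                                                                              }

                                                                                              // Verify runs after NF and the plaintext are revealed (or compelled).
                                                                                              func Verify(nf, plaintext, context, tag []byte) bool {
                                                                                                  return hmac.Equal(Commit(nf, plaintext, context), tag)
                                                                                              }
                                                                                              ```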

                                                                                        2. 8

                                                                                          I appreciate the intentions behind this post, but as a cursory introduction to a common problem in cryptography, I worry that this article muddies together a number of concepts, and I’m taking the time to write a correction here given how this has been upvoted to the top of Lobsters and could therefore mislead some developers.

                                                                                          This design completely lacks forward secrecy. This is the same reason that PGP encryption sucks.

                                                                                          This is just bizarre, because it strongly implies that the project whose cryptography the author is criticizing, “Zuccnet”, “completely lacks” forward secrecy because it uses RSA. But RSA is a primitive for public key encryption. Forward secrecy, on the other hand, is a property of a cryptographic protocol. Using RSA or not using RSA doesn’t have direct bearing on whether or not you obtain forward secrecy. RSA itself cannot possibly “lack” or “offer” forward secrecy, and constructing an argument based on this logic makes no sense:

                                                                                          1. Were I to replace RSA usage with AES-CBC, AES-GCM, XSalsa20-Poly1305, etc. — none of that would grant me or take away forward secrecy.
                                                                                          2. Were I to follow the author’s advice and encrypt symmetric keys using RSA, that wouldn’t grant me forward secrecy, either, if I don’t have a protocol that manages the way those keys are generated/derived, used and refreshed.
                                                                                          3. Even if I were to use an authenticated key exchange as the author later suggests, that itself doesn’t guarantee forward secrecy, either! It simply guarantees, as the name suggests, an authenticated key exchange step for the protocol.

                                                                                          I think it would be better for the author to more clearly distinguish between RSA as a primitive and the design of the protocol they are criticizing, to avoid misleading new readers. Using RSA neither grants nor precludes forward secrecy; that property comes from the surrounding protocol. The conflation with PGP further muddies the comparison and mixes together contexts that in reality aren’t closely related.

                                                                                          Some cryptography libraries let you treat RSA as a block cipher in ECB mode and encrypt each chunk independently. This is an incredibly stupid API design choice: […]

                                                                                          Calling this an “incredibly stupid design choice” doesn’t make sense to me, because the supposed “design choice” itself has been fundamentally misunderstood and is being miscommunicated. The author here is almost certainly referring to RSA constructions being named, for example, RSA/ECB/OAEPWithSHA1AndMGF1Padding. This is a naming scheme that was first promoted in Java and has been copied into a small number of other, largely Java-inspired frameworks.

                                                                                          As noted in the Java documentation and in ample references around the web, it is highly misleading to refer to how RSA encryption is used as “ECB mode”. The “ECB” here doesn’t actually mean anything — it’s just a stand-in for the absence of a real block cipher mode of operation, and was likely added so that asymmetric ciphers are named in a way that is structurally similar to symmetric block ciphers (e.g. AES/CBC/PKCS5PADDING).

                                                                                          Working around [the lack of forward secrecy] requires an Authenticated Key Exchange (AKE)

                                                                                          Some popular protocols, such as Signal or the Noise Protocol Framework, do establish some forward secrecy (and post-compromise security) via an AKE, but this doesn’t mean that an AKE is required to obtain forward secrecy. In Signal’s case, the bulk of the forward secrecy and post-compromise guarantees come not from the AKE at all but from the subsequent ratcheting mechanism; the AKE merely sets the stage for it and offers forward secrecy only for session initialization.

                                                                                          Protocols can achieve forward secrecy via periodic key rotation or other mechanisms that don’t involve an AKE, and this can be preferable depending on the use case and execution context, as in the sketch below.
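
                                                                                          A minimal sketch of that idea (illustrative only, not any particular protocol): hash the current chain key forward to derive the next chain key and a per-message key, then erase the old key. Because the derivation can’t be run backwards, compromising today’s key doesn’t reveal yesterday’s messages:

                                                                                            import * as crypto from 'crypto';
                                                                                            
                                                                                            // Derive a message key and the next chain key, then zero the old chain key.
                                                                                            function ratchet(chainKey: Buffer): { nextChainKey: Buffer; messageKey: Buffer } {
                                                                                                const messageKey = crypto.createHmac('sha256', chainKey).update('msg').digest();
                                                                                                const nextChainKey = crypto.createHmac('sha256', chainKey).update('chain').digest();
                                                                                                chainKey.fill(0); // best-effort erasure of the old key
                                                                                                return { nextChainKey, messageKey };
                                                                                            }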

                                                                                          Finally, the “Recommendations” section contains pieces of advice that all seem to conflict with one another:

                                                                                          • RSA is for encrypting symmetric keys, not entire messages. Pass it on.

                                                                                          • Consider not using RSA.

                                                                                          • Instead, if you find yourself needing to encrypt a message with RSA, remind yourself that RSA is for encrypting symmetric keys, not messages. And then plan your protocol design accordingly.

                                                                                          • You should use RSA-KEM instead of what I’ve sketched out […]

                                                                                          If you’re the party planning the protocol design, then why would you find yourself needing to encrypt a message with RSA? If it’s better not to use RSA at all, then why is the article’s subheading mentioning that “RSA is for encrypting symmetric keys”? If one were to use a KEM, why would they use an RSA-based KEM?

                                                                                          I think the article is better off just providing a simpler, more coherent recommendation that leads people away from RSA entirely. As it is, I could read this article as a new cryptography engineer and walk away with four conflicting recommendations.


                                                                                          As others have noted, this post is commendable for not shaming the developer of “Zuccnet” and for trying to raise the bar against common cryptography mistakes, so I’d like to congratulate the author on their intentions, but I wish more time had been spent on a polished execution. If folks are interested, I’d like to suggest some readings on protocol design that could serve as a more coherent reference on how to think about protocols, primitives, etc. (yes, they’re from ePrint, but they’re not harder to read than this blog post, I promise!):

                                                                                          1. 4

                                                                                            I mostly agree with you, Nadim, but I cannot think of a way to do PFS with RSA.

                                                                                            Except for mostly theoretical constructions, like having a million RSA keys and throwing away each one after use. The problem is that you cannot really hash an RSA key forward into a new key. That’s why 0-RTT PFS for TLS is so cool. But it requires puncturable encryption.

                                                                                            So, practically speaking, I would agree that using RSA encryption means you don’t get PFS.

                                                                                          2. 2
                                                                                            1. If you try to encrypt a message longer than 256 bytes with a 2048-bit RSA public key, it will fail. (Bytes matter here, not characters, even for English speakers–because emoji.)
                                                                                            2. This design completely lacks forward secrecy. This is the same reason that PGP encryption sucks.

                                                                                            Could these tradeoffs be worth it if it means the system is really simple and easy to understand?

                                                                                            1. 12

                                                                                              The first one, no. Breaking on large messages is a serious usability pain-point, and doing a hybrid public key encryption is 100% worth the additional complexity.

                                                                                              The second one, YES! If you make the threat model clear, then eliminating forward secrecy greatly simplifies your protocol. (Implementing X3DH requires an online server to hand out “one-time pre-keys” to be totally safe.) At worst, you’re as bad off as PGP encryption (except, if you follow the advice in my blog, you’re probably going to end up using an authenticated encryption construction rather than CAST5-YOLO).

                                                                                              1. 1

                                                                                                The first one, no. Breaking on large messages is a serious usability pain-point, and doing a hybrid public key encryption is 100% worth the additional complexity.

                                                                                                Isn’t it something people are quite used to though? Both SMS and tweets have a character limit.

                                                                                                But let’s say we do want to go with the simplest secure model, without forward secrecy but no character limit. So hybrid encryption but not X3DH. What library functions would the smart developer use?

                                                                                                1. 5

                                                                                                  If they’re using libsodium? crypto_box_seal() and crypto_box_seal_open(). Problem solved for them.
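
                                                                                                  For illustration, a minimal sketch of that route, assuming the libsodium-wrappers package for Node:

                                                                                                    import _sodium from 'libsodium-wrappers';
                                                                                                    
                                                                                                    async function demo(): Promise<void> {
                                                                                                        await _sodium.ready;
                                                                                                        const sodium = _sodium;
                                                                                                        const { publicKey, privateKey } = sodium.crypto_box_keypair();
                                                                                                        // Sealed boxes do the hybrid work in one call: an ephemeral X25519
                                                                                                        // key exchange plus XSalsa20-Poly1305 encryption.
                                                                                                        const sealed = sodium.crypto_box_seal(sodium.from_string('hello'), publicKey);
                                                                                                        const opened = sodium.crypto_box_seal_open(sealed, publicKey, privateKey);
                                                                                                        console.log(sodium.to_string(opened)); // 'hello'
                                                                                                    }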

                                                                                                  If they’re using OpenSSL (or one of the native wrappers), something like this:

                                                                                                  import * as crypto from 'crypto';
                                                                                                  
                                                                                                  type SealedMessage = {cipher: Buffer, tag: Buffer, wrappedKey: Buffer};
                                                                                                  const DOMAIN_SEPARATION_AES = Buffer.from('AES-256-CTR');
                                                                                                  const DOMAIN_SEPARATION_HMAC = Buffer.from('HMAC-SHA256');
                                                                                                  
                                                                                                  function hmacSha256(msg: string|Buffer, key: Buffer): Buffer {
                                                                                                      const hmac = crypto.createHmac('sha256', key);
                                                                                                      hmac.update(msg);
                                                                                                      return hmac.digest();
                                                                                                  }
                                                                                                  
                                                                                                  function seal(msg: string|Buffer, recipientPublicKey: Buffer): SealedMessage {
                                                                                                      // Generate and wrap the primary key 
                                                                                                      // (which is split into two keys: one for AES, one for HMAC)
                                                                                                      const key = crypto.randomBytes(32);
                                                                                                      const aesKey = hmacSha256(Buffer.concat([key, DOMAIN_SEPARATION_AES]), key);
                                                                                                      const macKey = hmacSha256(Buffer.concat([key, DOMAIN_SEPARATION_HMAC]), key);
                                                                                                      const rsaCiphertext = crypto.publicEncrypt(
                                                                                                          {
                                                                                                              key: recipientPublicKey,
                                                                                                              padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
                                                                                                              oaepHash: "sha256",
                                                                                                          },
                                                                                                          key
                                                                                                      );
                                                                                                      
                                                                                                      // Encrypt the data
                                                                                                      const nonce = crypto.randomBytes(16);
                                                                                                      const aes = crypto.createCipheriv('aes-256-ctr', aesKey, nonce);
                                                                                                      const ciphertext = Buffer.concat([
                                                                                                          nonce, 
                                                                                                          aes.update(Buffer.from(msg)),
                                                                                                          aes.final()
                                                                                                      ]);
                                                                                                      
                                                                                                      // Authenticate the data
                                                                                                      const tag = hmacSha256(ciphertext, macKey);
                                                                                                      
                                                                                                      return {
                                                                                                          cipher: ciphertext,
                                                                                                          tag: tag,
                                                                                                          wrappedKey: rsaCiphertext
                                                                                                      };
                                                                                                  }
                                                                                                  
                                                                                                  function unseal(sealed: SealedMessage, secretKey: Buffer): Buffer {
                                                                                                      const key = crypto.privateDecrypt(
                                                                                                          {
                                                                                                              key: secretKey,
                                                                                                              padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
                                                                                                              oaepHash: "sha256"
                                                                                                          },
                                                                                                          sealed.wrappedKey
                                                                                                      );
                                                                                                      const aesKey = hmacSha256(Buffer.concat([key, DOMAIN_SEPARATION_AES]), key);
                                                                                                      const macKey = hmacSha256(Buffer.concat([key, DOMAIN_SEPARATION_HMAC]), key);
                                                                                                      const nonce = sealed.cipher.slice(0, 16); // AES-CTR nonce size
                                                                                                      const ciphertext = sealed.cipher.slice(16);
                                                                                                      // seal() computes the tag over nonce || ciphertext, so verify over the same bytes
                                                                                                      if (!crypto.timingSafeEqual(sealed.tag, hmacSha256(sealed.cipher, macKey))) {
                                                                                                          throw new Error("Integrity check failed");
                                                                                                      }
                                                                                                      const aes = crypto.createDecipheriv('aes-256-ctr', aesKey, nonce);
                                                                                                      return Buffer.concat([aes.update(ciphertext), aes.final()]);
                                                                                                  }
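
                                                                                                  For what it’s worth, a quick hypothetical round trip with the functions above, using Node’s built-in key generation (parameters illustrative):

                                                                                                    const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
                                                                                                        modulusLength: 2048,
                                                                                                        publicKeyEncoding: { type: 'spki', format: 'pem' },
                                                                                                        privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
                                                                                                    });
                                                                                                    const sealed = seal('attack at dawn', Buffer.from(publicKey));
                                                                                                    const opened = unseal(sealed, Buffer.from(privateKey));
                                                                                                    console.log(opened.toString()); // 'attack at dawn'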
                                                                                                  

                                                                                                  (This is why “just use libsodium” is so much better.)

                                                                                                  1. 1

                                                                                                    Please consider using Pastebin for code; Lobsters renders code in a font that appears larger than comment text and doesn’t seem to fold it away properly, creating a wall of text that makes it harder to scroll through the comments.

                                                                                                    1. 1

                                                                                                      I somewhat agree, but I don’t think there’s a good pastebin that is free to use without signup and also lets posts persist. (The Reputation Problem disincentivizes such a service; it would be open to abuse.) It would be cool if Lobsters had the ability to click to expand/hide long code snippets.

                                                                                                      1. 1

                                                                                                        Definitely the best solution would be for Lobsters to fix code rendering in comments.

                                                                                                      2. 1

                                                                                                        For what it’s worth, that comment looks ok to me (Chrome on Windows).

                                                                                                2. 2

                                                                                                  If you are okay with giving up on security (e.g. for educational purposes) then it could be worth it.

                                                                                                  In practice absolutely not.

                                                                                                  1. 1

                                                                                                    “Giving up on security” is too vague, sorry. Can Eve read my messages? No? Then I think I’m pretty safe.

                                                                                                    1. 2

                                                                                                      Maybe bfiedler is referring to the second point: if Eve compromises Alice’s private key, then Eve can read past, present and future messages. My personal opinion is that protection against this should be the default for any secure messaging system.

                                                                                                3. 7

                                                                                                  This is the worst article I’ve ever seen on the front page of Lobsters. The author decides that he doesn’t like some of the more political assertions in Paul Graham’s blog writing (since, of course, any critique of the American left is “reactionary”):

                                                                                                  Recently, however, his writing has taken a reactionary turn which is hard to ignore. He’s written about the need to defend “moderates” from bullies on the “extreme left”, asserted that “the truth is to the right of the median” because “the left is culturally dominant,” and justified Coinbase’s policy to ban discussion of anything deemed “political” by saying that it “will push away some talent, yes, but not very talented talent.”

                                                                                                  …and decides to go fisking through everything Graham has ever written in order to find incorrect opinions on, of all things, programming languages, as a way to discredit him and prove some nebulous point about why Graham isn’t such a great figure to look towards. The author spends a handful of paragraphs basically bullying Graham because his pet project, a programming language called Arc, didn’t take off (except it sort of did: Hacker News is written in Arc, and that’s all beside the point anyway: Paul Graham is a venture capitalist, not a programming language designer!)

                                                                                                  The article then concludes:

                                                                                                  This is all to say that Paul Graham is an effective marketer and practitioner, but a profoundly unserious public intellectual. His attempts to grapple with the major issues of the present, especially as they intersect with his personal legacy, are so mired in intuition and incuriosity that they’re at best a distraction, and worst a real obstacle to understanding our paths forward.

                                                                                                  Like, what are we supposed to get from this? Some kind of self-congratulatory gratification at how big of a smackdown the author gave Paul Graham by setting him straight on programming languages? It’s hard to find a more obvious case of motivated reasoning. I thought people on Lobsters were smarter than to fall for this nonsense.

                                                                                                  I’m not sure how this arrived at the front page of Lobsters. This is really sordid stuff: some guy feels threatened or offended by some of Paul Graham’s political takes and has decided it’s time to discredit him through thinly disguised bullying. There’s no other substance to this poison-soaked article.

                                                                                                  Get this off the front page. Honestly.

                                                                                                  1. 6

                                                                                                    Yeah, I’m not entirely sure why it’s on here. The number of upvotes is also interesting, and a little frightening.

                                                                                                  2. 2

                                                                                                    I have a 2013 MBP and it’s definitely due for an upgrade. However, I’m going to wait until the next MBA; I hear the M1X chip is bonkers.

                                                                                                    1. 2

                                                                                                      I think the M1X is going to be intended for high-performance computers like the iMac and 16” MacBook Pro. You’ll be waiting for the M2, most likely.