This is an awfully complex solution that, IMO, doesn’t create a more trustworthy environment.
1. Like @kline already mentioned: don’t use URL shorteners. They reduce transparency and web reliability.
2. It’s not even clear what the use case is. People in low-trust environments who for some reason trust URL shorteners? See (1).
3. The whole point of URL shorteners (or if you insist, content linkers) is that they’re lossy. You’ll never know what content they contain without retrieving the content, which might be malicious.
Okay… I’ll be honest: I was expecting better comments. I don’t mean that as a jab! I’m sincerely surprised by the kneejerk reaction here.
I’ll answer point 2 first then points 1 and 3 together:
Regarding 2: The paper points out that the use case is basically anything that takes a short identifier and turns it into a longer thing with a global integrity view. That’s very much not just URL shorteners. URL shorteners are a quick useful demo, but you can apply this to all kinds of things! Mission-critical files. Documents. Text. You name it. The service gives you one short identifier, and then commits a zero-knowledge proof to a smart contract such that any person using the short identifier to retrieve the full payload gets a global authenticity guarantee.
Regarding 1 and 3: Your qualm with URL shorteners seems to be that they can redirect to malicious URLs. Again, DuckyZip isn’t just about URL shorteners, but if you want to focus on that demo use case, this is actually something that DuckyZip can help solve: before redirecting to any URL, you can obtain not only the full URL and vet it, but also a discrete zero knowledge proof that it’s the right URL to begin with.
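To make that concrete, here’s a toy sketch (in Python) of the short-identifier-to-payload idea. To be clear, this is not DuckyZip’s actual construction: the real protocol commits zero-knowledge proofs to a smart contract, whereas this sketch just publishes a plain hash commitment, and the names (register, resolve, PUBLIC_LOG) are made up for illustration.

```python
# Toy sketch only: a plain hash commitment standing in for DuckyZip's
# zero-knowledge proof + smart contract. PUBLIC_LOG, register and resolve
# are made-up names for illustration.
import hashlib
import secrets

PUBLIC_LOG = {}  # stand-in for an append-only public log / smart contract
STORAGE = {}     # the shortener's own (untrusted) payload storage

def register(payload: bytes) -> str:
    """Store a payload and publish a commitment to it under a short identifier."""
    short_id = secrets.token_urlsafe(6)
    STORAGE[short_id] = payload
    PUBLIC_LOG[short_id] = hashlib.sha256(payload).hexdigest()
    return short_id

def resolve(short_id: str) -> bytes:
    """Fetch the payload and check it against the published commitment."""
    payload = STORAGE[short_id]
    if hashlib.sha256(payload).hexdigest() != PUBLIC_LOG[short_id]:
        raise ValueError("payload does not match its public commitment")
    return payload

sid = register(b"https://example.com/some/very/long/canonical/url")
print(sid, resolve(sid))
```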
If you don’t like URL shorteners then by all means, don’t use them – DuckyZip is a low-level protocol with much broader use cases. Less knee-jerking would be appreciated.
Your qualm with URL shorteners seems to be that they can redirect to malicious URLs
The problem with URL shorteners is that they stop operating eventually, because there’s no reason to operate one. Organization-specific shorteners like https://dolp.in/ have much better longevity.
The whole point of URL shorteners (or if you insist, content linkers) is that they’re lossy. You’ll never know what content they contain without retrieving the content, which might be malicious.
And, in particular, they support updates. You can keep the stable short URL and redirect it to a new canonical URL when things move.
That you’ll never know which content they contain without retrieving/parsing/executing it is an intrinsic part of how the web treats a URL as a link, regardless of any additional runtime translation layer/virtualisation/indirection.
You have no guarantee that you will retrieve the exact same contents on the next request or from the same provider; if you share a ‘direct’ link with someone else, they will quite likely still get a different version. For this reason, my ‘link’ sharing among friends is now, more often than not, print-to-PDF first for anything that isn’t video.
Even in a world where the URL would carry all the state used as input to the content provider, you’d still fight low-level tricks like hosts mapped to round-robin DNS as well as high-level ones from other tamper-happy intermediaries – how many pages that rely on CDNs actually use SRI, etc. [1]?
As such, the shortener doesn’t fundamentally change anything - the weakest part of the link will set the bar. If anything, you could use your own shortening service layered on this to provide further guarantees. Having one sanctioned by archive.org that also syncs with the Wayback Machine and provides a signed, Firefox Pocket-style, offline-friendly version would improve things, at the expense of yet another round of copyright and adtech cries - the sweetest-tasting of tears.
[1] Kerschbaumer, Christoph (2016). Enforcing Content Security by Default within Web Browsers. 2016 IEEE Cybersecurity Development (SecDev), Boston, MA, USA, pp. 101–106. doi:10.1109/SecDev.2016.033
I believe there to be a fundamental difference between domains that may redirect users anywhere and domains that one can inspect, recognize, and vet in advance. I also consider link transparency to be a fundamental building block of the web’s trust model.
This post is a welcome change from the derogatory yelling that usually surrounds these topics: “Don’t use RSA!” — ignoring that those who are using RSA often have unfortunate constraints (legacy, etc.) or very good reasons imposed by corner cases outside of their control.
One particularly illustrative example of the hard-headedness that I’m referring to is an incredibly abrasive and elitist post from 2019, originally titled, simply, “Fuck RSA”, which focused more on signaling the authors’ doubtless impressive knowledge of RSA’s shortcomings than on recognizing that some developers using RSA aren’t blubbering fools, but are simply stuck with it for one reason or another.
The API suggested by Soatok’s post, on the other hand, is sufficiently agnostic and provides a helpful grounding for all sorts of developers who might be reading the post. This useful framework is surrounded by exactly the sort of considerations that non-specialist engineers should be primed to think about! It takes a thoughtful mind to be truly pedagogical.
This sort of anti-elitist, non-judgmental, well-written and accessible focus on providing standard engineering solutions is exactly what applied cryptography needs more of.
Another author who writes like this is Vitalik Buterin. His explainers of ZK math are always a joy to read, largely because you feel like he’s genuinely interested in explaining valuable concepts to you in a simple and honestly accessible way, and that by doing so, he solidifies his own knowledge in his mind. Here’s one example.
So much content is posted to Twitter, where it will doubtless be lost forever, or end up behind a login gate at some point as they chase the last profits from their fleeing audience when the next hip thing takes over :(
The actual content is in the Linux kernel’s git commit logs, which will certainly not be lost forever (unless, I guess, something really extreme happens).
I agree that it would make more sense to link directly to the commits. Mailing list posts are also better, but in this case the Lobste.rs headline already provides sufficient editorial context. Linking to tweets (which appears to be more and more popular) seems to compromise the visibility of the work in favor of self-promotion, a point humorously reflected by how Lobste.rs’ extract of the post is simply “Trending now”: https://imgur.com/a/Pduk7iq
I hope I’m not misunderstood — this is the latest in an array of excellent contributions and I’ve myself retweeted OP.
Are you going to post an open letter for Microsoft, Google, DropBox, Facebook, Twitter, and all the other companies who have used the exact same database for this exact purpose for the last decade?
Right. So, every other provider has direct access to your photos, and scans for CSAM with their direct access. Apple, rather than give up their E2E messaging, has devised a privacy-preserving scheme to perform these scans directly on client devices.
I really don’t understand how Apple is the bad guy here.
Other providers that scan cleartext images are off the hook, because they’ve never had an E2E privacy guarantee.
[smart guy meme]: You can’t have an encryption backdoor if you don’t have encryption.
Apple’s E2E used to be a strong guarantee, but this scanning is a hole in it. Countries that have secret courts, gag orders, and national security letters can easily demand that Apple slip in a few more hashes. It’s not possible for anyone else to verify what these hashes actually match and where they came from. This is effectively an encryption backdoor.
If I understood what I read, although the private set intersection is done on device, it’s only done for photos that are synced with iCloud Photo Library.
Apologies to all in this thread. Like many, I originally misunderstood what Apple was doing. This post was based on that misunderstanding, and now I’m not sure what to do about it. Disowning feels like the opposite of acknowledging my mistake, but now I have 8 votes based on being a dumbass 🙁
“Apple’s proposed technology works by continuously monitoring all photos stored or shared on a user’s iPhone, iPad or Mac, and notifying the authorities if a certain number of objectionable photos is detected.”
seems like an appropriate high-level description of what is being done, how is it wrong?
I may be wrong but, from what I understood, a team of reviewers is notified to manually check the photos once a certain number of objectionable photos is detected, not the authorities…
If (and only if) the team of reviewers agrees with the hash matches, they notify the authorities.
This is a detail, but it introduces a manual verification step before notifying the authorities, which is important.
From MacRumors:
Apple’s method works by identifying a known CSAM photo on device and then flagging it when it’s uploaded to iCloud Photos with an attached voucher. After a certain number of vouchers (aka flagged photos) have been uploaded to iCloud Photos, Apple can interpret the vouchers and does a manual review. If CSAM content is found, the user account is disabled and the National Center for Missing and Exploited Children is notified.
I think this is a good statement of intent though.
I just bought an iPhone 12 and would otherwise be unlikely to be noticed as a lost sale until around the iPhone 14, since most people don’t upgrade after a single version.
Giving them warning that they have lost me as a customer because of this is a good signal for them. If they choose not to listen then that’s fine, they made a choice.
Also, the more noise we make as a community, the more this topic gains attention from those not in the industry.
I didn’t mean to make some sort of “statement” to Apple. I find that idea laughable. What I meant is that if you are really concerned about your privacy to the point where scanning for illegal images is “threaten[ing] to undermine fundamental privacy protections” (which I think is reasonable), then why buy Apple in the first place? This isn’t the first time they have violated their users’ privacy, and it certainly won’t be the last.
I think Apple taking a stance on privacy, and often posturing about it a lot, does generate a lot of goodwill, and generally those who prefer to maintain privacy have been buying their products (myself included). You can argue that it’s folly, but the alternatives are akin to growing your own vegetables on a plot of land in the middle of nowhere connected to no grid (à la rooted Android phones with F-Droid) or Google-owned devices, which have a significantly worse privacy track record.
You oughta update your intel about the “alternative” smartphone space. Things have come a long way from “growing your own vegetables on a plot of land in the middle of nowhere connected to no grid.” The big two user-friendly options are CalyxOS and LineageOS with microG. If you don’t feel like installing an OS yourself, the Calyx Institute, the 501(c)(3) nonprofit which develops CalyxOS, even offers the Pixel 4a with CalyxOS preinstalled for about $600.
I’m running LineageOS on a OnePlus 6T, and everything works, even banking apps. The experience is somewhere between “nearly identical” and “somewhat improved” relative to that of the operating system which came with the phone. I think the local optimum between privacy-friendliness and user-friendliness in the smartphone world is more obvious than ever, and iOS sure ain’t it these days.
It does seem folly to make a statement by not buying something, but consider this: when you vote, there are myriad ways that politicians have to dilute your impact (not going to enumerate them here, but it’s easy to do). By comparison, when you make an economic choice, every dollar is counted in full, one way or another. So if you vote, and you should, then there’s every reason to vote with your pocketbook as well.
I don’t get the whole “We are excited to announce the release of Windows Package Manager 1.0!” when it appears to still be a preview that you need to be running Windows Insider to use, unless you want to install it manually.
I am confused how the presented scheme is anything close to tracing. The first step is
The plaintext that is to be traced is submitted along with RF, NF and context.
But NF is a 256-bit random nonce that no one other than the sender and recipient has access to. You may be able to guess a plaintext, but there’s no way you can guess that.
Additionally, it seems to me that if you have access to an oracle that can say if a given ciphertext is equal to some plaintext, you have broken ciphertext indistinguishability, a property that is very important to confidentiality (“Indistinguishability is an important property for maintaining the confidentiality of encrypted communications.”)
There would be a step where the reveal of this nonce would be compelled, similarly to how message franking implements such a step in its current form. The idea is that you can just substitute the rationale for this step from “abuse reporting” to “message tracing”.
How is compelling the reveal of the nonce any different from compelling the reveal of the plaintext? They’re stored next to each other and the only parties that have the nonce are the same parties that have the plaintext. The difference between “abuse reporting” and “message tracing” is which party is performing the action, and that makes all the difference.
As far as I understand, the nonce serves to validate the initial HMAC, which serves as a pre-commitment to the authenticity of the message within its original context.
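For anyone following along, here’s roughly what that commit/reveal step looks like, sketched in Python. This is a simplification of the scheme under discussion (RF and the surrounding protocol are left out), but it shows why a guessed plaintext proves nothing without NF, and why compelling the reveal of NF is what makes the commitment checkable by a third party.

```python
# Simplified commit/reveal sketch: the HMAC pre-commitment is keyed by the
# 256-bit nonce NF, so a third party can only check a (plaintext, context)
# pair once NF has been revealed to them.
import hashlib
import hmac
import secrets

def commit(plaintext: bytes, context: bytes, nf: bytes) -> bytes:
    """Pre-commitment sent alongside the ciphertext."""
    return hmac.new(nf, plaintext + context, hashlib.sha256).digest()

def verify_reveal(commitment: bytes, plaintext: bytes, context: bytes, nf: bytes) -> bool:
    """What a third party can check after (plaintext, context, NF) are revealed."""
    return hmac.compare_digest(commitment, commit(plaintext, context, nf))

nf = secrets.token_bytes(32)  # known only to sender and recipient
c = commit(b"hello", b"conversation:42", nf)

# Guessing the plaintext alone proves nothing without NF:
assert not verify_reveal(c, b"hello", b"conversation:42", secrets.token_bytes(32))
# Revealing NF (for abuse reporting, or compelled "tracing") makes it checkable:
assert verify_reveal(c, b"hello", b"conversation:42", nf)
```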
I appreciate the intentions behind this post, but as a cursory introduction to a common problem in cryptography, I worry that this article muddies together a number of concepts, and I’m taking the time to write a correction here given how it has been upvoted to the top of Lobsters and could therefore mislead some developers.
This design completely lacks forward secrecy. This is the same reason that PGP encryption sucks.
This is just bizarre, because it strongly implies that the project whose cryptography the author is criticizing, “Zuccnet”, “completely lacks” forward secrecy because it uses RSA. But RSA is a primitive for public key encryption. Forward secrecy, on the other hand, is a property of a cryptographic protocol. Using RSA or not using RSA doesn’t have direct bearing on whether or not you obtain forward secrecy. RSA itself cannot possibly “lack” or “offer” forward secrecy, and constructing an argument based on this logic makes no sense:
Were I to replace RSA usage with AES-CBC, AES-GCM, XSalsa20-Poly1305, etc. — none of that would grant me or take away forward secrecy.
Were I to follow the author’s advice and encrypt symmetric keys using RSA, that wouldn’t grant me forward secrecy, either, if I don’t have a protocol that manages the way those keys are generated/derived, used and refreshed.
Even if I were to use an authenticated key exchange as the author later suggests, that itself doesn’t guarantee forward secrecy, either! It simply guarantees, as the name suggests, an authenticated key exchange step for the protocol.
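To make the second of those points concrete, here is a minimal sketch in Python (using the cryptography package) of the failure mode, not of anyone’s actual design: even when every message gets a fresh AES key, wrapping those keys under a single long-term RSA key means that an attacker who records traffic and later obtains that one private key recovers every past message.

```python
# Sketch of the failure mode only: per-message AES keys wrapped under one
# static RSA key. "Record now, compromise later" recovers every past message.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

recorded = []  # what a passive attacker captures off the wire
for message in [b"message 1", b"message 2", b"message 3"]:
    key = AESGCM.generate_key(bit_length=256)    # fresh symmetric key per message
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, message, None)
    wrapped_key = public_key.encrypt(key, oaep)  # wrapped under the long-term key
    recorded.append((wrapped_key, nonce, ciphertext))

# A later compromise of the single long-term private key unwinds all of it:
for wrapped_key, nonce, ciphertext in recorded:
    key = private_key.decrypt(wrapped_key, oaep)
    print(AESGCM(key).decrypt(nonce, ciphertext, None))
```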
I think that it would be better for the author here to more clearly distinguish between RSA as a primitive and the design of the protocol they are criticizing, to avoid misleading new readers. It’s important to understand that RSA does not affect forward secrecy and vice versa. The conflation with PGP further muddies the comparison and mixes together a bunch of contexts that in reality aren’t very closely related.
Some cryptography libraries let you treat RSA as a block cipher in ECB mode and encrypt each chunk independently. This is an incredibly stupid API design choice: […]
Calling this an “incredibly stupid design choice” doesn’t make sense to me, because the supposed “design choice” itself has been fundamentally misunderstood and is being miscommunicated. The author here is almost certainly referring to RSA constructions being named, for example, RSA/ECB/OAEPWithSHA1AndMGF1Padding. This is a naming scheme that was first promoted in Java and that has found itself copied into a tiny number of other, largely Java-inspired frameworks.
As noted in the Java documentation and in ample references around the web, it is highly misleading to refer to how RSA encryption is used as “ECB mode”. The “ECB” here doesn’t actually mean anything — it’s just a stand-in for there not being a real block cipher mode of operation, and was likely added as part of the naming scheme for ciphers so that asymmetric ciphers are referred to in a way that is structurally similar to that of symmetric block ciphers (e.g. AES/CBC/PKCS5PADDING).
Working around [the lack of forward secrecy] requires an Authenticated Key Exchange (AKE)
Some popular protocols, such as Signal or the Noise Protocol Framework, do establish some forward secrecy (and post-compromise security) via an AKE, but this doesn’t mean that an AKE is required to obtain forward secrecy. In the case of Signal, the majority of the forward secrecy and post-compromise guarantees are not actually provided by the AKE at all but by the subsequent ratcheting mechanism, with the AKE only setting the stage for that and offering forward secrecy for session initialization only.
Protocols can achieve forward secrecy via periodic key rotation or other mechanisms that don’t implicate an AKE, and this could be preferable depending on the use case scenario and execution context.
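As a minimal illustration of that last point, here’s a toy symmetric ratchet in Python: each epoch’s chain key is derived one-way from the previous one and the old key is deleted, so a later compromise doesn’t expose earlier message keys. This is not Signal’s Double Ratchet, just a sketch of the key-rotation idea, and how the initial chain key gets shared is out of scope here.

```python
# Toy symmetric ratchet: forward secrecy from one-way key rotation alone.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def ratchet(chain_key: bytes):
    """Derive (next_chain_key, message_key) from the current chain key."""
    okm = HKDF(algorithm=hashes.SHA256(), length=64, salt=None,
               info=b"toy-ratchet").derive(chain_key)
    return okm[:32], okm[32:]

chain_key = os.urandom(32)  # how the parties agree on this is out of scope here
for epoch in range(3):
    chain_key, message_key = ratchet(chain_key)  # old chain key is overwritten
    print(f"epoch {epoch}: message key {message_key.hex()[:16]}...")

# Deleting old chain keys is what buys forward secrecy: compromising today's
# chain_key says nothing about the message keys of earlier epochs.
```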
Finally, the “Recommendations” section contains pieces of advice that all seem to conflict with one another:
RSA is for encrypting symmetric keys, not entire messages. Pass it on.
Consider not using RSA.
Instead, if you find yourself needing to encrypt a message with RSA, remind yourself that RSA is for encrypting symmetric keys, not messages. And then plan your protocol design accordingly.
You should use RSA-KEM instead of what I’ve sketched out […]
If you’re the party planning the protocol design, then why would you find yourself needing to encrypt a message with RSA? If it’s better not to use RSA at all, then why is the article’s subheading mentioning that “RSA is for encrypting symmetric keys”? If one were to use a KEM, why would they use an RSA-based KEM?
I think the article is better off just providing a simpler, more coherent recommendation that leads people away from RSA entirely. As it is, I could read this article as a new cryptography engineer and walk away with four conflicting recommendations.
As others have noted, this post is commendable for not shaming the developer of “Zuccnet” and for trying to raise the bar against common cryptography mistakes, so I’d like to congratulate the author on their intentions but wish more time had been spent on a polished execution. If folks are interested, I’d like to suggest some readings on protocol design that could serve as a more coherent reference on how to think about protocols, primitives, etc. (yes, they’re from ePrint, but they’re not harder to read than this blog post, I promise!):
I mostly agree with you Nadim but I cannot think of a way to do PFS with RSA.
Except for very scientific constructions like having a million RSA keys and throwing away all the used ones.
The problem is that you cannot really hash an RSA key to a new key. That’s why 0-RTT PFS for TLS is so cool. But it requires puncturable encryption.
So, practically speaking, I would agree that using RSA encryption means you don’t get PFS.
If you try to encrypt a message longer than 256 bytes with a 2048-bit RSA public key, it will fail. (Bytes matter here, not characters, even for English speakers–because emoji.)
This design completely lacks forward secrecy. This is the same reason that PGP encryption sucks.
Could these tradeoffs be worth it if it means the system is really simple and easy to understand?
The first one, no. Breaking on large messages is a serious usability pain-point, and doing a hybrid public key encryption is 100% worth the additional complexity.
The second one, YES! If you make the threat model clear, then eliminating forward secrecy greatly simplifies your protocol. (Implementing X3DH requires an online server to hand out “one-time pre-keys” to be totally safe.) At worst, you’re as bad off as PGP encryption (except, if you follow the advice in my blog, you’re probably going to end up using an authenticated encryption construction rather than CAST5-YOLO).
The first one, no. Breaking on large messages is a serious usability pain-point, and doing a hybrid public key encryption is 100% worth the additional complexity.
Isn’t it something people are quite used to though? Both SMS and tweets have a character limit.
But let’s say we do want to go with the simplest secure model, without forward secrecy but with no character limit. So hybrid encryption but not X3DH. What library functions would the smart developer use?
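One reasonable answer, sketched here with PyNaCl (the libsodium bindings): sealed boxes do the hybrid part for you, an ephemeral X25519 key exchange plus XSalsa20-Poly1305, so there’s no practical message length limit and, matching the threat model above, no forward secrecy. This is only one option; an HPKE implementation or the RSA-KEM approach mentioned in the article would fill the same role.

```python
# Minimal hybrid-encryption sketch with PyNaCl's sealed boxes: an ephemeral
# keypair is generated per message under the hood, so only the recipient's
# long-term key needs managing.
from nacl.public import PrivateKey, SealedBox

recipient_key = PrivateKey.generate()  # the recipient's long-term keypair

message = b"a message of any length, no small-ceiling limit here"
ciphertext = SealedBox(recipient_key.public_key).encrypt(message)
plaintext = SealedBox(recipient_key).decrypt(ciphertext)
assert plaintext == message
```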
Please consider using Pastebin for code; Lobsters renders code in a larger-appearing font than text in its comment section and doesn’t seem to fold it away properly, creating a wall of text that makes it harder to scroll through comments.
I somewhat agree, but I don’t think that there’s a good pastebin which is free to Lobsters without signup and also allows posts to persist. (The Reputation Problem disincentivizes such a service; it would be open to abuse.) It would be cool if Lobsters had the ability to click to expand/hide long code snippets.
Maybe bfiedler refers to the second point, meaning if Eve compromises Alice’s private key, then Eve can read past, present and future messages. My personal opinion is that this should be the default for any secure messaging system.
This is the worst article I’ve ever seen on the front page of Lobsters. The author decides that he doesn’t like some of the more political assertions in some of Paul Graham’s writings on his blog (since, of course, any critique of the American left is “reactionary”):
Recently, however, his writing has taken a reactionary turn which is hard to ignore. He’s written about the need to defend “moderates” from bullies on the “extreme left”, asserted that “the truth is to the right of the median” because “the left is culturally dominant,” and justified Coinbase’s policy to ban discussion of anything deemed “political” by saying that it “will push away some talent, yes, but not very talented talent.”
…and decides to go fisk through everything Graham has ever written in order to find incorrect opinions on programming languages of all things as a way to discredit him and to prove some nebulous point about why Graham isn’t such a great figure to look towards. The author spends a handful of paragraphs basically bullying Graham because his pet project, a programming language called Arc, didn’t take off (except it sort of did: Hacker News is written in Arc, and that’s all beside the point: Paul Graham is a venture capitalist, not a programming language designer!)
The article then concludes:
This is all to say that Paul Graham is an effective marketer and practitioner, but a profoundly unserious public intellectual. His attempts to grapple with the major issues of the present, especially as they intersect with his personal legacy, are so mired in intuition and incuriosity that they’re at best a distraction, and worst a real obstacle to understanding our paths forward.
Like, what are we supposed to get from this? Some kind of self-congratulatory gratification at how big of a smackdown the author gave Paul Graham by setting him straight on programming languages? It’s hard to find a more obvious case of motivated reasoning. I thought people on Lobsters were smarter than to fall for this nonsense.
I’m not sure how this arrived at the front page of Lobsters. This is really torrid stuff. This is some guy who feels threatened or offended by some of Paul Graham’s political takes and decided that it’s time to discredit him through thinly disguised bullying. There’s no other substance to this poison-soaked article.
Go won’t officially support Apple Silicon binary compilation until February 2021. This is pretty slow especially compared to Rust. Apple’s been giving out dev kits since June.
(Emphasis in original).
I don’t believe the dev kits were free. They required an Apple dev membership and cost $500 (possibly defrayed by a rebate on new hardware when it became available), and there wasn’t an infinite number of them.
I assume the main reason for this is the Go release cycle. It basically has a release every six months, with three months of code freeze before that. Therefore, when the DTKs were shipped, the code freeze for the August release had already happened. The next release is the upcoming one in February. The .x point releases are made just for fixing “critical issues”.
This probably also means that most of the hard work is done and the upcoming beta of Go 1.16 will support Apple Silicon.
Surely Apple and Google could agree on a bunch of dev kits so that Apple Silicon could launch with support for one of the world’s most important programming languages?
Agreed. I know that even the Nix foundation got one. I assume it is more a matter of putting it somewhere in the release schedule. The other issue is that you couldn’t really set up CI infrastructure until the M1 Macs were released.
I remember Go’s design targeting particular scale problems seen at Google, notably the need for fast compiles. To what degree are Go’s priorities still set by Google? If that’s significant, what is their business interest in compiling to ARM?
This is largely because the 20% VAT is included in the price and because the EU mandates twice the warranty period that the US does on all purchased electronics. So no, the price isn’t really that different.
No. Apple’s listed prices are more expensive in Europe as discussed above, due to higher VAT.
On top of that, over here (Europe) the advertised price almost always includes those taxes; unlike in the US where they are added at the time of purchase.
The reason I posted this is that I think these price comparisons between different currencies have no meaning. Why post a dollar amount for Europeans who can only buy in euros? If you want to compare prices, compare to something like the Big Mac index or a cost-of-living index.
Ah. That’s a pretty good point to make, and I completely agree. But I don’t think that’s clear from your original comment.
Why post a dollar amount for Europeans who can only buy in euros? If you want to compare prices, compare to something like the Big Mac index or a cost-of-living index.
For an accurate comparison, I think you’d have to compare the price to your chosen index across various US states as well.
And then, there are countries in Europe that are not part of the Euro zone yet and still have their own currencies, and that doesn’t make the situation any better.
I bought one last week and have used it for 7 days now. I was in an initial hype phase as well, but I am more critical now and doubting whether I should return it.
Performance of native apps is as great as everyone claims. But I think it is a bit overhyped, recent AMD APUs come close in multi-core performance. Of course, that the Air works with passive cooling is a nice bonus.
Rosetta works great with native x86_64 applications, but performance is abysmal with JIT-ing runtimes like the JVM. E.g. JetBrains currently do not have a native version of their IDEs (JVM, but I think they also use some other non-Java code) and their IDEs are barely usable due to slowness. If you rely on JetBrains IDEs, wait until they have an Apple Silicon version.
Also, performance of anything that relies on SIMD instructions (AVX, AVX2) is terrible under Rosetta. So, if you are doing data science or machine learning with heavier loads, you may want to wait. Some libraries can be compiled natively of course, but the problem is that there is no functioning Fortran compiler supported on Apple Silicon (outside an experimental gcc branch) and many packages in that ecosystem rely on having a Fortran compiler.
Another issue with Rosetta vs. native in development is that it is very easy to get environments where native and x86_64 binaries/libraries are mixed (e.g. when doing x86_64 development and CMake building ARM64 objects unless you set CMAKE_OSX_ARCHITECTURES=x86_64), and things do not build.
Then, Big Sur on Apple Silicon is also somewhat beta. Every time I wake up my Mac, after a couple of minutes it switches to sleep again 1-3 times (shutting off the external screen as well). When working longer, this issue disappears, but it’s annoying nonetheless.
If you haven’t ordered one, it’s best to wait a while until all the issues are ironed out, unless all you do is web browsing, e-mailing, and an occasional app from the App Store. There is currently a lot of (justified) hype around Apple Silicon, but that doesn’t mean that the ecosystem is ready yet.
Aside from this, I think there are some ethical (sorry for the lack of a better term) issues with newer Apple models. For example, Apple excluding their own services from third-party firewalls/VPNs, no extensibility (reducing the lifespan of hardware), and their slow march to a more and more closed system.
If you need a MacBook now, for whatever reason, buying one with an ARM chip does sound like the most future-proof option. The Intel ones will be the “old” ones soon, and will then be second-rate. It’s what happened with the PowerPC transition as well.
If only there were Macs with 32GB of RAM, I would have bought one, as I was in need. Due to that, I bought a 32GB 13” MacBook Pro instead. I will wait for the ARM machines to be polished before my next upgrade.
From what I read, you get way more bang for your RAM in Apple processors. It’s all integrated on the same chip so they can do a lot of black magic fuckery there.
In native applications, I am pretty sure that this works well; however, as an Erlang/Elixir developer I use third-party GCed languages and DBs that can use more RAM anyway. However, the fact that it is possible to run native iOS and iPadOS apps could save some RAM on Slack and Spotify for sure.
What I mean is, they probably swap to NAND or something, which could very likely be similar performance-wise to the RAM you’d find in an x64 laptop (since they have a proprietary connection there instead of NVMe/M.2/SATA). Plus, I imagine the “RAM” on the SoC is as fast as an x64 CPU cache. So essentially you’d have “infinite” RAM, with 16GB of it being stupid fast.
This is just me speculating btw, I might be totally wrong.
Lots of valuable insights here and I’m interested in discussing.
Performance of native apps is as great as everyone claims. But I think it is a bit overhyped, recent AMD APUs come close in multi-core performance. Of course, that the Air works with passive cooling is a nice bonus.
Sure, but the thing is that the AMD 4800U, their high-end laptop chip, runs at 45W pretty much sustained, whereas the M1 caps out at 15W. This is a very significant battery life and heat/sustained non-throttled performance difference. Also these chips don’t have GPUs or the plethora of hardware acceleration for video/media/cryptography/neural/etc. that the M1 has.
Rosetta works great with native x86_64 applications, but performance is abysmal with JIT-ing runtimes like the JVM. E.g. JetBrains currently do not have a native version of their IDEs (JVM, but I think they also use some other non-Java code) and their IDEs are barely usable due to slowness. If you rely on JetBrains IDEs, wait until they have an Apple Silicon version.
Yeah, I didn’t test anything Java. You might be right. You also mention Fortran though and I’m not sure how that matters in 2020?
Another issue with Rosetta vs. native in development is that it is very easy to get environments where native and x86_64 binaries/libraries are mixed (e.g. when doing x86_64 development and CMake building ARM64 objects unless you set CMAKE_OSX_ARCHITECTURES=x86_64), and things do not build.
This isn’t as big of a problem as it might seem based on my experience. You pass the right build flags and you’re done. It’ll vanish in time as the ecosystem adapts.
Then, Big Sur on Apple Silicon is also somewhat beta. Every time I wake up my Mac, after a couple of minutes it switches to sleep again 1-3 times (shutting off the external screen as well). When working longer, this issue disappears, but it’s annoying nonetheless.
Big Sur has been more stable for me on Apple Silicon than on Intel. 🤷
If you haven’t ordered one, it’s best to wait a while until all the issues are ironed out, unless all you do is web browsing, e-mailing, and an occasional app from the App Store. There is currently a lot of (justified) hype around Apple Silicon, but that doesn’t mean that the ecosystem is ready yet.
I strongly disagree with this. I mean, the M1 MacBook Air is beating the 16” MacBook Pro in Final Cut Pro rendering times. Xcode compilation times are twice as fast across the board. This is not at all a machine just for browsing and emailing. I think that’s flat-out wrong. It’s got performance for developers and creatives that beats machines twice as expensive and billed as made for those types of professionals.
Aside from this, I think there are some ethical (sorry for the lack of a better term) issues with newer Apple models. For example, Apple excluding their own services from third-party firewalls/VPNs, no extensibility (reducing the lifespan of hardware), and their slow march to a more and more closed system.
You also mention Fortran though and I’m not sure how that matters in 2020?
There’s really rather a lot of software written in Fortran. If you’re doing certain kinds of mathematics or engineering work, it’s likely some of the best (or, even, only) code readily available for certain work. I’m not sure it will be going away over the lifetime of one of these ARM-based notebooks.
I’m not sure it will be going away over the lifetime of one of these ARM-based notebooks.
There will be gfortran for Apple Silicon. I compiled the gcc11 branch with support and it works, but possibly still has serious bugs. I read somewhere that the problem is that gcc 11 will be released in December, so Apple Silicon support will miss that deadline and will have to wait until the next major release.
No, Numpy is written in C with Python wrappers. It can call out to a Fortran BLAS/LAPACK implementation but that doesn’t necessarily need to be Fortran, although the popular ones are. SciPy does have a decent amount of Fortran code.
Almost anyone who does any sort of scientific or engineering [in the structural/aero/whatever sense] computing! Almost all the ‘modern’ scientific computing environments (e.g. in Python) are just wrappers around long-extant C and Fortran libraries. We are among the ones that get a bit upset when people treat ‘tech’ as synonymous with internet services and ignore (or are ignorant of) the other 90% of the iceberg. But that’s not meant as a personal attack; by this point it’s a bit like sailors complaining about the sea.
Julia is exciting, as it offers the potential to change things in this regard, but there is an absolute Himalayas’ worth of existing scientific computing code, still building the modern physical world, that it would have to replace.
This is a very significant battery life and heat/sustained non-throttled performance difference.
I agree.
Also these chips don’t have GPUs or the plethora of hardware acceleration for video/media/cryptography/neural/etc. that the M1 has.
I am not sure what you mean. Modern Intel/AMD CPUs have AES instructions. AMD GPUs (including those in APUs) have acceleration for H.264/H.265 encoding/decoding, and AFAIR also VP9. Neural depends a bit on what is expected, but you could accelerate neural network training, if AMD actually bothered to support Navi GPUs and made ROCm less buggy.
That said, for machine learning, you’ll want to get a discrete NVIDIA GPU with Tensor cores anyway. It blows anything else that is purchasable out of the water.
You also mention Fortran though and I’m not sure how that matters in 2020?
A lot of the data science and machine learning infrastructure relies on Fortran directly or indirectly, such as e.g. numpy.
I strongly disagree with this. I mean, the M1 MacBook Air is beating the 16” MacBook Pro in Final Cut Pro rendering times. Xcode compilation times are twice as fast across the board. This is not at all a machine just for browsing and emailing. I think that’s flat-out wrong.
Sorry, I didn’t mean that it is not fit for development. I meant that if you are doing development (unless it’s constrained to Xcode and Apple Frameworks), it is better to wait until the dust settles in the ecosystem. I think for most developers that would be when a substantial portion of Homebrew formulae can be built and they have pre-compiled bottles for them.
Sorry, I didn’t mean that it is not fit for development. I meant that if you are doing development (unless it’s constrained to Xcode and Apple Frameworks), it is better to wait until the dust settles in the ecosystem. I think for most developers that would be when a substantial portion of Homebrew formulae can be built and they have pre-compiled bottles for them.
My instinct here goes in the opposite direction. If we know Apple Silicon has tons of untapped potential, we should be getting more developers jumping on that wagon, especially while the Homebrew etc. toolchains aren’t ready yet, so that there’s acceleration towards readying all the toolchains quickly! That’s the only way we’ll get anywhere.
Well, I need my machine for work, so these issues just distract. If I am going to spend a significant chunk of time, I’d rather spend it on an open ecosystem than on doing free work for Apple ;).
Sure, but the thing is that the AMD 4800U, their high-end laptop chip, runs at 45W pretty much sustained, whereas the M1 caps out at 15W. This is a very significant battery life and heat/sustained non-throttled performance difference. Also these chips don’t have GPUs or the plethora of hardware acceleration for video/media/cryptography/neural/etc. that the M1 has.
Like all modern laptop chips, you can set the thermal envelope for your AMD 4800U in the firmware of your design. The 4800U is designed to target 15W by default - 45W is the max boost, foot-to-the-floor, damn-the-horses power draw. Also, the 4800U does have a GPU… an 8-core Vega design, IIRC.
Apple is doing exactly the same with their chips - the accounts I’ve read suggest that the power cost required to extract more performance out of them is steep and, since the performance is completely acceptable at 15W, Apple limits the clocks to match that power draw.
The M1 is faster than the 4800U at 15W of course, but the 4800U is a Zen2 based CPU - I’d imagine that the Zen3 based laptop APUs from AMD will be out very soon & I would expect those to be performance competitive with Apple’s silicon. (I’d expect to see those officially launched at CES in January in fact, but we’ll have to wait and see when you can actually buy a device off the shelf.)
You say that you returned and ordered a ThinkPad, how has that decision turned out? Which ThinkPad did you purchase? How is the experience comparatively?
I bought a Thinkpad T14 AMD. So far, the experience is pretty good.
Pros:
I really like the keyboard much more than that of the MacBook (butterfly or post-butterfly scissors).
It’s nice to have many more ports than 2 or 4 USB-C plus a stereo jack. I can go places without carrying a bunch of adapters.
I like the trackpoint, it’s nice for keeping your fingers on the home row and doing some quick pointing between typing.
Even though it’s not aluminum, I do like the build.
On Windows, battery time is great, somewhere around 10-12 hours in light use. I didn’t test/optimize Linux extensively, but it seems to be ~8 hours in light use.
Performance is good. Single core performance is of course worse than the M1, but having 8 high performance cores plus hyperthreading compensates a lot, especially for development.
Even though it has fans, they are not very loud, even when running at full speed.
The GPU is powerful enough for lightweight gaming. E.g., I played some New Super Lucky’s Tale with our daughter and it works without a hitch.
Cons:
The speakers are definitely worse than any modern MacBook.
Suspend/resume continues to have issues on Linux:
Sometimes, the screen does not wake up. Especially after plugging or unplugging a DisplayPort alt-mode USB-C cable. Usually moving the TrackPoint fixes this.
Every few resumes, the TrackPad and the left button of the TrackPoint do not work anymore. It seems (I didn’t investigate further) that libinput believes a button is constantly held, because it is not possible to click windows anymore to activate them. So far, I have only been able to reset this state by switching off the machine (sometimes rebooting does not bring back the TrackPoint).
So far no problems at all with suspend/resume on Windows.
The 1080p screen works best with 125 or 150% scaling (100% is fairly small). Enabling fractional scaling in GNOME 3 works. However, many X11/XWayland applications react badly to fractional scaling, becoming very blurry, even on a 200%-scaled external screen. In this department, too, there are no problems with Windows; fractional scaling works fine there.
The fingerprint scanner works in Linux, but it results in many more false negatives than in Windows.
tl;dr: a great experience on Windows, acceptable on Linux if you are willing to reboot every few resumes and can put up with the issues around fractional scaling.
I have decided to run Windows 10 on it for now and use WSL with Nix + home-manager. (I always have my Ryzen NixOS workstation for heavy lifting.)
Background: I have used Linux since 1994, macOS from 2007 until 2020, and Windows only briefly (3.1, NT 4.0, and Windows 2000).
Every time I wake up my Mac, after a couple of minutes it switches to sleep again 1-3 times (shutting off the external screen as well).
Sleep seems to be broken on the latest macOS versions: every third time I close the lid of my 2019 Mac, I open it later only to see that it has restarted because of an error.
I think Lea said it very well. Not really interested in hearing you whining about being mistreated after the credible accusations of sexual assault and harassment. Flag as off-topic because this isn’t technical, and it’s not “culture” either, it’s just you @nadim
I received an email a few minutes ago notifying me that you had tagged me in this comment.
I don’t think we’ve ever met, but I see that you’re one of Matt Green’s PhD students and are thus active in the field, and I wanted to respond to your comment which appears to imply that I deserve the treatment described in my post because I have, according to you, likely committed serious crimes which people have accused me of on Twitter.
I have two things to say:
I understand that you think that my blog post is “whining”. I would disagree. I think that, as an aspiring academic, you should recognize that using someone’s work and soliciting for their feedback over a period of an entire week, in over a hundred messages and in two conference calls, while promising them citations that fail to materialize, isn’t exactly something I would describe as whining; it’s actually calling out plagiarism. And pointing out plagiarists does seem to be in the community interest, especially when they (or their students) resort to ad-hominem attacks in response to the calls for proper citation. Your own thesis advisor, Matt Green, is a co-author with Lea Kissner on the Zoom paper, and so I would also wonder whether there is a conflict of interest materializing when one of his students appears to further insinuate that I have committed crimes when I point out the act of plagiarism and supplement it with evidence. If what you’re saying here is that some people deserve to be plagiarized because of Twitter rumors about them, well, that’s not something I can really come to grips with.
If you are interested in what I have to say regarding the tweets that you’re referring to, I wrote a detailed response here that you can read if you wish. In that response, I address the tweets in detail.
I don’t mind you flagging the post or not wanting to read it, but I would appreciate it if you could please consider the points I make above and try to understand why they could make your comment appear unkind at best. Thank you for reading, and all the best to you.
Having tried one of the earliest versions of Verifpal last year, as a beginner to those tools, going from Tamarin to Verifpal was going from “unusable” to “easy”. Analysis speed was a big part of that, but so was the syntax and semantics of the protocol description language.
Still haven’t gotten around to it, but I consider Verifpal a mandatory gateway to version 1.0 of Monokex (a Noise ripoff I’m working on that Nadim audited), or even any new protocol I dare invent.
Verifpal provides formal methods to the masses. We had TLA+ in a similar vein before for concurrent programs. I can’t wait to see other domains have similar tools.
Thanks very much Loup, looking forward to checking out your coming work.
I think that especially when I see comments like this, I’m glad that I wrote this post. The second part explains clearly the differences in what Verifpal can guarantee versus a more comprehensive tool like Tamarin, and outlines that the ease of use does come at a slight cost in rigor and complete proofyness (I wonder if I just coined that term).
Unfortunately, I haven’t been able to load the site successfully in either Chromium (v 84.0) or Firefox (79.0) on Linux, even after accepting the self-signed cert.
I would definitely be interested to know what approaches you took in implementing the AI.
This is an awfully complex solution that, IMO, doesn’t create a more trustworthy environment.
Okay… I’ll be honest: I was expecting better comments. I don’t mean that as a jab! I’m sincerely surprised by the kneejerk reaction here.
I’ll answer point 2 first then points 1 and 3 together:
Regarding 2: The paper points out that the use case is basically anything that takes a short identifier and turns it into a longer thing with a global integrity view. That’s very much not just URL shorteners. URL shorteners are a quick useful demo, but you can apply this to all kinds of things! Mission-critical files. Documents. Text. You name it. The service gives you one short identifier, and then commits a zero-knowledge proof to a smart contract such that any person using the short identifier to retrieve the full payload gets a global authenticity guarantee.
Regarding 1 and 3: Your qualm with URL shorteners seems to be that they can redirect to malicious URLs. Again, DuckyZip isn’t just about URL shorteners, but if you want to focus on that demo use case, this is actually something that DuckyZip can help solve: before redirecting to any URL, you can obtain not only the full URL and vet it, but also a discrete zero knowledge proof that it’s the right URL to begin with.
If you don’t like URL shorteners then by all means, don’t use them – DuckyZip is a low-level protocol with much broader use cases. Less knee-jerking would be appreciated.
More useful examples would be appreciated.
The problem with URL shorteners is that they stop operating eventually, because there’s no reason to operate one. Organization-specific shorteners like https://dolp.in/ have much better longevity.
And, in particular, they support updates. You can keep the stable short URL and redirect it to a new canonical URL when things move.
That you’ll never know which content they contain without retrieving/parsing/executing is an intrinsic part of how the web treats a URL as a link regardless of another runtime translation layer/virtualisation/indirection.
You have no guarantees that you will retrieve same exact contents the next request or from the same provider, if you share a ‘direct’ link to someone else they will quite likely still get a different version. My ‘link’ sharing among friends is more often than not print to PDF first for anything not video now for this reason.
Even in a world where the URL would carry all state used as input to the content provider, you’d still fight low level tricks like hosts mapped to round robin DNS as well as high level ones from other tamper-happy intermediates – how many pages that relies on CDNs actually use SRI etc[1]?
As such the shortener doesn’t fundamentally change anything - the weakest part of the link will set the bar. If anything, you could use your own shortening service layered on this to provide further guarantees. If anything having one sanctioned by archive.org that >also< syncs wayback machine >and< provides a signed Firefox Pocket style offline friendly version would improve things at the expensive of yet another round of copyright and adtech cries - the sweetest-tasting of tears.
[1] Kerschbaumer, Christoph (2016). [IEEE 2016 IEEE Cybersecurity Development (SecDev) - Boston, MA, USA (2016.11.3-2016.11.4)] 2016 IEEE Cybersecurity Development (SecDev) - Enforcing Content Security by Default within Web Browsers. , (), 101–106. doi:10.1109/SecDev.2016.033
I believe there to be a fundamental difference between domains that may redirect users anywhere and domains that one can inspect, recognize, and vet in advance. I also consider link transparency to be a fundamental building block of the web’s trust model.
What is it with cute animal drawings and exceptionally accessible and pedagogical cryptography explainers?!
I guess cute drawings tend to help comprehension?
Here’s another example of using an animal persona to illustrate one’s points.
This post is a welcome change from the derogatory yelling that usually surrounds these topics: “Don’t use RSA!” — often ignoring that those who are using RSA often have unfortunate constraints (legacy, etc.) or very good reasons imposed by corner cases outside of their control.
One particularly illustrative example of the hard-headedness that I’m referring to is an incredibly abrasive and elitist post from 2019, which was originally titled, simply, “Fuck RSA” and which focused more on just signaling the authors’ doubtlessly impressive knowledge of RSA’s shortcomings instead of recognizing that some developers using RSA aren’t blubbering fools, but are simply stuck with it for some reason or another.
The API suggested suggested by Soatok’s post, on the other hand, is sufficiently agnostic and provides a helpful grounder for all sorts of developers who could be reading the post. This useful framework is surrounded by exactly the sort of considerations that non-specialist engineers should be primed to think about! It takes a thoughtful mind to be truly pedagogical.
This sort of anti-elitist, non-judgmental, well-written and accessible focus on providing standard engineering solutions is exactly what applied cryptography needs more of.
Another author who writes like this is Vitalik Buterin. His explainers of ZK math are always a joy to read, largely because you feel like he’s genuinely interested in explaining valuable concepts to you in a simple and honestly accessible way, and that by doing so, he solidifies his own knowledge in his mind. Here’s one example.
It’s linked in the first sentence..
So much content posted to twitter where it will doubtless be lost forever, or behind a login gate at some point as they chase the last profits from their fleeing audience when the next hip thing takes over :(
Edit for usefulness so I’m not part of the problem: The post linked from the tweet.
The actual content is in the Linux kernel’s git commit logs, which will certainly not be lost forever (unless, I guess, something really extreme happens).
https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git/log/drivers/char/random.c
Great work.
I agree that it would make more sense to link directly to the commits. Mailing list posts are also better, but in this case the Lobste.rs headline already provides the sufficient editorial context. Linking to tweets (which appears to be more and more popular) seems to compromise the visibility of the work in favor of self-promotion, a point humorously reflected by how Lobste.rs’ extract of the post is simply “Trending now”: https://imgur.com/a/Pduk7iq
I hope I’m not misunderstood — this is the latest in an array of excellent contributions and I’ve myself retweeted OP.
Please consider signing the open letter against these changes: https://appleprivacyletter.com/
Are you going to post an open letter for Microsoft, Google, DropBox, Facebook, Twitter, and all the other companies who have used the exact same database for this exact purpose for the last decade?
Which provider has previously used this list against images that aren’t stored on their infrastructure?
Images sent via iMessage are stored on Apple’s infrastructure.
I think the question had implied “stored in plain text”. iMessage doesn’t do that.
Right. So, every other provider has direct access to your photos, and scans for CSAM with their direct access. Apple, rather than give up their E2E messaging, has devised a privacy-preserving scheme to perform these scans directly on client devices.
I really don’t understand how Apple is the bad guy here.
Other providers that scan cleartext images are off the hook, because they’ve never had E2E privacy guarantee.
[smart guy meme]: You can’t have encryption backdoor if you don’t have encryption.
Apple’s E2E used to be a strong guarantee, but this scanning is a hole in it. Countries that have secret courts, gag orders, and national security letters can easily demand that Apple slip in a few more hashes. It’s not possible for anyone else to verify what these hashes actually match and where they came from. This is effectively an encryption backdoor.
If I understood what I read, although the private set intersection is done on device, it’s only done for photos that are synced with iCloud Photo Library.
Apologies to all in this thread. Like many I originally misunderstood what Apple was doing. This post was based on that misunderstanding, and now I’m not sure what to do about it. Disowning feels like the opposite of acknowledging my mistake, but now I have 8 voted based on being a dumbass 🙁
iCloud Photos are stored on Apple infrastructure.
This page gets the scope of scanning wrong in the second paragraph, so I’m not sure it’s well researched.
how so? can you explain?
“Apple’s proposed technology works by continuously monitoring all photos stored or shared on a user’s iPhone, iPad or Mac, and notifying the authorities if a certain number of objectionable photos is detected.”
seems like an appropriate high-level description of what is being done, how is it wrong?
I may be wrong but, from what I understood, a team of reviewers is notified to check manually the photos once a certain number of objectionable photos is detected, not the authorities… If (and only if) the team of reviewers agrees with the hashes matches, they notify the authorities.
This is a detail but this introduces a manual verification before notifying the authorities, which is important.
From MacRumors:
Link to the resource: https://www.macrumors.com/2021/08/05/apple-csam-detection-disabled-icloud-photos/
Second paragraph of the AP article
This resource from Apple also states that only images uploaded to iCloud are scanned.
This quote you cite figures nowhere within the page.
You replied to my comment linking to an open letter, you didn’t post a top-level comment.
Only photos uploaded to iCloud Photos are matched against known hashes.
Or just don’t buy an Apple device. Do you really think a trillion dollar company cares about digital signatures?
I think this is a good statement of intent though.
I just bought an iPhone 12 and would be otherwise unlikely to be noticed as a lost sale until the iPhone 14~ since most people don’t upgrade a single minor version.
Giving them warning that they have lost me as a customer because of this is a good signal for them. If they choose not to listen then that’s fine, they made a choice.
Also the more noise we make as a community; the more this topic gains attention from those not in the industry.
I didn’t mean to make some sort of “statement” to Apple. I find that idea laughable. What I meant is that if you are really concerned about your privacy to the point where scanning for illegal images is “threaten[ing] to undermine fundamental privacy protections” (which I think is reasonable), then why buy Apple in the first place? This isn’t the first time they have violated their users’ privacy, and it certainly wont be the last.
What’s your proposed alternative?
I think Apple taking a stance on privacy, and posturing about it a lot, does generate a lot of goodwill, and generally those who prefer to maintain privacy have been buying their products (myself included). You can argue that it’s folly, but the alternatives are akin to growing your own vegetables on a plot of land in the middle of nowhere connected to no grid (à la rooted Android phones with F-Droid) or Google-owned devices, which have a significantly worse privacy track record.
You oughta update your intel about the “alternative” smartphone space. Things have come a long way from “growing your own vegetables on a plot of land in the middle of nowhere connected to no grid.” The big two user-friendly options are CalyxOS and LineageOS with microG. If you don’t feel like installing an OS yourself, the Calyx Institute, the 501(c)(3) nonprofit which develops CalyxOS, even offers the Pixel 4a with CalyxOS preinstalled for about $600.
I’m running LineageOS on a OnePlus 6T, and everything works, even banking apps. The experience is somewhere between “nearly identical” and “somewhat improved” relative to that of the operating system which came with the phone. I think the local optimum between privacy-friendliness and user-friendliness in the smartphone world is more obvious than ever, and iOS sure ain’t it these days.
It does seem folly to make a statement by not buying something, but consider this: when you vote, there are myriad ways politicians can dilute your impact (not going to enumerate them here, but it’s easy to do). By comparison, when you make an economic choice, every dollar is counted in full, one way or another. So if you vote, and you should, then there’s every reason to vote with your pocketbook as well.
I’m surprised that the author didn’t think that winget deserved more than a passing mention! To me it was one of the most interesting announcements.
I don’t get the whole “We are excited to announce the release of Windows Package Manager 1.0!” when it appears to still be a preview that you need to be running Windows Insider to use, unless you want to install it manually.
I am confused how the presented scheme is anything close to tracing. The first step is
But NF is a 256-bit random nonce that no one other than the sender and recipient has access to. You may be able to guess a plaintext, but there’s no way you can guess that.
Additionally, it seems to me that if you have access to an oracle that can say if a given ciphertext is equal to some plaintext, you have broken ciphertext indistinguishability, a property that is very important to confidentiality (“Indistinguishability is an important property for maintaining the confidentiality of encrypted communications.”)
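To make that concrete, here is a minimal sketch in Python of why such an oracle breaks indistinguishability. The `encrypt` and `plaintext_equality_oracle` callables are hypothetical stand-ins for the scheme under discussion, not anything from the actual proposal:

```python
import secrets

def ind_game(encrypt, plaintext_equality_oracle, m0: bytes, m1: bytes) -> bool:
    """Toy indistinguishability game: the challenger encrypts one of two messages,
    and the adversary must guess which. With a plaintext-equality oracle, the
    guess is trivial."""
    b = secrets.randbelow(2)
    challenge = encrypt(m0 if b == 0 else m1)
    # Adversary's whole strategy: ask the oracle whether the challenge matches m0.
    guess = 0 if plaintext_equality_oracle(challenge, m0) else 1
    return guess == b  # always True, i.e. indistinguishability is broken
```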
There would be a step where the reveal of this nonce would be compelled, similarly to how message franking implements such a step in its current form. The idea is that you can just substitute the rationale for this step from “abuse reporting” to “message tracing”.
How is compelling the reveal of the nonce any different from compelling the reveal of the plaintext? They’re stored next to each other and the only parties that have the nonce are the same parties that have the plaintext. The difference between “abuse reporting” and “message tracing” is which party is performing the action, and that makes all the difference.
As far as I understand, the nonce serves to validate the initial HMAC, which serves as a pre-commitment to the authenticity of the message within its original context.
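As a rough sketch of that idea (this is the general franking pattern, not the exact scheme from the paper; the function names and 256-bit nonce size are my own assumptions), the commitment and its later verification might look like this in Python:

```python
import hashlib
import hmac
import secrets

def frank(plaintext: bytes) -> tuple[bytes, bytes]:
    """Sender side: pick a fresh nonce NF and commit to the plaintext with it.
    The tag travels alongside the ciphertext; NF stays with sender and recipient."""
    nf = secrets.token_bytes(32)  # 256-bit random nonce
    tag = hmac.new(nf, plaintext, hashlib.sha256).digest()
    return nf, tag

def verify_franking(nf: bytes, plaintext: bytes, tag: bytes) -> bool:
    """Third-party side: once NF is revealed (e.g. in an abuse report, or under
    compulsion), anyone can check that this plaintext is the one committed to."""
    expected = hmac.new(nf, plaintext, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```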
I appreciate the intentions behind this post, but as a cursory introduction to a common problem in cryptography, I worry that this article muddies together a number of concepts, and I’m taking the time to write a correction here given how it has been upvoted to the top of Lobsters and could therefore mislead some developers.
This is just bizarre, because it strongly implies that the project whose cryptography the author is criticizing, “Zuccnet”, “completely lacks” forward secrecy because it uses RSA. But RSA is a primitive for public key encryption. Forward secrecy, on the other hand, is a property of a cryptographic protocol. Using RSA or not using RSA doesn’t have direct bearing on whether or not you obtain forward secrecy. RSA itself cannot possibly “lack” or “offer” forward secrecy, and constructing an argument based on this logic makes no sense:
I think it would be better for the author to more clearly distinguish between RSA as a primitive and the design of the protocol they are criticizing, to avoid misleading new readers. It’s important to understand that the choice of RSA and the presence (or absence) of forward secrecy are orthogonal concerns. The conflation with PGP further muddies the comparison and mixes together a bunch of contexts that in reality aren’t very closely related.
Calling this an “incredibly stupid design choice” doesn’t make sense to me, because the supposed “design choice” itself has been fundamentally misunderstood and is being miscommunicated. The author is almost certainly referring to RSA constructions being named, for example,
RSA/ECB/OAEPWithSHA1AndMGF1Padding
. This is a naming scheme that was first promoted in Java and has since been copied into a small number of other, largely Java-inspired frameworks. As noted in the Java documentation and in ample references around the web, it is highly misleading to refer to how RSA encryption is used as “ECB mode”. The “ECB” here doesn’t actually mean anything — it’s just a stand-in for there not being a real block cipher mode of operation, and was likely added as part of the naming scheme for ciphers so that asymmetric ciphers are referred to in a way that is structurally similar to symmetric block ciphers (e.g.
AES/CBC/PKCS5PADDING
). Some popular protocols, such as Signal or the Noise Protocol Framework, do establish forward secrecy (and post-compromise security) via an AKE, but this doesn’t mean that an AKE is required to obtain forward secrecy. In the case of Signal, the majority of the forward secrecy and post-compromise guarantees are actually not provided by the AKE at all but by the subsequent ratcheting mechanism; the AKE only sets the stage for that and offers forward secrecy for session initialization.
Protocols can achieve forward secrecy via periodic key rotation or other mechanisms that don’t implicate an AKE, and this could be preferable depending on the use case scenario and execution context.
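As a bare-bones illustration of that point, here is a minimal symmetric ratchet sketch in Python. This is only a toy: real protocols like Signal use HMAC-based KDF chains rather than plain hashes, and the domain-separation bytes here are my own choice:

```python
import hashlib

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-off message key and the next chain key from the current one.
    Deleting each old chain key after the step is what buys forward secrecy:
    compromising today's state reveals nothing about already-deleted keys."""
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain_key = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain_key
```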
Finally, the “Recommendations” section contains pieces of advice that all seem to conflict with one another:
If you’re the party planning the protocol design, then why would you find yourself needing to encrypt a message with RSA? If it’s better not to use RSA at all, then why is the article’s subheading mentioning that “RSA is for encrypting symmetric keys”? If one were to use a KEM, why would they use an RSA-based KEM?
I think the article is better off just providing a simpler, more coherent recommendation that leads people away from RSA entirely. As it is, I could read this article as a new cryptography engineer and walk away with four conflicting recommendations.
As others have noted, this post is commendable for not shaming the developer of “Zuccnet” and for trying to raise the bar against common cryptography mistakes, so I’d like to congratulate the author on their intentions, but I wish more time had been spent on a polished execution. If folks are interested, I’d like to suggest some readings on protocol design that could serve as a more coherent reference on how to think about protocols, primitives, etc. (yes, they’re from ePrint, but they’re not harder to read than this blog post, I promise!):
I mostly agree with you Nadim but I cannot think of a way to do PFS with RSA.
Except for very scientific constructions like having a million RSA keys and throwing away all the used ones. The problem is that you cannot really hash an RSA key to a new key. That’s why 0-RTT PFS for TLS is so cool. But it requires puncturable encryption.
So, practically speaking, I would agree that using RSA encryption means you don’t get PFS.
Could these tradeoffs be worth it if it means the system is really simple and easy to understand?
The first one, no. Breaking on large messages is a serious usability pain point, and doing hybrid public-key encryption is 100% worth the additional complexity.
The second one, YES! If you make the threat model clear, then eliminating forward secrecy greatly simplifies your protocol. (Implementing X3DH requires an online server to hand out “one-time pre-keys” to be totally safe.) At worst, you’re as bad off as PGP encryption (except, if you follow the advice in my blog, you’re probably going to end up using an authenticated encryption construction rather than CAST5-YOLO).
Isn’t it something people are quite used to though? Both SMS and tweets have a character limit.
But let’s say we do want to go with the simplest secure model: no forward secrecy, but also no character limit. So hybrid encryption, but not X3DH. What library functions would the smart developer use?
If they’re using libsodium?
crypto_box_seal()
and crypto_box_seal_open()
. Problem solved for them. If they’re using OpenSSL (or one of the native wrappers), something like this:
(This is why “just use libsodium” is so much better.)
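For reference, here is roughly what the libsodium route looks like from Python via the PyNaCl bindings. A minimal sketch, not production code; the message and variable names are just for illustration:

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox

recipient = PrivateKey.generate()

# Sender only needs the recipient's public key; the sealed box does the
# hybrid encryption (ephemeral key + symmetric cipher) under the hood.
ciphertext = SealedBox(recipient.public_key).encrypt(b"a message of arbitrary length")

# Recipient decrypts with their private key.
assert SealedBox(recipient).decrypt(ciphertext) == b"a message of arbitrary length"
```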
Please consider using Pastebin for code; Lobsters renders code in a larger-appearing font than text in its comment section and doesn’t seem to fold it away properly, creating a wall of text that makes it harder to scroll through comments.
I somewhat agree, but I don’t think there’s a good pastebin that’s free for Lobsters to use without signup and also allows posts to persist. (The Reputation Problem disincentivizes such a service; it would be open to abuse.) It would be cool if Lobsters had the ability to click to expand/hide long code snippets.
Definitely the best solution would be for Lobsters to fix code rendering in comments.
We have an issue tracking this if anyone wants to pick up the work
For what it’s worth, that comment looks ok to me (Chrome on Windows).
If you are okay with giving up on security (e.g. for educational purposes) then it could be worth it.
In practice absolutely not.
Giving up on security is too vague, sorry. Can Eve read my messages? No? Then I think I’m pretty safe.
Maybe bfiedler refers to the second point, meaning that if Eve compromises Alice’s private key, then Eve can read past, present and future messages. My personal opinion is that this should be the default for any secure messaging system.
This is the worst article I’ve ever seen on the front page of Lobsters. The author decides that he doesn’t like some of the more political assertions in some of Paul Graham’s writings on his blog (since, of course, any critique of the American left is “reactionary”):
…and decides to go fisk through everything Graham has ever written in order to find incorrect opinions on programming languages of all things as a way to discredit him and to prove some nebulous point about why Graham isn’t such a great figure to look towards. The author spends a handful of paragraphs basically bullying Graham because his pet project, a programming language called Arc, didn’t take off (except it sort of did: Hacker News is written in Arc, and that’s all beside the point: Paul Graham is a venture capitalist, not a programming language designer!)
The article then concludes:
Like, what are we supposed to get from this? Some kind of self-congratulatory gratification at how big of a smackdown the author gave Paul Graham by setting him straight on programming languages? It’s hard to find a more obvious case of motivated reasoning. I thought people on Lobsters were smarter than to fall for this nonsense.
I’m not sure how this arrived at the front page of Lobsters. This is really torrid stuff. This is some guy who feels threatened or offended by some of Paul Graham’s political takes and decided that it’s time to discredit him through thinly disguised bullying. There’s no other substance to this poison-soaked article.
Get this off the front page. Honestly.
Yeah, I’m not entirely sure why it’s on here. The number of upvotes is also interesting, and a little frightening.
I have a 2013 mbp and it’s definitely due for an upgrade. However I’m going to wait until the next MBA, I hear the M1X chip is bonkers
I think M1X is going to be intended for high performance computers like the iMac and 16” MacBook Pro. You’ll be waiting for the M2 most likely.
From the “cons” section:
(Emphasis in original).
I don’t believe the dev kits were free. They required an Apple dev membership and cost $500 (possibly defrayed by a rebate on new hardware when it became available), and there wasn’t an unlimited number of them.
I assume the main reason for this is the Go release cycle. It basically has a release every six months, with three months of code freeze before each one. Therefore, when the DTKs were shipped, the code freeze for the August release had already happened. The next release is the upcoming one in February. The .x point releases are made just for fixing “critical issues”.
This probably also means that most of the hard work is done and the upcoming beta of Go 1.16 will support Apple Silicon.
Most of the work has been done. You can grab tip and run that rather successfully right now.
Surely Apple and Google could agree on a bunch of dev kits so that Apple Silicon could launch with support for one of the world’s most important programming languages?
Agreed. I know that even the Nix foundation got one. I assume it is more a matter of putting it somewhere in the release schedule. The other issue is that you couldn’t really set up CI infrastructure until the M1 Macs were released.
I remember Go’s design targeting particular scale problems seen at Google, notably the need for fast compiles. To what degree are Go’s priorities still set by Google? If that’s significant, what is their business interest in compiling to ARM?
Or $1340 for Europeans.
This is largely because the 20% VAT is included in the price and because the EU mandates twice the mandatory warranty of the US on all purchased electronics. So no, the price isn’t really that different.
Thanks for the reply! So Americans actually pay $1100 for what they call a $1000 product.
Still a difference of $240.
(BTW, this is not meant as negative criticism of your review – I actually like it a lot)
Not in all states. When I was in Oregon (not sure if this is still true), they didn’t have sales tax.
Still true. No state wide sales tax in Oregon.
Or $2430 for Brazilians :) (I’m actually crying)
You mean 835 EUR right?
No. Apple’s listed prices are more expensive in Europe as discussed above, due to higher VAT.
On top of that, over here (Europe) the advertised price almost always includes those taxes, unlike in the US, where they are added at the time of purchase.
The reason I posted this is that I think these price comparisons between different currencies have no meaning. Why post a dollar amount for Europeans who can only buy in euros? If you want to compare prices, compare against something like the Big Mac index or a cost-of-living index.
Ah. That’s a pretty good point to make, and I completely agree. But I don’t think that’s clear from your original comment.
For an accurate comparison, I think you’d have to compare the price to your chosen index across various US states as well.
And then there are countries in Europe that are not part of the Eurozone yet and still have their own currencies, which doesn’t make the situation any better.
I bought one last week and have used it for 7 days now. I was in an initial hype phase as well, but I am more critical now and doubting whether I should return it.
Performance of native apps is as great as everyone claims, but I think it is a bit overhyped; recent AMD APUs come close in multi-core performance. Of course, the fact that the Air works with passive cooling is a nice bonus.
Rosetta works great with native x86_64 applications, but performance is abysmal with JIT-ing runtimes like the JVM. E.g. JetBrains currently do not have a native version of their IDEs (JVM, but I think they also use some other non-Java code) and their IDEs are barely usable due to slowness. If you rely on JetBrains IDEs, wait until they have an Apple Silicon version.
Also, performance of anything that relies on SIMD instructions (AVX, AVX2) is terrible under Rosetta. So, if you are doing data science or machine learning with heavier loads, you may want to wait. Some libraries can be compiled natively of course, but the problem is that there is no functioning Fortran compiler supported on Apple Silicon (outside an experimental gcc branch) and many packages in that ecosystem rely on having a Fortran compiler.
Another issue with Rosetta vs. native development is that it is very easy to end up with environments where native and x86_64 binaries/libraries are mixed (e.g. when doing x86_64 development, CMake will build ARM64 objects unless you set
CMAKE_OSX_ARCHITECTURES=x86_64
), and things do not build. Then, Big Sur on Apple Silicon is also somewhat beta. Every time I wake up my Mac, after a couple of minutes it switches to sleep again 1-3 times (shutting off the external screen as well). When working longer, this issue disappears, but it’s annoying nonetheless.
If you haven’t ordered one, it’s best to wait a while until all the issues are ironed out. There is currently a lot of (justified) hype around Apple Silicon, but that doesn’t mean that the ecosystem is ready yet, unless all you do is web browsing, e-mailing, and the occasional app from the App Store.
Aside from this, I think there are some ethical (sorry for the lack of a better term) issues with newer Apple models. For example, Apple excluding their own services from third-party firewalls/VPNs, no extensibility (reducing the lifespan of hardware), and their slow march to a more and more closed system.
Edit: returned and ordered a ThinkPad.
If you need a MacBook now, for whatever reason, buying one with an ARM chip does sound like the most future-proof option. The Intel ones will be the “old” ones soon, and will then be second-rate. That’s what happened with the PowerPC transition as well.
If only there were M1 Macs with 32 GB of RAM, I would have bought one, since I was in need. Because of that, I bought a 32 GB 13” MacBook Pro instead. I’ll wait for the ARM machines to get polished before my next upgrade.
From what I read, you get way more bang for your RAM in Apple processors. It’s all integrated on the same chip so they can do a lot of black magic fuckery there.
For native applications I’m pretty sure this works well; however, as an Erlang/Elixir developer I use third-party GCed languages and DBs that will use more RAM anyway. That said, the fact that it’s possible to run native iOS and iPad apps could save some RAM on Slack and Spotify for sure.
What I mean is, they probably swap to NAND or something, which could very likely be similar performance-wise to the RAM you’d find in an x64 laptop (since they have a proprietary connection there instead of NVMe/M.2/SATA). Plus I imagine the “RAM” on the SoC is as fast as an x64 CPU cache. So essentially you’d have “infinite” RAM, with 16 GB of it being stupid fast.
This is just me speculating btw, I might be totally wrong.
Edit: https://daringfireball.net/2020/11/the_m1_macs CTRL+F “swap”
Just wondering if you had any take on this, idk if I’m off base here
Lots of valuable insights here and I’m interested in discussing.
Sure, but the thing is that the AMD 4800U, their high-end laptop chip, runs at 45W pretty much sustained, whereas the M1 caps out at 15W. This is a very significant battery life and heat/sustained non-throttled performance difference. Also these chips don’t have GPUs or the plethora of hardware acceleration for video/media/cryptography/neural/etc. that the M1 has.
Yeah, I didn’t test anything Java. You might be right. You also mention Fortran though and I’m not sure how that matters in 2020?
Based on my experience, this isn’t as big of a problem as it might seem. You pass the right build flags and you’re done. It’ll vanish in time as the ecosystem adapts.
Big Sur has been more stable for me on Apple Silicon than on Intel. 🤷
I strongly disagree with this. I mean, the M1 MacBook Air is beating the 16” MacBook Pro in Final Cut Pro rendering times. Xcode compilation times are twice as fast across the board. This is not at all a machine just for browsing and emailing. I think that’s flat-out wrong. It’s got performance for developers and creatives that beats machines twice as expensive and billed as made for those types of professionals.
Totally with you on this. Don’t forget also Apple’s apparent lobbying against a bill to punish forced labor in China.
There’s really rather a lot of software written in Fortran. If you’re doing certain kinds of mathematics or engineering work, it’s likely some of the best (or, even, only) code readily available for certain work. I’m not sure it will be going away over the lifetime of one of these ARM-based notebooks.
There will be gfortran for Apple Silicon. I compiled the gcc11 branch with support and it works, but possibly still has serious bugs. I read somewhere that the problem is that gcc 11 will be released in December, so Apple Silicon support will miss that deadline and will have to wait until the next major release.
Isn’t Numpy even written in FORTRAN? That means almost all science or computational anything done with Python relies on it.
No, Numpy is written in C with Python wrappers. It can call out to a Fortran BLAS/LAPACK implementation but that doesn’t necessarily need to be Fortran, although the popular ones are. SciPy does have a decent amount of Fortran code.
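If you want to see what your own install links against, numpy exposes its build configuration; this is just a quick sanity check, nothing more:

```python
import numpy as np

# Prints the BLAS/LAPACK backend numpy was built against
# (e.g. OpenBLAS, Accelerate, MKL); that's where any Fortran dependency lives.
np.show_config()
```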
Wow, who knew.
Almost anyone who does any sort of scientific or engineering [in the structural/aero/whatever sense] computing! Almost all the ‘modern’ scientific computing environments (e.g. in Python) are just wrappers around long-extant C and Fortran libraries. We are among the ones who get a bit upset when people treat ‘tech’ as synonymous with internet services and ignore (or are ignorant of) the other 90% of the iceberg. But that’s not meant as a personal attack; by this point it’s a bit like sailors complaining about the sea.
Julia is exciting as it offers the potential to change things in this regard, but there is an absolute Himalaya’s worth of existing scientific computing code that is still building the modern physical world that it would have to replace.
I agree.
I am not sure what you mean. Modern Intel/AMD CPUs have AES instructions. AMD GPUs (including those in APUs) have acceleration for H.264/H.265 encoding/decoding, and AFAIR also VP9. Neural depends a bit on what is expected, but you could accelerate neural network training if AMD actually bothered to support Navi GPUs and made ROCm less buggy.
That said, for machine learning you’ll want to get a discrete NVIDIA GPU with Tensor cores anyway. It blows anything else that is purchasable out of the water.
A lot of the data science and machine learning infrastructure relies on Fortran directly or indirectly, such as e.g. numpy.
Sorry, I didn’t mean that it is not fit for development. I meant that if you are doing development (unless it’s constrained to Xcode and Apple Frameworks), it is better to wait until the dust settles in the ecosystem. I think for most developers that would be when a substantial portion of Homebrew formulae can be built and they have pre-compiled bottles for them.
My instinct here goes in the opposite direction. If we know Apple Silicon has tons of untapped potential, we should be getting more developers jumping on that wagon, especially while the Homebrew etc. toolchains aren’t ready yet, so that there’s acceleration towards readying all the toolchains quickly! That’s the only way we’ll get anywhere.
Well, I need my machine for work, so these issues just distract. If I am going to spend a significant chunk of time, I’d rather spend it on an open ecosystem than on doing free work for Apple ;).
Like all modern laptop chips, you can set the thermal envelope for your AMD 4800U in the firmware of your design. The 4800U is designed to target 15W by default - 45W is the max boost, foot to the floor & damn the horses power draw. Also, the 4800U has a GPU…an 8 core Vega design IIRC.
Apple is doing exactly the same with their chips - the accounts I’ve read suggest that the power cost required to extract more performance out of them is steep & since the performance is completely acceptable at 15W Apple limits the clocks to match that power draw.
The M1 is faster than the 4800U at 15W of course, but the 4800U is a Zen2 based CPU - I’d imagine that the Zen3 based laptop APUs from AMD will be out very soon & I would expect those to be performance competitive with Apple’s silicon. (I’d expect to see those officially launched at CES in January in fact, but we’ll have to wait and see when you can actually buy a device off the shelf.)
That made me chuckle. Good choice!
You say that you returned and ordered a ThinkPad, how has that decision turned out? Which ThinkPad did you purchase? How is the experience comparatively?
I bought a Thinkpad T14 AMD. So far, the experience is pretty good.
Pros:
Cons:
tl;dr: a great experience on Windows, acceptable on Linux if you are willing to reboot every few resumes and can put up with the issues around fractional scaling.
I have decided to run Windows 10 on it for now and use WSL with Nix + home-manager. (I always have my Ryzen NixOS workstation for heavy lifting.)
Background: I have used Linux since 1994, macOS from 2007 until 2020, and only Windows 3.1 and briefly NT 4.0 and Windows 2000.
Sleep seems to be broken on the latest MacOS versions: every third time I close the lid of my 2019 mac, I’m opening it later only to see that it has restarted because of an error.
Maybe wipe your disk and try a clean reinstall?
It’s not that security by obscurity is bad. It’s rather that you can’t use it to obtain proofs or formalisms of security.
I think Lea said it very well. Not really interested in hearing you whining about being mistreated after the credible accusations of sexual assault and harassment. Flag as off-topic because this isn’t technical, and it’s not “culture” either, it’s just you @nadim
Hi Max,
I received an email a few minutes ago notifying me that you had tagged me in this comment.
I don’t think we’ve ever met, but I see that you’re one of Matt Green’s PhD students and are thus active in the field, and I wanted to respond to your comment which appears to imply that I deserve the treatment described in my post because I have, according to you, likely committed serious crimes which people have accused me of on Twitter.
I have two things to say:
I understand that you think that my blog post is “whining”. I would disagree. I think that, as an aspiring academic, you should recognize that using someone’s work and soliciting for their feedback over a period of an entire week, in over a hundred messages and in two conference calls, while promising them citations that fail to materialize, isn’t exactly something I would describe as whining; it’s actually calling out plagiarism. And pointing out plagiarists does seem to be in the community interest, especially when they (or their students) resort to ad-hominem attacks in response to the calls for proper citation. Your own thesis advisor, Matt Green, is a co-author with Lea Kissner on the Zoom paper, and so I would also wonder whether there is a conflict of interest materializing when one of his students appears to further insinuate that I have committed crimes when I point out the act of plagiarism and supplement it with evidence. If what you’re saying here is that some people deserve to be plagiarized because of Twitter rumors about them, well, that’s not something I can really come to grips with.
If you are interested in what I have to say regarding the tweets that you’re referring to, I wrote a detailed response here that you can read if you wish. In that response, I address the tweets in detail.
I don’t mind you flagging the post or not wanting to read it, but I would appreciate it if you could please consider the points I make above and try to understand why they could make your comment appear unkind at best. Thank you for reading, and all the best to you.
Having tried one of the earliest versions of Verifpal last year, as a beginner to these tools I found that going from Tamarin to Verifpal was going from “unusable” to “easy”. Analysis speed was a big part of that, but so was the syntax and semantics of the protocol description language.
Still haven’t gotten around to it, but I consider Verifpal a mandatory gateway to version 1.0 of Monokex (a Noise ripoff I’m working on that Nadim audited), or even any new protocol I dare invent.
Verifpal provides formal methods to the masses. We had TLA+ in a similar vein before for concurrent programs. I can’t wait to see other domains have similar tools.
Thanks very much Loup, looking forward to checking out your coming work.
I think that especially when I see comments like this, I’m glad that I wrote this post. The second part explains clearly the differences in what Verifpal can guarantee versus a more comprehensive tool like Tamarin, and outlines that the ease of use does come at a slight cost in rigor and complete proofyness (I wonder if I just coined that term).
So cool to read this. I recently discovered Valorant and have been playing it almost every evening. Great game.
PS hit me up if you want to play together!
Unfortunately, I haven’t been able to load the site successfully in either Chromium (v 84.0) or Firefox (79.0) on Linux, even after accepting the self-signed cert.
I would definitely be interested to know what approaches you took in implementing the AI.
I think you were experiencing a DNS issue. Try again.
Fun game, but it would be interesting to see a write-up about what all is going on here under the hood.
Thanks, I’m strongly considering such a write-up.