I’ve always liked this paper[1] for discussing this. I think Figure 2 is of particular interest. I stopped doing research on passwords circa 2016, so I wouldn’t be surprised if there is newer work characterizing password strength vs. offline password cracking.
[1] Florêncio, Dinei, Cormac Herley, and Paul C. van Oorschot. “An Administrator’s Guide to Internet Password Research.” 28th Large Installation System Administration Conference (LISA14), 2014. https://www.microsoft.com/en-us/research/wp-content/uploads/2014/11/WhatsaSysadminToDo.pdf
Thanks for the link. The Figure 2 you mentioned touches on a different topic (one I hadn’t asked about yet, but had thought about): passwords used for “online authentication”.
The paper you’ve cited mentions that above 10^6 options (that is, ~20 bits of entropy) a password is safe for online use.
(Here by “online authentication” I mean a password that is used solely for authentication purposes, is never reused in another context, and can easily be changed or reset.)
However, I’ll have to read the full paper to see what other information I can extract.
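For reference, the 10^6 → ~20 bits conversion above is just a base-2 logarithm of the guess space; a quick sanity check:

```python
import math

# Entropy in bits for a password chosen uniformly from N possibilities
# is log2(N); 10^6 options works out to just under 20 bits.
guess_space = 10 ** 6
bits = math.log2(guess_space)
print(round(bits, 2))  # ≈ 19.93
```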
I wish this worked around some web-of-trust model instead of trusting a root CA (never really have, never really will…) but this seems like a really solid approach to a big issue with the library-based programming supply chain.
Sigstore is actually meant to work with different trust models! Naturally, some models are more mature than others, but part of the research angle that I’m trying to bring into Sigstore is around different approaches[1][2]. I would love for us to get to a model in which Sigstore can accommodate use cases that are friendlier to F/OSS communities.
[1] https://dl.acm.org/doi/abs/10.1145/3498891.3498903 [2] https://dl.acm.org/doi/10.1145/3548606.3560596
given that you don’t want your users to have to reauthenticate on each app startup, what would be a solution that provides safety in presence of malware running on the machine?
I’m having flashbacks to this issue happening with browser password managers (people had the same hesitation). In general, you may want a process akin to gnome-keyring to manage, e.g., a decryption key for this sensitive content.
Now, that can turn into a big back-and-forth around threat models, false senses of security, and more. There are articles on this dating back to 2015, I think. There are also tools for circumventing it: https://ohyicong.medium.com/how-to-hack-chrome-password-with-python-1bedc167be3d. However, I think it varies by platform (it appears that on Windows it just stores the key somewhere, while on other OSes it brokers through e.g. gnome-keyring).
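The broker pattern mentioned above can be sketched as follows. This is a toy simulation only: real keyrings talk to a separate daemon over the Secret Service D-Bus API and use proper AEAD, whereas the “stream cipher” here is a SHA-256 counter construction purely to keep the sketch self-contained:

```python
import hashlib
import os

class KeyBroker:
    """Minimal sketch of the gnome-keyring idea: one trusted process holds
    the master key; apps hand it ciphertext and get plaintext back, but
    never see the key itself. (Toy construction, NOT real crypto.)"""

    def __init__(self):
        self._master = os.urandom(32)  # lives only in the broker's memory

    def _stream(self, n: int, nonce: bytes) -> bytes:
        # Derive a keystream of n bytes from the master key and a nonce.
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(
                self._master + nonce + counter.to_bytes(8, "big")
            ).digest()
            counter += 1
        return out[:n]

    def seal(self, secret: bytes) -> bytes:
        # Encrypt a secret for at-rest storage; the app stores only the blob.
        nonce = os.urandom(16)
        ks = self._stream(len(secret), nonce)
        return nonce + bytes(a ^ b for a, b in zip(secret, ks))

    def open(self, blob: bytes) -> bytes:
        # Decrypt a previously sealed blob on the app's behalf.
        nonce, ct = blob[:16], blob[16:]
        ks = self._stream(len(ct), nonce)
        return bytes(a ^ b for a, b in zip(ct, ks))
```

As the thread notes, this protects secrets at rest but not against live malware: anything that can ask the broker on the user’s behalf gets the same answers the app would.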
So in the end, you’re talking about a defense-in-depth best practice, but this is by no means a “glaring issue”, or if it is, then it’s one that every single application running on a desktop is prone to.
In my opinion, there is nothing an application can do to protect credentials in light of malware running on the same machine.
Pretty sure macOS has something like iOS’s Keychain, which makes it harder for random apps to get your data. The idea is that it’s an enclave you can use to store data. You’d need to exploit the app holding the key in order to get it.
See my follow-up question to a sibling post of yours: how is the keychain protected against malware injecting itself into the target binary and then just querying the keychain?
macOS Ventura will verify an application’s signature on every launch rather than only when the quarantine bit is set.
However, ideally, an application wouldn’t just use a token, but ask the Secure Enclave to sign something as a proof that the login is coming from an authorized machine, since the private signing key cannot be queried from software.
I’m not talking about altering the binary on disk. I’m talking about injecting malicious code into the JIT compiled output that the electron app has written into memory. That can’t be signature-checked and yet is still allowed to run. That malicious bit of code can then do whatever the application could do, including having something signed for it.
IIRC macOS has some restrictions on attaching to processes for debugging beyond the standard Unix “anything running as the same user is fair game” policy, so I wouldn’t be surprised if something prevents this; but I couldn’t find any details with a quick Google so I’m not at all sure.
Keyrings are what you’d usually use. There are implementations on all major platforms, and they’re tied to the system’s login.
How is an OS-level keychain protected from malware injecting its code into the target binary’s memory? AFAIK we’re not quite there yet code-signature-wise to detect something like this, especially in apps that normally do JIT compilation (which would be true for an Electron app).
Yeah, not much is going to protect against malware injection, but storing secrets in plaintext is a way different problem. Encryption at rest is a must for secrets, even if it’s at the entire filesystem level, but a keyring is a good medium solution.
Interestingly, the modern .NET MAUI API for this problem lets you get and set secure keys without relying on any “master” secrets, deferring to Keychain on Apple devices. It doesn’t go into vast detail about how it works on Windows and I would love to know what it’s actually promising in terms of key storage (TPMs?) and inter-app isolation, given that this is a hard problem.
Hi, soatok!
As always, great post!
I’m one of the people involved in DSSE, so I wanted to share a little bit more about the rationale:
I wonder what your take is about these things.
ETA: we do call DSSE Dizzy ourselves :)
My remarks about DSSE leaving me dizzy were mostly seeing “Why not PASETO? Too opinionated” then “Why PAE? It’s good enough and well documented” but then not using PAE (which, IIRC, was a PASETO acronym). It’s not that you’re wrong, just that it’s confusing. I think something important got lost in the editorial process, but still exists inside the designers’ heads.
The only thing that I really dislike about DSSE is that you support, but never authenticate, some of your AAD.
Specifically KEYID. I understand the intent here (it’s spelled out clearly in the docs), but even if it’s never meant to be used for any sort of security consideration, the fact that you’re giving any flex at all over what key goes into envelope verification, while never requiring users to commit that value to the signature, seems like a miss to me. PASETO has unencrypted footers, but they’re still used in the MAC/signature calculation.
Any attack based on swapping between multiple valid keys becomes significantly easier if the identifier for said key is never committed. The README remark about exclusive ownership seems to hint at awareness of this concern, but maybe the dots hadn’t been connected?
Having some mechanism of committing the signatures on the envelope to a given signature algorithm and/or public key seems like a good way to mitigate. You can include this in the signature calculation without storing it in the envelope, by the way.
Sophie Schmieg is fond of opining that (paraphrasing) cryptography keys aren’t merely byte strings, they’re byte strings plus configuration.
RSASSA-PSS with e=65537, MGF1+SHA256 and SHA256 is a very specific configuration for RSA. If I yeet a PEM-encoded RSA public key at you (which contains only (n, e) in its contents), what’s stopping me from using PKCS#1 v1.5?
Same thing with ECDSA with named curves and not reimplementing CVE-2020-0601.
None of what I said is really a vulnerability with DSSE, necessarily, but leaves room for things to go wrong.
Thus, if I were designing DSSE-v2, I’d make the following changes:
- Always include KEYID in the tag calculation, and if it’s not there, include a 0 length. It’s a very cheap change to the protocol.
- Include some representation of the public key (bytes + algorithm specifics) in the signature calculation. I wouldn’t store it in the envelope though (that might invite folks to parse it from the message).

This is a small tweak to what DSSE-v1 does, but it will provide insurance against implementation failure (provided a collision-resistant hash function is being consistently used).
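To make those changes concrete, here is a minimal sketch using PASETO-style PAE (64-bit little-endian length prefixes). Note this is a hypothetical field layout for illustration, not the actual DSSE-v1 encoding; the `keyid` and `pubkey` fields are the proposed additions:

```python
def pae(fields: list[bytes]) -> bytes:
    """PASETO-style Pre-Authentication Encoding: an unambiguous,
    length-prefixed concatenation of the fields to be signed."""
    out = len(fields).to_bytes(8, "little")
    for f in fields:
        out += len(f).to_bytes(8, "little") + f
    return out

def signing_input(payload_type: bytes, payload: bytes,
                  keyid: bytes, pubkey: bytes) -> bytes:
    # Always commit KEYID; an absent KEYID is the empty string, which
    # still contributes a 0 length to the signed bytes.
    return pae([payload_type, payload, keyid, pubkey])
```

Because `pubkey` is part of the signed bytes but never stored in the envelope, a verifier reconstructs the same input from the key it already trusts; swapping in a different valid key changes the signing input, so the signature no longer verifies.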
ETA: Her exact words were “A key should always be considered to be the raw key material alongside its parameter choices.”
I didn’t even include JOSE in the article because I don’t want to accidentally lend credibility to it. If I see JOSE in an engagement, I brace for the worst. If I see DSSE, I calmly proceed.
Haha, I imagined as much! And I’ll take that as a cautious compliment.
I find that surprising, but good to know.
Yes, I personally don’t think that’s an end-all-be-all rationale, but you see how industry can be very capricious about these things…
This doesn’t surprise me at all.
Likewise, but it is rather frustrating to see how many admittedly bad cryptographic systems have been designed and endorsed this way (RFC 4880 and the JOSE suite, to name a few). I wonder what the way forward is in this department (one option would be to beg Scott for half a decade spent in IETF meetings? :P)
My remarks about DSSE leaving me dizzy were mostly seeing “Why not PASETO? Too opinionated” then “Why PAE? It’s good enough and well documented” but then not using PAE (which, IIRC, was a PASETO acronym). It’s not that you’re wrong, just that it’s confusing. I think something important got lost in the editorial process, but still exists inside the designers’ heads.
Fair enough! I think we are due a complete review of what we wrote in there. The very early implementations of DSSE were PASETO’s PAE verbatim….
The only thing that I really dislike about DSSE is that you support, but never authenticate, some of your AAD.
Specifically KEYID. I understand the intent here (it’s spelled out clearly in the docs), but even if it’s never meant to be used for any sort of security consideration, the fact that you’re giving any flex at all over what key goes into envelope verification, while never requiring users to commit that value to the signature, seems like a miss to me. PASETO has unencrypted footers, but they’re still used in the MAC/signature calculation.
Most definitely, this is something that we wanted to deal with in a separate layer (that’s why the payload fields are so minimal). This separate layer being in-toto layout fields and TUF metadata headers. I’m still wary of this fact though, and I’d love to discuss more.
Any attack based on swapping between multiple valid keys becomes significantly easier if the identifier for said key is never committed. The README remark about exclusive ownership seems to hint at awareness of this concern, but maybe the dots hadn’t been connected?
Agreed, this is something we spent some time thinking hard about, and although I don’t think I can confidently say “we have an absolute answer to this”, it appears to me that verifying these fields on a separate layer may indeed avoid EO/DSKS-style attacks…
Having some mechanism of committing the signatures on the envelope to a given signature algorithm and/or public key seems like a good way to mitigate. You can include this in the signature calculation without storing it in the envelope, by the way.
Absolutely! A missing piece here is that in TUF/in-toto we store the algorithm on a separate payload that contains the public keys (e.g., imagine them as parent certificates). This is something that we changed in both systems after a security review from Cure53 many years ago (mostly to avoid attacker-controlled crypto-parameter fields like in JWT).
Sophie Schmieg is fond of opining that (paraphrasing) cryptography keys aren’t merely byte strings, they’re byte strings plus configuration.
Hard agree!
RSASSA-PSS with e=65537, MGF1+SHA256 and SHA256 is a very specific configuration for RSA. If I yeet a PEM-encoded RSA public key at you (which contains only (n, e) in its contents), what’s stopping me from using PKCS#1 v1.5?
Exactly, we have seen this happen over and over, even in supposedly standardized algorithms (like you point out with CVE-2020-0601 below).
Same thing with ECDSA with named curves and not reimplementing CVE-2020-0601.
None of what I said is really a vulnerability with DSSE, necessarily, but leaves room for things to go wrong.
Absolutely, and part of me wonders how a “generalization” of the protocol would fare without all the implicit assumptions outlined above. FWIW, I’d definitely give PASETO first-class consideration in any new system of mine.
Thus, if I were designing DSSE-v2, I’d make the following changes:
Always include KEYID in the tag calculation, and if it’s not there, include a 0 length. It’s a very cheap change to the protocol.
Definitely, duly noted, and I wonder how hard it’d be to actually make it into v1.
Include some representation of the public key (bytes + algorithm specifics) in the signature calculation. I wouldn’t store it in the envelope though (that might invite folks to parse it from the message).
This may be a little bit more contentious, considering what I said above, but I do see the value in avoiding dependencies between layers. I’d also be less concerned about fixing something twice in both places…
This is a small tweak to what DSSE-v1 does, but it will provide insurance against implementation failure (provided a collision-resistant hash function is being consistently used).
Yup! then again I wonder what the delta between PASETO and this would be afterwards :) (modulo encryption, that is)
Lastly, I wanted to commend you (again) for your writing! I love your blog and how accessible it is to people through all ranges of crypto/security expertise!
To avoid dealing with binary, why not just prepend the decimal length of data, followed by a colon? I think this approach originated with djb’s netstrings, and it was also adopted by Rivest’s canonical S-expressions.
It turns `foo` into `3:foo`, and concatenates `bar`, `baz` and `quux` into `3:bar3:baz4:quux`. Easy to emit, easy to ingest.
Add on parentheses for grouping, and you have a general-purpose representation for hierarchical data …
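A minimal sketch of the length-prefix scheme described above. This follows the colon-only form used by canonical S-expressions; djb’s netstrings additionally append a trailing comma after each item:

```python
def ns_encode(data: bytes) -> bytes:
    # b"foo" -> b"3:foo": decimal length, a colon, then the raw bytes
    return str(len(data)).encode() + b":" + data

def ns_decode_all(stream: bytes) -> list[bytes]:
    # Split b"3:bar3:baz4:quux" back into [b"bar", b"baz", b"quux"]
    items, i = [], 0
    while i < len(stream):
        colon = stream.index(b":", i)
        length = int(stream[i:colon])
        items.append(stream[colon + 1 : colon + 1 + length])
        i = colon + 1 + length
    return items
```

Because every item carries its own length, no escaping is needed and binary payloads pass through untouched.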
This post is just a rehash of all the “meme” phrases people throw at golang:
no GNU components
lol: https://github.com/iglunix/iglunix/blob/main/pkgs/gmake/build.sh
The goal is definitely GNU-free, but yea, it still depends on gmake to build some packages. It’s the only GNU dependency, too. A gmake replacement would finish the job.
Seems that you would have to replace freetype as well.
Curious to read a little bit more about the rationale though. What’s so wrong about GNU software?
I think one advantage is that GNU has had something of a “monopoly” in a few areas, which hasn’t really improved the general state of things. The classic example of this is gcc; everyone had been complaining about its cryptic error messages for years and nothing was done. Clang enters the scene and lo and behold, suddenly it all could be improved.
Some more diversity isn’t a bad thing; generally speaking I don’t think most GNU projects are of especially high quality, just “good enough” to replace Unix anno 1984 for their “free operating system”. There is very little innovation or new stuff.
Personally I wouldn’t go so far so make a “GNU free Linux”, but in almost every case where a mature alternative to a GNU project exists, the alternative almost always is clearly the better choice. Sometimes these better alternatives have existed for years or decades, yet for some reason there’s a lot of inertia to get some of these GNU things replaced, and some effort to show “hey, X is actually a lot better than GNU X” isn’t a bad thing.
A LOT of people have soured on GNU/FSF as a result of the politics around RMS and the positions he holds.
A lot of people were soured on them long before that; the whole GPL3 debacle created a lot of bad blood, the entire Open Source movement was pretty much because people had soured on Stallman and the FSF, the relentless pedantry on all sorts of issues, etc. Of course, even more people soured on them after this, but it was just the last in a long line of souring incidents.
Was (re)watching some old episodes of The Thick of It yesterday; this classic Tucker quote pretty much sums up my feelings: “You are a fucking omnishambles, that’s what you are. You’re like that coffee machine, from bean to cup, you fuck up.”
For sure. Never seen The Thick Of It but I love Britcoms and it’s on my list :)
I’ve always leaned towards more permissive licenses. We techies love to act as if money isn’t a thing and that striving to make a living off our software is a filthy dirty thing that only uncool people do.
And, I mean, I get it! I would love NOTHING more than to reach a point in my life where I can forget about the almighty $ once and for all and hack on whatever I want whenever I want for as long as I want! :)
Yeah, when I hear “GNU” I think cruft. And this is from someone who uses emacs! (I guess you could argue it’s the exception that proves the rule, since the best thing about emacs is the third-party ecosystem).
And this is only about GNU as an organization, to be clear. I have no particular opinions on the GPL as a license.
Even Emacs is, unfortunately, being hampered by GNU and Stallman, like how Stallman flat-out refused to make gcc print more detailed AST info for use in Emacs “because it might be abused by evil capitalists”, and the repeated drama over the years surrounding MELPA over various very small issues (or sometimes: non-issues).
From the site:
Why
- Improve portability of open source software
- Reduce requirements on GNU packages
- Prove the “It’s not Linux it’s GNU/Linux …” copypasta wrong
Yeah, “why not?” is a valid reason imvho. I would like to know which one is theirs in actuality. I often find that the rationale behind a project is a good way to learn things.
And fair enough, I assumed you were affiliated. FWIW, FreeType is not a GNU project, but it is indeed fetched from Savannah in their repos, which I found slightly funny.
ETA: it also seems to be a big endeavor so the rationale becomes even more interesting to me.
My rationale was partially to learn things, partially for the memez and partially as an opportunity to do things the way I want (all these people arguing about init systems, iglunix barely has one and I don’t really need anything more). I wanted to do Linux from scratch to learn more about Linux but failed at that and somehow this ended up being easier for me. I think I definitely learnt more trying to work out what was needed for myself rather than blindly following LFS.
That’s correct! I downloaded https://download-mirror.savannah.gnu.org/releases/freetype/freetype-2.11.0.tar.xz just to double check, and here is the license:
FREETYPE LICENSES
-----------------
The FreeType 2 font engine is copyrighted work and cannot be used
legally without a software license. In order to make this project
usable to a vast majority of developers, we distribute it under two
mutually exclusive open-source licenses.
This means that *you* must choose *one* of the two licenses described
below, then obey all its terms and conditions when using FreeType 2 in
any of your projects or products.
- The FreeType License, found in the file `docs/FTL.TXT`, which is
similar to the original BSD license *with* an advertising clause
that forces you to explicitly cite the FreeType project in your
product's documentation. All details are in the license file.
This license is suited to products which don't use the GNU General
Public License.
Note that this license is compatible to the GNU General Public
License version 3, but not version 2.
- The GNU General Public License version 2, found in
`docs/GPLv2.TXT` (any later version can be used also), for
programs which already use the GPL. Note that the FTL is
incompatible with GPLv2 due to its advertisement clause.
The contributed BDF and PCF drivers come with a license similar to
that of the X Window System. It is compatible to the above two
licenses (see files `src/bdf/README` and `src/pcf/README`). The same
holds for the source code files `src/base/fthash.c` and
`include/freetype/internal/fthash.h`; they were part of the BDF driver
in earlier FreeType versions.
The gzip module uses the zlib license (see `src/gzip/zlib.h`) which
too is compatible to the above two licenses.
The MD5 checksum support (only used for debugging in development
builds) is in the public domain.
--- end of LICENSE.TXT ---
Having it under a more permissive license is a very valid reason though. Guess why FreeBSD is writing their own git implementation…
If the only tool for a task is closed-source then there is a project trying to make an open-source one. If the only open-source tool for a task is under a copyleft license then there is a project trying to make a non-copyleft one. Once a project is BSD, MIT or public domain we can finally stop rewriting it.
If avoiding copyleft is the goal then the Linux kernel is a weird choice. And important parts of the FreeBSD kernel (zfs) are under a copyleft license too (CDDL).
I find OpenBSD to be one of the best choices as far as license goes. I’ve been slowly moving all my Debian machines to OpenBSD in the past year (not only because of the license, but because it’s an awesome OS).
I haven’t tried using OpenBSD in earnest since around 1998. I prefer a copyleft to a BSD-style license personally, but maybe I’ll take another look. And I hear that `tar xzf blah.tar.gz` might even work these days.
It gets improved with every new major release, I’ve used it consistently for the past 3 or 4 releases and there’s always noticeable improvement in performance, user-land tools, drivers, arch support, etc. I’d definitely give it a try again!
This is fine reasoning but relativized. After all, I could just as easily say that if the only tool for a task is under a non-copyleft license, then there is a project trying to make a GNU/FSF version; once GNU has a version of a utility, we can stop rewriting it.
I used to do a fair bit of packaging on FreeBSD, and avoiding things like GNU make, autotools, libtool, bash, etc. will be hard and a lot of effort. You’ll essentially have to rewrite a lot of project’s build systems.
Also GTK is GNU, and that’ll just outright exclude whole swaths of software, although it’s really just “GNU in name only” as far as I know.
Depends on their goals. Some people don’t like GNU or GPL projects. If that’s the case then probably not.
zlib is derived from GNU code (gzip) so anything that includes zlib or libpng etc will “contain GNU code”. This includes for example the Linux kernel.
He didn’t say they’ve achieved their goal. It’s still a goal.
Why does it seem like you’re trying to “gotcha” on any detail you can refute?
It’s just someone’s project.
I’m trying to understand the goal. If the goal is avoiding software that originated from the GNU project that is probably futile. The GNU project has been a huge, positive influence on software in general.
You know the goal. They stated it. The parent comment to you stated it again.
It might be futile, but luckily we don’t control other peoples free time and hobbies, so they get to try if they want. You seem to be taking personal offense at the goal.
From the site:
Iglunix is a Linux distribution but, unlike almost all other Linux distributions, it has no GNU software¹
¹With the exception of GNU make for now
Yes, I still haven’t been able to replace a couple of projects. For self-hosting, GNU make is all that’s left; for Chromium, bison (and by extension GNU m4) and gperf are all that’s left.
Hi, sarciszewski! This looks quite similar to in-toto + a transparency log (e.g., Rekor[3]). Did you have a chance to review [1] or [2]? How do you think this compares to them?
I wonder if we could merge efforts, or at least ensure interoperability, now that there are some implementations of in-toto in the wild, along with tooling for verification/metadata generation.
[1] https://www.usenix.org/conference/usenixsecurity19/presentation/torres-arias [2] https://bora.uib.no/bora-xmlui/handle/1956/20411 [3] https://github.com/projectrekor/rekor
Gossamer uses Ed25519 and BLAKE2b everywhere it can. (It only uses SHA384 for WordPress compatibility, and only for prehashing before Ed25519.)
It doesn’t support RSA, SHA256, etc. The cryptography is intentionally very minimalistic, and only uses what libsodium provides. No certificates or X.509 either.
To that end, the ledger currently supported is Chronicle, not Trillian. Adding support for Trillian is an open issue that has received zero feedback from the PHP community.
Interop with other designs is as-of-yet a non-goal for Gossamer, unless it can be guaranteed without increasing the cryptography primitive footprint.
In here, we see another case of somebody bashing PGP while tacitly claiming that x509 is not a clusterfuck of similar or worse complexity.
I’d also like a more honest read on how a mechanism providing ephemeral key exchange and host authentication can be used toward the same goal as PGP, which is closer to end-to-end encryption of an email (granted they aren’t using something akin to Keycloak). The desired goals of an “ideal” vulnerability reporting mechanism would be good to know, in order to see why PGP is an issue now, and why an HTTPS form is any better in terms of vulnerability information management (both at rest and in transit).
In here, we see another case of somebody bashing PGP while tacitly claiming that x509 is not a clusterfuck of similar or worse complexity.
Let’s not confuse the PGP message format with the PGP encryption system. Both PGP and x509 encodings are a genuine clusterfuck; you’ll get no dispute from me there. But TLS 1.3 is dramatically harder to mess up than PGP, has good modern defaults, can be enforced on communication before any content is sent, and offers forward secrecy. PGP-encrypted email offers none of these benefits.
But TLS 1.3 is dramatically harder to mess up than PGP,
With a user-facing tool that has plugged out all the footguns? I agree
has good modern defaults,
If you take care to, say, curate your list of ciphers often and check the ones vetted by a third party (say, by checking https://cipherlist.eu/), then sure. Otherwise I’m not sure I agree (hell, TLS has a null cipher).
can be enforced on communication before any content is sent
There’s a reason why there’s active research trying to plug privacy holes such as SNI. There’s so much surface to the whole stack that I would not be comfortable making this claim.
offers forward secrecy
I agree, although I don’t think it provides non-repudiation (at least not without adding signed exchanges, which I think are still a draft) and without mutual TLS authentication, both of which can be achieved with PGP quite easily.
take care to, say, curate your list of ciphers often and check the ones vetted by a third party
There are no bad ciphers in 1.3, it’s a small list, so you could just kill the earlier TLS versions :)
Also, popular web servers already come with reasonable default cipher lists for 1.2. Biased towards more compatibility but not including NULL, MD5 or any other disaster.
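As a sketch of “just kill the earlier TLS versions”: with Python’s stdlib `ssl` module (my choice of tooling here, not something from the thread), pinning the minimum protocol version to 1.3 leaves only the small fixed set of AEAD suites on the table:

```python
import ssl

# Refuse anything below TLS 1.3. The 1.3 suite list is tiny and AEAD-only
# (no NULL, no MD5, no CBC), so there is nothing left to curate by hand.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# get_ciphers() reports what the context would actually negotiate;
# filter down to the TLS 1.3 suites (names vary slightly by OpenSSL build).
tls13_suites = [c["name"] for c in ctx.get_ciphers()
                if c.get("protocol") == "TLSv1.3"]
print(tls13_suites)
```

The same one-line policy (minimum version = 1.3) is available in most server configs, which is considerably less error-prone than maintaining a curated cipher string.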
I don’t think it would provide non-repudiation
How often do you really need it? It’s useful for official documents and stuff, but who needs it on a contact form?
I want to say that it only provides DNS-based verification, but then again, how are you going to get the right PGP key?
PGP does not have only one trust model, and that is a good part of it: you choose, according to the various sources of trust (TOFU through autocrypt, having also seen the key on the website, having gotten the key IRL, having signed messages proving it’s the right one, Mr. Doe…).
Hopefully browsers and various TLS clients could mainstream such a model, and let YOU choose what you consider safe rather than what (highly) paid certificate authorities do.
I agree that there is more flexibility and that you could get the fingerprint from the website and have the same security.
Unfortunately, for example the last method doesn’t work. You can sign anybody’s messages. Doesn’t prove your key is theirs.
The mantra “flexibility is an enemy of security” may apply.
I meant content whose exclusive disclosure is in a signed message, such as “you remember that time at the bridge, I told you the boat was blue, you told me you are colorblind”.
[EDIT: I realize that I had in mind that these messages would be sent through another secure transport, until external facts about the identity of the person at the other end of the pipe gets good enough. This brings us to the threat model of autocrypt (aiming working through email-only) : passive attacker, along with the aim of helping the crypto bonds to build-up: considering “everyone does the PGP dance NOW” not working well enough]
Unfortunately, for example the last method doesn’t work. You can sign anybody’s messages. Doesn’t prove your key is theirs.
I can publish your comment on my HTTPS protected blog. Doesn’t prove your comment is mine.
Not sure if this is a joke but: A) You sign my mail. Op takes this as proof that your key is mine. B) You put your key on my website..wait no you can’t..I put my key on your webs- uh…you put my key on your website and now I can read your email…
Ok, those two things don’t match.
I’d claim I’m familiar with both the PGP ecosystem and TLS/X.509. I disagree with your claim that they’re a similar clusterfuck.
I’m not saying X.509 is without problems. But TLS/X.509 gets one thing right that PGP doesn’t: It’s mostly transparent to the user, it doesn’t expect the user to understand cryptographic concepts.
Also, the TLS community has improved a lot over the past decade; X.509 is nowhere near the clusterfuck it was in 2010. There are rules in place, there are mitigations for existing issues, and there’s real enforcement for persistent violation of the rules (ask Symantec). I see one ecosystem that has its issues but is improving (TLS/X.509), and another that is in denial about its issues and is not handling security issues very professionally (efail…).
Very true, but the transparency part is a bit fishy, because TLS included an answer to “how do I get the key” (which nowadays is basically DNS + timing), while PGP was trying to give people more options.
I mean, we could do the same for PGP, but whether that fits your security requirements is a question that needs answering… and by whom? TLS says CA/DNS; PGP says “you get to make that decision”.
Unfortunately, the latter also often means “your problem”, “idk/idc”, and failed solutions like the WoT.
How could we do the same? We could do some validation in the form of: we send you an email, encrypted to what you claim is your public key, at what you claim is your address, and you have to return the decrypted challenge. Seems fairly similar to DNS validation for HTTPS.
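That challenge-response flow can be sketched generically. The `encrypt`/`decrypt` callables below are stand-ins for whatever OpenPGP implementation you’d actually use (an assumption on my part); the toy XOR pair exists only to make the sketch self-contained and is not real cryptography:

```python
import hashlib
import os

def issue_challenge(encrypt, claimed_pubkey: bytes):
    """Verifier side: encrypt a random nonce to the claimed key and
    remember only a digest of the nonce, not the nonce itself."""
    nonce = os.urandom(32)
    return encrypt(claimed_pubkey, nonce), hashlib.sha256(nonce).hexdigest()

def verify_response(expected_digest: str, decrypted_nonce: bytes) -> bool:
    # Only someone holding the matching private key could have
    # recovered the nonce from the challenge ciphertext.
    return hashlib.sha256(decrypted_nonce).hexdigest() == expected_digest

# Toy stand-ins (NOT real crypto): XOR against a repeated key byte,
# just to demonstrate the round trip. XOR is its own inverse.
toy_encrypt = lambda key, msg: bytes(a ^ b for a, b in zip(msg, key * len(msg)))
toy_decrypt = toy_encrypt
```

This proves control of both the key and the mailbox in one round trip, which is roughly the guarantee DNS/HTTP validation gives a CA.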
While we’re at it… add some key transparency for accountability; fix the WoT a bit by adding some DoS protection; remove the old and broken crypto from the standard, along with the streaming mode, which screws up integrity protection and is for entirely different use cases anyway. Oh, and make all the meh-ish or shitty-ish tools better.
That should do nicely.
Edit: except, of course, as Hanno said: “an ecosystem that is in denial about its issues and which is not handling security issues very professionally”…that gets in the way a lot
I’d wager this is mostly a user-facing tooling issue rather than anything else. Do you believe a more mature tooling ecosystem would make PGP more salvageable for, say, vulnerability disclosure emails instead of a Google web form?
If anything, I’m more convinced that the failure of PGP is having trusted GnuPG as the only implementation worthy of blessing. How different would things be if we had funded alternative, industry-backed implementations after efail, the same way we delivered many TLS implementations after Heartbleed?
Similarly, there is a reason there’s active research on fuzzing TLS implementations for their differing behaviors (think frankencerts). Mostly, this is due to the fact that reasoning about X.509 is impossible without reading through stacks and stacks of RFCs, extensions, and whatnot.
I use Thunderbird with Enigmail. I made a key at some point and by now I just send and receive as I normally do. Mails are encrypted when they can be encrypted, and the UI is very clear on this. Mails are always signed. I get a nice green bar over mails I receive that are encrypted.
I can’t say I agree with your statement that GPG is not transparent to the user, nor that it expects the user to understand cryptographic concepts.
As for the rules in the TLS/X.509 ecosystem, you should ask Mozilla if there’s real enforcement for Let’s Encrypt.
The internal complexity of x509 is a bit of a different one than the user-facing complexity of PGP. I don’t need to think about or deal with most of that as an end-user or even programmer.
With PGP… well… there are about 100 things you can do wrong, starting with “oops, I bricked my terminal because gpg outputs binary data by default”, and it gets worse from there on. I wrote a Go email sending library a while ago and wanted to add PGP signing support. Thus far, I have not yet succeeded in getting the damn thing to actually work. In the meanwhile, I have managed to get a somewhat complex non-standard ACME/x509 generation scheme to work.
I’m very far removed from an expert on any of this, so I don’t really have an opinion on the matter as such. All I know is that as a regular programmer and “power user” I usually manage to do whatever I want with x509 just fine without too much trouble, but that using or implementing PGP is generally hard and frustrating to the point where I just stopped trying.
You are thinking of GnuPG. I agree GnuPG is a usability nightmare. I don’t think PGP (RFC 4880) makes many claims about user interactions (in the same way that the many x509-related RFCs say little about how users deal with tooling).
Would you say PGP has a chance to be upgraded? I think there is a growing consensus that PGP’s crypto needs some fixing, and GnuPG’s implementation as well, but I am no crypto person.
Would you say PGP has a chance to be upgraded?
I think there’s space for this, although open source (and standards in general) is also political to some extent. If the community doesn’t want to invest in improving PGP but rather replace it with $NEXTBIGTHING, then there is very little you can do. There’s also something to be said that 1) it’s easier when communities are more open to change and 2) it’s harder when big names at Google, you-name-it, are constantly bashing it.
Can you clarify where “big names at Cloudflare” are bashing PGP? I’m confused.
I actually can’t, I don’t think this was made in any official capacity. I’ll amend my comment, sorry.
So my question now is, how much does this affect SHA-256 and friends? SHA-256 is orders of magnitude stronger than SHA-1, naturally, but is it enough orders of magnitude?
Also, it’s interesting to note that based on MD5 and SHA-1, the lifetime of a hash function in the wild seems to be about 10-15 years between “it becomes popular” and “it’s broken enough you really need to replace it”.
[…] the lifetime of a hash function in the wild seems to be about 10-15 years […]
That’s assuming that we’re not getting better at creating cryptographic primitives. While there are still any number of cryptanalysis techniques remaining to be discovered, at some point we will likely develop Actually Good hashes etc.
(Note also that even MD5 still doesn’t have a practical preimage attack.)
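The collision/preimage gap is easy to see with a toy experiment: truncate a hash until a birthday-style collision search finishes instantly, while a preimage still needs a full brute-force sweep of the output space. (A sketch of the general principle, not a statement about real MD5 attacks; `toy_hash` is made up here.)

```python
import hashlib
import itertools

def toy_hash(data: bytes) -> str:
    # Truncate SHA-256 to 16 bits so both searches finish quickly.
    return hashlib.sha256(data).hexdigest()[:4]

def find_collision():
    # Birthday search: a hit is expected after roughly 2**8 tries.
    seen = {}
    for i in itertools.count():
        msg = str(i).encode()
        digest = toy_hash(msg)
        if digest in seen:
            return seen[digest], msg  # two distinct messages, same digest
        seen[digest] = msg

def find_preimage(target: str) -> bytes:
    # Brute force: roughly 2**16 tries are expected for a 16-bit digest.
    for i in itertools.count():
        msg = b"x" + str(i).encode()
        if toy_hash(msg) == target:
            return msg

a, b = find_collision()
assert a != b and toy_hash(a) == toy_hash(b)
```

The square-root advantage of the birthday bound is exactly why collision resistance falls long before preimage resistance does.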
It would stand to reason that we get as good at breaking cryptographic primitives as we get at creating them.
Why? Do you believe that all cryptographic primitives are breakable, and that it’s just a matter of figuring out in what way?
In the response to the SHA1 attacks (the early, theoretical ones, not the practical ones) NIST started a competition, in part to improve research on hash function security.
There were voices saying the competition shouldn’t be finished, because during the research people figured out the SHA2 family was maybe better than they had thought. In the end those voices didn’t prevail, and the competition concluded with the standardization of SHA3, but in practice almost nobody uses SHA3. There’s also not really a reason to think SHA3 is inherently more secure than SHA2; it’s just a different approach. Theoretically it may be that SHA2 stays secure longer than its successors.
There’s nothing even remotely concerning in terms of research attacking SHA2. If you want my personal opinion: I don’t think we’re going to see any practical attack on any modern hashing scheme within our lifetimes.
Also the “10-15 years” timeframe - there is hardly any trend here. How many relevant hash functions did we have overall that got broken? It’s basically 2. (MD5/SHA1). Cryptography just doesn’t exist long enough for there to be a real trend.
As any REAL SCIENTIST knows, two data points is all you need to draw a line on a graph and extrapolate! :D
FWIW, weren’t MD2 and MD4 both used in real-world apps? (I think some of the old file-sharing programs used them.) They were totally hosed long before MD5.
I considered those as “not really in widespread use” (also as in: cryptography wasn’t really a big thing back then).
Surprising fact by the way: MD2 is more secure than MD5. I think there’s still no practical collision attack. (Doesn’t mean you should use it - an attack is probably just a dedicated scientist and some computing power away - but still counterindicating a trend.)
I have a vague (possibly incorrect) recollection of hearing that RIAA members were using hash collisions to seed broken versions of mp3 files on early file sharing networks that used very insecure hashing which might have been md4 (iirc it was one where you could find collisions by hand on paper). Napster and its successors had pretty substantial user bases that I’d call widespread. :)
The order of magnitude is a derivative of many years of cryptanalysis of the algorithm and the underlying construction. In this case (off the top of my head), this is mostly related to weaknesses of Merkle-Damgård, which SHA-256 only partially uses.
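One concrete consequence of the Merkle-Damgård structure is length extension, which is why a keyed MAC shouldn’t be built as a plain `H(key || msg)`. A hedged sketch of the problem and the standard fix (the key and message here are made up):

```python
import hashlib
import hmac

key = b"server-secret"
msg = b"amount=100&to=alice"

# Naive MAC: H(key || msg). Because SHA-256 is a Merkle-Damgard hash,
# anyone who knows this digest and the key length can append data and
# compute a valid tag for the extended message -- without the key.
naive_tag = hashlib.sha256(key + msg).hexdigest()

# HMAC runs the hash twice with derived keys, defeating length extension.
safe_tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
```

SHA-3 and the truncated SHA-512/256 variant avoid length extension by construction, but HMAC remains the conventional answer when you are stuck with SHA-256.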
How funny!
What are your relevant estimates for the time periods?
When was the SHA-256 adoption, again?
Here’s a good reference for timelines: https://valerieaurora.org/hash.html
Seems like cryptogopher’s take on AGE (modulo the pedantic name). I think picking a cryptographic algorithm and then making a tool to wrap around it is going to create a bunch of headaches down the line. At least the earlier iteration of cryptographic tools considered a sliver of crypto-agility…
Sorry for a joke that’s probably getting old, but wouldn’t a lot of the code be easily genericized if Go had generics?
How do folks feel about Scarr’s Pizza?
This Sunday (2019-06-30) at 6 pm (1800EDT)?
I’ll confess I laughed audibly when I reached the suggestion of using yet-another-bloated-web-framework over RoR.
Phoenix is a lot of things, but bloated isn’t one of them.
I’d actually argue it’s missing a great deal of stuff that would actually be helpful in the real world.
I think multi-signature transactions make for interesting possibilities in scaling/automating escrow. I guess that might be more of a feature of Bitcoin (and probably other coins - haven’t looked) than a feature of blockchains?
But, for what I want to use multi-sig for I need a stable (or at least mostly stable!) coin. And I’m not convinced stable coins are possible. I kinda feel like if we could have a stable coin we would have got one by now, and it would have killed all the other coins because people could actually use it.
(BTW, I am a crypto pessimist. I don’t see a much of a future for blockchain technology or any of the current coins).
But, for what I want to use multi-sig for I need a stable (or at least mostly stable!) coin
You mean something like Tether, or…?
* full disclosure, I’m a crypto pessimist myself too and I do not endorse tether.
Yeah, I’m aware of tether. It’s a stablecoin in the sense that it has proven to be pretty stable, but there’s the ongoing issue with the lack of a rigorous and independent audit, and the whole “look at our website, you’ll see that number A matches number B” thing. The stability seems to me to be largely based on confidence/trust, which is seemingly true of all the other backed-by-real-asset coins - “trust us, the gold is all in a vault in Singapore”, etc…
Even if the price of Bitcoin settled down and moved more like a regular currency it could prove to be usable as a relatively-stable coin. The only way I see that happening is the price collapsing to the point that no-one cares about Bitcoin any more.
Scary how often viruses like this are showing up in Linux! I think this is the beginning of a new era… and we’re going to have to change the way we do things to stay safe.
These incidents (at least in the NPM context) are good endorsements for the goals of Ryan Dahl’s deno, which runs code sandboxed by default.
Sandboxing is good, but I don’t want to run malicious code at all, even if it’s properly contained! We need better review too.
I’d go a little bit further than that. We need to extend the security architecture of our package managers. For example, architectures like TUF, notary/Docker Content Trust, or PEP-458 are great starting points.
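The core idea in TUF-style designs is that package metadata is only trusted once a threshold of distinct trusted keys has signed it. A toy sketch of that rule, with HMAC standing in for real public-key signatures; `verify_threshold` and the key layout are my own invention, not TUF’s actual format:

```python
import hashlib
import hmac

def sign(key: bytes, metadata: bytes) -> str:
    # Stand-in for a real public-key signature over the metadata.
    return hmac.new(key, metadata, hashlib.sha256).hexdigest()

def verify_threshold(metadata: bytes, sigs: dict, trusted_keys: dict,
                     threshold: int) -> bool:
    # Trust the metadata only if enough distinct trusted keys signed it.
    valid = 0
    for keyid, sig in sigs.items():
        key = trusted_keys.get(keyid)
        if key is None:
            continue  # signature by an unknown key: ignore it
        if hmac.compare_digest(sign(key, metadata), sig):
            valid += 1
    return valid >= threshold

trusted = {"alice": b"k1", "bob": b"k2", "carol": b"k3"}
meta = b'{"name": "left-pad", "version": "1.3.0"}'
sigs = {"alice": sign(b"k1", meta), "bob": sign(b"k2", meta)}
assert verify_threshold(meta, sigs, trusted, threshold=2)
assert not verify_threshold(meta, sigs, trusted, threshold=3)
```

The threshold is what limits the blast radius of a single compromised maintainer key, which is exactly the failure mode these NPM incidents keep demonstrating.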
Personally I think these small language are much more exciting than big oil tankers like Rust or Swift.
I’m not familiar with either of those languages, but any idea what the author means by this? I thought Rust has been picking up quite a bit recently.
I understood the author to be talking about the “size” of the language, not the degree of adoption.
I’m not sure that I personally agree that C is a small language, but many do believe that.
He is right though. C’s execution model may be conceptually simple but you may need to sweat the implementation details of it, depending on what you’re doing. This doesn’t make C bad, it just raises the bar.
I had that opinion before Rust, and I’m certainly not speaking on behalf of the Rust team, so in my understanding, the hat is very inappropriate.
(I’m also not making any claims about Rust’s size, in absolute terms nor relative to C)
Or you can just test his claim with numbers. A full C semantics is huge compared to something like Oberon, whose grammar fits on a page or two. Forth is simpler, too. Whereas Ada and Rust are complicated as can be.
I agree that there are languages considerably smaller than C. In my view, there is a small and simple core to C that is unfortunately complicated by some gnarly details and feature creep. I’ve expressed a desire for a “better C” that does all we want from C without all the crap, and I sincerely believe we could make such a thing by taking C, stripping stuff and fixing some unfortunate design choices. The result should be the small and simple core I see in C.
When comparing the complexity of languages, I prefer to ignore syntax (focusing on that is kinda like bickering about style; yeah, I have my own style too, and I generally prefer simpler syntax). I also prefer to ignore the standard library. What I would focus on is the language semantics as well as the burden they place on implementations. I would also weigh languages against the features they provide; otherwise we’re talking apples vs. oranges, where one language simply makes one thing impossible, or you have to “invent” that thing outside the language spec. It may look simpler to only present a 64-bit floating-point numeric type, but that only increases complexity when people actually need to deal with 64-bit integers and hardware registers.
That brings us to Oberon. Yes, the spec is short. I guess that’s mostly not because it has simple semantics, but because it lacks semantics. What is the range of integer types? Are they bignums, and if so, what happens when you run out of memory trying to perform a multiplication? Perhaps they have a fixed range. If so, what happens when you overflow? What happens if you divide by zero? And what happens when you dereference nil? No focking idea.
The “spec” is one for a toy language. That is why it is so short. How long would it grow if it were properly specified? Of course you could decide that everything the spec doesn’t cover is undefined and maybe results in program termination. That would make it impossible to write robust programs that can deal with implementation limitations in varying environments (unless you have perfect static analysis). See my point about apples vs oranges.
So the deeper question I have is: how small can you make a language with
- a spec that isn’t a toy spec
- not simply shifting complexity to the user
- enough of the same facilities we have in C so that we can interface with the hardware as well as write robust programs in the face of limited & changing system resources
Scheme, Oberon, PostScript, Brainfuck, etc. don’t really give us any data points in that direction.
So the deeper question I have is: how small can you make a language with
- a spec that isn’t a toy spec
- not simply shifting complexity to the user
- enough of the same facilities we have in C so that we can interface with the hardware as well as write robust programs in the face of limited & changing system resources
Scheme, Oberon, PostScript, Brainfuck, etc. don’t really give us any data points in that direction.
Good question. There are a few languages with official standards (sorted by page count) that are also used in practice (well… maybe not Scheme ;>):
I know that page count is a poor metric, but it looks like ~600 pages should be enough :)
Here are the page counts for a few other programming language standards:
I know that page count is a poor metric, but it looks like ~600 pages should be enough :)
Given that N1256 is 552 pages, yeah, without a doubt.. :-)
The language proper, if we cut it off starting at “future language directions” (then followed by standard library, appendices, index, etc.) is only some 170 pages. It’s not big, but I’m sure it could be made smaller.
I’ve expressed a desire for a “better C” that does all we want from C without all the crap, and I sincerely believe we could make such a thing by taking C, stripping stuff and fixing some unfortunate design choices. The result should be the small and simple core I see in C.
That might be worth writing up as a hypothetical design. I was exploring that space as part of bootstrapping for C compilers. My design idea actually started with x86 assembler, trying to design a few high-level operations that map over it and also work on RISC CPUs: expressions, a 64-bit scalar type, a 64-bit array type, variables, stack ops, heap ops, conditionals, goto, and Scheme-like macros. Everything else should be expressible in terms of the basics with the macros or compiler extensions. The common stuff gets a custom, optimized implementation to avoid macro overhead.
“ What I would focus on is the language semantics as well as the burden they place on implementation. “
Interesting you arrived at that, since some others and I who talk about verification are convinced a language design should evolve with a formal spec for exactly that reason. It could be as simple as Abstract State Machines or as complex as Isabelle/HOL. The point is that each feature is described precisely in terms of what it does and its interactions with other features. If one can’t describe that precisely, how the hell is a complicated program using those same features going to be easy to understand or predict? As an additional example, a “simple, local change” can show unexpected interactions or state explosion once you run the model. Maybe not so simple or local after all, but that isn’t always evident if you’re just talking about the language in vague English. I was going to prototype the concept with Oberon, too, since it’s so small and easy to understand.
“but because it lacks semantics.”
I didn’t think about that. You have a good point. Might be worth formalizing some of the details to see what happens. Might get messier as we formalize. Hmm.
“So the deeper question I have is: how small can you make a language with”
I think we have answers to some of that but they’re in pieces across projects. They haven’t been integrated into the view you’re looking for. You’ve definitely given me something to think about if I attempt a C-like design. :)
He also says that the issues with memory-safety in C are overrated, so take it with a grain of salt.
He is not claiming that memory safety in general is not an issue in C. What he is saying is that in his own projects he was able to limit or completely eliminate dynamic memory allocation:
In the 32 kloc of C code I’ve written since last August, there are only 13 calls to malloc overall, all in the sokol_gfx.h header, and 10 of those calls happen in the sokol-gfx initialization function
The entire 8-bit emulator code (chip headers, tests and examples, about 12 kloc) doesn’t have a single call to malloc or free.
That actually sounds like someone who understands that memory safety is very hard and important.
I’m not familiar with either of those languages, but any idea what the author means by this?
I’m also way more interested in Zig than I am in Rust.
What I think he’s saying is that the two “big” languages are overhyped and have gained disproportionate attention for what they offer, compared to some of the smaller projects that don’t hit HN/Lobsters headlines regularly.
Or maybe it’s a statement w.r.t. size and scope. I don’t know Swift well enough to say if it counts as big. But Rust looks like “Rubyists reinvented C++ and claim it to be a replacement for C.” I feel that people who prefer C are into things that are small and simple. C++ is a behemoth. When your ideal replacement for C would also be small and simple, perhaps even more so than C itself, Rust starts to seem more and more like an oil tanker as it goes the C++ way.
I agree with your point on attention. I just wanted to say maybe we should get a bit more credit here:
“compared to some of the smaller projects that don’t hit HN/Lobsters headlines regularly.”
Maybe HN, but Lobsters covers plenty of oddball languages, sometimes with good discussions, too. We’ve had their authors join in for a few of them. I’ve kept digging them up to keep fresh ideas on the site.
So, we’re doing better here than most forums on that. :)
TIL, I thought docker (and pretty much any other container runtime) was built around unshare(2), not clone(2).
I personally prefer coredumpctl(1).
I’m having trouble wording this properly: Even though I am happy there is a familiar name taking on the task of ethical AI after Google and Microsoft made their stance clear by firing their own ethical AI teams, Mozilla should probably focus on their existing portfolio’s stability before picking up a project of this magnitude.
Unfortunately, this seems to be very well worded 😭
I agree. I’m not unhappy with the vision, but I’m increasingly worried about Mozilla’s inability to stabilize their core ventures instead of taking on weird side projects that they drop faster than Google does.
100%. If you’re in search of more words, I just remembered this thread by Cade Diehm from a few months ago: https://post.lurk.org/@shibacomputer/109548784095427843
I don’t know, this seems a bit hyperbolic. I’m perfectly happy on Firefox, the VPN worked well when I used it, and I love relay.