I don’t like the design of Enchive.
The process for encrypting a file:
- Generate an ephemeral 256-bit Curve25519 key pair.
- Perform a Curve25519 Diffie-Hellman key exchange with the master key to produce a shared secret.
OK.
- SHA-256 hash the shared secret to generate a 64-bit IV.
Kinda OK; you can justify this complexity by the need for a quick check before decryption ("validate the IV against the shared secret hash and format version") to see whether we got the correct key.
- Add the format number to the first byte of the IV.
OK.
- Initialize ChaCha20 with the shared secret as the key.
This is using the raw multiplication result as a key. It's recommended to hash the result (but not pure SHA-256, as we're already exposing 56 bits of it as the IV) before using it as a cipher key (for example, NaCl uses HSalsa20 as a quick hash for that).
- Write the 8-byte IV.
- Write the 32-byte ephemeral public key.
- Encrypt the file with ChaCha20 and write the ciphertext.
OK. But for big files, it may be worth using chunked authenticated encryption to avoid spilling out unauthenticated plaintext or wasting time (see https://www.imperialviolet.org/2014/06/27/streamingencryption.html and my implementation https://github.com/dchest/nacl-stream-js).
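The chunked scheme described in those links amounts to tagging each ciphertext chunk with its index and a final-chunk flag, so nothing can be reordered, truncated, or appended undetected. A toy Python sketch of just the tagging side (not Enchive's or nacl-stream's actual format):

```python
import hashlib
import hmac
import struct

def mac_chunks(mac_key: bytes, ciphertext_chunks: list) -> list:
    """Tag each chunk with its index and a last-chunk flag, so chunks can't
    be reordered, truncated, or extended without detection."""
    tagged = []
    for i, chunk in enumerate(ciphertext_chunks):
        last = i == len(ciphertext_chunks) - 1
        header = struct.pack(">QB", i, 1 if last else 0)  # 8-byte index + flag
        tag = hmac.new(mac_key, header + chunk, hashlib.sha256).digest()
        tagged.append(header + chunk + tag)
    return tagged
```

The verifier recomputes each tag, checks that indices are sequential, and stops only at the chunk whose last-chunk flag is set, so it never emits unauthenticated plaintext.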
- Write HMAC(key, plaintext).
Here we have three problems.
First is that it uses the same key for HMAC as for encryption. I don't think there's a particular interaction problem between HMAC-SHA-256 and ChaCha20 that would lead to something scary, but this design is not ideal. To fix this and the previous issue in one shot, the authors could use a 64-byte hash function to derive both encryption and authentication keys from the Curve25519 shared key: encr_key || mac_key = SHA512(shared_key); or use HMAC-SHA256 with different personalization strings (encr_key = HMAC-SHA256("EncrKey", shared_key) and mac_key = HMAC-SHA256("AuthKey", shared_key)); or use HKDF.
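A minimal Python sketch of both derivation options (function names are mine, not Enchive's):

```python
import hashlib
import hmac

def derive_keys_sha512(shared_key: bytes):
    """Split a single SHA-512 digest into encryption and MAC keys."""
    digest = hashlib.sha512(shared_key).digest()  # 64 bytes
    return digest[:32], digest[32:]               # encr_key, mac_key

def derive_keys_hmac(shared_key: bytes):
    """Same idea using HMAC-SHA256 with personalization strings."""
    encr_key = hmac.new(b"EncrKey", shared_key, hashlib.sha256).digest()
    mac_key = hmac.new(b"AuthKey", shared_key, hashlib.sha256).digest()
    return encr_key, mac_key
```

Either way, the raw DH output is never used directly as a cipher key, and the two derived keys are independent.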
Secondly, it's MAC-then-encrypt, which exposes the cipher to various attacks before there's a chance to authenticate. Finally, I would also authenticate everything, not just the ciphertext. So I'd use HMAC(mac_key, everything), where everything is the IV, the ephemeral public key, and the ciphertext. This way, the HMAC will be checked before decrypting, and a malicious payload will be rejected early.
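Put together, the encrypt-then-MAC layout suggested above might look like this sketch, where ChaCha20 itself is left out and only the framing and MAC-first check are shown:

```python
import hashlib
import hmac

def seal(mac_key: bytes, iv: bytes, eph_pub: bytes, ciphertext: bytes) -> bytes:
    """Authenticate everything: IV, ephemeral public key, and ciphertext."""
    tag = hmac.new(mac_key, iv + eph_pub + ciphertext, hashlib.sha256).digest()
    return iv + eph_pub + ciphertext + tag

def open_checked(mac_key: bytes, blob: bytes) -> bytes:
    """Verify the tag over the whole header + ciphertext before any decryption."""
    body, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC: reject before decrypting")
    iv, eph_pub, ciphertext = body[:8], body[8:40], body[40:]
    return ciphertext  # hand to the cipher only after the MAC checks out
```

Note the constant-time comparison and that any bit flip anywhere in the blob (not just the ciphertext) fails verification before the cipher is ever touched.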
Enchive uses an scrypt-like algorithm for key derivation, requiring a large buffer of random access memory.
If it’s scrypt-like, why not just use scrypt? I haven’t checked the whole algorithm, but I can already see a drawback: it uses SHA-256 to perform work on memory. Scrypt specifically uses a very fast function (8-round Salsa20) so that it can perform this computation as quickly as possible, which is very important for a memory-hard function.
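For comparison, Python's standard library exposes the real scrypt via OpenSSL; the cost parameters below are illustrative, not a recommendation:

```python
import hashlib

# N = CPU/memory cost (power of two), r = block size, p = parallelism.
# N=2**14, r=8 means the KDF needs 128 * r * N = 16 MiB of RAM.
key = hashlib.scrypt(b"correct horse battery staple",
                     salt=b"some-random-salt",
                     n=2**14, r=8, p=1, dklen=32)
assert len(key) == 32
```

The inner mixing is 8-round Salsa20, as mentioned above, precisely so the memory can be filled and churned as fast as possible.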
To summarize: there’s nothing particularly broken with this design, as far as I can tell from a quick look, but it’s not a solid design, unfortunately.
Enchive's author here. These are all good points. Most of the mistakes are me not knowing any better when I designed it, but, fortunately, none of them is fatal as far as I know.
But for big files, it may be worth using chunked authenticated encryption to avoid spilling out unauthenticated plaintext
I did eventually figure out chunked authentication for myself months later, but too late for Enchive. If I ever redesign the file format, it would definitely use chunked authentication, among other corrections like using EtM.
If it’s scrypt-like, why not just use scrypt?
At the time (early 2017) I couldn’t find a drop-in scrypt library with a friendly license, and I didn’t want to try implementing it myself. A major design goal was ANSI C and no dependencies. As a result, Enchive can easily be compiled just about anywhere, probably even decades into the future (to, say, decrypt some old archives). As evidence of this, you can build it and run it on Windows 98 decades in the past.
I get the feeling most of those shortcomings are caused by direct use of primitives. I suspect the author was trying to avoid dependencies: he bundles optparse.h, which is (mostly) redundant on a POSIX system since getopt(3) exists, along with the other source files. And argon2 not being in there is probably not an accident but a result of how difficult it is to implement and how he'd end up with two hash functions (SHA-256 and BLAKE2 for the argon2 state).
The author might’ve had a better result and less work with naive use of Monocypher, libsodium or TweetNaCl, though TweetNaCl still would’ve let him shoot himself in the foot with raw X25519.
If it’s scrypt-like, why not just use scrypt?
Yeah, it’s like they’re not aware that scrypt comes with a file encryption utility.
I didn’t mean using the file encryption utility itself, but the KDF primitive. Although, indeed, the scrypt utility is great (I use it for my files), but it doesn’t do asymmetrical encryption, which seems to be the point of Enchive.
but it doesn’t do asymmetrical encryption, which seems to be the point of Enchive.
Ah, I missed that part. Hmm, well in that case Enchive seems pretty alright as far as goals are concerned. Hopefully the author will incorporate your suggestions.
So, if you are using FreeBSD, and Gimp, and working with FLIC files, and are dumb enough to either run something random from the net, or let a bad person access your machine with such a file… you’re in trouble.
Somehow I think the union of these sets is in the low 10s, if that.
But hey, this gets attention for the project, and the logo is kinda cute!
dumb enough to either run something random from the net
You literally opened a file from the net to write that. It's not dumb; users should never be blamed for opening files, that's what they do all the time.
By “dumb”, I mean someone blindly loading a file they got from someone saying “hey d00d check out this cool FLIC!!!”. It’s of course entirely possible for a malicious actor to trick even a vigilant user to load a file by social engineering, MiTM attacks, or spoofing in general.
My point is that the attack surface for this particular vulnerability is very small.
I'm all for better security for all users, and the techniques the project is using seem to be bearing fruit. But that doesn't change the fact that this issue is relatively less serious than other issues.
Yeah — I use FreeBSD, sometimes GIMP (but more often Krita), occasionally with files from the internet…
but I have NOT even heard of FLIC before
Do you know if the author of Fossil is looking for contributors, or if they're just thinking out loud?
In the past, D. Richard Hipp was very welcoming to contributions to Fossil. I don't think anything has changed. Your best bet would probably be to start by participating in the mailing list.
There’s also https://github.com/vladfolts/oberonjs
Hah, I was actually curious whether AST would make a move. Good to see he did.
Still, it’s sad that he doesn’t seem to care about ME.
Whether he cares about ME is irrelevant here. By releasing the software under most (all?) free software and open source licenses, you forfeit the right to object even if the code is being used to trigger a WMD - with non-copyleft licenses you agree not to even see the changes to the code. That’s the beauty of liberal software licenses :^)
All that he had asked for is a bit of courtesy.
AFAIK, this courtesy is actually required by BSD license, so it’s even worse, as Intel loses here on legal ground as well.
No, it is not - hence the open letter. You are most likely confused by the original BSD License which contained the so called, advertising clause.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Correct. The license requires Intel to reproduce what’s mentioned in the parent comment. The distribution of Minix as part of the IME is a “redistribution in binary form” (i.e., compiled code). Intel could have placed the parts mentioned in the license into those small paper booklets that usually accompany hardware, but as far as I can see, they haven’t done so. That is, Intel is breaching the BSD license Minix is distributed under.
There’s no clause in the BSD license to inform Mr. Tanenbaum about the use of the software, though. That’s something he may complain about as lack of courtesy, but it’s not a legal requirement.
What’s the consequence of the license breach? I can only speak for German law, but the BSD license does not include an auto-termination clause like the GPL does, so the license grant remains in place for the moment. The copyright holder (according to the link above, this is Vrije Universiteit, Amsterdam) may demand compensation or acknowledgment (i.e. fulfillment of the contract). Given the scale of the breach (it’s used in countless units of Intel’s hardware, distributed all over the globe by now), he might even be able to revoke the license grant, effectively stopping Intel from selling any processor containing the then unlicensed Minix. So, if you ever felt like the IME should be removed from this world, talk to the Amsterdam University and convince them to sue Intel over BSD license breach.
That’s just my understanding of the things, but I’m pretty confident it’s correct (I’m a law student).
Actually, they may have a secret contract with the University of Amsterdam that has different conditions. But that we don’t know.
University of Amsterdam (UvA) is not the Vrije University Amsterdam (VU). AST is a professor at VU.
I’ve read the license - thanks! :^)
The software’s on their chip and they distribute the hardware so I’m not sure that actually applies - I’m not a lawyer, though.
Are you saying that if you ship the product in hardware form, you don’t distribute software that it runs? I wonder why all those PC vendors were paying fees to Microsoft for so long.
Yes, software is licensed. It doesn’t mean that if you sell hardware running software, you can violate that software’s license.
This is the “tivoization” situation that the GPLv3 was specifically created to address (and the BSD licence was not specifically updated to address).
No, it was created to address not being able to modify the version they ship. Hardware vendors shipping GPLv2 software still have to follow the license terms and release source code. It’s right in the article you linked to.
BSD license says that binary distribution requires mentioning copyright license terms in the documentation, so Intel should follow it.
Documentation or other materials. Does including a CREDITS file in the firmware count? (For that matter, Intel only sells the chipset to other vendors, not end users, so maybe it’s in the manufacturer docs? Maybe they’re to blame for not providing notice?)
You have a point with the manufacturers being in-between Intel and the end users that I didn’t see in my above comment, but the outcome is similar. Intel redistributes Minix to the manufacturers, which then redistribute it to the end-users. Assuming Intel properly acknowledges things in the manufacturer’s docs, it’d then be the manufacturers that were in breach of the BSD license. Makes suing more work because you need to sue all the manufacturers, but it’s still illegal to not include the acknowledgements the BSD license demands.
Edit:
Does including a CREDITS file in the firmware count?
No. “Acknowledging” is something that needs to be done in a way the person that receives the software can actually take notice of.
You’re correct, my bad. But “reproduce the above copyright notice” etc. aims at the same. Any sensible interpretation of the BSD license’s wording has to come to the result that the receivers of the source code must be able to view those parts of the license text mentioned, because otherwise the clause would be worthless.
If they don’t distribute that copyright notice (I can’t remember last seeing any documentation coming directly from Intel as I always buy pre-assembled hardware) and your reasoning is correct, then they ought to fix it and include it somewhere.
However, the sub-thread started by @pkubaj is about being courteous, i.e. informing the original author about the fact that you are using their software - MINIX’s license does not have that requirement.
Still, it’s sad that he doesn’t seem to care about ME.
Or just refrains from fighting a losing battle? It’s not like governments would give up on spying on and controlling us all.
Do you have a cohesive argument behind that or are you just being negative?
First off, governments aren’t using IME for dragnet surveillance. They (almost certainly) have some 0days, but they aren’t going to burn them on low-value targets like you or me. They pose a giant risk to us because they’ll eventually be used in general-purpose malware, but the government wouldn’t actually fight much (or maybe at all, publicly) to keep IME.
Second off, security engineering is a sub-branch of economics. Arguments of the form “the government can hack anyone, just give up” are worthless. Defenders currently have the opportunity to make attacking orders of magnitude more expensive, for very little cost. We’re not even close to any diminishing returns falloff when it comes to security expenditures. While it’s technically true that the government (or any other well-funded attacker) could probably own any given consumer device that exists right now, it might cost them millions of dollars to do it (and then they have only a few days/weeks to keep using the exploit).
By just getting everyday people to adopt marginally better security practices, we can make dragnet surveillance infeasibly expensive and reduce damage from non-governmental sources. This is the primary goal for now. An important part of "marginally better security" is getting people to stop buying things that are intentionally backdoored.
Do you have a cohesive argument behind that or are you just being negative?
Behind what? The idea that governments won't give up on spying on us? Well, it's quite simple. Police states have happened all throughout history, governments really really want absolute power over us, and they're free to work towards it in any way they can... so they will.
They (almost certainly) have some 0days, but they aren’t going to burn them on low-value targets like you or me.
Sure, but do they even need 0days if they have everyone ME’d?
They pose a giant risk to us because they’ll eventually be used in general-purpose malware
Yeah, that’s a problem too!
Defenders currently have the opportunity to make attacking orders of magnitude more expensive, for very little cost. [..] An important part of “marginally better security” is getting people to stop buying things that are intentionally backdoored
If you mean using completely “libre” hardware and software, that’s just not feasible for anyone who wants to get shit done in the real world. You need the best tools for your job, and you need things to Just Work.
By just getting everyday people to adopt marginally better security practices, we can make dragnet surveillance infeasibly expensive and reduce damage from non-governmental sources.
“Just”? :) I’m not saying we should all give up, but it’s an uphill battle.
For example, the blind masses are eagerly adopting Face ID, and pretty soon you won’t be able to get a high-end mobile phone without something like it.
People are still happily adopting Google Fiber, without thinking about why a company like Google might want to enter the ISP business.
And maybe most disgustingly and bafflingly of all, vast hordes of Useful Idiots are working hard to prevent the truth from spreading - either as a fun little hobby, or a full-time job.
It reads to me like he just doesn’t want to admit that he’s wrong about the BSD license “providing the maximum amount of freedom to potential users”. Having a secret un-auditable, un-modifiable OS running at a deeper level than the OS you actually choose to run is the opposite of user freedom; it’s delusional to think this is a good thing from the perspective of the users.
Oh, it’s still not lost. ME_cleaner is getting better, Google is getting into it with NERF, Coreboot works pretty well on many newish boards and on top of that, there’s Talos.
We had to build a custom updater for Peerio (reusing parts from electron-builder and the Electron-native updater), because none of the current ones satisfied our requirements. While electron-builder’s updater and Squirrel.Mac verify signatures, they do so using the native code signing tools and checking that the company name in the certificate matches. Instead, we publish a plain-text manifest signed with OpenBSD’s signify (our version, but compatible format), which looks like this:
untrusted comment: Peerio Updater manifest
RWRwKJ91Y/oYjMqOB16Jf5oLxuCkUGwPCM8JOMNtvDTwNuq0SbTdMMPRTfHcVX438LUCx39fAi2rirgq1MoG9dVDxT1goV6omwE=
version: 2.37.1
urgency: mandatory
date: 2017-09-15T18:16:09.343Z
linux-x64-file: https://github.com/PeerioTechnologies/peerio-desktop/releases/download/v2.37.1/peerio-2-linux-x86_64.AppImage
linux-x64-sha512: d135a90809eace24cd741c97bb0044c5ab9b76c65e8bd6d6a8711f47e36e6070310b9e40b08f43660491ff3a29c89ce2a8bd452bf9912ee615a9c378db7b33d9
linux-x64-size: 69271552
...
We distribute two public keys with the program (one main and one backup); the program downloads this manifest, verifies the signature, downloads the installer file for the current platform, verifies its hash, and then performs the installation. As soon as we have the update, we just publish the new manifest to the location that the app checks and that's it. Static files everywhere.
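Leaving the signify check aside, the hash-verification step of that flow is simple enough to sketch (field names are taken from the manifest above; the parsing is my simplification, not Peerio's actual code):

```python
import hashlib

def parse_manifest(text: str) -> dict:
    """Parse the key: value lines of the plain-text manifest."""
    fields = {}
    for line in text.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            fields[key] = value
    return fields

def verify_download(data: bytes, fields: dict, platform: str = "linux-x64") -> bool:
    """Check the downloaded installer against the manifest's size and SHA-512."""
    if len(data) != int(fields[platform + "-size"]):
        return False
    return hashlib.sha512(data).hexdigest() == fields[platform + "-sha512"]
```

Because the manifest itself is signed, a valid signature plus a matching hash ties the downloaded binary back to the publisher's key, with no PKI in the loop.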
Nothing new to this, it’s what secure updaters have been doing for a long time (e.g. Sparkle on Mac), but somehow with the web world, this has been forgotten and rewritten, with Node update servers and complicated code signing with unreliable PKI.
This also allows us to implement a public chain of trusted hashes in the future, making sure everyone gets the same update in a verifiable way.
A simple signed plaintext manifest is also convenient to parse with Unix tools, making it easy to verify the signature manually before downloading binaries (you'll have to get our signify public keys for that, of course).
Is it time to move back to jQuery and Prototype.js? If these mysterious patents are about things like "virtual DOM", comparing trees of state, or something derived from FRP, then using Vue, Preact, Angular 2, Cycle, Riot, Elm, or reflex-dom will infringe them too.
Then let’s wait 20-30 years until these patents expire and everyone finally can use these nice state-management things.
[Comment removed by author]
New? Not really. Debian has a well-stated social contract: https://www.debian.org/social_contract
Huh. It’s Debian’s “social contract”. Do you see the difference between somebody publishing “a set of commitments that we agree to abide by” and a non-existing unspecified thing that the poster above requires Facebook to abide by just because they published and maintain some open source code?
So, the issue is that they deliberately added this “patent clause” to induce fear to everyone who thinks about suing Facebook? And not that using React is risky?
Does Facebook actually have patents covering React? I’ve looked around a few times and have never seen a link to an actual patent covering it. I would assume there’s gobs of prior art for anything going on in there.
And yet, there are a number of big companies which undoubtedly have big legal teams, and which seem to be okay with using React somewhere. Just cherry-picking some from the list [0]
Airbnb, American Express, Chrysler, Atlassian, eBay, Expedia, Microsoft, NHL, Netflix, New York Times, Salesforce, Twitter, Visa, Walmart… At least some of these companies must have had their legal teams look at the license and decide it was okay to use React. Which makes me wonder if the hysteria (this is a bit of hyperbole, but it does seem to have some people really worked up) is justified.
[0] https://github.com/facebook/react/wiki/sites-using-react
You’re assuming they’re using the “off-the-shelf” license. There’s nothing preventing them from negotiating a different license with Facebook. Now, I haven’t seen anything showing that this has happened, but it’s a fairly common practice to have individualized contracts with traditional commercial software, so it wouldn’t shock me.
It would surprise me, though: why would Facebook enter into an agreement with these big-name companies that altered the React license out of Facebook's favor? I don't think all these companies did that (and I didn't list every large or well-known company that's on that link, by the way), and unless they're paying FB to use React, I just don't know why FB legal would bother with all the work. Individual negotiations with legal teams at all these big companies to reach a mutually agreeable license, just so a dev team can use React? It seems really unlikely. Just as unlikely as all these companies paying FB to get some kind of commercial license for React, when there is no suggestion that such a thing exists.
I assume they are paying. Just because things don’t have a price list or an explicit offer of a commercial license doesn’t mean you can’t get one.
Right, I get that. I just don’t think it’s actually happening. Since there’s no evidence either way I guess we won’t be able to figure it out!
I'm afraid there's a slight misunderstanding of the 2FA concept and a confusion between actual factors and ways to circumvent them: the SMS token does require a possession factor, the actual SIM card, which in theory should be owned by only one user. The same thing applies to the SafeWord hardware token, which requires the possession of a physical device.
The codes used are there to verify your claim that you possess something, just like the dents on a key. Anyone with a key with the same dents will be able to open the same doors as you, and I think it would be hard to argue that keys aren't a possession factor. It just so happens that dents are a lot harder to copy than a set of numbers (on the other hand, those numbers expire after a minute while the dents are static), but they can still be copied, and information on how to copy them can be shared; obviously that won't make the key a "knowledge factor".
At the end of the article, I saw that some apps are marked as 2FA or 2SA. However, some of them are mobile apps, and a lot of people use their fingerprint scanner to unlock their devices, so in some cases you'd need the password, the device itself, and your fingerprint. I'd argue that in some cases you even have 3FA.
Unfortunately SMS codes do not prove possession of the SIM card. Would that it were so - the SIM card does in fact implement a (reasonably) secure authentication process between the phone and network & in principle this could be used to bootstrap a 2FA authentication system.
All an SMS code proves is that the recipient was able to view texts sent to a given phone number: That’s it.
Given that a) phones are insecure, b) phone networks are insecure and c) the SS7 layer that lets mobile networks implement roaming is insecure, SMS messages are not a secure way to deliver authentication tokens for high-value assets.
Unfortunately SMS codes do not prove possession of the SIM card. […] All an SMS code proves is that the recipient was able to view texts sent to a given phone number. That’s it.
The point of the article was that the SMS code belongs to the same factor of authentication (“knowledge”) as the account credentials so using an SMS code was a two-step authentication, while I argued that the SMS code belongs to the possession factor as you need to own something (SIM card, in my example) which will make it a 2FA. The author pointed out that he ignored security issues like MITM or SIM cloning and so on and focused on the factor vs step thing.
For all intents and purposes, the SMS code is a different factor (you possess something, like a phone number or a SIM card), not just another step, like a "what's your mother's maiden name" question would be (that's still a knowledge factor).
Except you don’t possess a phone number - it’s an abstract concept that relies upon the implementation of the phone network for it to have any meaning.
This is the underlying reason why using SMS to implement a “something you have” 2FA doesn’t work - your phone number is not a physical object & doesn’t have the properties of a physical object that such 2FA authentication schemes rely on for their security. A physical second factor requires that only the owner of the object can view the code by virtue of having it in their possession & that the object cannot be duplicated without the knowledge of the owner. Once you break these properties you no longer have a physical (“something that you have”) second factor.
SMS 2FA is a one time password communicated to you over an insecure, untrusted channel. That’s all it is.
Would you consider something like the Danish “NemID” system, which mails you a physical card of one-time-use codes, legitimate 2FA? On the one hand you do really possess a physical thing that’s needed for the challenge-response. On the other hand, like a phone number, its delivery relies on an abstract concept and system of infrastructure (a mailing address and postal system). Besides the mail/phone difference the other main difference is that the phone relies on this delivery method for each individual usage, while the NemID system batches it, mailing you a card of 140 challenge-response numbers where the individual uses then happen with it being in your possession.
Well, it’s more secure than SMS 2FA at least!
If you want to get detailed about definitions, then I would personally say that sounds like a perfectly legitimate second factor scheme, but it’s not a complete “thing that you have” second factor because the thing (the list of one time passwords) can be duplicated without the owner being aware of it. (Obviously you can reduce the impact of this by requiring the use of the passwords in order, so that the holder notices when a factor fails for them. That doesn’t prevent someone else logging in with a copy of the codes at least once however.)
Likewise, SMS codes as a one-time-password 2FA would make a perfectly fine second factor if the delivery system was actually secure: Unfortunately, it’s not secure :(
I think one of the reasons people get confused about these definitions is that physical tokens used for “something you have” second factors all generate codes, which makes them think the codes are the point. For a physical factor the point of the code is to prove possession of the factor in a secure fashion: It’s the possession of the object that’s the crucial thing, not the code.
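That's exactly how TOTP tokens work: the displayed code is an HMAC of the current 30-second window under a secret that never leaves the token, so a valid code is evidence of possession of the secret, not knowledge worth memorizing. A minimal RFC 6238-style sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Derive a short-lived code from a secret only the token holder has."""
    counter = int((time.time() if now is None else now) // timestep)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The code itself is worthless a minute later; what matters is that only a holder of `secret` could have produced it at that moment.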
(As ever with linguistic definitions, none of these things are set in stone - there’s probably a scheme we can come up with that sits exactly on my personal boundary between a ‘thing that you have’ 2FA and other systems! Is a written down password still ‘a thing that you know’? Not really, but it can make up part of a perfectly good authentication scheme that’s good enough for the threat model it’s designed to protect against.)
I'm not arguing that implementing 2FA based on SMS codes is ideal; I'm saying that the usage of an SMS token belongs in the possession factor instead of the knowledge factor, which was what the article claimed.
Stretching definitions a bit, I might argue that the most one can really say about an SMS token is that it demonstrates knowledge of the target recipient’s phone number.
Which makes it a knowledge factor rather than a possession factor, even if it’s intended to be the latter.
Yes, that’s what I was saying: the intention is to be a possession factor. The original article said this:
While MITM attack capability is something that should be considered when evaluating overall security, it’s irrelevant when evaluating authentication factors.
At the same time, I also agree with the fact that poor security will convert what’s intended as a different factor to a knowledge factor (an example would be using the phone’s camera as a biometric sensor but it would also validate a photo of the user).
SMS token does not require the actual SIM card to receive it. It’s transferred via a channel between the website and the operator (and most likely another service between them) and then from the operator to the phone. As you can see, it’s not a possession factor: everyone in the chain sees the token and decides its fate.
(I didn’t read the article because it was deleted.)
The intent of the SMS token is to prove that you own the phone number (which means physical access to the SIM card), so it’s used for the possession factor. Is it exploitable if one of the parties handling the SMS token is compromised in any way? Yes, of course, but the same can be said about physical keys as anyone handling your car keys, for example, could be able to copy them.
Yes, of course, but the same can be said about physical keys as anyone handling your car keys, for example, could be able to copy them.
Indeed, which is precisely why car manufacturers implemented radio keys as standard that do a challenge / response with the car to demonstrate that they really are the one true authenticated key.
Of course, car manufacturers were prime NIH merchants who wouldn’t know a secure cryptosystem if one bit them on the backside, so that didn’t work out too well but the intent was there… (Maybe the current generation of car key cryptosystems is actually secure? I wouldn’t like to bet on it though.)
I appear to be hitting an ssl exception on this URL. Something about the certificate issuer being unknown.
@tedu hasn’t gotten to the book about CA infrastructure yet
Lol. Oh he has. @tedu went further to launch a small-scale experiment on the psychological effects of highly-technical users encountering SSL problems on the homepage of someone they expect understands security. Aside from personal amusement, he probably focused on categorizing them from how many ignore them to quick suggestions to in-depth arguments. He follows up with a sub-study on the quality of those arguments mining them for things that will appeal to the masses. He’ll then extrapolate the patterns he finds to discussions in tech forums in general. He’ll then submit the results to Security and Online Behavior 2018.
Every tedu story on Lobsters having a complaint about this is the fun part of the study for him. A break from all the tedium of cataloging and analyzing the responses. On that, how bout a Joker meme: "If a random site by careless admins generates CA errors, the IT pros think that's all part of the plan. Let one security-conscious admin have his own CA, and everybody loses their minds!"
Well, domain names are scarce in a way that RSA keys aren’t, and have unevenly distributed value. My domain name was not randomly generated. :)
tedunangst.com name server ns-434.awsdns-54.com.
tedunangst.com name server ns-607.awsdns-11.net.
tedunangst.com name server ns-1775.awsdns-29.co.uk.
tedunangst.com name server ns-1312.awsdns-36.org.
Did you ask for people to add your nameservers to their resolver roots?
Domain names and RSA keys are equally scarce. It’s all protection money, for root servers and for root CAs.
[Comment removed by author]
This comment is totally unsupported by data, the Chrome team in particular has done a ton of research which has improved error adherence: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43265.pdf in particular, but there’s others as well.
The past few years have featured the greatest improvement in both the quality and quantity of HTTPS on the web since TLS was introduced, and it’s been supported by careful research on both the crypto side and the UX side.
Huh? The situation was much worse: browsers just displayed OK/Cancel dialog and most users just clicked OK. Today it’s harder for users to click OK, and this single change of UI made many more users secure against MiTM attacks. I don’t have links handy, but those Chrome and Firefox “assholes” did a lot of research regarding this, and made browsing more secure for the majority of non-technical people.
[Comment removed by author]
The scale of the problem they solve is a lot larger than what most people will ever work on. The fan-out nature of the product is challenging enough, but there’s more.
The 140 chars thing is inconsequential. It would be the easiest thing to change.
The 140 chars thing is inconsequential. It would be the easiest thing to change.
Agree with everything until this part. I think it’s very likely that there is some critical RDBMS with a varchar(140) column that’ll make “easy” an actual nightmare with people waking up in cold sweats.
140 characters are counted as 140 Unicode grapheme clusters, so the byte size is already potentially a lot larger and variable.
True. In MySQL you’d likely set the collation to utf-8 or whatever. That doesn’t make doubling or eliminating the character limit altogether any less difficult, though?
In MySQL you’d likely set the collation to utf-8 or whatever.
Fun fact: you’d want the “whatever” https://medium.com/@adamhooper/in-mysql-never-use-utf8-use-utf8mb4-11761243e434
But here’s the rub: MySQL’s “utf8” isn’t UTF-8.
The “utf8” encoding only supports three bytes per character. The real UTF-8 encoding — which everybody uses, including you — needs up to four bytes per character.
MySQL developers never fixed this bug. They released a workaround in 2010: a new character set called “utf8mb4”.
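A quick way to see the difference, sketched in Python (the byte counts below are a property of UTF-8 itself, not of MySQL):

```python
# Real UTF-8 uses up to 4 bytes per code point, but MySQL's "utf8"
# (utf8mb3) only stores up to 3 -- so any 4-byte character, which
# includes most emoji, needs utf8mb4.
for ch in ["a", "é", "中", "😀"]:
    encoded = ch.encode("utf-8")
    print(f"U+{ord(ch):05X} ({ch!r}): {len(encoded)} byte(s)")
# "a" is 1 byte, "é" is 2, "中" is 3, and the emoji is 4 --
# the one that breaks a MySQL "utf8" column.
```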
A few years ago they had a bug for a day or so which allowed much longer tweets, so I doubt they have this hard limit anywhere except for the validation code.
Twitter is all about pushing ads and trying to find ways to monetize their users. They have over 3k employees, and I have no idea WTF they’re doing to be honest. The site has terrible performance, and it’s buggy as hell. If you look at the source for the page, it’s downright nightmarish. They keep adding shit like moments that nobody wants or ever asked for, while ignoring actual user requests like the ability to edit tweets.
I started using Mastodon recently, and it’s just a better experience all around. The core functionality of Twitter is not that hard to implement. If you’re not trying to monetize, then you can provide a much better experience for the users.
Personally, I’d really like to see the internet go back to being a distributed system where anybody can run a server and interact with people, as opposed to the current centralized model where a few sites dominate all of social media.
Running your own servers is cheaper and easier than ever. You can get a Digital Ocean droplet for 5 bucks a month nowadays, and the prices are only going down.
Meanwhile, setting up and managing apps like Mastodon has become much easier as well thanks to Docker. Run the container that the maintainer packages, and you can get it up and running in minutes.
I think Mastodon is a great example that this model absolutely does work today. I also think that it’s more robust than the startup model.
Mastodon is open source, and it will be around as long as people want to use it. The features get added based on user demand, as opposed to demand of investors. Anybody can run their own node, and set it up any way they like. No central entity decides how Mastodon is used, or what it’s used for.
This is what the internet was meant to be. We took a terrible detour with walled gardens like Facebook and Twitter, but it doesn’t have to be that way.
If you’re not trying to monetize, then you can provide a much better experience for the users.
There are tons of shitty FOSS projects out there. I am an open source enthusiast, my job title is literally “Open Source Software Engineer.” I love FOSS software. But the idea that it’s better because you’re not trying to make money is just not an argument I’d come close to making. I love using Linux on the desktop but it’s way worse for most users than Windows or MacOS. Open source is great because it’s about Freedom, not because it provides a superior user experience. Sometimes it does, sometimes it doesn’t. It really depends on the product and what you’re using it for.
This. Often proprietary software is better quality because more man-hours are spent on it. However, despite this I will use Free Software over proprietary any day, because it gives me something proprietary can never give me: freedom.
Of course, open source is not a guarantee that you’ll end up with a great piece of software. However, I’m talking about the specific difference in motivation for Twitter and Mastodon developers. Personally, I find Linux far preferable to Windows as a desktop as well, but MacOS is definitely a lot more polished than Linux.
They make a considerable amount of money. Is it net profit? No. Mostly because they have an insane head count.
It was inevitable.
If only it made him complete a quest with a random character in adventure mode before continuing to update his system. :D
This is one good reason why I always use full, explicit paths in my scripts.
This is one good reason why I always use full, explicit paths in my scripts.
but then they are not portable
Just always use /bin/bash and don’t care about distros/BSDs that don’t care enough about their users to place bash there. Problem solved for 99% of users. ;)
or, you know, ignore developers that don’t care enough about their downstream packagers and users to learn about /usr/bin/env? Problem solved for 99% of users who care about cross-platform software.
Not all distros may have env in /usr/bin, so not necessarily an improvement over the extremely common /bin/bash. Then there’s the problem of what /usr/bin/env df might return…
On NixOS, env is the only thing in /usr/bin, so that’s at least one distro that developers can avoid breaking by using it.
IME, globally /usr/bin/env is more likely to exist than /bin/bash. The person who has this dwarf fortress issue seems to have done foolish things to get df to be dwarf fortress so I don’t think this situation is a valid motivator for something that is closer to being a standard (/usr/bin/env) than something that’s not (/bin/bash).
As long as neither /bin/bash nor /usr/bin/env is a standard, there can be issues. In addition, there is no agreed-upon registry for reserving the names of executables.
Keep in mind, for this to happen, the user probably changed the system default PATH to put Dwarf Fortress first. sudo usually scrubs the environment to default settings unless you’ve taken steps.
Read the comments on the answer. He dropped a symlink into /usr/local/bin to make the command available to him. /usr/local/bin/df ?
The original df is in /bin. He placed another df in /usr/local/bin. The default PATH on Ubuntu has /usr/local/bin before /bin, so his df gets executed instead of the system one.
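The lookup described above can be sketched as a toy model in Python (the file set and PATH here are illustrative; a real shell would check the filesystem and the executable bit rather than a hardcoded set):

```python
import os

# Toy model of shell command resolution: walk PATH in order and
# return the first directory that contains the command name.
def resolve(cmd, path_dirs, existing):
    for d in path_dirs:
        candidate = os.path.join(d, cmd)
        if candidate in existing:  # stand-in for os.path.isfile + X_OK check
            return candidate
    return None

# Hypothetical state mirroring the situation above: a symlinked
# Dwarf Fortress "df" plus the coreutils df.
existing = {"/usr/local/bin/df", "/bin/df"}
path = ["/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin"]
print(resolve("df", path, existing))  # prints /usr/local/bin/df, not /bin/df
```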
Why would they use df? Did they not know of the other df? Or did they just not care? Even if someone else set the PATH variable and it isn’t your fault, at best it is confusing; at worst someone messes up an install/copy/backup script, with the potential to hose their system.
Not all the world is Unix. I can’t confirm with cursory searches, but given the character set choice (CP437) I strongly suspect that Windows was the original platform.
I found the code base size statistics interesting as well:
Here are some interesting FreeBSD 11 code stats from The Design and Implementation of the FreeBSD Operating System (2nd edition): https://imgur.com/a/sF3aV
I think it’s actually bigger than all the evaluated, secure kernels combined. It might be bigger than the TCBs of all the security-oriented filesystems combined. I’m not sure about that as much, since at least one might have been big. I’d have blocked it from a secure system before, but now I’m certain. User-mode or it’s gone.
oh wow. I remember someone claiming that most of FreeBSD code is Ethernet drivers… they were wrong :D
It may be true! These stats don’t include any drivers or machine-dependent code (“not shown are 2,814,900 lines of code for the hundreds of supported devices”).
Not sure where the discrepancy comes from: the linked slides claim ~9M LOC, while the book claims, for amd64, 1.5M (machine-independent) + 0.5M (machine-dependent) + 2.8M (drivers) ≈ 4.8M. (I doubt there’s twice as much machine-dependent code for other archs.)
You’re right, but machine-dep code stats don’t include drivers (see the update in my comment above).
CVE and vulnerability statistics are nonsense. How often do we have to explain this?
Hmm, they didn’t just count CVEs; according to the slides, they did a three-month audit of BSDs and then made conclusions based on the found bugs. So, although close, it’s not exactly “vulnerability statistics”.
[Comment removed by author]
A set of requirements, good design, implementation, and strong verification of each by independent parties. It’s what was in the first security certifications. The resulting systems were highly resistant to hackers. At B3 or A1 level, that usually showed during the first pentests, where evaluators would find very little or nothing in terms of vulnerabilities.
That’s a great presentation despite deficiencies I’ll overlook. Especially on the relationship between what vulnerability researchers focus on and what the CVE lists show. A good example of this I’ve been discussing in another thread is OpenVMS. It lives up to its legendary reliability as far as I can tell so far, but I learned that its security was an actual legend: a mix of myth and reality. The reality was better architecture for security than its competitors back in the day, attention to quality in implementation, and low CVEs in practice with a famous DEFCON result. I figured what was actually happening is that most hackers didn’t care about it or just couldn’t get their hands on the expensive system (same with IBM mainframes/minicomputers). I predicted they’d find significant vulnerabilities in it, which happened at a later DEFCON. So, nice work, highly reliable, and not as secure as advertised by far. ;)
Another good example to remember is the Linux kernel. I slam it on vulnerabilities, but that’s because they (esp Linus) don’t seem to care that much. The vulnerability count itself is heavily biased by its popularity, like Windows’ once was before Lipner, of high-assurance security fame, implemented the Security Development Lifecycle. I’ll especially note the effect of CompSci and vendors of verification/validation tools. They love hitting Linux since it’s a widely-used codebase with open code. Almost every time I see a new tool in static analysis, fuzz testing, or whatever, they apply it to the Linux kernel or major programs in the Linux ecosystem. They find new stuff inevitably, since the code wasn’t designed for security or simplicity like OpenBSD or similar projects. So, there’s more to report just because there are more eyeballs and analysis in practice instead of just in “many eyeballs” theory. The same amount of attention applied to other projects might have found a similar number of vulnerabilities, more, fewer, or who knows what.
nickpsecurity:
So, there’s more to report just because there’s more eyeballs and analysis in practice instead of just in “many eyeballs” theory
That was one of the conclusions from Ilja as well, if I read it right:
“Say what you will about the people reviewing the Linux kernel code, there are simply orders of magnitude more of them. And it shows in the numbers”
A more interesting question is that of password reset. Does one store a random password that’s e-mailed (and/or texted, though this might incur additional cost to the operator) to the user? The details matter: can one reset a password when a temporary one has already been issued? For how long is the temporary one valid? Can the existing one still be used? If the existing one is used, is the temporary one wiped out? Should there be intermediary “security questions” before issuance of a temporary token (as per the OWASP recommendation)?
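One possible set of answers to those questions, as a minimal Python sketch (the names, the in-memory store, and the 30-minute TTL are my own illustrative choices, not an OWASP recommendation): store only a hash of a single-use, expiring token, replace any previously issued token on reissue, and leave the existing password valid until the reset completes.

```python
import hashlib
import secrets
import time

RESET_TTL = 30 * 60  # seconds; an illustrative choice
_pending = {}        # user_id -> (token_hash, expires_at); one token per user

def issue_reset_token(user_id):
    token = secrets.token_urlsafe(32)  # this is what gets e-mailed/texted
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    # Issuing a new token wipes out any previously issued one.
    _pending[user_id] = (token_hash, time.time() + RESET_TTL)
    return token  # the plaintext token is never stored server-side

def redeem_reset_token(user_id, token):
    entry = _pending.get(user_id)
    if entry is None:
        return False
    token_hash, expires_at = entry
    if time.time() > expires_at:
        del _pending[user_id]  # expired tokens are purged
        return False
    presented = hashlib.sha256(token.encode()).hexdigest()
    if not secrets.compare_digest(token_hash, presented):
        return False
    del _pending[user_id]  # single use: a redeemed token cannot be replayed
    return True
```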
And now they’re trying to persuade every project they use to switch to Apache License 2:
I wish the ASF were still using APLv1. It’s sad that the US legal system and patent situation caused this mess. The ASF is a very US-centric organisation (even though they don’t tend to view themselves as such), and from the perspective of a country where software patents are not (yet) a thing, the differences between APLv1 and APLv2 appear to be a solution looking for a problem.
Even in the US, this feels like a solution looking for a problem. BSD licenses have long been considered to provide an implicit patent grant (by the very wording: “Permission is hereby granted to use, copy, modify and distribute for any purpose…”). http://en.swpat.org/wiki/Implicit_patent_licence
And now they’re trying to persuade every project they use to switch to Apache License 2
No, they are asking politely if the projects might be willing to consider changing their licensing to be compatible. There is no persuasion going on by ASF people (which I assume you mean by “they”).
Maybe I used the word incorrectly, but to me a polite request to change the license, or else an organization influential in the open-source world will stop using the product, feels pretty close to persuasion.
I don’t see any major problem with them trying to persuade React and RocksDB to use a different license (in fact, I welcome it, personally). What they aren’t trying to do is coerce RocksDB and React to use the APL2. That would be a very different situation.
[Comment removed by author]
‘pure’ implementations of libraries (for interpreted languages) are generally popular (imo because figuring out how to configure your system to support an unfamiliar build toolchain is wrongly seen as more difficult than porting and maintaining code).
It also produces huge and slowish code. My artisanal hand-ported versions are faster and smaller than libsodium compiled with emscripten, although this will probably change with WebAssembly.
While somewhat true: if full ports were never done, we would all collapse under the weight of the setup requirements of the shittiest languages you can find.
It must be noted that this has happened a lot more times if one considers ccTLDs (which are TLDs). In fact, .cs has died twice. Once for Czechoslovakia and once for Serbia-Montenegro https://en.wikipedia.org/wiki/.cs
Indeed. I still see advertisements on trucks, etc. that have email addresses in the dead .yu zone here in Montenegro. It seems that the transition period of 3 years was too short. On the other hand, .su (for the Soviet Union) is still active.