1. 9

    The black market would have paid much more for these exploits than the chump change most of the companies gave him as a bug bounty (if they paid at all).

    Given that we all know how much money is wasted in big corporations everywhere, they should drastically increase their bounties if they really want to motivate people to become and remain white hats.

    1. 5

      I agree with you that bounties should be much higher, but I think there’s also substantially higher risk involved with the black hat world, so the bounties shouldn’t actually need to have perfect parity.

      There’s also the risk of “cobra effect” if your bounties get too high, where your employees or vendors are eventually incentivized to secretly collaborate with “researchers” to introduce and “find” security flaws.

      1. 3

        Is it illegal or unlawful to sell computer exploits?

        1. 7

          IANAL, but yes-ish. Even if the accused is eventually found not guilty, broadly worded laws like the Computer Fraud and Abuse Act (CFAA) or the DMCA are immensely powerful tools for silencing and intimidating security researchers. There are similarly intimidating laws in various jurisdictions (Germany has the “Hacker paragraph”, which seems worse than the CFAA imho).

          For things to be research-friendly, bug bounty programs typically have to explicitly promise not to sue. E.g., https://blog.mozilla.org/security/2018/08/01/safe-harbor-for-security-bug-bounty-participants/

    1. 17

      I have yet to hide any tag on Lobsters, but if there were a tag for these stupid OOP vs FP hot takes I would hide them in an instant.

      1. 16

        I’d generalize that to every article “Uncle Bob” writes.

        1. 3

          But you would miss the discussions on Lobsters, which are often better than the OP ;)

          1. 2

            Same; I think I’d hide even an OOP tag by itself, never mind FP.

          1. 8

            https://rmmh.github.io/abbrase/ walks Markov chains. Memorable and doesn’t lose entropy.

            1. 3

              That method would still need 12-13 words to get the 127 bits of entropy. But I like that it helps with the total length of the final password without losing entropy.
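
              Rough arithmetic, assuming an abbrase-style list of 1024 word prefixes (i.e. 10 bits per word; the list size is an assumption):

```python
import math

list_size = 1024                 # assumption: abbrase-style 1024-entry prefix list
target_bits = 128                # the entropy target discussed above
bits_per_word = math.log2(list_size)                   # 10 bits per word
words_needed = math.ceil(target_bits / bits_per_word)  # 13 words
```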

              1. 4

                I’m curious where her 128-bit requirement came from. That’s a typical minimum these days for symmetric crypto keys, but that assumes a lot about the threat model and the attacks that will be mounted against the crypto. A password has to handle two threat models:

                • Online attacks. If someone has access to the authentication interface for the thing that you’re trying to password protect, can they guess your password?
                • Offline attacks. If someone leaks the password database, can they generate your password from the salted hash?

                For online attacks, 50 bits of entropy seems like plenty. If you limit logins to one per second with a given username, then it would take 2^49 seconds on average to guess a 50-bit password. That’s almost 20 million years, so it’s far more than you’d need for this threat model.
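
                The arithmetic behind that figure:

```python
avg_guesses = 2 ** 49                    # expected guesses for a 50-bit password
seconds_per_year = 60 * 60 * 24 * 365
years = avg_guesses / seconds_per_year   # roughly 17.9 million years at 1 guess/second
```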

                For offline attacks, assume the hash is properly salted, so you have to attack each password independently. Most password-hashing algorithms are designed to be slow (and difficult to implement as fixed-function circuits). If you assume that the person has been incompetent and used SHA256, then this starts to be a bit worrying: a single GPU will crack a 50-bit password in around 10 days on average (using the 622 million SHA256 hashes per second from Some Guy On the Internet). If they’re using Argon2, this is a lot better. It looks as if GPU hash rates for Argon2 (which is designed not to get much speedup from GPUs or from cheap fixed-function silicon or FPGAs) are still under 20,000/second. That means we’re looking at about 8-10 years of GPU time per password for offline attacks. Of course, these attacks are intrinsically parallel, so if you wanted to crack a single specific password you could just throw more GPUs at it. If you’re using a cloud provider, one GPU for a year and 365 GPUs for a day cost the same, so you might as well throw a load of them at the problem. Using Azure’s spot pricing, cracking a single 50-bit Argon2-hashed password would cost a bit under $10K.
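
                The SHA256 number checks out (using the 622 million hashes/second figure quoted above):

```python
rate = 622e6                # SHA256 hashes per second on a single GPU (figure from above)
avg_guesses = 2 ** 49       # expected guesses for a 50-bit password
days = avg_guesses / rate / 86400   # about 10.5 days on average
```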

                Next comes the security-economics bit: if it would cost $10K to crack your password, what is the value of the stuff it would give access to? If it’s a lot more than $10K, then you should worry about your 50-bit password if offline attacks are in your threat model. Typically, however, they are not. For high-value targets, the password is provided to some isolated hardware / software that stores the real encryption key. For example, most of my day-to-day work use is protected by a 6-digit PIN that is used to authenticate me to the TPM. That PIN is completely useless if you steal it unless you also steal the TPM. Once I’ve logged in, the TPM will provide RSA signatures with a key that the TPM is designed to make impossible to extract. To impersonate me to a remote system, you’d need to extract that key, not my password. At that point, the key is significantly more secure than the OS kernel, so any sensible attacker would look for a browser or OS exploit to compromise the endpoint. Adding more entropy to my password would not make life any harder for an attacker.

                All of that said, I really like the abbrase model. It’s a shame that it can’t actually be used in most places as-is, because so many things require you to enter a password with a mix of upper-case, lower-case, numbers and symbols. I suppose you could just append something like A1; to the end of every password…

              2. 1

                That’s interesting! I think you could actually have a system with some of the advantages of both of these methods, if you made an alternative diceware list with a way of uniquely shortening each word. I.e., one reason diceware is nice is that you can use it without relying on a PRNG. I suppose you could also make a version of this app that takes any sequence of triplets and mnemonizes them, which would be another good way to do it if you don’t mind typing your password into an app once (which I usually don’t mind too much).
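
                A minimal sketch of the uniquely-shortenable wordlist idea: keep only words whose first three letters are unique, so the prefix alone identifies the word (the sample words and the three-letter cutoff are just illustrative):

```python
def unique_prefix_list(words, k=3):
    """Keep words whose k-letter prefix hasn't been seen yet."""
    seen, out = set(), []
    for w in words:
        p = w[:k]
        if len(w) >= k and p not in seen:
            seen.add(p)
            out.append(w)
    return out

# "application" and "band" are dropped: their prefixes collide
# with "apple" and "banana" respectively.
unique_prefix_list(["apple", "application", "banana", "band"])
```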

              1. 3

                It’s a good idea, but I think 13 words is still pretty crazy. It’s not only a lot to remember; you also need to type a lot. Also, as OP described, even in the best-case scenario where people use a password manager, they still have to remember at least two to three passwords.

                On a side note: I wasn’t able to open the link for the entropy calculation method.

                1. 2

                  It’s definitely a tradeoff; you can of course use the same trick for shorter passwords.

                  I changed the link to point at dropbox instead of overleaf; hopefully that works better.

                  1. 2

                    How often do you type your password? Once a day? Once an hour? I think that helps define what ‘too long’ is.

                    If you have a 13 word password that you only type once a month when you restart your computer, and a 6 word password to unlock the second lock on your gpg key every 4 hours, it’s different than a 10 character password you have to retype every 10 minutes.

                    1. 1

                      I see where you want to go, but the catch is that the passwords you type the least would then also be the hardest to remember. Not a good combination.

                      1. 1

                        That’s where the tradeoff between memorability and shortness comes in.

                        Passwords I type very frequently are generated with about 5 bits per character, excluding pairs that are hard to type together on a qwerty keyboard; passwords I type less frequently are much longer diceware-style ones, which are easier to memorize with infrequent use. Passwords I don’t type at all are just 100 bits or so of base64.
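
                        Something like this gets you roughly 5 bits per character (the alphabet here is a stand-in: 32 symbols with ambiguous characters dropped, not the actual qwerty-pair filtering described above):

```python
import math
import secrets

# assumption: a 32-symbol alphabet (exactly 5 bits per character), with
# easily-confused characters like l/1 and o/0 removed; the scheme described
# above additionally drops hard-to-type qwerty pairs
ALPHABET = "abcdefghijkmnpqrstuvwxyz23456789"

def gen_password(bits):
    """Generate a password with at least the requested bits of entropy."""
    n = math.ceil(bits / math.log2(len(ALPHABET)))
    return "".join(secrets.choice(ALPHABET) for _ in range(n))

gen_password(50)   # 10 characters for ~50 bits
```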

                  1. 2

                    I’m normally impressed by the Rust team, but from my position of ignorance this seems like a surprisingly bad solution to this problem. It sounds like at least some people were planning the whole time to release 1.52 without fixing the bugs that were exposed by this check (just, without the check in place). Maybe they didn’t know about the miscompilation?

                    The chosen solution, to totally disable a feature that’s crucial for usability, seems pretty extreme - surely there are less extreme workarounds? e.g. you could presumably simply clean the cache when you see this error.

                    I do really appreciate the helpfulness of this blog post in describing the problem, and I’m probably missing some important details that make this the best way to deal with the situation.

                    1. 21

                      Rust cares a lot about correctness, and we’d rather keep such a critical feature off than have it broken. We didn’t know about this bug until last Thursday, and it’s actually present in versions before Rust 1.52.0.

                      Even if rare and sporadic, this is most likely an old bug. This means that the recommendation for all users is to disable incremental compilation, even on older Rust versions if they want to make sure they don’t hit this bug.

                      The new version was released to keep users from downgrading to avoid the ICE, which would just hide a miscompilation, potentially leading to other problems that we would then need to trace back to this bug.

                      Incremental compilation also isn’t quite as crucial in Rust: it only happens on a per-crate level, so it’s not like you need to rebuild everything. It’s annoying, but this gives us time to write a proper fix.

                      I would expect a 1.52.2 soon, once engineers have had a good look at this issue and validated solutions.

                      1. 2

                        surely there are less extreme workarounds?

                        Add RUSTC_FORCE_INCREMENTAL=1 to your .profile and move on? Hopefully you don’t ship anything critical this way. The new default just prevents any kind of miscompilation by giving you the safe route by default, so nothing goes sideways in some possibly critical application due to a known bug that might do real-world damage.

                      1. 47

                        I’m so tired of rehashing this. Pointing out that SemVer is not a 100% infallible guarantee, or that major versions don’t always cause major breakage, adds nothing new.

                        Lots of projects have a changelog file where they document major changes, but nobody argues that reading changelogs would hurt you because it may not contain the tiniest changes, or may mention changes that would discourage people from upgrading, leaving them on insecure versions forever, etc.

                        SemVer is just a machine-readable version of documentation of breaking changes.

                        1. 23

                          Yes, and the article tries to succinctly sum up what value can be derived from that and what fallacies await. I’d be lying if I claimed to have ever seen it summed up through that lens in one place.

                          I’m sorry it’s too derivative for your taste, but when the cryptography fire was raging, I was wishing for this article to exist so I could just paste it instead of writing extensive elaborations in the comments sections.

                          1. 11

                            I thought the same thing initially, but it could also be coming from the perspective of using Rust frequently, which is strongly and statically typed. (I don’t actually know how frequently you use it; just an assumption.)

                            A static/strong type system gives programmers a nice boundary for enforcing SemVer. You mostly just have to look at function signatures and make sure your project still builds. That’s the basic promise of the type system. If it builds, you’re likely using it as intended.

                            As the author said, with something like Python, the boundary is more fuzzy. Imagine you write a function in python intended to work on lists, and somebody passes in a numpy array. There’s a good chance it will work. Until one day you decide to add a little extra functionality that still works on lists, but unintentionally (and silently) breaks the function working with arrays.

                            That’s a super normal Python problem to have. And it would break SemVer. And it probably happens all the time (though I don’t know this).

                            So maybe for weakly/dynamically typed languages, SemVer could do more harm than good if it really is unintentionally broken frequently.
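
                            A stdlib-only sketch of that failure mode, substituting a generator for the numpy array as the type that happened to work (the function names and version numbers are made up):

```python
# v1.0.0 -- documented for "a sequence of numbers", but also happens
# to work for any iterable, including generators
def summarize_v1(xs):
    return sum(xs)

# v1.0.1 -- a "harmless" patch: additionally return the maximum
def summarize_v2(xs):
    return sum(xs), max(xs, default=0)

summarize_v2([1, 2, 3])             # (6, 3): still correct for lists
summarize_v2(x for x in [1, 2, 3])  # (6, 0): silently wrong, because sum()
                                    # already consumed the generator
```

Nothing about the function’s signature changed, so the patch looks SemVer-clean, yet callers relying on the undocumented generator support now get wrong answers with no error.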

                            1. 8

                              That’s all very true!

                              Additionally what I’m trying to convey (not very successfully it seems) is that the reliance on that property is bad – even in Rust! Because any release can break your code even just by introducing a bug – no matter what the version number says. Thus you have to treat all versions as breaking. Given the discussions around pyca/cryptography this is clearly not common knowledge.

                              The fact that this is much more common in dynamic languages, as you’ve outlined, is just the icing on top.

                              I really don’t know what I’ve done wrong to warrant that OP comment + upvotes except probably hitting some sore point/over-satiation with these topics in the cryptography fallout. That’s a bummer but I guess nothing I can do about it. 🧘

                              1. 7

                                Car analogy time: You should treat cars as dangerous all the time. You can’t rely on seatbelts and airbags to save you. Should cars get rid of seatbelts?

                                The fact that SemVer isn’t 100% right all the time is not a reason for switching to YOLO versioning.

                                1. 3

                                  Except that SemVer is not a seatbelt, but – as I try to explain in the post – a sign saying “drive carefully”. It’s a valuable thing to be told, but you still have to take further measures to ensure safety and plan for the case when there’s a sign saying “drive recklessly”. That’s all that post is saying and nothing more.

                                  1. 2

                                    Seatbelts reduce the chance of death. Reading a changelog reduces the chance of a bad patch. Trusting semver does not reduce the chance of an incompatible break.

                                    1. 6

                                      I really don’t get why there’s so much resistance to documenting known-breaking changes.

                                      1. 3

                                        I really don’t get why there’s so much resistance to documenting known-breaking changes.

                                        I mean you could just…like…read the article instead of guessing what’s inside. Since the beginning you’ve been pretending the article’s saying what it absolutely isn’t. Killing one straw man after another, causing people to skip reading because they think it’s another screech of same-old.

                                        I’m trying really hard to not attribute any bad faith to it but it’s getting increasingly harder and harder so I’m giving up.

                                        Don’t bother responding, I’m done with you. Have a good life.

                                        1. -1

                                          I mean you could just…like…read the article instead

                                          So where in that article do you say why people don’t want to document known breaking changes?

                                          Offtopic: That was really hard to read. Too much bold print and

                                          quotes

                                          with some links in between. It just destroyed my reading flow.

                                          I also think the title “will not save you” tells you everything about why people are just not reading it. It already starts with a big “it doesn’t work”, so why should I expect it to be in favor of it?

                                          1. 4

                                            So where in that article do you say why people don’t want to document known breaking changes?

                                            Well, the pyca/cryptography team documented that they were rewriting in Rust far in advance of actually shipping it, and initially shipped it as optional. People who relied on the package, including distro package maintainers, just flat-out ignored it right up until it broke their builds because they weren’t set up to handle the Rust part.

                                            So there’s no need for anyone else to cover that with respect to the cryptography fight. The change was documented and communicated, and the people who later decided to throw a fit over it were just flat-out not paying attention.

                                            And nothing in SemVer would require incrementing major for the Rust rewrite, because it didn’t change public API of the module. Which the article does point out:

                                            Funny enough, a change in the build system that doesn’t affect the public interface wouldn’t warrant a major bump in SemVer – particularly if it breaks platforms that were never supported by the authors – but let’s leave that aside.

                                            Hopefully the above, which contains three paragraphs written by me, and only two short quotes, was not too awful on you to read.

                                            1. 1

                                              Thanks, your summary makes a good point, and yes, the original blog post was hard to read; I did not intend this as a troll.

                                              And nothing in SemVer would require incrementing major for the Rust rewrite

                                              Technically yes; in practice, I know that many Rust crates do not raise the minimum required Rust compiler version until a major version. So, fair enough, SemVer at its core isn’t enough.

                                  2. 3

                                    AFAIU, I think the OP comment may be trying to say that they agree with and in fact embrace the following sentence from your article:

                                    Because that’s all SemVer is: a TL;DR of the changelog.

                                    In particular, as far as I can remember, trying to find and browse a changelog was basically the only sensible thing one could do when trying to upgrade a dependency before SemVer became popular (plus keep fingers crossed and run the tests). The main time waster was trying to even locate and make sense of the changelog, with basically every project putting it somewhere different, if anywhere. (Actually, I seem to remember that finding any kind of changelog at all was already a big plus mark for a project’s impression of quality.) As such, having a hugely popular semi-standard convention for a TL;DR of the changelog is something I believe many people do find super valuable. They know enough to never fully trust it, just as they’d know to never fully trust a changelog. Having enough experience with changelogs and/or SemVer, they do however see substantial value in SemVer as a huge time saver, especially compared to what they had to do before.

                                    Interestingly, there’s a bot called “dependabot” on GitHub. I’ve seen it used by a team, and what it does is track version changes in dependencies and generate a summary changelog of commits since the last version. Which seems to more or less support what I wrote above, IMO.

                                    (Please note that personally I still found your article super interesting, and nicely naming some phenomena that I only vaguely felt before. Including the one I expressed in this post.)

                                    1. 2

                                      I think there is something a bit wrong about the blanket statement that others shouldn’t rely on semver. I suspect that for many projects, trying one’s best to use the API as envisioned by the author, and relying on semver, will in practice provide you with bugfixes and performance improvements for free, while never causing any major problems.

                                      I like the parts of this blog post that are pointing out the problems here, but I think it goes way too far in saying that I “need to” follow your prescribed steps. Some of my projects are done for my own enjoyment and offered for free, and it really rubs me the wrong way when anyone tells me how I “should” do them.

                                      [edited to add: I didn’t upvote the top level comment, but I did feel frustrated by reading your post]

                                      1. 1

                                        I’m not sure how to respond to that. The premise of the article is that people are making demands, claiming it will have a certain effect. My clearly stated goal is to dissect those claims so that people stop making those demands. Your use case is obviously very different, so I have no interest in telling you to do anything. Why am I frustrating you, and how could I have avoided it?

                                        1. 3

                                          My negative reaction was mostly to the section “Taking Responsibility”, which felt to me like it veered a bit into moralizing (especially the sentence “In practice that means that you need to be pro-active, regardless of the version schemes of your dependencies:”). On rereading it more carefully/charitably, I don’t think you intended to say that everyone must do it this way regardless of the tradeoffs, but that is how I read it the first time through.

                                    2. 9

                                      Type systems simply don’t do this. Here’s a list of examples where Haskell’s type system fails and I’m sure that you can produce a similar list for Rust.

                                      By using words like “likely” and “mostly”, you are sketching a sort of pragmatic argument, where type systems work well enough to substitute for informal measures, like semantic versioning, that we might rely on the type system entirely. However, type systems are formal objects and cannot admit such fuzzy properties as “it mostly works” without clarification. Further, we usually expect type-checking algorithms to not be heuristics; we expect them to always work, and for any caveats to be enumerated as explicit preconditions.

                                      1. 2

                                        Also, there were crate releases where a breaking change wasn’t caught because no tests verified that FooBar stayed Sync/Send.

                                        1. 1

                                          All I meant is that languages with strong type systems make it easier to correctly enforce semver than languages without them. It’s all a matter of degree. I’m not saying that languages like Rust and Haskell can guarantee semver correctness.

                                          But the type system does make it easier to stay compliant because the public API of a library falls under the consideration of semver, and a large part of a public API is the types it can accept and the type it returns.

                                          I’m definitely not claiming that type systems prevent all bugs and that we can “rely entirely on the type system”. I’m also not claiming that type systems can even guarantee that we’re using a public API as intended.

                                          But they can at least make sure we’re passing the right types, which is a major source of bugs in dynamically typed languages. And those bugs are a prominent example of why OP argues that SemVer doesn’t work—accidental changes in the public API due to accepting subtly different types.

                                    1. 2

                                      Absolutely fantastic. I’ve been doing research on practical homomorphic encryption libraries and it’s nice to add another to my list.

                                      I had some questions but I’m not an expert so hopefully they aren’t too basic.

                                      We needed an architecture-agnostic cryptographic hash procedure with a monoid homomorphism respecting string concatenation, written in a low-level language.

                                      1. May I ask why the above is a requirement?

                                      2. What benefit does this library have over Facebook’s Folly LtHash library? Not being tied to the rest of the folly library seems like a big enough plus to me but I would think there’d be more.

                                      3. Is there any possibility of doing arbitrary or even a restricted set of arithmetic over the encrypted hashes based on Cayley hash functions?

                                      4. Do you know of any homomorphic libraries that keep the circuits/calculations private? I was looking into garbled circuits which led me to multiparty computation but it seems that in order for that to work it requires all parties to be online at the same time whereas your library, SEAL and LtHash do not have that requirement.

                                      Cool project even if you don’t answer anything.
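
                                      For anyone puzzled by the quoted requirement, here’s a toy (completely insecure) sketch of what “a monoid homomorphism respecting string concatenation” looks like: a polynomial rolling hash, carrying the base power alongside the digest so hashes of concatenated strings can be combined:

```python
P = (1 << 61) - 1   # modulus (a Mersenne prime); toy parameters, not secure
B = 257             # polynomial base

def toy_hash(s):
    """Return (digest, B**len(s) mod P) -- an element of the hash monoid."""
    h, p = 0, 1
    for ch in s:
        h = (h + ord(ch) * p) % P
        p = (p * B) % P
    return h, p

def combine(a, b):
    """Monoid operation: combine(toy_hash(x), toy_hash(y)) == toy_hash(x + y)."""
    h1, p1 = a
    h2, p2 = b
    return (h1 + h2 * p1) % P, (p1 * p2) % P

assert combine(toy_hash("foo"), toy_hash("bar")) == toy_hash("foobar")
```

A real construction like the one in the post (or LtHash) additionally needs collision resistance, which this toy obviously lacks; it only shows the algebraic shape.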

                                      1. 2

                                        re 1: The project I’m working on is designed to run in the browser via webassembly, so we can’t rely on fancy instruction sets.

                                        re 2: IIUC LtHash is homomorphic with respect to set union, rather than string concatenation. You might be able to find an additional homomorphism for treating string concatenation as set union, but that seems at least a little tricky.

                                        re 3: I don’t really know, though my guess is that it’s hard.

                                        re 4: I also don’t know about this, sorry!

                                      1. 6

                                        As a data point, I’ve posted ~two things that I authored, both of which I strongly expect no one else would have come across (I don’t have a strong “brand” and don’t use any one blog regularly). One received 15 upvotes and some encouraging discussion, the other received 57 upvotes and also mostly-encouraging discussion. I think neither was flagged even once. It seems to me that banning or discouraging authored-by posts would basically mean that no one would have read either of these, or similar submissions, and I (biased-ly) think that would at least be a cost of your proposal.

                                        A thing that does consistently bother me, in a similar vein (for example with this proposal) is that comments seem to count as upvotes, which makes unpopular-but-inflammatory posts stay on the home page for a long time.

                                        1. 6

                                          When you have eliminated the JavaScript, whatever remains must be an empty page.

                                          For real now, it’s 2020, enable JavaScript.

                                          Reading this made me a little bit angry, and I decided not to read your blog post. I think if it had said “Sorry, my site doesn’t work without Javascript enabled!” I would have whitelisted you instead.

                                          1. 6

                                            Ditto – I understand the frustration of authors who want to use JS, but this error message is just silly. If anything, the case for keeping JS disabled by default has grown stronger over time.

                                            1. 4

                                              Making you angry wasn’t my goal. I’m sorry about that. I changed the message to something friendlier :)

                                              1. 5

                                                Why create a web page that can’t be read without using JS? For something complex I could see the need, but this is a simple blog post that could easily be served as a few KB of static HTML.

                                                And it seems this mechanism is making your site slow and unreliable too. I got the error message, but then a second later it went away, then a second later the text appeared. Not a good UX…

                                                1. 2

                                                  Why create a web page that can’t be read without using JS?

                                                  I have no reason to think KCreate intended to do this, but if someone wanted to cheat lobste.rs, this would be one way to do it.

                                                  Basically, making an article require JavaScript is a guaranteed way to get a few comments. Lobsters also counts comments as upvotes. Since there’s no “Requires JavaScript” downvote reason, requiring JavaScript will actually make your article rank higher, almost guaranteed.

                                                  1. 4

                                                    That’s an exceedingly uncharitable interpretation. It’s an outstanding problem I’ve pointed out to him and he intends to remove the JS when he gets time. It’s just old code from when he first set up his website.

                                                    1. 2

                                                      Since there’s no “Requires JavaScript” downvote reason, requiring JavaScript will actually make your article rank higher, almost guaranteed

                                                      Users can flag the submission as “Broken link” instead of commenting.

                                                      I’m fine with requiring any submission here to be readable without JS - but that’s currently not a hard requirement for a submission.

                                                      Edit I just checked every site currently linked from the front page in w3m, and this was the only one that was not readable[1]. This gives one data point towards an informal or formal requirement that every submission should degrade gracefully to be on-topic for lobste.rs.

                                                      [1] the page that is visible does link to the github source now.

                                                  2. 4

                                                    Thanks - the new message is much better :)

                                                1. 1

                                                  Thanks for this, @bwr! Quick note: my RSS client says it can’t find your feed. I tried both the root URL and the /feed/ path.

                                                  1. 1

                                                    Oh, weird! It seems to work fine in my reader. Note that the URL is https://, and doesn’t have www. prepended. Can I ask what client you’re using?

                                                    1. 2

                                                      I tried it again, and it worked. 🤷‍♀️ Subscribed!

                                                      1. 1

                                                        Of course! I use Reeder 4 on MacOS and iOS. I’ll try again, making sure the URL doesn’t have the www.

                                                    1. 1

                                                      (Cross-commented from HN)

                                                      If anyone is interested, I’ve been trying to think about the problem of moving ranges in a list-structured CRDT for a couple of weeks now for a side project, and I’ve got a candidate that seems to satisfy the most obvious constraints. I’d be really interested in any feedback / holes you can poke in my solution!

                                                      Rough notes are here: https://docs.google.com/document/d/1p1K3sxgKGYMEBH72r-lnP9GnBm5N15h77C81W15kPiE/edit?usp=sharing

                                                      1. 6

                                                        I want to agree with a more limited scope of this, that software has to deal with complexity from the business domain eventually, and either you do or you narrow your focus until you’ve ignored swathes of the business domain.

                                                        Unfortunately, the full claim (at least as articulated here) also seems to hand-wave away shitty developers, bad engineering, and inexperience as just another form of complexity. While I can kinda see that argument–e.g., that you have to account for your resources in writing software, and that failure to do so will leak into the finished project as emergent complexity in the implementation instead of in developer training–it seems to me both too easily misunderstood and to fly in the face of the experience many of us have with software that is just obviously too complicated (to wit, enterprise FizzBuzz) for what it is trying to do.

                                                        1. 1

                                                          I think the most contentious part of the post is that I just simply assert that people are an inherent part of software. You often avoid the incidental complexity in code by indirectly shifting it to the people working the software.

                                                          Their mental models and their understanding of everything are not fungible, but are still real, and are often what lets us shift the complexity outside of the software.

                                                          The teachings of disciplines like resilience engineering and models like naturalistic decision making are that this tacit knowledge and expertise can be surfaced, trained, and given the right environment to grow and gain effectiveness. It expresses itself in the active adaptation of organizations.

                                                          But as long as you look at the software as a system of its own that is independent from the people who use, write, and maintain it, it looks like the complexity just vanishes if it’s not in the code, and can be eliminated without side effects.

                                                          1. 9

                                                            I think it’d be easier for people to have got that if you’d built the case for simple software and then explored the people side of “okay, sure, the software is simple, but how’d we get there”.

                                                            The problem with your current framing is that it seems to boil down to this conversation (:

                                                            • Me: “Software is overly complicated, we can make it simpler.”
                                                            • Thee: “No we can’t, only complexity can manage complexity!”
                                                            • Me: “Preeeeettttty sure we can. A lot of code is clearly overcomplicated for what it does. <example of, say, too much ceremony around adding a seconds offset to a time and then printing it in UTC>”
                                                            • Thee: “The code is simpler, yes, but at what cost? Can we not say that the training required to prevent that engineer from overcomplicating a time routine is itself a form of complication? Can any among us not indeed be considered but spirited complications upon the timeface of software development?”
                                                            • Me: “If we claim that all complexity in software can only be removed at the increase of complexity elsewhere, I find this conclusion uniquely unsatisfying, somewhat like the laws of thermodynamics.”
                                                            • Thee: “Indeed. Life is often unsatisfying, and one might even say, complicated.”
                                                            • Me: “…”
                                                            • Me: “I’m not going to reward that joke. Anyways, it’s easy to construct (or, sadly, to recount) cases where software was simple and then, for literally no reason other than it seemed like a giggle, somebody made it less simple without appreciable increase in value or functionality. This strikes me not as an issue of complexity but as one of poor decisions or ignorance.”
                                                            • Thee: “Dealing with poor decisions and ignorance is core to the human part of software, and thus is a complication core to software itself.”
                                                            • Me: “Right, and I kinda agree, but if we recast all malice, poor decisions, or ignorance as complexity then we might as well define complexity as anything that makes life harder than we want it. And while I am attracted to that as a way of correlating with Evil, I also think that such a redefinition of complexity conflates it with other forms of things that make life harder than we want it. Ergo, I disagree with your overly-broad usage of complexity.”
                                                            • Me: “…Erlang in anger is still a great read though, as is Learn You Some Erlang. <3
                                                            1. 2

                                                              Point taken on the article being possible to word more clearly.

                                                              The cybernetics argument for the law is that the inherent complexity you are able to represent and model in your mind is what lets you extract and cancel some of the complexity.

                                                              If you want to make a long story short, part of my blog post would probably be a neighboring idea to “if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it” – if you had to be very clever to simplify the code and reduce the problem to a simpler expression, you’ll have to have that cleverness around to be able to update it the next time around. I.e. we have to embrace that aspect of complexity as well if we want the overall enterprise to keep moving and evolving sustainably.

                                                              Let’s pick the time example you introduced, maybe the example is more workable.

                                                              Imagine the risk of simplifying too much, for example, where instead of adding a seconds offset to a time in UTC, you decide to only handle UTC, and since you only have UTC, you don’t show the timezones anymore. Also since you do time calculation, you can make it a lot simpler by using the unix epoch (ignoring leap seconds) everywhere.

                                                              This essentially forces the complexity onto the user, to the benefit of the developer. Clearly, aiming for code simplicity isn’t to anyone’s benefit once we increase our scope to include the user.

                                                              If you shift it around and always work with user-level timestamps (so it always works aligned with what the user wants), then you have to fight a lot harder when you want to calculate time differences of short duration within your codebase’s internals. This will increase the complexity of code in “unnecessary” manners, but encourages alignment with user needs throughout.

                                                              You might decide that the proper amount of complexity in code results from using both timestamps in a human referent format (year/month/day, hours, minutes, seconds, microseconds) in some cases, and then monotonic timestamps in other cases. This is a trade-off, and now you need to manage your developers’ knowledge to make sure it is applied properly within the rules. Because this higher variety sometimes increases code quality, but can lead to subtle bugs.
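To make that trade-off concrete, here is a minimal sketch (in Python, purely illustrative and not from the original comment) of the two kinds of timestamps being discussed: human-referent wall-clock time for anything user-facing, and monotonic time for short internal durations.

```python
from datetime import datetime, timezone
import time

# Human-referent timestamp: meaningful to users, aware of calendars
# and time zones, but it can jump (NTP adjustments, DST, manual changes),
# so deltas between two readings are not guaranteed to be positive.
user_visible = datetime.now(timezone.utc)
print(user_visible.isoformat())

# Monotonic timestamp: meaningless to users (arbitrary epoch), but
# guaranteed never to go backwards; the right tool for measuring
# short internal durations.
start = time.monotonic()
time.sleep(0.01)
elapsed = time.monotonic() - start
assert elapsed > 0  # always true for a monotonic clock
```

Keeping both around is exactly the kind of managed variety the comment describes: each clock is simple on its own, but developers have to know which one applies where.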

                                                              I think embracing the complexity of the real world is the way to go, but embracing it also means managing it, investing in training, and considering the people who write the software to be first-order components. In some cases, it might be worth it to have messy ugly code that we put in one pile because it lets everyone else deal with the rest more clearly. We might decide “all the poop goes in this one hole in the ground, that way we know where it isn’t”, and sometimes we might decide not to.

                                                              Also I’m glad you enjoyed the books :) (incidentally, how much should be “discoverable” in the tool, and how much should be relegated to the docs is a question in managing complexity, IMO)

                                                              1. 2

                                                                if you had to be very clever to simplify the code and reduce the problem to a simpler expression, you’ll have to have that cleverness around to be able to update it the next time around.

                                                                This ignores the fact that many times simplifying code has nothing to do with cleverness and everything to do with just wanting less code. Folding a bunch of array assignments into a simple for loop isn’t clever, for example–and if you argue that it is clever, then I fear that we’ve set such a low bar that we can’t continue to discuss this productively.
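As a concrete (and entirely hypothetical) illustration of the non-clever simplification being described:

```python
# "Before": a run of repetitive array assignments.
squares = [0] * 5
squares[0] = 0 * 0
squares[1] = 1 * 1
squares[2] = 2 * 2
squares[3] = 3 * 3
squares[4] = 4 * 4

# "After": the same result from a simple for loop.
folded = [0] * 5
for i in range(5):
    folded[i] = i * i

assert folded == squares  # identical behavior, less code
```

Nothing about the loop version demands extra cleverness to read or maintain later; it is simply less code expressing the same thing.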

                                                                Clearly, aiming for code simplicity isn’t to anyone’s benefit once we increase our scope to include the user.

                                                                There’s nothing clear about that. It benefits the implementing engineer, it benefits the debugging engineer, and it benefits the computer that runs it. Your example about making the code as simple as possible until it’s just delivering a Unix timestamp (instead of the UTC string) is a red herring because the original deliverable spec was ignored.

                                                                1. 3

                                                                  This ignores the fact that many times simplifying code has nothing to do with cleverness and everything to do with just wanting less code. Folding a bunch of array assignments into a simple for loop isn’t clever, for example–and if you argue that it is clever, then I fear that we’ve set such a low bar that we can’t continue to discuss this productively.

                                                                  No, I’d probably argue that it isn’t significant. Probably. On its face, it’s such a common refactoring or re-structuring that anyone with the most basic programming knowledge has already internalized enough information to do it without thinking. It fades back into the background, since almost anyone with basic knowledge is able to conceptualize and correct it.

                                                                  I figure it might have been a trickier thing to do with assembler or punch cards (would it have even been relevant?). It might play differently with some macros or post-processing. Or changing to a loop folds the lines and reduces code coverage statistics to the point where a build fails (I’ve seen that happen). Or a loop implies new rules when it comes to preemptive scheduling based on your runtime, or plays with how much optimization takes place, etc.

                                                              2. 2

                                                                I love this reply. I’d just add that there is definitely no conservation law for “things that make life harder than we want it”.

                                                                It might be that we’re often living in optimization troughs that look locally flat, but when that’s true it’s usually because those are equilibria that we’ve spent effort to choose, relative to nearby possibilities that are worse.

                                                                It’s dangerous to say “everywhere we look, there are no easy wins, so we should just give up on making improvements”. To use an annoyingly topical analogy, it’s like saying “see? The coronavirus isn’t spreading anywhere near its initial exponential rate! We didn’t need to socially isolate after all!”

                                                          1. 6

                                                            I loved “Our Mathematical Universe” - I even thought I understood the various topics before reading it. But Tegmark does an amazing job of pointing to all the crazy implications of our current physical and cosmological theories, and is a very interesting writer - it reminded me a bit of reading various books by Richard Feynman, with the accessible explanations mixed with personal asides. I found it much more mind-bending than anything I’ve read by Feynman, though.

                                                            Edit: sorry if the goal was only to post books that came out during 2019; I only read one or two of those and wouldn’t strongly recommend either of them.

                                                            1. 9

                                                              I used to think that this convention was “right”; over the last few years I’ve changed my mind somewhat. I now think that most code ought to use this convention, but certain types of project should probably use some of the naming tropes listed.

                                                              There are reasons that mathematicians use single-character variables. One is that it lets you see more concepts at once. In code, this is partly evident in the fact that long names force you to wrap more lines. Steve Yegge has a classic post on the related phenomenon of comments-to-code ratios.

                                                              1. 6

                                                                Your website uses JavaScript. Assuming you are wholly responsible for the content, this seems a little weird, doesn’t it?

                                                                That said, it is readable without JS; huge win for that. So many websites aren’t anymore. :)

                                                                1. 17

                                                                  A few points -

                                                                  1. It’s not my website, it’s a community site whose content I don’t control (although I do like the site in general)
                                                                  2. As I sort of imply in the post, I’m not actually against JavaScript, I’m only against running tons of third party JavaScript purely for someone else’s benefit and at my own risk.
                                                                  1. 6

                                                                    I’m only against running tons of third party JavaScript

                                                                    I actually stopped using NoScript for this reason; I switched to uBlock Origin, which makes it really easy to block all 3rd-party JS without blocking same-site scripts which are more likely to be legitimate: https://github.com/gorhill/uBlock/wiki/Blocking-mode:-medium-mode

                                                                    1. 5

                                                                      Mostly agreed on point #2. JS is disabled in my browser by default, but I turn it on here and there as needed. I’d much prefer entire websites were not created 100% in JS, that’s super annoying. A little JS here and there to make the website more friendly I’m OK with, but I’d rather people just learn proper HTML5 where most of those things are not needed anymore.

                                                                      As for #1: I actually like the layout as well! I did say “wholly responsible”, you are not, so fair enough.

                                                                      1. 8

                                                                        I tend to be of the mind that if a site won’t load without JavaScript, it’s not worth my time.

                                                                        It’s not the best attitude, and I definitely miss out on some good stuff, but I have a low bandwidth, temperamental connection, and most sites I don’t care that much about so I have to draw the line somewhere.

                                                                        1. 4

                                                                          I think the same thing when a site obscures itself with a popup claiming “We respect your privacy”, but doesn’t provide a “No thanks” button. I just move on to the next news story or search result.

                                                                    2. 5

                                                                      That said, it is readable without JS huge win for that. So many websites aren’t anymore. :)

                                                                      to not let them entirely off the hook, the previous version of their site was much less hostile:

                                                                      http://web.archive.org/web/20170203061059/http://lesswrong.com/

                                                                    1. 4

                                                                      I doubt that this response is exactly what you were looking for, but if you’re at all interested in “doing the most good possible with your career”, a la the effective altruism movement, then AI safety research in academia or at a nonprofit like MIRI, OpenAI, or Ought.org, or even at a for-profit org like DeepMind seems like a pretty good bet in expectation, if you can swing it. You can check out an overview of this idea and possible ways of getting involved here.

                                                                      And if you’re looking for something slightly less sci-fi (/more near-term) (but still something like “doing a lot of good with your career”), I think working on self-driving cars is one of the most high-impact things you can do right now. It’s still a relatively small field, and if you can counterfactually bring the advent of ubiquitous self-driving cars nearer by one day (admittedly this is quite difficult), that corresponds to roughly one thousand lives saved.
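A back-of-the-envelope check of that figure (my arithmetic, not the commenter's; the annual-deaths number is the commonly cited WHO estimate):

```python
# Assumption (not from the thread): roughly 1.35 million road traffic
# deaths per year worldwide, per widely cited WHO estimates.
deaths_per_year = 1_350_000
deaths_per_day = deaths_per_year / 365  # ~3700 deaths per day globally

# "Roughly one thousand lives saved" per day brought forward is then
# consistent with assuming only a fraction of those deaths would be
# prevented (partial adoption, imperfect crash avoidance).
prevented_fraction = 1000 / deaths_per_day  # ~0.27
print(round(deaths_per_day), round(prevented_fraction, 2))  # → 3699 0.27
```

So the "one thousand" figure implicitly discounts the raw daily death toll by roughly a factor of four, which reads as a deliberately conservative assumption.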

                                                                      Disclaimer: I currently work at MIRI, and used to work at the autonomous car company Cruise (in both cases, because I take this line of argument very seriously). I don’t speak for either employer. I also don’t think that this line of reasoning is necessarily the right way for most people to go about choosing a career, and I mainly mention it in case you find it interesting or useful.

                                                                      1. 2

                                                                        Out of curiosity, why do you not think getting a career to “do the most good” is a good line of reasoning? (Or am I misreading you) I’m currently thinking of moving into a more socially impactful career myself.

                                                                        1. 3

                                                                          It’s not that I don’t think it’s a good line of reasoning - I do; I just also don’t want to imply that I think everyone is obligated to think that way.

                                                                          1. 1

                                                                            I just also don’t want to imply that I think everyone is obligated to think that way.

                                                                            A radical thought in itself. 🙂

                                                                          2. 2

                                                                            Not GP, but it seems pretty well settled that in most circumstances (probably not autonomous cars though, which I imagine pay pretty well) the best way to do good with your career is to make a boatload of cash however you can, and use the cash to fund the work you feel should be done.

                                                                          3. 2

                                                                            and if you can counterfactually bring the advent of ubiquitous self-driving cars nearer by one day (admittedly this is quite difficult), that corresponds to roughly one thousand lives saved.

                                                                            This is a completely ridiculous statement. There’s no evidence whatsoever that self-driving cars would be any safer than real cars which actually exist. And if you were good enough to actually make a difference to such a field, you’re smart enough to make an impact somewhere that actually has a bat’s chance in hell of ever actually going anywhere.

                                                                            The self-driving car hype is absurd.

                                                                            1. 1

                                                                              There’s no evidence whatsoever that self-driving cars would be any safer than real cars which actually exist.

                                                                              This is pretty obviously false, conditioning on self driving cars working at all. See, e.g., https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812115

                                                                              And as someone with experience of the current state of the art, I’d bet at 4:1 odds (with some better operationalization, and up to a limit of around $200; PM me) that we’ll start seeing widespread adoption of self-driving cars within this decade, conditional on no major economic disasters.

                                                                          1. 29

                                                                            Does anyone know if any actual kernel devs have been involved with this threat? Something that isn’t pointed out in this (imo pretty biased) article is that the initial email was sent more-or-less anonymously, from an account hosted on cock.li, which seems to be at least spiritually 4chan-related. This makes me suspicious that the whole thing was invented as a “look how much backlash there is against these grievous CoCs!” stunt. This suspicion is strengthened by the misleading reference to the Drupal case, which seems to have been mostly in spite of a lack of CoC violations.

                                                                            [edit: there is some discussion of this type of issue on LKML]

                                                                            I mostly found it interesting because I had always assumed that you couldn’t rescind viral licenses like the GPL.

                                                                            1. 3

                                                              There is a reason the Free Software Foundation requires a copyright assignment for contributions to GNU projects. An author is allowed to revoke licenses to any code they write under many licenses.

                                                                              Google also requires a copyright assignment for any code you contribute to projects they oversee for similar reasons.

                                                              Linux does not have such a requirement, which gives the contributors a lot of power.

                                                                              1. 3

                                                                                How does that change anything? The point is the law permits the transfer to be terminated.

                                                                                1. 5

                                                                                  There is a reason the GNU foundation requires a copyright assignment for contributions. An author is allowed to revoke licenses to any code they write under many licenses.

                                                                                  But not under the GPL. You absolutely cannot rescind your license under the GPL.

                                                                                  Google also requires a copyright assignment for any code you contribute to projects they oversee for similar reasons.

                                                                                  No, Google requires a copyright assignment for the same reason as many other companies: so that they have complete control and can use the software commercially and sell it under a proprietary license.

                                                                                  1. 4

                                                                                    On what grounds do you believe that the GPL doesn’t allow you to rescind your license?

                                                                                    1. 2

                                                                                      On the grounds that there’s no mention in the GPL that it allows you to rescind your license.

                                                                                      1. 1

                                                                                        I’m not sure that the GPL has to specifically allow it. https://www.law.cornell.edu/uscode/text/17/203

                                                                                        In the case of any work other than a work made for hire, the exclusive or nonexclusive grant of a transfer or license of copyright or of any right under a copyright, executed by the author on or after January 1, 1978, otherwise than by will, is subject to termination under the following conditions

                                                                                        It does indicate that there are restrictions: the revoking can only happen starting 35 years after the grant, and advance warning (on the order of 2 years) is necessary. But nowhere in the relevant US code do I see anything saying that you must specify that a license is revocable in order to allow for revoking it.

                                                                                        1. 1

                                                                                          The GPL is a contract with consideration. As I understand it, you cannot simply terminate a contract without reason; but if you do, and you’re then sued, I don’t think the court will necessarily force you to honour the contract. Courts are generally reluctant to force people to do things and prefer to impose damages.

                                                                                        2. 1

                                                                                          GPL is still copyright, it’s just an attempt at subverting it. The developers are still the actual owners, and whenever it’s redistributed, it’s always redistributed under their own terms, namely those specified in the COPYING file. They aren’t slaves to their own conditions, they have the same right to remove it as they have to enforce it – it’s not public domain after all.

                                                                                          1. 1

                                                                                            You can’t just unilaterally revoke a contract. If you download and then redistribute my software under the GPL I can’t just tell you to stop. That’s literally the whole point of the license.

                                                                                            1. 1

                                                                                              No, but I can stop distributing newer versions under that license.

                                                                                              1. 2

                                                                                                Absolutely. But then the ‘kill switch’ these developers have is just the freedom to stop working on the kernel. That’s a freedom they’ve always had, nothing new. The suggestion is that they can ‘rescind’ or ‘revoke’ their license and stop others from continuing development or continuing distributing existing versions. I see no reason why that would be true.

                                                                                      2. 3

                                                                                        Did you not read the article? GPL2 definitely DOES allow one to rescind the license.

                                                                                        1. 15

                                                                                          that article was awful. I mean, if you’re quoting anonymous comments from 4chan, what you’re doing isn’t journalism

                                                                                      3. 1

                                                                                        I always thought projects did this to be able to change the license.

                                                                                        1. 1

                                                                                          Changing the license is a similar operation to revoking a license in my mind, so it makes sense that both reasons would be behind the practice.

                                                                                      4. 3

                                                                                        I mostly found it interesting because I had always assumed that you couldn’t rescind viral licenses like the GPL.

                                                                                        You were and are right: you cannot do so. The idea that you can, just because it doesn’t explicitly say you can’t, is utterly absurd. The entire point of a copyright license is that you are licensing the work to others. There’s nothing in that license that says you can rescind it, thus you can’t rescind it. It’s pretty simple. The entire world of free and open source software is built upon this premise. If people could just go around and rescind the licenses to their contributions, it would destroy free and open source software.

                                                                                        1. 9

                                                                                          So the analysis link seems excessively long, but here’s the relevant part of the law.

                                                                                          https://www.law.cornell.edu/uscode/text/17/203

                                                                                          You don’t have to build a rescinding clause into the license. It’s provided by law. Unless there’s some other reason this isn’t applicable.

                                                                                          1. 6

                                                                                            Seems like section (a)(3) is saying that termination can only occur during a 5-year window that begins 35 years after the initial grant? Weird.

                                                                                            1. 2

                                                                                              Ah, this is a good point that I had missed: This particular way of rescinding a license actually isn’t possible for the next few years, since Linux isn’t that old. So the argument described by the possibly-troll lawyer is closer to the way this might actually work, though it seems way more tenuous to me now.

                                                                                              1. 2

                                                                                                But that part is only relevant for people who have already been granted the license, correct? I am very definitely not a lawyer and could easily be wrong here. However, it seems like there could be an argument that current users of the kernel would be allowed to continue their use under the license, but that no future versions of the kernel could be published with the code in question. That could have the effect of preventing bugfixes to current kernel versions, as well as stalling kernel development for some time while the core devs worked to strip out and replace all of the code that is no longer licensed to be used.

                                                                                                How the relevant law applies with a viral license is a question that seems unsettled to me and likely would need to go through the courts to settle the question.

                                                                                      1. 3

                                                                                        Serious question from a fully paid up member of the tinfoil hat brigade: Why protonmail?

                                                                                        Don’t forget that if you’re sending messages you also have to consider the recipient. If your hardware is hosed, theirs may well be too. On that basis perhaps mail should be avoided depending on your threat model.

                                                                                        One option would be to use something like a BeagleBone Black, as it’s open source and, I believe, verifiable.

                                                                                        Another option would be to use a disconnected host for creating, encrypting and viewing messages then a separate host for relaying. This was the basis for a project I did (and cancelled) a few years back.

                                                                                        1. 1

                                                                                          Yeah, I guess I’m imagining that I’d be able to give my correspondents their own copy of the setup, and instructions on how to use it. I’m definitely not expecting that emails I send to random people will magically be safe from now until the end of time.

                                                                                          1. 2

                                                                                            If you’re talking about dedicated hardware at both ends, why not signal?

                                                                                            1. 1

                                                                                              Signal’s proprietary central server and TOFU-oriented protocol add a lot of attack surface that doesn’t exist in other approaches.

                                                                                          2. 1

                                                                                            The answer to the question “why protonmail” is mainly that I’m not sure what else to do. I have long given up hope that I’ll ever convince anyone to manually use PGP. Protonmail is a platform that I might be able to convince people to use: it has a nice UI and can conform to people’s existing habits and tools.

                                                                                            Edit: Reading this next to my other response does seem to make it clear that I’m confused about how other people ought to relate to the hypothetical system involved here. Obviously they can’t be allowed to just use their phones to read messages, so I’m not sure in what sense they should be allowed to stick with their existing habits.

                                                                                            1. 1

                                                                                              if it’s just secure email wouldn’t spiped suffice?

                                                                                          1. 2

                                                                                            Maybe there should instead be a set of “content notification”-style tags, in a different color (to indicate that they’re flags rather than community topics in themselves): One for sex, one for violence, one for classified material, maybe one for politics, and so on. This would let people decide for themselves what is NSFW in their particular context.

                                                                                            1. 3

                                                                                              I think adding several tags, or a whole tag infrastructure, is going too far. I’d much rather have a simple NSFW tag; if it catches too much, I’ll figure that out later when I get home. Most of the time there won’t be any NSFW articles on Lobsters, since we don’t typically post NSFW things. If you wouldn’t want to see it in your workplace, mark it NSFW; if you don’t care, then don’t mark it. This is not a trigger-warning solution, this is a “there are possible job consequences that would mean I can’t read this site at work” solution. I’d much, much rather have false positives than fail to filter something because someone put it under (content warning: very specific nsfw thing that I didn’t know to filter). Adding a content-warning infrastructure is a much bigger request than just a single tag.