1. 2

    I don’t think articles with paywalls should be posted. I cannot read this article without purchasing a subscription.

    1. 4

      There are 3 workarounds to the paywall in the comments, one of which was posted by one of the authors of the piece simultaneously with submitting it.

      1. 2

        Feedback taken, all the same, and thank you for raising it.

        1. 1

          Hi, sorry – didn’t see that it had already been posted across the comments. I appreciate you posting the article. It’s good content.

          1. 2

            Not a problem, it was still good to point out. Glad you liked the article!

    1. 1

      If I (Canadian, can’t view it) go to the list of videos published by that account, it claims that “This channel has no videos”. I’m curious what US viewers see.

      https://www.youtube.com/channel/UCuVPpxrm2VAgpH3Ktln4HXg/videos

        1. 1

          Thanks!

      1. 3

        Would someone more familiar with rust sort of explain what’s going on here?

        My understanding is they’re rewriting the borrow checker to somehow be smarter about inferring ownership. He kept throwing around “NLL” but I couldn’t readily find what that stands for.

        1. 9

          NLL is Non-Lexical Lifetimes. Lifetimes in Rust so far are determined essentially by the same rules as variable scope; a program is not allowed to have references to a variable after that variable goes out of scope. This is easy to implement and easy to reason about, but not terribly flexible.

          NLL does more powerful analysis based off the actual data flow in the program, which basically relaxes the rules somewhat. I believe the canonical introduction to the concept is here: http://smallcultfollowing.com/babysteps/blog/2016/04/27/non-lexical-lifetimes-introduction/

          1. 7

            Everything icefox says is true, here is a (runnable) simple example that hints at why we want it so badly:

            fn opaque() -> bool {
                // Stand-in for some runtime condition the compiler can't predict.
                std::env::args().count() > 1
            }

            fn main() {
                let mut v = vec![0]; // Create a Vector (growable array) containing a single element `0`.
                let ref_v = &v[0]; // Take a reference to that element

                if opaque() {
                    println!("{}", ref_v); // Do something with that reference
                } else {
                    v.push(2); // Mutate the vector
                }
            }
            

            Without NLL, i.e. with lexical lifetimes, the compiler doesn’t know that we won’t use ref_v inside the else block. As such it has to reject the code (otherwise ref_v might become a dangling pointer if the vector reallocates). With NLL we know that we never use ref_v again if we enter the else block, so we can accept this code.

            This pattern comes up all the time in real life, it’s basically always possible to work around it, but often it makes the code less elegant.
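            If it helps, here is a rough sketch of the usual workaround under purely lexical lifetimes (reusing the opaque() stand-in from above): restructure the code so the borrow only ever exists in the branch that uses it.

            fn opaque() -> bool {
                std::env::args().count() > 1
            }

            fn main() {
                let mut v = vec![0];
                if opaque() {
                    let ref_v = &v[0]; // Borrow lives only inside this block...
                    println!("{}", ref_v);
                } else {
                    v.push(2); // ...so no borrow is outstanding when we mutate here.
                }
            }

            That compiles even without NLL, but you end up shaping the code around the checker rather than around the problem.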

          1. 2

            supported by the Mozilla WebRender rendering engine

            So… electron.rs? ☹️

            But, no javascript? 😀

            I’m so conflicted.

            1. 12

              So… electron.rs? ☹️

              Doesn’t seem so: https://hacks.mozilla.org/2017/10/the-whole-web-at-maximum-fps-how-webrender-gets-rid-of-jank/ & https://github.com/servo/webrender & https://github.com/servo/webrender/wiki

              As far as I understand, WebRender is nowhere close to being an Electron alternative. It seems to be an efficient and modern rendering engine for GUIs and provides nothing related to JavaScript/Node/Web APIs.

              So it looks like you can be free of conflict and enjoy this interesting API :). I’ll definitely keep an eye on it for my next pet project; it’s refreshing to have a UI API that looks both usable and simple.

              1. 4

                If you’re a fan of alternative, Rust-native GUIs, you might want to have a look at xi-win-ui (in the process of being renamed to “druid”). It’s currently Windows-only, because it uses Direct2D to draw, but I have plans to take it cross-platform, and use it for both xi and my music synthesizer project.

                1. 1

                  Please, give us some screenshots! ;)

                  1. 1

                    Soon. I haven’t been putting any attention into visual polish so far because I’ve been focusing on the bones of the framework (just spent the day making dynamic mutation of the widget graph work). But I know that screenshots are that important first impression.

                    1. 1

                      Please do submit a top-level post on lobste.rs once you add the screenshots :)

                      1. 1

                        Will do. Might not be super soon, there’s more polishing I want to do.

                        1. 1

                          Thanks! :) And sure, take your time :)

              2. 6

                If comparing with the Chromium stack, WebRender is similar to Skia, and this is a GUI toolkit on top of it, instead of on top of a whole browser. BTW, there’s an example of an app whose whole (non-native) UI sits on top of Skia: Aseprite.

                (AFAIK, Skia is something a la Windows GDI, immediate mode, while WebRender is a scene-graph-style lib, more “retained mode”.)

                And it seems that, despite having no components from a real browser, Azul has a DOM and CSS. So Azul is something in the spirit of NeWS and Display PostScript, but more web-ish instead of printer-ish?
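                To illustrate the immediate-mode vs. retained-mode distinction from the parenthetical above, here is a rough, purely hypothetical Rust sketch (none of these types are Skia’s or WebRender’s actual API):

                // Hypothetical immediate-mode style: draw calls are re-issued every frame.
                struct Canvas;

                impl Canvas {
                    fn draw_rect(&mut self, x: f32, y: f32, w: f32, h: f32) {
                        println!("draw rect at ({x}, {y}) size {w}x{h}");
                    }
                }

                // Hypothetical retained-mode style: the UI is described as data that persists,
                // and the library decides when and how to (re)draw it.
                enum Node {
                    Rect { x: f32, y: f32, w: f32, h: f32 },
                    Group(Vec<Node>),
                }

                fn main() {
                    // Immediate mode: this would run once per frame.
                    let mut canvas = Canvas;
                    canvas.draw_rect(0.0, 0.0, 100.0, 50.0);

                    // Retained mode: build the scene once, hand it to the renderer.
                    let _scene = Node::Group(vec![Node::Rect { x: 0.0, y: 0.0, w: 100.0, h: 50.0 }]);
                }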

                1. 4

                  There is also discussion of making an XML format for specifying the DOM, like HTML.

                  1. 1

                    It’s using Mozilla’s WebRender, so how about XUL?

                    1. 1

                      Considering that Mozilla is actively trying to get rid of XUL, doing anything new with it seems like a bad idea.

                      But also, if I understand what XUL is correctly, it’s mostly a defined list of widgets over a generic XML interface, whereas if I understand that proposal properly, it’s to make the list of widgets completely user-controllable (though there will no doubt be some default ones, including HTML-like ones).

                2. 1

                  WebRender is basically a GPU-powered rectangle compositor, with support for the kinds of settings / filters you can put on HTML elements. It’s nowhere near the bloated monstrosity that is electron.

                1. 21

                  So I think I’m a bit late for the big go and rust and garbage collection and borrow checker discussion, but it took me a while to digest, and came up with the following (personal) summary.

                  Determining when I’m done with a block of memory seems like something a computer could be good at. It’s fairly tedious and error prone to do by hand, but computers are good at monotonous stuff like that. Hence, garbage collection.

                  Or there’s the rust approach, where I write a little proof that I’m done with the memory, and then the computer verifies my proof, or rejects my program. Proof verification is also something computers are good at. Nice.

                  But writing the proof is still kind of a pain in the ass, no? Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done. Go figure out the refs and borrows to make it work, kthxbye.

                  1. 18

                    But writing the proof is still kind of a pain in the ass, no? Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done. Go figure out the refs and borrows to make it work, kthxbye

                    I’m in the middle of editing an essay on this! Long story short, proving an arbitrary code property is undecidable, and almost all the decidable cases are in EXPTIME or worse.

                    1. 10

                      I’m kinda familiar with undecidable problems, though with fading rigor these days, but the thing is, undecidable problems are undecidable for humans too. The impossible task becomes no less impossible by making me do it!

                      I realize it’s a pretty big ask, but the current state of the art seems to be redefine the problem, rewrite the program, find a way to make it “easy”. It feels like asking a lot from me.

                      1. 10

                        The problem is undecidable (or very expensive to decide) in the most general case; what Rust does is solve it in a more limited case. You just have to prove that your usage fits into this more limited case, hence the pain in the ass. Humans can solve more general cases of the problem than Rust can, because they have more information about the problem. Things like “I only ever call function B with inputs produced from function A, function A can only produce valid inputs, so function B doesn’t have to do any input validation”. Making these proofs without computer assistance is no less of a pain in the ass. (Good languages make it easy to enforce these proofs automatically at compile or run time, good optimizers remove redundant runtime checks.)

                        Even garbage collectors do this; their safety guarantees are a subset of what a perfect solution would provide.
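                        As a concrete (if contrived) sketch of the function A / function B point above, with hypothetical names: this is the kind of invariant a human knows and can push into the type system, so the checker only has to verify it locally.

                        // The invariant "function_b only ever gets valid input" is encoded in a type.
                        struct ValidInput(u32);

                        // function_a is the only way to construct a ValidInput, so it is the single
                        // place where the invariant ("value is non-zero") gets checked.
                        fn function_a(raw: u32) -> Option<ValidInput> {
                            if raw != 0 { Some(ValidInput(raw)) } else { None }
                        }

                        // function_b relies on the invariant without re-checking it at runtime.
                        fn function_b(input: &ValidInput) -> u32 {
                            1000 / input.0 // safe: input.0 is never zero by construction
                        }

                        fn main() {
                            if let Some(v) = function_a(4) {
                                println!("{}", function_b(&v));
                            }
                        }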

                        1. 3

                          “Humans have more information about the problem”

                          And this is why a conservative borrow checker is ultimately the best. It can be super optimal, and not step on your toes. It’s up to the human to adjust the lifetime of memory because only the human knows what it wants.

                          I AM NOT A ROBOT BEEP BOOP

                        2. 3

                          Humans have a huge advantage over the compiler here though. If they can’t figure out whether a program works or not, they can change it (with the understanding gained by thinking about it) until they are sure it does. The compiler can’t (or shouldn’t) go making large architectural changes to your code. If the compiler tried its hardest to be as smart as possible about memory, the result would be that when it says “I give up, the code needs to change” the human who can change the code is going to have a very hard time understanding why and what they need to change (since they haven’t been thinking about the problem).

                          Instead, what Rust does is apply as intelligent a set of rules as they could that produce consistent, understandable results for the human. So the compiler can say “I give up, here’s why”. And the human can say “I know how the compiler will work, it will accept this this time” instead of flailing about trying to convince the compiler it works.

                          1. 1

                            I realize it’s a pretty big ask

                            I’ve been hearing this phrase “big ask” lately from business people generally; it seems very odd to me. Is it new or have I just missed it up to now?

                            1. 2

                              I’ve been hearing it from “business people” for a couple years at least, I assume it’s just diffusing out slowly to the rest of society.

                              The new one I’m hearing along these lines is “learnings”. I think people just think it makes them sound smart if they use different words.

                              1. 1

                                A “learning”, as a noun, is attested at least as far back as the early 1900s, FYI.

                                1. 0

                                  This sort of comment annoys me greatly. Someone used a word incorrectly 100 years ago. That doesn’t mean it’s ‘been a word for 100 years’ or whatever you’re implying. ‘Learning’ is not a noun. You can argue about the merits of prescriptivism all you like, you can have whatever philosophical discussion you like as to whether it’s valid to say that something is ‘incorrect English’, but ‘someone used it in that way X hundred years ago’ does not justify anything.

                                  1. 2

                                    This sort of comment annoys me greatly. Someone used a word incorrectly 100 years ago. That doesn’t mean it’s ‘been a word for 100 years’ or whatever you’re implying. ‘Learning’ is not a noun.

                                    It wasn’t “one person using it incorrectly” that’s not even remotely how attestation works in linguistics. And of course, of course it is very much a noun. What precisely, man, do you think a gerund is? We have learning curves, learning processes, learning centres. We quote Pope to one another when we say that “a little learning is a dangerous thing”.

                                    To take the position that gerunds aren’t nouns and cannot be pluralized requires objecting to such fluent Englishisms as “the paintings on the wall”, “partings are such sweet sorrow”, “I’ve had three helpings of soup”.

                                    1. 0

                                      ‘Painting’ is the process of painting. You can’t pluralise it. It’s also a (true) noun, the product of doing some painting. There it obviously can be pluralised. But ‘the paintings we did of the house kept improving the sheen of the walls’ is not valid English. They’re different words.

                                      1. 2

                                        LMAO man, how do you think Painting became a “true” noun? It’s just a gerund being used as a noun that you’re accustomed to. One painted portraits, landscapes, still lifes, studies, etc. To group all these things together as “paintings” was an instance of the exact same linguistic phenomenon that gives us the idea that one learns learnings.

                                        You’re arguing against literally the entire field of linguistics here on the basis of gut feelings and ad hoc nonsense explanations.

                                        1. 0

                                          You’re arguing against literally the entire field of linguistics here on the basis of gut feelings and ad hoc nonsense explanations.

                                          No, I’m not. This has literally nothing to do with linguistics. That linguistics is a descriptivist scientific field has nothing to do with whether ‘learnings’ is a real English word. And it isn’t. For the same reason that ‘should of’ is wrong: people don’t recognise it as a real word. Words are what we say words are. People using language wrong are using it wrong in the eyes of others, which makes it wrong.

                                          1. 1

                                            That linguistics is a descriptivist scientific field has nothing to do with whether ‘learnings’ is a real English word. And it isn’t. For the same reason that ‘should of’ is wrong: people don’t recognise it as a real word. Words are what we say words are.

                                            Well, I hate to break it to you, but plenty of people say learnings is a word, like all of the people you were complaining use it as a word.

                                            1. 0

                                              There are lots of people that write ‘should of’ when they mean ‘should’ve’. That doesn’t make them right.

                                              1. 1

                                                Yes and OK is an acronym for Oll Korrect, anyone using it as a phrase is not OK.

                                                1. 0

                                                  OK has unknown etymology. And acronyms are in no way comparable to simply incorrect grammar.

                                                  1. 1

                                                    Actually it is known. Most etymologists agree that it came from Boston in 1839 originating in a satirical piece on grammar. This was responding to people who insist that English must follow some strict unwavering set of laws as though it were a kind of formal language. OK is an acronym, and it stands for Oll Korrect, and it was literally invented to make pedants upset. Certain people were debating the use of acronyms in common speech, and to lay it on extra thick the author purposefully misspelled All Correct. The word was quickly adopted because pedantry is pretty unpopular.

                                                    1. 1

                                                      What I said is that there is what is accepted as valid and what is not. Nobody educated thinks that ‘should of’ is valid. It’s a misspelling of ‘should’ve’. Nobody thinks ‘shuold’ is a valid spelling of ‘should’ either. Is this really a debate you want to have?

                                                      1. 1

                                                        I was (mostly) trying to be playful while also trying to encourage you to be a little less litigious about how people shuold and shuold not use words.

                                                        Genuinely sorry for making you actually upset though, I was just trying to poke fun a little for getting a bit too serious at someone over smol beans, and I was not trying to make you viscerally angry.

                                                        I also resent the attitude that someone’s grammatical or vocabulary knowledge of English represents an “education”.

                              2. 1

                                It seems like in the last 3 years all the execs at my company started phrasing everything as “The ask is…” I think they are trying to highlight that you have input (you can answer an ask with no) vs an order.

                                In practice, of course, many “asks” are orders.

                                1. 4

                                  Sure, but we already have a word for that, it’s “request”.

                                  1. 4

                                    Sure, but the Great Nouning of Verbs in English has been an ongoing process for ages and continues apace. “An ask” is just a more recent product of the process that’s given us a poker player’s “tells”, a corporation’s “yearly spend”, and the “disconnect” between two parties’ understandings.

                                    All of those nouned verbs have or had perfectly good non-nominalized verb nouns, at one point or another in history.

                                    1. 1

                                      One that really upsets a friend of mine is using ‘invite’ as a noun.

                                2. 1

                                  Newly popular? MW quotes this usage and says Britishism.

                                  https://www.merriam-webster.com/dictionary/ask

                                  They don’t date the sample, but I found it’s from a 2008 movie review.

                                  https://www.spectator.co.uk/2008/10/cold-comfort/

                                  So at least that old.

                              3. 3

                                You no doubt know this, but the undecidable stuff mostly becomes decidable if you’re willing to accept a finite limit on addressable memory, which anyone compiling for, say, x86 or x86_64 is already willing to do. So imo it’s the intractability rather than undecidability that’s the real problem.

                                1. 1

                                  It becomes decidable by giving us an upper bound on the number of steps the program can take, so should require us to calculate the LBA equivalent of a very large BB. I’d call that “effectively” undecidable, which seems like it would be “worse” than intractable.

                                  1. 2

                                    I agree it’s, let’s say, “very” intractable to make the most general use of a memory bound to verify program properties. But the reason it doesn’t seem like a purely pedantic distinction to me is that once you make a restriction like “64-bit pointers”, you do open up a bunch of techniques for finite solving, some of which are actually usable in practice to prove properties that would be undecidable without the finite-pointer restriction. If you just applied Rice’s theorem and called verifying those properties undecidable, it would skip over the whole class of things that can be decided by a modern SMT solver in the 32-bit/64-bit case. Granted, most still can’t be, but that’s why the boundary that interests me more nowadays is the “SMT can solve this” vs. “SMT can’t solve this” one rather than the CS-theory sense of decidable/undecidable.

                              4. 6

                                Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done.

                                It’s really hard. The main tool for that is separation logic. Manually doing it is harder than borrow-checking stuff. There are people developing solvers to automate such analyses. Example. It’s possible what you want will come out of that. I think there will still be restrictions on coding style to ease analyses.

                                1. 3

                                  In my experience, automated proof generators are very leaky abstractions. You have to know their search methods in detail, and present your hypotheses in a favorable way for those methods. It can look very clean, but it can mean that seemingly easy changes turn out to be frustrated by the methods’ limitations.

                                  1. 4

                                    I’m totally with you on this. Rust very much feels like an intermediate step, and I don’t know why they didn’t take it to its not-necessarily-obvious conclusion.

                                    1. 5

                                      In my personal opinion, it might be just that we’re happy that we can actually get to this intermediate point (of Rust) reliably enough, but have no idea yet how to get to the further point (conclusion). So they took it where they could, and left the subsequent part as an exercise for the reader… I mean, to be explored by future generations of programmers, hopefully.

                                      1. 4

                                        We have the technology, sort of. Total program analysis is really expensive though, and the workflow is still “edit some code” -> “compile on a laptop” -> repeat. Maybe if we built a gc’ed language that had a mode where you push your program to a long running job on a compute cluster to figure out all the memory proofs.

                                        This would be especially cool if incrementals could be cached.

                                        1. 4

                                          I’ve recommended that before. There’s millions being invested into SMT/SAT solvers for common bugs that might make that happen, too. Gotta wait for the tooling to catch up. My interim recommendation was a low-false-positive, static-analysis tool like RV-Match to be used on everything in the fast path. Anything that passes gets no GC. Anything that hangs or fails is GC’d. Same with automated proofs to eliminate safety checks: if it passes, remove whatever check that pass allows; if it fails, maybe it’s safe or maybe the tool is too dumb, so keep the check. Might not even need a cluster given the number of cores in workstations/servers and efficiency improvements in the tools.

                                        2. 4

                                          I think it’s because there’s essentially no chance that a random piece of code will be provable in such a way. Rust encourages, actually to the point of forcing, the programmer to reason about lifetimes and ownership along with other aspects of the type as they’re constructing the program.

                                          I think there may be a long term evolution as tools get better: the languages checks the proofs (which, in my dream, can be both types and more advanced proofs, say that unsafe blocks actually respect safety), and IDE’s provide lots of help in producing them.

                                          1. 2

                                            there’s essentially no chance that a random piece of code will be provable in such a way

                                            There must be some chance; rust is already proving memory safety.

                                            Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used, why can’t it determine for us how to borrow it and who owns it?

                                            1. 17

                                              Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used, why can’t it determine for us how to borrow it and who owns it?

                                              This is a misconception. The Rust compiler does not see anything beyond the function boundary. That makes lifetime checking efficient. Basically, when compiling a function, the compiler makes a reasonable assumption about how input and output references are connected (the assumption is “they are connected”, also known as “lifetime elision”). This is an assumption communicated to the outside world. If this assumption is wrong, you need to annotate lifetimes.

                                              When compiling, the compiler will check if the assumption holds for the function body. So, for every function call, it will check if the signature holds (lifetimes are part of the function signature).

                                              Note that functions with different lifetime annotations taking the same data might differ in their behaviour. It also isn’t always obvious to the compiler whether you want references to be bound together or not, and that situation might be ambiguous.

                                              The benefit of this model is that functions only need to be rechecked/compiled when they actually change, not some other code somewhere else in the program. It’s very predictable and errors are local to the function.
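                                              A small sketch of what that looks like in practice (plain Rust, nothing exotic): with a single input reference the elided assumption works out, while with two input references the signature has to spell out which input the output is tied to.

                                              // One input reference: the elision rules assume the output borrows from it,
                                              // so no explicit lifetime is needed.
                                              fn first_word(s: &str) -> &str {
                                                  s.split_whitespace().next().unwrap_or("")
                                              }

                                              // Two input references and a reference output: elision can't tell which input
                                              // the result is tied to, so the signature has to say it.
                                              fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
                                                  if a.len() >= b.len() { a } else { b }
                                              }

                                              fn main() {
                                                  let s = String::from("hello world");
                                                  println!("{}", first_word(&s));
                                                  println!("{}", longest(&s, "hi"));
                                              }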

                                              1. 2

                                                I’ve been waiting for you @skade.

                                                1. 2

                                                  Note that functions with different lifetime annotations taking the same data might differ in their behaviour.

                                                  I wrote this late at night and have some errata here: they might differ in their behaviour wrt. lifetime checking. Lifetimes have no impact on the runtime; an annotation might only prove something safe that the compiler previously didn’t see as safe.

                                                2. 4

                                                  Maybe I’m misunderstanding. I’m interpreting “take it to its conclusion” as accepting programs that are not annotated with explicit lifetime information but for which such an annotation can be added. (In the context of Rust, I would consider “annotation” to include choosing between &, &mut, and by-move, as well as adding .clone() when needed, especially for refcount types, and of course adding explicit lifetimes in cases that go beyond the present lifetime elision rules, which are actually pretty good). My point is that such a “smarter compiler” would fail a lot of the time, and that failures would be mysterious. There’s a lot of experience around this for analyses where the consequence of failure is performance loss due to not being able to do an optimization, or false positives in static analysis tools.

                                                  The main point I’m making here is that, by requiring the programmer to actually provide the types, there’s more work, but the failures are a lot less mysterious. Overall I think that’s a good tradeoff, especially with the present state of analysis tools.

                                                  1. 1

                                                    I’m interpreting “take it to its conclusion” as accepting programs that are not annotated with explicit lifetime information but for which such an annotation can be added.

                                                    I’ll agree with that definition

                                                    My point is that such a “smarter compiler” would fail a lot of the time, and that failures would be mysterious.

                                                    This is where I feel we disagree. I feel like you’re assuming that if we make lifetimes optional we would for some reason also lose the type system. That was not my assumption at all. I assumed the programmer would still pick their own types. With that in mind, if this theoretical compiler could prove memory safety using the developer-provided types and the inferred ownership, why would it still fail a lot?

                                                    where the consequence of failure is performance loss due to not being able to do an optimization

                                                    That’s totally understandable. I assume like any compiler, it would eventually get better at this. I also assume lifetimes become an optional piece of the program as well. Assuming this compiler existed it seems reasonable to me that it could accept and prove lifetimes provided by the developer along with inferring and proving on it own.

                                                    1. 3

                                                      Assuming this compiler existed it seems reasonable to me that it could accept and prove lifetimes provided by the developer along with inferring and proving on it own.

                                                      That’s what Rust does. And many improvements to Rust focus on increasing the number of lifetime patterns the compiler can recognize and handle automatically.

                                                      You don’t have to annotate everything for the compiler. You write code in patterns the compiler understands, and annotate things it doesn’t. So Rust has gotten easier and easier to write as the compiler gets smarter and smarter. It requires fewer and fewer annotations / unsafe blocks / etc as the compiler authors discover how to prove and compile more things safely.

                                                  2. 4

                                                    Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used, why can’t it determine for us how to borrow it and who owns it?

                                                    I wondered this at first, but inferring the lifetimes (among other issues) has some funky consequences w.r.t. encapsulation. Typically we expect a call to a function to continue to compile as long as the function signature remains unchanged, but if we infer the lifetimes instead of making them an explicit part of the signature, subtle changes to a function’s implementation can lead to new lifetime restrictions being inferred, which will compile fine for you but invisibly break all of your downstream callers.

                                                    When the lifetimes are an explicit part of the function signature, the compiler stops you from compiling until you either fix your implementation to conform to your public lifetime contract, or change your declared lifetimes (and, presumably, since you’ve been made conscious of the breakage in this scenario, notify your downstream and bump your semver).

                                                    It’s basically the same reason that you don’t want to infer the types of function arguments from how they’re used inside a function – making it easy for you to invisibly break your contract with the outside world is bad.
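                                                    For example (a sketch with made-up names), the explicit lifetime below is exactly that kind of public contract: callers may rely on the result not borrowing from `fallback`, and a body change that broke the promise would fail to compile at the definition instead of silently breaking every caller.

                                                    struct Cache {
                                                        data: String,
                                                    }

                                                    // Contract: the returned &str borrows only from `cache`, never from `fallback`.
                                                    fn get<'a>(cache: &'a Cache, _fallback: &str) -> &'a str {
                                                        &cache.data
                                                    }

                                                    fn main() {
                                                        let cache = Cache { data: String::from("cached value") };
                                                        let result;
                                                        {
                                                            let fallback = String::from("temporary fallback");
                                                            // OK: `result` may outlive `fallback`, because the signature
                                                            // says the result does not borrow from it.
                                                            result = get(&cache, &fallback);
                                                        }
                                                        println!("{}", result);
                                                    }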

                                                    1. 3

                                                      I think this is the most important point here. Types are contracts, and contracts can specify far more than just int vs string. Complexity, linearity, parametricity, side-effects, etc. are all a part of the contract and the more of it we can get the compiler to enforce the better.

                                              2. 1

                                                Which is fine, until you have time or memory constraints that are not easily met by a tracing GC, which happens in all software of sufficient scale or complexity. At that point, you end up with half-assed and painful-to-debug/optimize manual memory management in the form of pools, etc.

                                                1. 1

                                                  Or there’s the rust approach, where I write a little proof that I’m done with the memory, and then the computer verifies my proof, or rejects my program. Proof verification is also something computers are good at. Nice.

                                                  Oh I wish that were how Rust worked. But it isn’t. A variant of Rust where you could actually prove things about your programme would be wonderful. Unfortunately, in Rust, you instead just have ‘unsafe’, which means ‘trust me’.

                                                1. 2

                                                  I think the date (edit: on the merged Groklaw article) should be 2008. The article lists 2008, the comments are all from 2008, and this comment thread suggests there was a software mistake that printed 2006 but that the real date was 2008.

                                                  1. 3

                                                    That’s one of the beauties of the GPL, actually, that even if some individual gets a bug up his nose, or dies and his copyright is inherited by his wife who doesn’t care about the GPL and wants to take it proprietary, or just to imagine for a moment, a Megacorp were to buy off a GPL programmer and get him to pretend to revoke the GPL with threats, and even if it were to initiate a SCO-like bogo-lawsuit (based perhaps on a theory under Copyright Law § 203, termination of rights – lordy do we have to endure a living demo in some courtroom somewhere of every antiGPL theory found on every message board before they give up?), it doesn’t matter ultimately, I don’t think, as to what you can and can’t do with the GPL. The GPL is what it is.

                                                    That does not sound like a particularly robust counter argument to the termination clause. That’s more like ad hominem word salad.

                                                    1. 2

                                                      I think it’s because it was written in the style of the then-ongoing debate over the SCO lawsuit.

                                                      Unfortunately the linked post that inspired this one is lost to bitrot.

                                                  1. 10

                                                    • Logging out of Google -> Logging out of Chrome

                                                    This change avoids user confusion and leaking information to other users of the computer. It doesn’t leak information. It’s probably a good change.

                                                    • Logging out of Chrome -> Logging out of Google

                                                    This change is probably as likely to avoid user confusion as cause it. However it avoids leaking information to other users of the computer and doesn’t leak information to google. It’s probably a good change.

                                                    • Logging into Chrome -> Logging into Google

                                                    This change is probably mildly convenient for users, and probably avoids confusion. It doesn’t leak information, but it will cause some unsophisticated users to unknowingly overshare information with google. I’m basically ok with it.

                                                    • Logging into Google -> Logging into Chrome

                                                    This change leaks information to Google, is surprising behavior, and avoids no confusion. This is why I will not use Chrome to interact with Google services.

                                                    The reasons cited in the article unsurprisingly only support changes 1 and 2.

                                                    1. 4

                                                      I have first hand watched this confusion among my less-technical family. The “logged into Chrome” versus “logged into Gmail” issue. I think for the very non-technical, those two things aren’t as obviously different to them as it might be to someone who has a better idea of the boundaries between things.

                                                      I suspect for most users, this will be their computers just “doing what they want”.

                                                    1. 3

                                                      After reading that, I wonder why no one has started a fork yet. Perhaps if someone does, people will quickly join.

                                                      1. 8

                                                        Most people who could & would implement a fork, use PureScript instead.

                                                        1. 2

                                                          Because it is very hard and takes a lot of time I’d wager. Few have the time, money or drive to do such a thing.

                                                          1. 1

                                                            There’s not a substantial amount of money in maintaining a language like this, so it would pretty much have to be a labour of love.

                                                            Under those circumstances how many people would choose to fork an existing language and maintain it rather than create their own?

                                                            1. 1

                                                              Because the whole reason people use something like this is that they don’t want to develop and maintain it themselves.

                                                            1. 6

                                                              In Canada most home routers (well, from bell at least, which is one of two dominant ISPs) come with a long randomly generated wifi password stamped on them.

                                                               Specifically, 8 characters long. And for no apparent reason it is limited to hex ([0-9A-F]{8}), giving about 4 billion possible passwords. It takes about a day on my GTX 970M to try every single one against a captured handshake.

                                                               The default ESSIDs (wifi network names) are of the form BELL###, so there are a thousand extremely common ESSIDs. Apparently WPA only salts the password with the ESSID before hashing it and publicly broadcasting it as part of the handshake. In a few years of computation time on a decent laptop (far less if I rented some modern GPUs from Google…) I could make rainbow tables for every one of those IDs covering every possible default password.

                                                              On the bright side it looks like this new method extracts a hash that includes the mac addresses acting as a unique salt, so at least the rainbow table method will still require capturing a handshake.
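                                                               For anyone who wants to sanity-check the “about a day” figure, here’s a quick back-of-the-envelope sketch (the hash rate is an assumption for a GTX 970M-class GPU, not a measurement):

                                                               fn main() {
                                                                   let keyspace: u64 = 16u64.pow(8); // [0-9A-F]{8} => 16^8 ≈ 4.3 billion passwords
                                                                   let hashes_per_second: u64 = 50_000; // assumed WPA-PBKDF2 rate on a mobile GPU
                                                                   let seconds = keyspace / hashes_per_second;
                                                                   println!("keyspace: {}", keyspace);
                                                                   println!("worst case: ~{} hours", seconds / 3600);
                                                               }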

                                                              1. 2

                                                                Oh, ours from vodafone NZ are 16 chars 0-9a-zA-Z

                                                                1. 1

                                                                  I never had this realization. Now my head has exploded.

                                                                  What tool do you use to try these combinations? And is it heavily parallelized? To me 4 billion should not take a whole day…

                                                                  1. 1

                                                                     I experimented with pyrit (24h runtime; it builds some form of rainbow table, and I wrote a short program to pipe all the passwords to it) and hashcat (20h runtime, no support for rainbow tables, supports generating the password combinations by itself via command line flags). They are both heavily parallelized, 100% utilization of my GPU.

                                                                    My GPU is a relatively old GPU in a laptop with shitty cooling, which may contribute to the runtime.

                                                                    Running on a CPU it said it would take the better part of a month.

                                                                    1. 1

                                                                      Interesting. While waiting for a reply, I thought to myself: I wonder how much it would cost to run it on Google Compute with the best hardware. Could be worth it to those who want wifi for a week or longer without paying anything. Spooky.

                                                                  2. 1

                                                                     In Luxembourg every (Fritz)box comes with a password written only in the manual (not on the box itself) that is 20 hex characters (5×4 chars). It’s a pain to type at first, but it seems like a good one.

                                                                  1. 2

                                                                    Something I’ve been wondering about (and this is probably the wrong forum to ask about) is whether or not doing this would result in employees or executives having issues if they go to Europe?

                                                                    1. 0

                                                                      What do you mean?

                                                                      I’m doing GDPR consulting at the moment.

                                                                      1. 1

                                                                        I think the question is something along the lines of “could a company be prosecuted for violations of the GDPR if its employees visit or work in Europe”.

                                                                        I assume the answer is “no”, as long as they’re not actually doing business in Europe. (Which would be the primary reason to have employees there, but with the increased prevalence of remote work, it’s not necessarily the case.)

                                                                        1. 2

                                                                          I am fairly certain you could even go to EU and work in an office on data for non-EU customers and still not be subject to GDPR. As long as you are not dealing with any EU entities, your physical location should not matter.

                                                                          1. 1

                                                                            “It applies to all companies processing and holding the personal data of data subjects residing in the European Union, regardless of the company’s location.”

                                                                            https://www.eugdpr.org/gdpr-faqs.html

                                                                             So if you are working in the EU, your company would probably need to comply with GDPR, as it likely has personal information on you in its systems. I guess it comes down to how lawyers would interpret “residence”. Enforceable? Idk.

                                                                        2. 1

                                                                          Suppose I work for a company in Canada and that company flagrantly violate’s the GDPR. I later leave the company and move to Europe.

                                                                          Is it possible for Europe to come after me personally, instead of (or as well as) the company?

                                                                          What if I’m the CTO? CEO? Owner? Just an employee but directly responsible for the GDPR violations?

                                                                          What if I don’t leave the company and just go to Europe on a vacation?

                                                                          1. 4

                                                                            Is it possible for Europe to come after me personally, instead of (or as well as) the company?

                                                                            This is the entire point of the legal fiction of a “corporate person”. If a corporation is doing bad things, you go after the corporation. It’s very rare that anyone within the company directly is charged with a crime unless they’re knowingly and intentionally violating something. GDPR is fairly lenient with remediation and other things.

                                                                            What if I don’t leave the company and just go to Europe on a vacation?

                                                                            They’d more or less have to issue a warrant for you, and you would know.

                                                                            1. 2

                                                                              Maybe if it were egregious enough.

                                                                               The US has been known to go after employees of money launderers and copyright violators in other countries, so it’s not without an international precedent, but I’d need more information to give better advice.

                                                                        1. 3

                                                                          I’m disappointed that companies who own significant copyright in Linux (like RedHat or Intel) and industry groups like the BSA don’t go after intellectual property thieves like Tesla. There are plenty of non-Linux choices if companies don’t want to comply with the GPL’s license terms. Other car companies seem to be happy with VxWorks and similar.

                                                                          What’s the point of asking China to comply with American IP if the US won’t even police its own companies?

                                                                          1. 10

                                                                            I’m pretty unsurprised that a company like Intel or Red Hat wouldn’t sue. Lawsuits are expensive, and it’s not clear a GPL suit would produce any significant damages (can they show they’ve been damaged in any material way?), just injunctive relief to release the source code to users. So it’d be a pure community-oriented gesture, probably a net loss in monetary terms. And could end up a bigger loss, because with the modern IP regime as de-facto a kind of armed standoff where everyone accumulates defensive portfolios, suing someone is basically firing a first shot that invites them to dig through their own IP to see if they have anything they can countersue you over. So you only do that if you feel you can gain something significant.

                                                                            SFC is in a pretty different position, as a nonprofit explicitly dedicated to free software. So these kinds of lawsuits advance their mission, and since they aren’t a tech company themselves, there’s not much you can counter-sue them over. Seems like a better fit for GPL enforcement really.

                                                                            1. 8

                                                                              a GPL suit would produce any significant damages (can they show they’ve been damaged in any material way?

                                                                              This is generally why the FSF’s original purpose in enforcing the GPL was always to ensure that the code got published, not to try to shakedown anyone for money. rms told Eben in the beginning, make sure you make compliance the ultimate goal, not monetary damages. The FSF and the Conservancy both follow these principles. Other copyleft holders might not.

                                                                              1. 3

                                                                                Intel owned VxWorks until very recently. Tesla’s copyright violations competed directly with their business.

                                                                                1. 2

                                                                                  I’m not a lawyer but the GPL includes the term (emphasis added)

                                                                                  1. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

                                                                                  Even if monetary damages are not available (not sure if they are), it should be possibile to get injunctive relief revoking the right to use the software at all. Not just injunctive relief requiring them to release the source.

                                                                                  1. 3

                                                                                    This is from GPLv2.

                                                                                    GPLv3 is a bit more lenient:

                                                                                    However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

                                                                                    Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

                                                                                     Now, I think people should move to GPLv3 if they want this termination clause.

                                                                                     And in any case, 5 years is completely disrespectful of the various developers who contributed to Tesla through their contributions to the free software it adopted.

                                                                                    To that end, we ask that everyone join us and our coalition in extending Tesla’s time to reach full GPL compliance for Linux and BusyBox, not just for the 30 days provided by following GPLv3’s termination provisions, but for at least another six months.

                                                                                     As a developer, this sounds a lot like changing the license text for the benefit of big corporations without the contributors’ agreement.

                                                                                     When I read this kind of news I feel betrayed by the FSF.
                                                                                     I seriously wonder if we need a more serious, stronger copyleft.

                                                                                    1. 2

                                                                                      It is not without contributor agreement. Any contributor who does not agree is free to engage in their own compliance or enforcement activity. Conservancy can only take action on behalf of contributors who have explicitly asked them to.

                                                                                      The biggest problem is that most contributors do not participate in compliance or enforcement activities at all.

                                                                                      1. 1

                                                                                        Conservancy can only take action on behalf of contributors who have explicitly asked them to.

                                                                                        Trust me, it’s not that simple.

                                                                                        The biggest problem is that most contributors do not participate in compliance or enforcement activities at all.

                                                                                        Maybe contributors already agreed to contribute under the license terms and just want it to be enforced as is?

                                                                                        I’m sincerely puzzled by Software Freedom Conservancy.

                                                                                         Philosophically I like this gentle touch; I’d like to believe that companies will be inspired by their work.

                                                                                         But in practice, to my untrained eye, they weaken the GPL. Because the message to companies is that Conservancy is afraid to test the GPL in court to defend the developers’ will as expressed in the license. As if it were not that safe.

                                                                                        I’m not a lawyer, but as a developer, this scares me a bit.

                                                                                        1. 3

                                                                                           If contributors want their license enforced they have to do something about that. No one can legally enforce it for them (unless they enter an explicit agreement). There is no magical enforcement body, only us.

                                                                                          Conservancy’s particular strategy wouldn’t be the only one in use if anyone else did enforcement work ;)

                                                                                          1. 1

                                                                                            You are right. :-)

                                                                                2. 2

                                                                                  They’re asking China to comply with the kind of American IP that makes high margins, not the FOSS. They’re doing it since American companies are paying politicians to act in the companies’ interests, too.

                                                                                1. 1

                                                                                  In total there are 387 users online :: 42 Registered, 0 Hidden and 345 Guest
                                                                                  Most users ever online was 23 on Thu Mar 21, 2002 10:18 am

                                                                                  1. 7

                                                                                    I opted to use nightly to help make Firefox a more stable product for people less willing to put up with crashes, and to help test certain new technologies (stylo originally, recently webrender). I did not install it with the understanding that Mozilla would feel free to share my browsing habits with a third party.

                                                                                    It’s unfortunate that there is not a reasonable way to help test firefox that respects your rights, but I suppose I understand why they want some channel to test things like this. Either way, I’ve now uninstalled nightly and will stick to the stable version.

                                                                                    1. 4

                                                                                      Disable Shield Studies in the preferences menu. Then you can test Firefox without having it share your browsing data with the third party, with which an explicit agreement exists not to log your browsing data for anything beyond debugging purposes, and for no longer than 24 hours. That data is used to develop DoH, which massively increases user privacy.

                                                                                      1. 2

                                                                                        It’s not just this study I want to avoid. It’s any similar behavior in the future, which I may or may not hear about like I did this time. As the original post points out, there doesn’t seem to be any policy in place guaranteeing that this sort of behavior will be limited to shield studies. The attitudes displayed in the bug for this study seem to be along the lines of “if it’s not forbidden, this level of invasion of privacy is fine”, so I don’t see why I would trust that setting to prevent a similar thing from happening, except not as a “study”. Moreover, there is an open issue about that checkbox getting re-enabled, which inspires little confidence.

                                                                                        As for your attempt at minimizing the data shared: you’re right, it could be worse, but this is already far too much. Mozilla has no right to extend trust to a third party like this on my behalf. Even if they did, that agreement simply does not exist in the form you want it to, for the simple reason that Cloudflare is a US company. They do not have the power to enter into such an agreement, since by law they can be required to provide that data to the US government (which, to me, is a foreign government).

                                                                                        And to be perfectly frank, I wouldn’t even mind opting into this study, I already use a US dns provider. But the lack of trustworthiness and respect displayed towards nightly users means I will not be one.

                                                                                        1. 3

                                                                                          Again, you can opt out. The bug you mentioned is an open issue, as you noted; if you’d like to see it fixed, you can submit reports or code to help out.

                                                                                          Mozilla is not a closed organization, almost everything happens in public.

                                                                                          The study hasn’t even started yet, so I’m not sure why people are freaking out, unless Mozilla will actually implement this as opt-out (the mailing list seems to indicate otherwise).

                                                                                          1. 2

                                                                                            I can opt out this time because I heard about it in advance. There is no guarantee I will hear about it in advance next time, so next time I may not be able to. Mozilla in doing this (even in seriously considering this) has clearly indicated that they do not respect the rights of nightly users.

                                                                                            As I read it, the mailing list has everyone approving the opt-out version, so I don’t think the mailing list indicates otherwise. Specifically, of the 6 people who were asked to approve it, 5 already have. From context I think the last one is a lawyer checking if it’s legal? When the test for whether you’re violating privacy too much is “is this literally illegal”, you don’t impress me.

                                                                                            The fact that it’s a bug doesn’t terribly matter; it means that there isn’t even an effective way to opt out of the studies. The only proper response upon discovering that bug was halting all shield studies until they found the cause, fixed it, and alerted everyone who might have had the button automatically re-checked (which in practice may mean every nightly user). I’d care more about it if Mozilla hadn’t already lost my trust in regards to nightly in general, and if it provided some form of meaningful consent (which it doesn’t, since it could easily have remained checked simply because I didn’t hear about it or realize what it meant).

                                                                                            1. 2

                                                                                              The opt out is not per study; you can opt out of any shield studies. And as others have noted, Nightly is not for production. It is a playground for Mozilla where they test exactly these kinds of features.

                                                                                              If you run Beta software, be prepared that it does things you wouldn’t expect.

                                                                                              I’m still fairly convinced people are simply overreacting over simple A/B tests to improve privacy of Firefox proper.

                                                                                              1. 2

                                                                                                The opt out is not per study, but it is per feature. I see no reason to believe that the only way Mozilla will do things like this is through shield studies, when none of their policies indicate that this is the case. It is also not clearly communicated what the opt out is for: assuming it has the same text above it as in Firefox stable (I have uninstalled nightly already and didn’t check first), it says immediately above it that “We always ask permission before receiving personal information”, which is plainly not the case here.

                                                                                                I expected nightly to be buggy, to crash sometimes, and maybe to not work at all once in a while. That was about the case (surprisingly unbuggy without enabling things like webrender.enabled.all, actually). I did not expect nightly to intentionally violate my privacy.

                                                                                                As I said at the start, maybe I just misunderstood what nightly is for, either way I’m not going to continue using it.

                                                                                                1. 2

                                                                                                  Mozilla explicitly notes in their privacy page that Nightly has different privacy characteristics than the normal distribution, so no, it’s not simply “the same text”.

                                                                                                  If you value privacy you should not be using a pre-beta release of a software meant for (A/B) testing latest features.

                                                                                    1. 1

                                                                                      Since Cloudflare serves an inordinate amount of Internet traffic, I’m not sure this changes much.

                                                                                      1. 1

                                                                                        If I understood correctly, this sends all hostnames to Cloudflare, not just those served by Cloudflare. I think it is a significant change.

                                                                                        1. 3

                                                                                          Not many DoH servers out there. Somebody has to host them.

                                                                                          1. 1

                                                                                            It seems like it should be possible to find a DNS provider that already exists to host them, and then only test on users who already use that DNS provider.

                                                                                            1. 6

                                                                                              Let me amend what I said: There are no DoH servers out there except for the one Mozilla set up. DoH isn’t even a standard yet. It’s a draft proposal to the IETF.

                                                                                              1. 2

                                                                                                Google has a DoH server, not sure how compliant with the draft though.

                                                                                                Meanwhile Yandex has a DNSCrypt server (and client in their browser).

                                                                                                1. 1

                                                                                                  That doesn’t change what I said. Instead of partnering with Cloudflare to have them implement this/run servers providing it, partner with someone who already provides DNS to a substantial number of users. Either way you are requiring someone to implement new technology; this just changes who that someone is.

                                                                                            2. 1

                                                                                              It’s a change, but it would only matter if you trust Cloudflare with seeing a significant fraction of your traffic yet are not OK with them seeing all your DNS queries. I can’t fathom how that makes sense.

                                                                                          1. 2

                                                                                            I use both Sublime Text and Visual Studio Code, but I often noticed VSCode having some decent input lag (at least when using the Vim mode), whereas Sublime, once it is started, rarely lags to the same extent.

                                                                                            That being said, there are a lot of things that VSCode does that can be quite useful, and somehow, it manages to have a terminal that is less laggy than ConEmu.

                                                                                            1. 1

                                                                                              VSCode’s vim emulation is really laggy.

                                                                                              I was playing with throttling my CPU to 400MHz to see how slow certain things were, and an unexpected consequence was that it took substantial time (felt like about half a second, but I didn’t measure) for a keystroke in VSCode to actually register. Turning off the vim extension fixed this entirely.

                                                                                              1. 2

                                                                                                So, minor update on this: I don’t think it’s just the Vim Emulation that’s laggy. I suspect there are other extensions that are similarly laggy (I didn’t test to figure out which ones, because I have other things to do at the moment). I actually switched from VS Code back to Sublime Text for Angular/Typescript development due to just getting fed up with the general lack of responsiveness I was getting. Given that Sublime Text has autocomplete for TypeScript, I’m not losing a great deal.

                                                                                                1. 1

                                                                                                  That’s a shame. I like having my Vim keys. Maybe I’ll have to look into other Vim extensions for VS Code.

                                                                                              1. 5

                                                                                                Another case of using “the null garbage collector” is (was?) the dmd compiler for D.

                                                                                                1. 4

                                                                                                  That’s also worth bookmarking as a great illustration of why programmers should instrument and profile code instead of guessing which optimizations should work. The author, who is a very experienced programmer, was getting the intuitive choices wrong. Switching to instrumenting the code showed exactly where the problems were. That let the author make better choices for optimization that actually worked.

                                                                                                1. 6

                                                                                                  As a (primarily) C programmer who has been eyeing Rust from a distance with some interest, the author makes a number of compelling points – but from what I’ve read elsewhere…

                                                                                                  No integer overflow

                                                                                                  Enough said.

                                                                                                  No, very much not enough said – if this is an issue you care about, this is a gross oversimplification. Such a description might be accurate for a language with automatic bignum promotion, where integer overflow really can be said (within the bounds of memory) not to happen – Python, say. But the situation in Rust, while yes, probably preferable to the one in C in most ways, isn’t that simple.

                                                                                                  1. 2

                                                                                                    Yeah, it wasn’t obvious to me why that was ”enough said.” I use (unsigned) overflow on purpose quite a lot in audio programming.

                                                                                                    I think it’s nice how Swift made overflow trap, using regular arithmetic operators, but added versions prefixed with & to opt-out, e.g. &+.

                                                                                                    1. 4

                                                                                                      In case you didn’t check the article 1amzave linked:

                                                                                                      Rust has .wrapping_<op> methods for two’s-complement arithmetic (and a few other variants: saturating; checked, which gives a handleable error on overflow; and overflowing, which wraps and tells you whether it wrapped), as well as a Wrapping<T> type that makes the normal operators wrapping.

                                                                                                      It doesn’t have a fancy &+ syntax though, which is probably a good thing IMO given how rarely wrapping arithmetic is used in general.
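
                                                                                                      For anyone curious, here’s a small runnable sketch of those variants on u8 (the specific values are just made up for illustration):

                                                                                                          use std::num::Wrapping;

                                                                                                          fn main() {
                                                                                                              let x: u8 = 250;

                                                                                                              assert_eq!(x.wrapping_add(10), 4);            // two's-complement wraparound
                                                                                                              assert_eq!(x.checked_add(10), None);          // overflow reported as None instead of panicking
                                                                                                              assert_eq!(x.saturating_add(10), 255);        // clamps at u8::MAX
                                                                                                              assert_eq!(x.overflowing_add(10), (4, true)); // wrapped value plus a "did it wrap" flag

                                                                                                              // Wrapping<T> makes the ordinary operators wrap.
                                                                                                              let y = Wrapping(250u8) + Wrapping(10u8);
                                                                                                              assert_eq!(y.0, 4);
                                                                                                          }

                                                                                                      The Wrapping<T> form is handy when a whole computation (hashes, PRNGs, audio phase accumulators) is supposed to wrap, so you don’t have to sprinkle .wrapping_add() everywhere.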

                                                                                                      1. 1

                                                                                                        Yes, I didn’t know about those before coming here for the comments!

                                                                                                        &+ is not really special syntax in Swift though, since it allows user-defined operators, for better or worse.

                                                                                                    2. 2

                                                                                                      Rust panics on overflow by default, but provides functions that explicitly allow integer overflow wrapping, as well as functions for checked arithmetic and saturating arithmetic:

                                                                                                      https://doc.rust-lang.org/std/primitive.u32.html

                                                                                                      This seems like the best of all approaches to me.

                                                                                                      1. 4

                                                                                                        Rust panics on overflow by default

                                                                                                        But not in release mode. (This has bitten me, painfully!) Worth being vigilant while coding, in case your code runs into edge cases in production that it doesn’t hit in testing.
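
                                                                                                        To make the difference concrete, here’s a minimal sketch (the helper function and the fallback value of 255 are just for illustration): a debug build panics at the addition, while a default release build silently wraps.

                                                                                                            fn add_one(x: u8) -> u8 {
                                                                                                                // Debug builds panic here ("attempt to add with overflow");
                                                                                                                // a default release build wraps instead, so add_one(255) returns 0.
                                                                                                                x + 1
                                                                                                            }

                                                                                                            fn main() {
                                                                                                                // Take the value from the command line so the overflow can't be rejected at compile time.
                                                                                                                let x: u8 = std::env::args()
                                                                                                                    .nth(1)
                                                                                                                    .and_then(|s| s.parse().ok())
                                                                                                                    .unwrap_or(255);

                                                                                                                println!("{} + 1 = {}", x, add_one(x));
                                                                                                            }

                                                                                                        Running it with cargo run panics, while cargo run --release prints “255 + 1 = 0”. If you want the checks in release builds too, you can set overflow-checks = true under [profile.release] in Cargo.toml.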

                                                                                                        1. 1

                                                                                                          Thanks for pointing that out! I somehow missed that important detail. I’ll have to keep that in mind!

                                                                                                    1. 2

                                                                                                      Anyone have a copy?

                                                                                                      1. 2

                                                                                                        Just google “iboot github” and find a not-yet-DMCA’d link. Currently https://github.com/emrakul2002/iboot works.

                                                                                                        1. 1

                                                                                                          Apparently the original upload has been taken down, but there are more copies that can easily be found by searching the same site. I would assume that a lot of people have copies by now…

                                                                                                        1. 2

                                                                                                          I use cryfs for this; it’s a transparent FUSE filesystem that maps one folder with plaintext (don’t store this in Dropbox) into another folder with a bunch of ciphertext blocks (store this in Dropbox).

                                                                                                          Doesn’t (yet) work on windows, not sure about mobile, but it’s pretty painless.

                                                                                                          1. 1

                                                                                                            Although I’d prefer having everything encrypted client-side, this would break all the Dropbox functionality on my phone – hence I went with something in between. Thanks for sharing your interesting setup!

                                                                                                            1. 0

                                                                                                              That looks incredibly painful. There’s no way it works on anything but a Linux desktop.

                                                                                                              1. 3

                                                                                                                Should work on Mac too… but I live my life on Linux desktops, so that’s good enough for me.

                                                                                                                Keep in mind that the alternative we are comparing to is “manually click a bunch of buttons to encrypt a PDF for anything you want to keep secure”.

                                                                                                            1. 4

                                                                                                              I thought Moxie’s response on HN (technically responding to the wired article, but I don’t think he would say anything substantially different about the blog post) was really good. The conclusion of which is

                                                                                                              To me, this article reads as a better example of the problems with the security industry and the way security research is done today, because I think the lesson to anyone watching is clear: don’t build security into your products, because that makes you a target for researchers, even if you make the right decisions, and regardless of whether their research is practically important or not. It’s much more effective to be Telegram: just leave cryptography out of everything, except for your marketing.

                                                                                                              1. 2

                                                                                                                Nonsense. If you build an end-to-end encrypted thing, you inherently treat the server as your adversary. Hence, don’t handle key management on the server.

                                                                                                                1. 1

                                                                                                                  I think this accurately highlights a real problem with how security and privacy are talked about in popular culture and even in many technical outlets: they are seen as something you either have or don’t. In reality, of course, all technology has to balance security with other concerns, such as usability, cost of building and maintaining the product, technical feasibility, etc. There is no such thing as completely secure software, only software which is secure enough for a certain purpose. Signal says that their service is designed to combat passive surveillance, and I think you could make a case that what this article is describing is more of an active/targeted attack. Which, of course, is not an argument against plugging the hole in Signal’s model if possible.

                                                                                                                  Signal has done a pretty good job of maximizing security while providing a nice user interface. It is probably worth pointing out that it is still a better option than many of the alternatives in articles like this one.