Threads for Manishearth

    1. 1

      The beginning led me to think that this might be talking about something like Cap’n Proto, where the format is designed so that you can work with it in memory without any (de)serialization. Formats like that have their own difficulties to deal with, though, like the fact that Cap’n Proto needs 5 different types of pointers to be encodable efficiently while remaining easily modifiable in memory.

      1. 1

        Not quite: Cap’n Proto is regular zero-copy deserialization (and the previous post in this series talks about zerovec, a crate that brings this to Rust’s serde framework). Cap’n Proto (and zerovec + serde) still require a validating deserialization step, it’s just cheap. You could skip the validation, but that would potentially be a vector for memory unsafety.
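        As a rough illustration of that tradeoff (using std’s UTF-8 APIs as a stand-in; zerovec’s actual types differ):

          use std::str;

          // Zero-copy "deserialization": validate the borrowed bytes once,
          // then reinterpret them in place. No allocation, no copying.
          fn load_str(bytes: &[u8]) -> Result<&str, str::Utf8Error> {
              str::from_utf8(bytes)
          }

          // Skipping validation (str::from_utf8_unchecked) would be truly
          // free, but then malformed input becomes a memory-safety hazard.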

        The approach in the post, by contrast, requires no validation, ever, with the tradeoff that you have to bake the data into your binary.

        And on the plus side, it is far more flexible in what kinds of types it supports, since types just have to be const-constructible.
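        A minimal sketch of the “baked into your binary” idea (illustrative types, not the post’s actual API):

          // Const-constructible data living in static memory: "loading" it
          // is just taking a reference, so no validation step can exist.
          struct Entry {
              key: &'static str,
              value: u32,
          }

          static TABLE: &[Entry] = &[
              Entry { key: "one", value: 1 },
              Entry { key: "two", value: 2 },
          ];

          fn lookup(key: &str) -> Option<u32> {
              TABLE.iter().find(|e| e.key == key).map(|e| e.value)
          }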

    2. 25

      Note a couple things:

      • With the 2021 edition, Rust plans to change closure capturing to only capture the necessary fields of a struct when possible, which will make closures easier to work with.
      • With the currently in-the-works existential type declarations feature, you’ll be able to create named existential types which implement a particular trait, permitting the naming of closure types indirectly.

      My general view is that some of the ergonomics here can currently be challenging, but there exist patterns for writing this code successfully, and the ergonomics are improving and will continue to improve.
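      To make the first change concrete, here’s a sketch that is a borrow error before the 2021 edition but should compile under it:

        struct State {
            name: String,
            count: u32,
        }

        fn main() {
            let mut s = State { name: "demo".into(), count: 0 };
            // Pre-2021 editions: the closure captures all of `s`, so the
            // mutation below conflicts. With 2021's disjoint capture,
            // only `s.name` is borrowed.
            let greet = || println!("hello, {}", s.name);
            s.count += 1;
            greet();
        }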

      1. 12

        I have very mixed feelings about stuff like this. On the one hand, these are really cool (and innovative – not many other languages try and do async this way) solutions to the problems async Rust faces, and they’ll definitely improve the ergonomics (like async/await did).

        On the other hand, adding all of this complexity to the language makes me slightly uneasy – it kind of reminds me of C++, where they just keep tacking things on. One of the things I liked about Rust 1.0 was that it wasn’t incredibly complicated, and that simplicity somewhat forced you into doing things a particular way.

        Maybe it’s for the best – but I really do question how necessary all of this async stuff really is in the first place (as in, aren’t threads a simpler solution?). My hypothesis is that 90% of Rust code doesn’t actually need the extreme performance optimizations of asynchronous code, and will do just fine with threads (and for the remaining 10%, you can use mio or similar manually) – which makes all of the complexity hard to justify.

        I may well be wrong, though (and, to a certain extent, I just have nostalgia from when everything was simpler) :)

        1. 9

          I don’t view either of these changes as much of a complexity add. The first, improved closure capturing, works to me like the partial moves and disjoint borrows the language already supports, making it more logically consistent, not less. For the second, Rust already has existential types (impl Trait); this is enabling them to be used in more places. They work the same in all places, though.
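          Concretely, the difference is roughly between today’s return-position impl Trait and the in-progress named form (a sketch using the nightly type_alias_impl_trait feature; the exact details may still shift):

            #![feature(type_alias_impl_trait)] // nightly-only as of this writing

            // Stable today: the existential type is anonymous and tied to
            // this one function signature.
            fn evens() -> impl Iterator<Item = u64> {
                (0..).step_by(2)
            }

            // In the works: the same existential type, but named, so it can
            // appear in other positions (struct fields, other signatures).
            type Evens = impl Iterator<Item = u64>;

            fn evens_named() -> Evens {
                (0..).step_by(2)
            }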

          1. 11

            I’m excited about the extra power being added to existential types, but I would definitely throw it in the “more complexity” bin. AIUI, existential types will be usable in more places, but it isn’t like you’ll be able to treat it like any other type. It’s this special separate thing that you have to learn about for its own sake, but also in how it will be expressed in the language proper.

            This doesn’t mean it’s incidental complexity or that the problem isn’t worth solving or that it would be best solved some other way. But it’s absolutely extra stuff you have to learn.

            1. 2

              Yeah, I guess my view is that the challenge of “learn existential types” is already present with impl Trait, but you’re right that making the feature usable in more places increases the pressure to learn it. Coincidentally, the next post for Possible Rust (“How to Read Rust Functions, Part 2”) includes a guide to impl Trait / existential types intended to be a more accessible alternative to Varkor’s “Existential Types in Rust.”

              1. 6

                but you’re right that making the feature usable in more places increases the pressure to learn it

                And in particular, by making existential types more useful, folks will start using them more. Right now, for example, I would never use impl Trait in a public API of a library unless it was my only option, due to the constraints surrounding it. I suspect others share my reasons too. So it winds up not getting as much visible use as maybe it will get in the future. But time will tell.

          2. 3

            eh, fair enough! I’m more concerned about how complex these are to implement in rustc (slash alternative compilers like mrustc), but what do I know :P

            1. 7

              We already use this kind of analysis for splitting borrows, so I don’t expect this will be hard. I think rustc already has a reusable visitor for this.

              (mrustc does not intend to compile newer versions of rust)

            2. 1

              I do think it is the case that implementation complexity is ranked unusually low in Rust’s design decisions, but if I think about it, I really can’t say it’s the wrong choice.

        2. 4

          Definitely second your view here. The added complexity and the trajectory mean I don’t feel comfortable using Rust in a professional setting anymore. You need significant self-control to write maintainable Rust; that’s not a good fit for large teams.

          What I want is Go-style focus on readability, pragmatism and maintainability, with a borrow checker. Not ticking off ever-growing feature lists.

          1. 9

            The problem with a Go-style focus here is: what do you remove from Rust? A lot of the complexity in Rust is, IMO, necessary complexity given its design constraints. If you relax some of its design constraints, then it is very likely that the language could be simplified greatly. But if you keep the “no UB outside of unsafe and zero cost abstractions” goals, then I would be really curious to hear some alternative designs. Go more or less has the “no UB outside of unsafe” (sans data races), but doesn’t have any affinity for zero cost abstractions. Because of that, many things can be greatly simplified.

            Not ticking off ever-growing feature lists.

            Do you really think that’s what we’re doing? Just adding shit for its own sake?

            1. 9

              Do you really think that’s what we’re doing?

              No, and that last bit of my comment is unfairly attributed venting, I’m sorry. Rust seemed like the holy grail to me, I don’t want to write database code in GCd languages ever again; I’m frustrated I no longer feel confident I could corral a team to write large codebases with Rust, because I really, really want to.

              I don’t know that my input other than as a frustrated user is helpful. But I’ll give you two data points.

              One: I’ve argued inside my org - a Java shop - to start doing work in Rust. The learning curve of Rust is a damper. For me and my use case, the killer Rust feature would be reducing that learning curve. So, what I mean by “Go-style pragmatism” is things like append.

              Rather than say “Go does not have generics, we must solve that to have append”, they said “let’s just hack in append”. It’s not pretty, but it means a massive language feature was not built to scratch that one itch. If “all abstractions must have zero cost” is forcing the introduction of language features that in turn make the language increasingly hard to understand, perhaps the right thing to do is, sometimes, to break the rule.

              I guess this is exactly what you’re saying, and I guess - from the outside at least - it certainly doesn’t look like this is where things are headed.

              Two, I have personal subjective opinions about async in general, mostly as it pertains to memory management. That would be fine and well, I could just not use it. But the last few times I’ve started new projects, libraries I wanted to use had abandoned their previously synchronous implementations and gone all-in on async. In other words, introducing async forked the crate community, leaving - at least from my vantage point - fewer maintained crates on each side of the fork than there were before the fork.

              Two is there as an anecdote from me as a user, i.e. my experience of async was that (1) it makes the Rust hill even steeper to climb, regressing on the no. 1 problem I have as someone wanting to bring the language into my org; (2) it forked the crate community, such that the library ecosystem is now less valuable. And, I suppose, (3) it makes me worried Rust will continue adding very large core features, further growing the complexity and thus making it harder to keep a large team writing maintainable code.

              1. 9

                the right thing to do is, sometimes, to break the rule.

                I would say that the right thing to do is to just implement a high-level language without obvious mistakes. There shouldn’t be a reason for a Java shop to even consider Rust; it should be able to just pick a sane high-level language for application development. The problem is, this spot in the language landscape is currently a conspicuous void, and Rust often manages to squeeze in there despite the zero-cost abstraction principle being antithetical to app dev.

                That’s the systems dynamic that worries me a lot about Rust: there’s pressure to make it a better language for app development at the cost of making it a worse language for systems programming, for lack of an actual reasonable app-dev language one can use instead of Rust. I can’t say it is bad. Maybe the world would be a better place if we had just a “good enough” language today. Maybe the world would be better if we wait until an “actually good” language is implemented.

                So far, Rust has resisted this pressure successfully, even exceptionally. It managed to absorb the “programmers can have nice things” properties of high-level languages while moving downwards in the stack (it started out a lot more similar to Go). But now Rust is actually popular, so the pressure increases.

                1. 4

                  I mean, we are a java shop that builds databases. We have a lot of pain from the intersection of distributed consensus and GC. Rust is a really good fit for a large bulk of our domain problem - virtual memory, concurrent B+-trees, low latency networking - in theory.

              2. 5

                I guess personally, I would say that learning how to write and maintain a production quality database has to be at least an order of magnitude more difficult than learning Rust. Obviously this is an opinion of mine, since everyone has their own learning experiences and styles.

                As to your aesthetic preferences, I agree with them too! It’s why I’m a huge fan of Go. I love how simple they made the language. It’s why I’m also watching Zig development closely (I was a financial backer for some time). Zero cost abstractions (or, in Zig’s case, memory safety at compile time) aren’t a necessary constraint for all problems, so there’s no reason to pay the cost of that constraint in all cases. This is why I’m trying to ask how to make the design simpler. The problem with breaking the zero cost abstraction rule is that it will invariably become a line of division: “I would use Rust, but since Foo is not zero cost, it’s not appropriate to use in domain Quux, so I have to stick with C or C++.” It’s antithetical to Rust’s goals, so it’s really really hard to break that rule.

                I’ve written about this before, but just take generics as one example. Without generics, Rust doesn’t exist. Without generics (and, specifically, monomorphized generics), you aren’t able to write reusable high performance data structures. This means that when folks need said data structures, they have to go implement them on their own. This in turn likely increases the use of unsafe in Rust code and thus significantly diminishes its value proposition.

                Generics are a significant source of complexity. But there’s just no real way to get rid of them. I realize you didn’t suggest that, but you brought up the append example, so I figured I’d run with it.
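                To make that concrete, here’s a small sketch of the kind of reusable structure monomorphized generics enable; each instantiation compiles to specialized code with no boxing or dynamic dispatch:

                  // One definition; Stack<u64>, Stack<String>, ... each get
                  // their own monomorphized, fully optimized copy.
                  struct Stack<T> {
                      items: Vec<T>,
                  }

                  impl<T> Stack<T> {
                      fn new() -> Self {
                          Stack { items: Vec::new() }
                      }
                      fn push(&mut self, item: T) {
                          self.items.push(item);
                      }
                      fn pop(&mut self) -> Option<T> {
                          self.items.pop()
                      }
                  }

                  fn main() {
                      let mut s: Stack<u64> = Stack::new();
                      s.push(42);
                      assert_eq!(s.pop(), Some(42));
                  }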

                1. 2

                  I guess personally, I would say that learning how to write and maintain a production quality database has to be at least an order of magnitude more difficult than learning Rust.

                  Agree, but precisely because it’s a hard problem, ensuring everything else reduces mental overhead becomes critical, I think.

                  If writing a production db is 10x as hard as learning Rust, but reading Rust is 10x as hard as reading Go, then writing a production-grade database in Go is 10x easier overall, hand-wavingly (and discounting the GC corner you’ve now painted yourself into).

                  One thing worth highlighting is why we’ve managed to stick with the JVM, and where it bites us: most of the database is boring, simple-ish code. Hundreds of thousands of LOC dealing with stuff that isn’t on the hot path. In some world, we’d have a language like Go for writing all that stuff - simple and super focused on maintainability - and then a way to enter “hard mode” for writing performance-critical portions.

                  Java kind of lets us do that; most of the code is vanilla Java, and then critical stuff can drop into unsafe Java, like in the virtual memory implementation. The problem with that in the JVM is that the inefficiency of vanilla Java causes GC stalls in the critical code, and that unsafe Java is horrifying to work with.

                  But, of course, then you need to understand both languages as you read the code.

            2. 4

              I think the argument is to “remove” async/await. Neither C nor C++ has async/await, and people write tons of event-driven code with them; they’re probably the pre-eminent languages for that. My bias for servers is to have small “manual” event loops that dispatch to threads.

              You could also write Rust in a completely “inverted” style like nginx (I personally dislike that, but some people have a taste for it; it’s more like “EE code” in my mind). The other option is code generation which I pointed out here:

              https://lobste.rs/s/rzhxyk/plain_text_protocols#c_gnp4fm

              Actually that seems like the best of both worlds to me. High level and event-driven/single-threaded at the same time. (BTW, the video also indicated that the generated llparse code is 2x faster than the 10-year-old, battle-tested, hand-written code in node.js.)

              So basically it seems like you can have no UB and zero cost abstractions, without async/await.

              1. 5

                After our last exchange, I don’t really want to get into a big discussion with you. But I will say that I’m quite skeptical. The async ecosystem did not look good prior to adding async/await. By that, I mean, that the code was much harder to read. So I suppose my argument is that adding some complexity to language reduces complexity in a lot of code. But I suppose folks can disagree here, particularly if you’re someone who thinks async is overused. (I do tend to fall into that camp.)

                1. 2

                  Well it doesn’t have to be a long argument… I’m not arguing against async/await, just saying that you need more than 2 design constraints to get to “Rust should have async/await”. (The language would be a lot simpler without it, which was the original question AFAICT.)

                  You also need:

                  3. “ergonomics”, for some definition of it
                  4. textual code generation is frowned upon

                  Without constraint 3, Rust would be fine with people writing nginx-style code (which I don’t think is a great solution).

                  Without constraint 4, you would use something like llparse.

                  I happen to like code generation because it enables code that’s extremely high level and efficient at the same time (like llparse), but my experience with Oil suggests that most “casual” contributors are stymied by it. It causes a bunch of problems in the build system (build systems are broken and that’s a different story). It also causes some problems with tooling like debuggers and profilers.

                  But I think those are fixable with systems design rather than language design (e.g. the C preprocessor and JS source maps do some limited things)

                  On a related note, Go’s philosophy is to fix unergonomic code with code generation (“go generate” IIRC). There are lots of things that are unergonomic in Go, but code generation is sort of your “way out” without complicating the language.

      2. 1

        I didn’t know about the first change. That’s very exciting! This is definitely something that bit me when I was first learning the language.

    3. 5

      Rust’s standard library is severely lacking by comparison

      I take some issue with the description that it’s lacking. As far as I understand, this was a deliberate design choice. I’m not familiar enough with it to be able to comment on the actual reasoning behind that decision (maybe someone else can clarify?), but in my own experience, the split between the standard library and other crates for additional functionality has only been positive.

      1. 23

        Yes, it was an active choice to have a small standard library in Rust. But calling it lacking is valid; it’s just the other side of the coin.

        Upside:

        • you don’t have to decide on the standards and scope of your std*
        • you can easily change stuff in crates and evolve rapidly; you can’t just change a std lib (see the Rust Error trait)
        • the community can figure out the best designs over time and adopt new things (async)
        • you have a wider selection to choose from and don’t have to “pay” for unused std stuff*
        • fewer deprecated-since-1.0 interfaces (looking at you, Java)
        • less work for the std library team, and a workload that’s easier to scale
        • build systems like cargo are better because they have to deliver; otherwise you wouldn’t want to include so many external crates

        Downsides:

        • there is no official standard, even if there’s a de facto one (serde), making it harder for new people to figure them out (which is why this post exists…)
        • because crates can do whatever they want, your dependencies aren’t as stable as std-included libraries
        • you can get many competing standards (tokio/async-std/.., actix/tower) which are hard to compare or decide between if you want something that is just stable and has the biggest support in the community
        • you have to decide on your own and hope that you’ve picked the long-running horse for your project

        I think Rust also had a longer road to figuring out the best practices for libraries, and things like async/await came late to the party. If you’d included some of the current libraries in std 2 years ago, they’d have had a hard time keeping up with the latest changes.

        *Well you still have to, but it’s easier to say no when you already have a slim std.
        *You could recompile std to not include this, and it’s something that is AFAIK being worked on. But as of now, you don’t get to choose this.

        1. 5

          I’d like to submit one more downside, based on the cousin thread from @burntsushi:

          • more concern about the transitive dependencies, and related consequences

          I’m appreciative of how nice it is to be able to reach for the stdlib and know that you’re introducing precisely 0 new dependencies, without even the mental overhead of auditing the dependency tree or worrying about future maintenance burden, etc.

      2. 2

        It was an active choice; it’s more maintainable and paired with a good dependency management solution like Cargo you can just release the batteries as versioned libraries (and often people will build better ones). Versioning gives you a lot of freedom in changing the API, reducing the pressure on initial design.

        It’s still fine to call it lacking IMO. The stdlib is largely “complete” by its own standards but it does lack things others have, and it’s a common thing people from other ecosystems take time getting used to. shrug

    4. 1

      I’m somewhat curious how long it will take for the const fn improvements to trickle out to crates enough to see performance improvements.

      1. 2

        I love the idea of compile-time computation, but I imagine that it could be fraught for Rust given its notorious compile times. It definitely leans into the existing orientation towards runtime performance over compile-time performance. That said, it’s just another arrow in the quiver.

        1. 9

          One unintuitive thing about compilers is that assessing time spent in general is really hard. E.g. early optimisations that remove code may make later optimisation passes or code generation cheaper.

          const functions are a case where you get a guaranteed and fault-free transformation at compile time (const function calls also have an upper runtime limit, compared to e.g. proc macros), without relying on additional optimisations to e.g. catch certain patterns and resolve them at compile time.

          1. 1

            That is an interesting point! Substituting a const function for a macro could lead to an improvement in compile time.

            My initial thought was that having a new compile time facility might lead people to handle new things at compile time. It’s always hard to tell how something shakes out in a dynamic system at first glance.

        2. 9

          My expectation is that const fn will speed up Rust programs. Running snippets of Rust on Miri, which is how const fn works, is not a slow process. Running LLVM to optimize things perfectly is. Any code that LLVM doesn’t have to deal with because it was const’d is a win.
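          A small sketch (assuming a compiler recent enough to allow loops in const fn): the call below is evaluated by the compile-time interpreter, so LLVM only ever sees the final value:

            const fn fib(n: u64) -> u64 {
                let mut a = 0u64;
                let mut b = 1u64;
                let mut i = 0u64;
                while i < n {
                    let next = a + b;
                    a = b;
                    b = next;
                    i += 1;
                }
                a
            }

            const FIB_20: u64 = fib(20); // computed at compile time

            fn main() {
                assert_eq!(FIB_20, 6765);
            }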

        3. 8

          If it’s used to replace proc macros, it should improve compile times. But there are certainly use cases where it can degrade compile times, e.g. generating a perfect hash function. For cases like that, you’re probably better off using code generation.

        4. 2

          There’s some work on providing tools for profiling compile times. I think some of the larger crates are doing so to manage their compile times and could use that to manage the trade-off.

          1. 1

            That seems like it would help quite a bit. Once you can measure something it becomes much more actionable and you can make informed decisions about trade offs.

    5. 4

      I will mention, since this is sometimes confusing to people first encountering Clippy: Clippy does distinguish between “this is probably bad and incorrect” vs “this is bad style” in its lints. These are the clippy::correctness and clippy::style lint groups respectively (there are a few others, like complexity and perf, that are listed in the readme). Only correctness causes clippy to fail builds by default, the others just spew warnings unless you deny them.

      If you want to see only the correctness lints you can just turn off the other groups of lints (or turn off clippy::all and turn correctness back on).
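      For example, one way to do that is with crate-level attributes (a sketch; the group names are the ones from the readme, and the same switches can be passed on the command line instead):

        // At the crate root: keep the "almost certainly a bug" lints,
        // silence the more opinionated groups.
        #![warn(clippy::correctness)]
        #![allow(clippy::style)]
        #![allow(clippy::complexity)]
        #![allow(clippy::perf)]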

      Suggestions on how to document this better would be appreciated. This is kinda front and center in the clippy readme.


      We are totally open to moving lints from clippy to rustc; this is up to the compiler/lang teams, but you can just file an issue and link to it in https://github.com/rust-lang/rust/issues/53224 to get the ball rolling. There’s no automatic process to uplift lints after they have “baked” for a sufficient period of time, however; we just do it whenever people feel a need.

    6. 14

      Ted made a classic mistake in unsafe code (one I’ve made myself). This is why the Rust community normatively discourages writing unsafe code if it can be avoided. If I were a code reviewer, I’d be paying specific attention to any module containing unsafe code.

      It’s worth saying that the as_ptr documentation explicitly notes this issue, and another option instead of the one shown would be to use CString::into_raw, whose *mut c_char result you can safely cast to *const c_char.
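      A sketch of the failure mode and the two fixes mentioned (the commented-out line is the pattern the as_ptr docs warn about):

        use std::ffi::CString;
        use std::os::raw::c_char;

        fn main() {
            // BUG: the temporary CString dies at the end of the statement,
            // so this pointer would dangle immediately:
            // let dangling = CString::new("hi").unwrap().as_ptr();

            // Fix 1: keep the CString alive as long as the pointer is used.
            let owned = CString::new("hi").unwrap();
            let p: *const c_char = owned.as_ptr();

            // Fix 2: transfer ownership to the raw pointer with into_raw...
            let raw: *mut c_char = CString::new("hi").unwrap().into_raw();
            // ...but it must be reclaimed later, or the allocation leaks.
            drop(unsafe { CString::from_raw(raw) });

            let _ = p; // `owned` is still live here, so `p` is still valid.
        }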

      1. 7

        I’m not a Rust developer, but it seems to me Ted’s point here is that it seems this scenario would be harder to catch by the code reviewer paying special attention, while any reviewer of Go code would see the opening/accessing of a resource (even if they were themselves on shaky ground with unsafe code) and think “that next line better be a defer statement to clean up”.

        1. 13

          First, I want to say I appreciate Ted for voicing his experience with Rust in this area, and I do think he hit this speed bump honestly, did his best to resolve it, and is understandably displeased with the experience. Nothing I say here is intended as a judgment of Ted’s efforts or skill. This is a hard thing to do right, and he did his best.

          To your comment: I agree that is part of his point. I don’t agree with his assessment that the Go code is “clearer” [1]. In Rust, any unsafety is likely to receive substantial scrutiny, and passing clippy is table stakes (in this case, Clippy flags the bad code, which Ted notes). I view the presence of unsafe in the module as a red flag that the entire module must be assessed, as the module is the safety boundary due to Rust’s privacy rules (see “Working with Unsafe” from the Rustonomicon).

          That said, Rust can improve in this area, and work is ongoing to better specify unsafe code, and develop norms, documentation, and tooling for how to do unsafe code correctly. In this case, I’d love to see the Clippy lint and other safety lints like it incorporated into the compiler in the future.

          [1] Assessing “clarity” between two languages is really tough because clarity is evaluated in the context of socialization around a language’s norms, so a person can only effectively make the comparison if they are at least sufficiently knowledgeable in both languages to understand the norms around code smells, which I don’t think Ted is.

          1. 5

            Is clippy used that pervasively? I’d rather have that particular warning in the compiler itself, in this case. Clippy seems to be super verbose and I don’t care too much for a tool that has strong opinions on code style.

            1. 8

              Feel free to file an issue to uplift the warning into rustc: Clippy warnings – especially deny-by-default clippy warnings – are totally fair game for uplifting into rustc if they’re seen as being useful enough.

              Clippy has a lower bar for lints, but for high value correctness lints like this one the only reason a lint is in clippy is typically “rustc is slower to accept new lints”, and if a lint has great success from baking in clippy, it can be upstreamed.

              People use clippy quite pervasively. Note that in this case the clippy lint that is triggered is deny-by-default, and all deny-by-default clippy lints are “this is almost definitely a bug” as opposed to “your code looks ugly” or whatever. You can easily turn off the different classes of clippy lints wholesale, and many people who want correctness or perf lints but not complexity or style lints do just that.

            2. 2

              In my experience, yes Clippy is used pervasively (but maybe I’m wrong).

            3. 1

              We have a few Rust projects at $work and we use cargo clippy -- -D warnings as part of our builds. It’s a very popular tool.

      2. 3

        This is not about unsafe code, as this code works in safe Rust but is still not correct. Proof

        I think using as_ptr (and similar functions) in this way should 100% be an error.
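        Presumably the linked proof is something along these lines: entirely safe code, no unsafe block anywhere, and yet the pointer dangles from the moment it exists:

          use std::ffi::CString;

          fn main() {
              // Safe Rust, and no UB occurs because the pointer is never
              // dereferenced, but `ptr` points at freed memory from the start.
              let ptr = CString::new("hello").unwrap().as_ptr();
              println!("{:p}", ptr); // printing the address is fine; using it isn't
          }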

        1. 1

          However, there’s no bug in the example you’ve written, as the pointer is never dereferenced.

      3. 1

        Unable to edit the post anymore, but others are right to point out that I missed part of the point here, and it’s important to note that CString::into_raw will leak the memory unless you reclaim it with CString::from_raw, and that CString::as_ptr is a challenging API because it relies on the lifetime of the original CString in a way that isn’t unsafe per Rust’s formal definition, but is still sharp-edged and may easily and silently go wrong. It’s caught by a Clippy lint, but there are some for whom Clippy is too aggressive.

        Thanks @hjr3 and @porges.

      4. 1

        A caution to other readers: into_raw used in the same way here would leak the memory. Really, as_ptr is very suspicious in any code, in exactly the same way that c_str is in C++.

    7. 31

      The reason they spread these misconceptions is straightforward: they want to discourage people from using the AGPL, because they cannot productize such software effectively.

      This doesn’t stand up to even a modicum of scrutiny. First of all, it assumes you know the intent of Google here. I don’t think Google’s intentions are that great to be honest, but as a rule of thumb, if you form an argument on knowing the intentions of other humans, it’s probably a bad argument unless you can provide credible evidence of their intent. Secondly, I see no such credible evidence in this article, and the lack of attention paid to how Google handles other licenses in this article is borderline disingenuous. All I see is a casual observation that Google’s policy benefits them systemically, which I would absolutely agree with! But that shouldn’t be a surprise to anyone.

      Why? Because it omits critical context. The AGPL is not the only license that Google bans. They also ban the WTFPL, which is about as permissive as it gets. They ban it because they have conservative legal opinions that conclude it has too much risk to rely on. I think those legal opinions are pretty silly personally, although I am somewhat biased because I’ve released code under the WTFPL only to have one Googler after another email me asking me to change the license because it’s banned at Google.

      My point is that there are other reasonable business explanations for banning licenses. Like that a team of lawyers paid to give their best expert advice on how a judge would rule for a particular license might actually, you know, be really risk averse. Licenses aren’t some black and white matter where things that are true and things that are not are cleanly separated in all cases. There’s oodles of grey area largely because a lot of it actually hasn’t been tested in court. Who would have thought the courts would rule the way they did in Google v. Oracle?

      What’s the cost of being wrong and having Google required to publish all of their source code? Can anyone here, even a Googler, even begin to estimate that cost? If you haven’t thought about that, then you probably haven’t thought deeply enough to criticize the intentions on this particular piece of “propaganda.” Because that’s probably what Google’s lawyers are weighing this against. (And probably an assortment of other such things, like the implications of allowing AGPL but giving each such use enough scrutiny as to be sure that it doesn’t wind up costing them dearly.)

      But by all means, continue punishing companies for making their policies like this public. Because that’s a great idea. (No, it’s not. Despite how annoying I find Google’s policies, I really appreciate having them documented like they are.)

      Disclaimer: I don’t like copyleft, but primarily for philosophical reasons.

      1. 11

        I don’t think Google’s intentions are that great to be honest, but as a rule of thumb, if you form an argument on knowing the intentions of other humans, it’s probably a bad argument unless you can provide credible evidence of their intent.

        As someone who previously worked on the open source team at Google, sat in that office, and am friends with these humans, I can say very strongly that those lawyers do not have some sort of hidden agenda. It is also certainly false to assume they are not competent at their job. My read is that they are, as you might expect, very good at their job (noting I am also not a lawyer).

        A common mistake I see from many commenters (and news stories, etc.), and one I think you fall into unintentionally, is to talk about Google as if it is a single anthropomorphic entity with its own thoughts and feelings. This piece does the same. There is not “a Google” that is making amoral decisions for its global benefit. There is an office of humans that try their best and have good intentions.

        The team makes decisions in this order:

        1. Protect the open source ecosystem.
        2. Protect the company.

        “Protect the ecosystem” is hard to believe if you buy into the “amoral entity” argument but is provably true: the easiest way to protect the company would be to ban open source contribution entirely (aside from forced copyleft terms), yet Google contributes a lot under the Apache 2 (permissive) license. The banned licenses, as you note, are those that either do not have enough specificity (like WTFPL) or have what the legal team believes are onerous terms. They are good lawyers, so you have to assume they have a pretty strong case for their interpretation. Even if you think they are wrong (as all law is essentially malleable), hashing things out in court to decide what the terms of the license truly mean is a really bad use of time and money.

        1. 13

          There is not “a Google” that is making amoral decisions for its global benefit. There is an office of humans that try their best and have good intentions.

          Yes, there is. The two are not mutually exclusive. A corporation like Google is structured in such a way that the sum of all its humans, all trying their best, serves the interests of the company. It’s not anthropomorphic, but it does have an agenda, and it’s not necessarily that of any of its constituent humans. Whether morality features prominently on that agenda is a legitimate matter for debate.

          I think you’re trying to open a semantic crack in which responsibility can be lost: misdeeds are attributed to Google, but since Google isn’t one person it can’t be guilty of anything. But if companies really aren’t more than the sum of their parts, at least one person at Google must be responsible for each of its transgressions, which I think casts doubt on the claim that they have good intentions.


          The team makes decisions in this order:

          1. Protect the open source ecosystem.
          2. Protect the company.

          Maybe that’s true of the open source team. It’d be hard to believe that of Google in general – partly because it’s a company and you’d expect it to protect itself first, but more concretely because there’s history. Google has been hegemonizing Android for years. They’re also trying to do the same to the Web, via Chrome. The open source ecosystem gets to use whatever Google puts out, or suffer. I don’t see how that’s healthy.


          “Protect the ecosystem” is hard to believe if you buy into the “amoral entity” argument but is provably true: the easiest way to protect the company would be to ban open source contribution entirely (aside from forced copyleft terms), yet Google contributes a lot

          (I note that you don’t have a problem anthropomorphizing Google when it’s doing things you think are good.)

          I’ve yet to see the proof. Publishing open source software doesn’t necessarily speak to any commitment to the wellbeing of the open-source ecosystem, nor does it typically carry any great risk. Let’s take a couple of minutes to think of as many reasons as we can why a company might publish open-source software out of self-interest:

          • The existence of good tooling for markets you dominate (web, mobile) directly benefits you
          • Developers like publishing things, so letting them publish things is a cheap way to keep them happy if it doesn’t hurt you too badly
          • It’s great PR
          • If you have a way to use your open-source thing in a way that nobody else does, the free work other people do on it gives you an advantage

          You might say: so what? Obviously they have businessy motivation to care about open source, but what does it matter if the result is they care about open source? But, as we’ve seen, the moment it benefits them to work flat-out on destroying an open ecosystem, they do that instead.

          1. 3

            But, as we’ve seen, the moment it benefits them to work flat-out on destroying an open ecosystem, they do that instead.

            This could be said of nearly any corporation as well.

            Move from OS sales to cloud services, buy an open-source-friendly company, release a good editor that works on the competition’s platforms, and even interoperate with the competition.

            The example may have the best intentions in mind, insofar as a corporation can, but it could also be a long con for traction, eventually blasting out something that makes the users jump ship to the corporation’s platform.

            Best part of it all is, it could be hedging in case that “something” comes along. There is some win either way and an even bigger win if you can throw the ideals under the bus.

            1. 2

              For sure. It’d be naïve to think Microsoft had become nice. They’ve become smarter, and they’ve become a smaller player comparatively, and in their situation it’s pragmatic to be a good citizen. Google was the same with Android before they won their monopoly.

          2. 2

            (I note that you don’t have a problem anthropomorphizing Google when it’s doing things you think are good.)

            It’s easy to do, mistakes were made, I’m human. Don’t assume malice or misdirection.

            1. 5

              I don’t assume either. I think it’s a natural way to communicate about organisations. But your opening gambit was about how talking about Google in those terms betrayed some error of thought, so I’d hoped that pointing this out might give you pause to reconsider that position. I didn’t mean to cast doubt on your sincerity. Apologies.

              1. 2

                All good 👍

        2. 10

          Right, I mostly agree with what you’re saying! I do think a lot of people make the mistake of referring to any large company as a single entity, and it makes generalizing way too easy. With the WTFPL thing, I experienced that first hand: a bunch of individuals at Google reached out to me because none of them knew what the other was doing. And that’s a totally reasonable thing because no large company is one single mind.

          Now, I don’t want to come off like I think Google is some great thing. The WTFPL thing really left a sour taste in my mouth because it also helped me realize just how powerful Google’s policies are from a systemic point of view. They have all these great open source projects and those in turn use other open source projects and so forth. My libraries got caught up in that, as you might imagine in this day and age where projects regularly have hundreds or thousands of dependencies, and Google had very powerful leverage when it came to me relicensing my project. Because it worked itself back up the chain. “{insert google project here} needs to stop using {foo} because {foo} depends on {burntsushi’s code that uses WTFPL}.” Now foo wants to stop using my code too.

          I’m not saying any of this is particularly wrong, to be honest. I am an individualist at heart so I generally regard this sort of thing as okay from an ethical or legal perspective. But still, emotionally, it was jarring.

          Do I think the lawyers in Google’s open source policy office think about the sort of effect this has on individuals? Not really. I don’t think many do. It’s probably a third-order effect of any particular decision, and so is very hard to reason about. But from my perspective, the line of policy-making from Google connects very directly to its impact on me as an individual.

          In the grand scheme of things, I think this is not really that big of a deal. I’m not all hot and bothered by it. But I do think it’s a nice counter-balance to put out there at least.

          1. 4

            To play devil’s advocate:

            It appears that seasoned lawyers have deemed the license you use “not specific enough”.

            Isn’t the whole point of a license to fully lay out your intentions in legal terms? If it doesn’t succeed at that, wouldn’t it be better to find another license that does a better job at successfully mapping your intentions to law?

            1. 6

              To be clear, I don’t use the WTFPL any more, even though I think it makes my intent perfectly clear. So in a sense, yes, you’re right and I changed my behavior because of it. I stopped using it in large part because of Google’s influence, although the WTFPL didn’t have a great reputation before Google’s policy became more widely known either. But most people didn’t care until Google’s policy influenced them to care. Because in order for my particular problem to exist, some amount of people made the decision to use my project in the first place.

              I brought up the WTFPL thing for two reasons:

              • To demonstrate an example of a license being banned that isn’t copyleft, to show that Google has other reasons for banning licenses than what is stated in the OP.
              • To demonstrate the impact of Google’s policies on me as an individual.

              I didn’t bring it up with the intent to discuss the particulars of the license though. I’m not a lawyer. I just play one on TV.

              1. 2

                But I think even Google’s influence is just one example of the commercial world interacting with the “libre” world; in this light, Google is just entering earlier and/or investing more heavily than its peers. And it could be argued that’s a good thing, as it puts libre creators more in touch with the real needs of industry. It’s the creator’s choice whether to acknowledge and adapt to that influence, or to bend to it entirely. As I see it, Google can’t make you do anything.

                I do hope that Google carves out exceptions for things like Affero though, since I share Drew’s confusion at Google’s claim of incompatibility. I’m in the same boat, after all; I’m also a user of a niche license (License Zero), the legal wording of which I nevertheless have great confidence in.

                I believe that at some point, companies like Google will have to bend to the will of creators to have control over how their work is licensed. I happen to use License Zero because it seems to provide more control on a case-by-case basis, which I think is key to effecting that shift.

                1. 4

                  I do hope that Google carves out exceptions for things like Affero though, since I share Drew’s confusion at Google’s claim of incompatibility.

                  Large parts of Google work in a monorepo in which anything goes if it furthers the mission. The Google licensing site brings up that example of a hypothetical AGPL PostGIS used by Google Maps. In normal environments that wouldn’t be an issue: your code interfaces to PostGIS through interprocess APIs (which still isn’t linking even with the AGPL) and users interact with your code, but not with PostGIS. In the monorepo concept code can quickly be drawn into the same process if it helps any. Or refactored to be used elsewhere. That “elsewhere” then ends up under AGPL rules which could be a problem from a corporate standpoint.

                  It’s a trade-off between that flexibility in dealing with code and having the ability to use AGPL code, and the organizational decision was apparently to favor the flexibility. It can be possible to have both, but that essentially requires having people (probably lawyers) poring over many, many changes to determine if any cross pollination between license regimes took place. Some companies work that way, but Google certainly does not.

                  I believe the issue with WTFPL is different: because it’s so vague my guess is that the open source legal folks at Google would rather see that license disappear completely to protect open source development at large from the potential fallout of it breaking down eventually, while they probably don’t mind that the AGPL exists. At least that’s the vibe I get from reading the Google licensing site.

                  (Disclosure: I work at Google but neither on open source licensing nor with the monorepo. I also don’t speak for the company.)

                2. 4

                  As I see it, Google can’t make you do anything.

                  Maybe I didn’t express it clearly enough, but as I was writing my comments, I was painfully aware of the possibility that I would imply that Google was making me do something, and tried hard to use words that didn’t imply that. I used words like “influence” instead.

                  And it could be argued that’s a good thing, as it puts libre creators more in touch with the real needs of industry. It’s the creator’s choice whether to acknowledge and adapt to that influence, or to bend to it entirely.

                  Sure… That’s kind of what I was getting at when I wrote this:

                  I’m not saying any of this is particularly wrong, to be honest. I am an individualist at heart so I generally regard this sort of thing as okay from an ethical or legal perspective. But still, emotionally, it was jarring.

                  Anyway, I basically fall into the camp of “dislike all IP.” I’d rather see it abolished completely, for both practical and ideological reasons. Then things like copyleft can’t exist. But, abolishing IP would change a lot, and it’s hard to say how Google (or any company) would behave in such a world.

                  1. 2

                    Anyway, I basically fall into the camp of “dislike all IP.” I’d rather see it abolished completely, for both practical and ideological reasons.

                    Maybe we should turn Google into a worker coop 😉 Then its employees could change IP policy like you say, the same way they successfully protested the deals w/ China & the US military.

        3. 3

          There is not “a Google” that is making amoral decisions for its global benefit. There is an office of humans that try their best and have good intentions.

          Mike Hoye wrote a short article called “The Shape of the Machine” a couple of months ago that examines the incentives of multiple teams in a large company. Each team is doing something that seems good for the world, but when you look at the company as a whole its actions end up being destructive. The company he’s talking about also happens to be Google, although the lesson could apply to any large organization.

          I definitely agree with you that Google has lots of capable, conscientious people who are doing what they think is right. (And to be honest, I haven’t thought about the licensing issue enough to be able to identify whether the same thing is at play here.) I just think it’s good to keep in mind that this by itself is not sufficient for the same to be said for the organization as a whole.

      2. 9

        This is exactly what I came here to say. Basing an argument on your own interpretation of a license is a great way to get into legal trouble. Not only is there the risk that a judge in a court of law may disagree with your interpretation, but there is also the risk that you will invite litigation from others who have a different interpretation; and even disregarding the risk of losing, that litigation itself has a cost.

        So by using the AGPL you incur not only the risk of having the wrong interpretation once it is tested in court, but also the risk of an increase in costly litigation over time. This risk is further magnified by your size and how much larger it makes the target on your back.

        1. 12

          Basing an argument on your own interpretation of a license is a great way to get into legal trouble

          The article starts with “I’m not a lawyer; this is for informational purposes only”, and then proceeds to make strong un-nuanced claims about the license and even proceeds to claim that Google’s lawyers are incompetent buffoons and/or lying about their interpretation. Saying you’re not an expert and then pretending you are in the very next sentence is pretty hilarious. It’s abundantly clear this article is to support the author’s politics, rather than examine legal details.

          1. 6

            I’m not a lawyer; this is for informational purposes only

            I believe that Americans write that type of disclaimer because it is illegal over there to practice law without a license, and articles about software licenses can easily wander into dangerous territory. So based on that, I think it’s unfair to hold that up as a point against the article.

            Disclaimer: I’m not a lawyer; this is for informational purposes only.

          2. 1

            I’ve started to call that tactic “joe-roganizing”. He does the same: “I don’t know anything about this.” Then, in the next sentence: “[very strong opinion] - everyone who disagrees is surely stupid….”

      3. 9

        I worked at a startup where we had a massive compliance burden (yay FDA!) and so had even fewer resources than usual. One of my jobs as engineering lead there was to go and audit the tools and source that we were using and set guidelines around what licenses were acceptable because we could not afford the lawyer time if there were any issues.

        If the AGPL had been tested in court, I think companies would be a bit more chill about it, but I reckon that nobody wants to bankroll a legal exploration that could turn out very much not in their favor.

        One of the annoying things too about licensing, especially with networked systems and cloud stuff, is that the old reliable licenses everybody basically understands (mostly) like BSD and MIT and GPL and LGPL were made in a (better) world where users ran the software on machines they owned instead of interacting with services elsewhere. We still haven’t really identified an ontology for how to treat licensing for composed services on a network, versus how to handle services that provide aggregate statistics for internal use but not for end users, versus dumbly storing user data, versus transforming user data for user consumption.

      4. 4

        What’s the cost of being wrong and having Google required to publish all of their source code?

        That’s not how the AGPL works.

        The AGPL does not force you to distribute anything.

        If they’re “wrong”, they are in breach of contract. That’s it. They can then remedy that breach either by ceasing use of that software or by distributing their changes, or even by coming to some alternative agreement with the copyright holders of the AGPL’d software in question.

        1. 2

          This seems like a nit-pick. The point of my question was to provoke thought in the reader about the costs of violating the license. What are those costs? Can you say with certainty that they will be small? I’m pretty sure you’d need to be a lawyer to fully understand the extent here, which was my way of saying, “give deference where it’s due.”

          I personally think your comment is trying to minimize what the potential costs could be, but this isn’t theoretical. Oracle v. Google is a real world copyright case that has been going on for years and has almost certainly been extremely costly. I don’t see any reason why an AGPL violation couldn’t end up in the same situation.

          1. 4

            It’s an actual misconception that many people have, and I don’t think it’s good to perpetuate it.

            1. 2

              I guess that’s fair, but it seems like splitting hairs to me. Even you said “distributing their changes” as a possible remedy, and there’s a fine line between that and “publish all of their source code.” It really depends on how the law and license is interpreted, and nobody knows how it will be. So lawyers guess and they guess conservatively.

            2. 0

              The easiest way not to perpetuate it is to not use the AGPL.

      5. 3

        Thanks for saying this. I don’t work at Google, but I know many people who work there and at other large companies and have talked with them about license policy, and the article just reeks of ignorance as to how corporate lawyers work, even at relatively small companies.

        There’s no ideology here, there’s just lawyers doing what they were hired to do: use an abundance of caution to give the company as ironclad a position as possible.


        Hell, forget the WTFPL: I’ve been waved off even triple-licensing under (approved) licenses by Googlers with “the lawyers would never go for this”. The lawyers are going to go for well-understood, battle-tested licenses where the failure cases aren’t catastrophic.


        Besides that, it seems like the article misunderstands what constitutes a “derivative work”: if the article’s definition of “derivative work” (i.e., the code must be modified, not simply “used as a dependency”) were the one used by the *GPL licenses, then there would be no need for the LGPL to exist.

      6. 1

        but as a rule of thumb, if you form an argument on knowing the intentions of other humans, it’s probably a bad argument

        This is not true.

        Firstly, the rule for another person and the rule for CORPORATIONS are completely different. Corporations do not operate like people do. When corporations are small, they sort of do, but as they grow larger they become more corporation-like.

        Secondly, it is impossible to know the intentions of other humans. So by this argument, no argument is ever good.

        We might give people the benefit of the doubt, because people are mostly good. They are ruled by an ethical system, built into their brain, to socialise and cooperate. Corporations do not have this internal system. Their motivational system is entirely profit based, and therefore you cannot treat them like people.

        If you have been alive long enough and paid attention to what corporations do, and especially google, the idea that they consider AGPL hostile, and wish to limit its influence and expansion, is highly plausible. How will they limit its influence? They could ban it completely, and then publish a document detailing why they think it’s bad. That’s highly plausible.

        Is risk-averse lawyering a factor? Most likely yes. But risk-averse lawyering adds to the hostility argument: having received advice from their lawyers not to use the AGPL, leadership could easily conclude that limiting the AGPL’s spread would give them the best chance of getting free software and having their way.

        Additionally, your steelman argument does not explain why Google publishes the fact that they do not like the AGPL. They could keep it entirely internal. Why do you think they would do that? Free legal advice to competing startups?

        1. 3

          Firstly, the rule for another person and the rule for CORPORATIONS are completely different. Corporations do not operate like people do. When corporations are small, they sort of do, but as they grow larger then they become more corporations like.

          That makes sense in a certain light, sure. But I don’t see what it has to do with my point.

          Secondly, it is impossible to know the intentions of other humans. So by this argument, no argument is ever good.

          I don’t really agree. It might be true in the strictest philosophical sense, but that needn’t be our standard here. Intent is clearly something that we as a society have judged to be knowable to an extent, at least beyond some reasonable doubt. Just look at the criteria for being convicted of murder. It requires demonstrating something about the intent of someone else.

          Why do you think they would do that?

          When was the last time you saw any company publish legal advice generated by internal review?

          If you have been alive long enough and paid attention to what corporations do, and especially Google, the idea that they consider AGPL hostile and wish to limit its influence and expansion is highly plausible. How will they limit its influence? They could ban it completely, and then publish a document detailing why they think it’s bad. That’s highly plausible.

          I think you’ve really missed my point. If the OP were an article discussing the plausibility of one of any number of reasons why Google published an anti-AGPL policy, then I would almost certainly retract my comment. But that’s not what it was. It’s a one-sided turd without any consideration of alternative perspectives or explanations at all.

    8. 1

      Is it possible to get the full article? It seems to be truncated (or is dig doing that?)

      1. 1

        that’s a limitation of UDP-based DNS AFAIK: responses that exceed the UDP size limit get truncated, and the client is expected to retry over TCP (e.g. dig +tcp) to fetch the whole record

    9. 18

      Not this again.

      Different kinds of Arabic read numbers out loud differently, so some read most significant first, and some read least significant first, many are middle-endian. It’s pretty hard to make this conclusion just based on that.

      BUT: Arabic got the numerals from India. Indian languages write down numbers the same way you do in English, most significant digit first in a left-to-right script. Indian languages typically say numbers most-to-least significant, with the tens and ones digits swapped, making them middle-endian (similar to how “thirteen” is “backwards”). So it doesn’t even matter how it’s read in Arabic, because for a long time they were writing numbers the same way as the Indians, and it would be very weird if they used the same numerals but in the opposite order when looked at visually. The numbers got copied twice, from an LTR script to an RTL script and back to an LTR script, without changing the order. The RTL nature of Arabic is a red herring here.

      And this doesn’t have much to do with why our computers work a certain way, anyway (see other comments).

      1. 2

        Different kinds of Arabic read numbers out loud differently, so some read most significant first, and some read least significant first, many are middle-endian. It’s pretty hard to make this conclusion just based on that.

        What do you mean by different kinds of Arabic? Classical and Modern Arabic both read numbers out loud the same way. Are you referring to dialectal Arabic? Because most Arabs would not consider their dialect to be legitimate Arabic, since almost all published text is written in Modern Arabic, or in some cases still Classical Arabic.

        For the record though, this post is just wrong, because 521 in both Classical and Modern Arabic is read as five hundred, one and twenty, and the author claims that “formal” Arabic reads it differently. I’m not really aware of any dialects that read that as one, twenty, and five hundred, and at least to me (a North African native speaker, but also an Arabic language enthusiast) it sounds insanely wrong even for a dialect.

        BUT: Arabic got the numerals from India.

        Eastern Arabic numerals are from India; Western Arabic numerals are believed to have originated in North Africa. European languages use the Western Arabic numerals, not the Eastern Arabic numerals. The decimal system, though, the more interesting part in my opinion, was indeed copied from Indian mathematicians.

        1. 2

          Eastern Arabic numerals are from India; Western Arabic numerals are believed to have originated in North Africa

          Any sources for that? Most of the sources I have seen seem to trace the origins of both to Brahmi numerals.

        2. 1

          Right, dialectal Arabic, but also I knew that MSA doesn’t follow the same rules as what the author talks about and I wasn’t sure what they meant by “formal”, so I was really talking about MSA (I don’t know enough about Classical).

          Eastern Arabic numerals are from India; Western Arabic numerals are believed to have originated in North Africa.

          No, the exact numeral forms were invented in North Africa (with influence from the Eastern Arabic forms!), but the concept of using numbers like that came from Eastern Arabic numerals, and Western Arabic numerals are a bit of an early divergence from them. They’re still related; see the first paragraph in https://en.wikipedia.org/wiki/Arabic_numerals#Origin_of_the_Arabic_numeral_symbols ; in particular, mathematicians had already trained themselves on Eastern Arabic numerals but hadn’t necessarily agreed on the forms.

    10. 1

      wow, it’s almost as if encoding images in a standard that was intended for writing systems causes a lot of problems and cannot be implemented in a consistently sane way; who would have guessed

      1. 14

        No.

        Almost every time people say “emoji make Unicode more complex” they’re wrong. Emoji simply force programmers to deal with complexities they were able to ignore in the past, because those complexities affected languages those programmers were less exposed to. Most of the complexity in handling emoji in Unicode is existing complexity that affects other scripts.

        This is not to say that emoji haven’t brought in complexity of their own (the way both country and non-country regional flags are handled is a novel system, for example). But 99% of the time emoji handling is problematic in your code – congratulations, your code probably breaks on other languages as well. This is true for basically everything in the above article.
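
        To make that concrete, here’s a minimal sketch (assuming the unicode-segmentation crate, which implements UAX #29 segmentation; the sample strings are just illustrative) showing that multi-code-point “characters” predate emoji:

        use unicode_segmentation::UnicodeSegmentation;

        fn main() {
            // An accented e (e + combining acute), a Devanagari consonant plus
            // vowel sign, and a family emoji ZWJ sequence: each is several
            // code points but a single extended grapheme cluster.
            for s in ["e\u{0301}", "नि", "👨‍👩‍👧"] {
                println!(
                    "{}: {} code points, {} grapheme cluster(s)",
                    s,
                    s.chars().count(),
                    s.graphemes(true).count()
                );
            }
        }

        Code that naively indexes by char mangles all three, not just the emoji.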

        1. 5

          Almost every time people say “emoji make Unicode more complex” they’re wrong. Emoji simply force programmers to deal with complexities they were able to ignore in the past

          The problems encountered adding emoji support to Emacs have been almost all centered around moving from the completely sensible “text should be monochrome” assumption to “fonts can do colors now for some reason”, and it’s been a huge headache.

          It’s frustrating to see all this effort spent that could have been used to solve real problems.

          1. 4

            Yeah, colors are another new thing emoji have brought in. They complicate font code a ton.

            But in the context of this post, nothing is emoji-specific here.

      2. 9

        It’s worth pointing out that these problems would exist even if emoji did not exist. “Extended grapheme clusters” exist to deal with the complexity of human orthography and are not solely an emoji thing.

        I don’t really see a single issue that this article explores whose root cause is “emoji in Unicode”, to be honest.

    11. 2

      How much does it matter? I resize windows, there’s usually a few weird glitches, but as long as it mostly approximates the correct window it seems fine. I don’t like to make excuses for doing the “wrong” thing but I don’t recall anyone ever complaining and I can’t really think of any practical consequences. As soon as I stop resizing, the window has the correct contents.

      1. 16

        The point of the post is to describe a proxy metric for “good GUI frameworks” that’s easy to measure.

        Not being able to smoothly resize windows may not be too big a deal in and of itself. It points to other underlying issues that may end up being a big deal, however.

        1. 1

          Ah.

      2. 6

        It’s a good proxy that is easy to check and use.

        As someone who resizes a lot, I definitely find it important. Jarring or slow resizing has an effect on how much I like an app. For example, I hate Thunderbird for that reason, it’s just terrible at this.

      3. 2

        Did you even read the first part of the article?

        That’s just a proposed alternative to the “green tea test” for Chinese restaurants or the “tamagoyaki test” for sushi restaurants but for GUI frameworks.

      4. 1

        It’s not only a great way to test for “quality” in general, but resizing is a great insight into the attitudes and priorities of a GUI framework. It’s very easy to mess up the combination of flexible layout and nested component hierarchies, particularly when objects can have min/preferred sizes, and scrollbars can come and go. Sometimes you can get yourself into ugly edge cases where user resizes can trigger a scrollbar to appear on a container, which then causes another layout loop inside it because the inner size has changed (thanks to the new scrollbar), and so on and so forth. It can get real ugly :)
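
        As a purely illustrative sketch (no real framework’s API here, just made-up numbers), that scrollbar feedback loop looks something like this:

        const SCROLLBAR_WIDTH: u32 = 16;

        // Hypothetical reflowing content: narrower width means taller content.
        fn content_height(inner_width: u32) -> u32 {
            10_000 / inner_width.max(1)
        }

        // Returns (inner width, whether a vertical scrollbar is shown).
        fn layout(viewport_w: u32, viewport_h: u32) -> (u32, bool) {
            let mut scrollbar = false;
            loop {
                let inner_w = viewport_w - if scrollbar { SCROLLBAR_WIDTH } else { 0 };
                let needs = content_height(inner_w) > viewport_h;
                if needs == scrollbar {
                    return (inner_w, scrollbar);
                }
                // Flip the decision and lay out again; a real framework has to
                // cap or damp this loop so it can't oscillate forever.
                scrollbar = needs;
            }
        }

        fn main() {
            // Content is 100px tall at full width, overflowing a 95px viewport;
            // the scrollbar then narrows the content, making it taller still.
            println!("{:?}", layout(100, 95));
        }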

    12. -6

      Rust has no specification, which means that there is nothing to distinguish unintended implemented behaviour from intended implemented behaviour for anyone who wishes to create a new Rust compiler or port it to another platform. It also means that the behaviour of arbitrary elements can be ‘fixed’, potentially changing the meaning of currently working programs in a negative way. Thus Rust shall, while it has no specification, remain less adequate than Ada for use in the “Safety Critical” domain. If you want Rust to operate in that domain, you need a spec.

      1. 40

        Please read the article. It goes to great lengths to sketch out a long-term plan to arrive at this spec and then get Rust certified. The planned ToC also contains a whole document with a complete plan for the specification.

        There’s ample previous work done for Rust already: Ownership and Borrowing is formally proven, and a memory model is well underway, together with a checking interpreter (miri). Type resolution is being rewritten, and a huge part of that is moving to an exact specification of how it actually works.

        The plan is also informed by speaking to multiple companies that got LLVM-based compilers through certification, so it’s not like we’re just writing down wishes.

        1. 1

          Please read the article.

          The parent post appeared to be talking about /now/, and your article was clearly talking about /the future/ – in your post you even go so far as to call the plan a “moonshot”, and “likely many full time developer-years of effort”. There even seems to be a soft call for funding in there.

          So I don’t see how your “please read the article” comment is fair?

          Note: I /do/ think the goal is laudable, and exciting.

          1. 17

            The parent post appeared to be talking about /now/,

            Talking about /now/ independent of the article isn’t really pertinent to a discussion on the article, is it? Especially since the article addresses the content of the comment in question. I’d say @skade’s request to read the article was reasonable.

          2. 10

            I mean, it’s abundantly clear from the article that Rust doesn’t have a specification yet, nobody is contesting that. “Rust has no specification” is quite nonsensical as a comment on a post talking about plans to specify Rust.

    13. 20

      Well, this is infuriating. I hate that my browser just became essentially useless to me because someone at Mozilla messed something up. Anyone know if there’s a way to opt out of the extension verification stuff?

      1. 12

        I’m seriously considering just switching to Chromium (ungoogled-chromium maybe?) as a workaround. I don’t feel like Mozilla is doing too well in general with regards to being pro-user and pro-privacy lately:

        • There’s this issue, leaving everyone vulnerable to tracking and disabling protections for Tor users.
        • The fact that this feature exists at all, and the only supported way to disable signing requires nightly, takes a lot of control out of users’ hands.
        • Mozilla has bought companies with closed-source products (such as Pocket), integrated them into Firefox, promised to open-source those products, and just never open-sourced them, leaving Firefox with built-in integrations with potentially privacy-breaching, inauditable closed-source products.
        • They have plans to move away from DNS, where a query first consults my OS (and its hosts file) and then consults my ISP, which is a Norwegian company following strong privacy laws, to just sending queries directly to a random American company which follows the US’ seemingly non-existent privacy laws.
        • It seems likely that they’ll move from IRC to Discord or Slack, which will be pretty bad if it happens (though this point is invalid if they end up moving to something free and open source). They should at least have come out and clearly stated that they’re not moving to a closed-source solution.
        • And, well, Chromium just has better performance on the machines I’ve tested it on; having a worse experience for a good cause is worth it, but having a worse experience just to support a company which doesn’t really stand for anything might not be.

        I honestly really want to support Mozilla, and to do my small part in avoiding a complete browser monopoly by not using Chromium, and I really don’t want to support Google. Mozilla just does so many stupid things which fly in the face of the values they claim to hold.

        1. 28

          I definitely plan to stay with Firefox. They are sometimes failing, but at least they’re trying to fight. There’s a saying that “if somebody’s not failing, they’re not trying hard enough”. Whereas Chrome has a fundamental conflict of interest with many user protection mechanisms, because it’s paid for by Google Ads.

          1. 8

            You’re probably right. I’m on chromium right now, but I will probably honestly end up switching back to Firefox when this whole thing is over. It just sucks that Mozilla has to put themselves in the position of being the least bad of two evils, instead of just being plain good.

            1. 5

              It just sucks that Mozilla has to put themselves in the position of being the least bad of two evils, instead of just being plain good.

              You’ve hit the nail on the head. I just want a browser that’s privacy-respecting and good.

          2. 6

            Mozilla is also paid by Google Ads.

        2. 20

          Can you not be a drama llama? They goofed up. They will probably fix it soon. So you are without addons for a few days.

          As for their decisions, they are clearly straddling a line between purity and a little bit of the dirty stuff to make it more convenient for the 99.9% of users who aren’t ‘technical’. Meanwhile Google is ACTIVELY TRYING TO FUCK YOUR SHIT UP to maximise their control and profit.

          Perfect is the mortal enemy of the good.

          1. 2

            I think the problem here is that not only do they enforce the signing, but they also make it impossible for the user to turn it off, unless the user downloads non-stable or non-official versions of software, taking control out of the hands of the user.

            Sure, Google is worse, but what excuse does Mozilla have for the workaround (i.e., disabling the feature) not working on stable versions of Firefox? I see that as the very definition of the lesser of two evils.

            1. 3

              I think I saw an article long ago basically saying how users will do everything a website tells them if it means they get to watch one more funny cat video - including changing settings in about:config, in the OS, etc. Unfortunately I can’t seem to find the article with Google or DDG.

              1. 3

                This rings a bell, I read that too. I think the term you are looking for is “dancing pigs”. The Wikipedia page for dancing pigs cites a few sources for it. The one I think you and I both read is probably one of the Bruce Schneier articles. Wiki suggests the first publicly available written thing using the term was a chapter in a book about the Java security model. Which is kind of funny when one thinks about it because it’s hard to think of a piece of technology that did a worse job of what it was supposed to do than the Java security model.

              2. 1

                You’re saying the users are the only ones gullible here?! What about the developers? A couple of folks at Mozilla and Google tell devs to trust LetsEncrypt with all their SSL needs, and pretty much every single developer now restricts access to their websites through LetsEncrypt. Talk about being gullible!

                1. 1

                  Hm, I see now that the way I wrote it may be seen as more ambiguous than I expected! :) Basically, what I meant, and what the article I mention tried to convey AFAIR, was that as a developer, one sometimes needs to protect users from themselves; in this case, I guess the “[Mozilla] mak[ing] it impossible for the user to turn [addon signing verification] off” decision might have been to protect users from themselves. That is, to protect users from being conned into disabling the verification feature “to see this one funny cat video”, and installing some malware addon.

                  As to LetsEncrypt, I don’t think I want to engage in a discussion completely (in my opinion) unrelated to the original post/article :)

          2. 1

            This isn’t the only thing they’ve done. It’s part of a longer trend of user-hostility which tells us that the mainstream web will not be compatible with freedom, so long as Google controls what standards are implemented.

        3. 5

          Mozilla just does so many stupid things which fly in the face of the values they claim to hold.

          Yeah, remember that “auto install” of the LookingGlass/Mr.Robot thing a while back (end of 2017 I think…)?
          wtf Mozilla. I am going to check out some alternatives.

          Anyone here tried Brave or Vivaldi? If so, any good?

          1. 3

            Been working with Brave and Firefox for quite some time now.

            Brave is less polished and is missing quite a lot of sync-related features I tend to use quite often in Firefox. But the fact that Firefox broke at a critical moment on my smartphone, right this morning, was the turning point.

            I haven’t tried Vivaldi as extensively though.

        4. 3

          The fact that this feature exists at all, and the only supported way to disable signing requires nightly

          https://twitter.com/SwiftOnSecurity/status/1124545069078536192

          There’s no solution here that doesn’t involve making normal users more vulnerable to malware. It’s been tried.

          Chrome has had similar problems in the past.

          They have plans to move away from DNS …. to just sending queries directly to a random American company

          Nobody has said that it will be a random American company. Mozilla’s testing this feature out with Cloudflare. I suspect this will be pretty configurable if it becomes an actual thing, and probably more local.

          It seems likely that they’ll move from IRC to Discord or Slack

          Mozilla’s moving away from IRC, but from the requirements here it doesn’t seem like slack or discord are likely solutions.

          1. 2

            Nobody has said that it will be a random American company. Mozilla’s testing this feature out with Cloudflare.

            Cloudflare is the random American company I’m talking about.

            1. 2

              Right, operative term being “testing this feature out”. There’s no indication that if this feature becomes a thing it will be only cloudflare that it uses. There’s just a lot of FUD around it.

              My comment is not correcting “random American company” to cloudflare, it is correcting your statement about Mozilla plans around this. They have not ever stated that this is the plan. It’s just what they’re testing out, because you have to bootstrap an ecosystem somehow.

          2. 11

            Wtf?? “Mozilla deliberately trying to disable adblockers”?? Ok, now that’s a claim I’m definitely not going to take seriously.

            1. 2

              And you are definitely right in doing so, because you should never trust random people on the internet. (so you deserve an upvote)

              But the fact of the matter is that Mozilla has just rolled out their own adblocker, which allows certain “non-intrusive” advertisements and blocks others, while an adblocker like uBlock Origin simply blocks everything. This creates an incentive for Mozilla to make using other adblockers as much of a pain as possible. Given that the main use of the addon functionality is blocking ads, this is a perfect way to make it look like an “accident”, while simultaneously disabling most of the adblockers currently installed.

              Given their track record in recent years (starting from the moment Mozilla stated, in 2012, that Thunderbird development was no longer a priority), I simply cannot justify blind trust in them for the programs I use for my main activities. This means I have to face the fact that Mozilla is not what it once was, and so I’m starting to reach for the alternatives I’ve been testing/using.

              Besides this: the fact that this demonstrates a dangerous level of incompetence still stands.

              1. 4

                Hm, what I find worth meditating on in this argument is that even in their case, them being (or trying to be) good is not guaranteed to stay true forever. Personally, I still believe they are, and are trying to be, but others may not agree, and even I need to stay vigilant. Also, you’re right that they’re treading a fine line there. And finally, Stallman is sometimes surprisingly right.

                1. 2

                  You’ve summarized the exact point I was trying to make, in a better way than I could.

              2. 2

                Mozilla doesn’t block everything, because their adblocker is shipped everywhere. If a user sees a website bugging out or not working due to overblocking, that’s bad. µBlock Origin, on the other hand, accepts this risk and will overblock rather than keep a website working. Mozilla has to consider that they run a browser, not a browser addon you can choose to install.

                1. 2

                  Indeed they run a browser, but there are still only two differentiating criteria left for the browser they are running: 1) it has an alternative rendering engine and 2) it is extensible and customizable to a ridiculous extent by just installing some extensions.

                  On just about every other front, their competitors are better. And now they’ve messed up the one differentiating trait their browser has left, because a rendering engine alone is not enough of a dealmaker for just about anyone.

                  No matter how you look at it, this doesn’t look good, and it oozes unprofessionalism and Mozilla not really knowing what their focus and target audience are.

                  1. 1

                    On just about every other front, their competitors are better.

                    This is very much not the case. Firefox uses less RAM and is considerably less resource hungry on low-end machines.

                    1. 2

                      Define low-end machines. My daily driver laptop is a 2-core-HT 2.6 GHz system with 8 GB RAM running Debian stable. Brave and other Chrome-based browsers outperform Firefox in both speed and memory usage. They also feel “faster”.

                      On an Atom N450 with 2GB, Firefox is also slower than Chromium and it also allocates more memory.

                      On the even lower end of the spectrum: on the two Raspberry Pis I have, Firefox is so slow that I’d almost call it unusable. Those also have better and faster alternatives available.

                      I don’t experience the benefits you describe on any of those systems.

        5. 1

          Mozilla isn’t moving away from DNS; you can disable DoH in the network settings, and you can set any other DoH endpoint you want in the same dialog (so, for example, you could use your Norwegian ISP, or no DoH at all).

          The Pocket extension is open source to my knowledge; I do recall a GitHub repo floating around. What isn’t open source (yet) is the backend.

          1. 4

            Sure, it will probably be possible to disable DoH, but how many non-American Firefox users will know to do that, compared to how many will not even know it’s something they have to worry about and send all their queries to a US corporation?

            The pocket extension is open source, but it’s the backend which is interesting, and it’s the backend they promised to open-source a long time ago. (Look at this comment from a Mozilla employee 2 years ago: https://www.reddit.com/r/firefox/comments/5wio45/mozilla_acquires_pocket/deadcf7/ - that didn’t say that the Pocket extension would become open source, but Pocket itself.)

            1. 1

              To my knowledge the current default is to keep it disabled; the DoH provider setting currently defaults to only using standard DNS as well. I don’t know of any plans to change that; Mozilla is still very early in testing the waters on how to deploy it.

      2. 2

        See the description of this post for a workaround.

      3. -9

        Well, this is infuriating. I hate that my browser just became essentially useless to me because someone at Mozilla messed something up. Anyone know if there’s a way to opt out of the extension verification stuff?

        LOL, says a person whose website is “protected” by a time-bombed HTTPS and is unavailable via HTTP.

        You are aware that your website suffers from the same issues that you appear to condemn in this very comment? That it’s up to external third parties whether or not the user is allowed to see it, because you decided to cave in to their pressure to “secure” your static website, and yourself chose to prohibit folks from accessing it via HTTP through your own policy?

        How can you then act surprised that Mozilla does the same?!

        1. 6

          Well firstly, my website is not a tool that people depend on to do work. Firefox is. Secondly, I have automated systems in place to renew the SSL certs & get warned when they’re near expiry. Thirdly, if you had my site open & the certs somehow expired, you could still see the content; Firefox just disabled a bunch of functionality while it was running, without giving me any chance to intervene. Finally, if a website’s certificates are expired, you still have the ability to say “show me anyway”; there doesn’t seem to be any way to do the same with stable Firefox.

          Glad to see you’re enough of a fan of mine to look into how I configure my site though!

          1. 2

            But how’s a website different from a tool? Firefox is still made by people just like you. The fact that one can click “show me anyway” on your website is merely an omission on the part of the site’s operator in not enabling HSTS. With proper HSTS, the user is guaranteed to have no way to access your site even if you decide to cancel your HTTPS policy. There is no way to intervene, either, if HSTS is set up correctly. If you click reload and a new connection has to be established, it’s pretty certain things won’t work anymore, either.

            “Automated systems in place to renew SSL certs”? Are they autonomous and self-contained, or do they depend on any third parties? Are the third parties they depend upon by any chance related to the very same party that caused the incident at stake? Isn’t Mozilla the biggest backer behind LetsEncrypt? This has got to be a joke! The most classic example of #TooBigToFail!

          2. 2

            Firefox is only a tool you depend on because people serve websites which require a modern browser to be usable. HSTS contributes to this monoculture.

        2. 7

          This is a personal attack and not something that contributes to the conversation.

          1. 0

            How’s something a personal attack if it applies to pretty much every site operator nowadays? The comment purposefully doesn’t even contain any PII, either.

            1. 2

              There are better ways of discussing the merits and problems involved with the https certificate system than dismissing what someone said with “LOL, says a person who [..]” and doubting the person’s sincerity with “issues that you appear to condemn”.

              1. -1

                The dismissal or questioning of their sincerity is something you’re adding with your interpretation. It doesn’t follow from the parts you quoted.

                Maybe his goal was not to discuss the merits and problems of the HTTPS certificate system, but to actually lessen the spread of this scourge.

          2. -3

            Pointing out hypocrisy is a good tool when discussing moral issues.

        3. 3

          HTTPS is a bit different; with a website, you’re inherently relying on someone else paying the bills for the server and domain name continuously anyway, and if they don’t, you can’t use their website even if it’s not HTTPS. Relying on the owner to renew their certs too doesn’t really change anything. If you want to have access to a website without relying on anyone else, you need to download it and access it locally, whether it’s HTTP or HTTPS.

          There’s no such expectation for addons I have downloaded to my personal machine, which don’t inherently need to rely on any third party.

        4. -4

          Good post, sad to see it got swarmed by haters.

    14. 3

      Hmm… not a huge fan of the JavaScript nav with no URL bar changing or way to link to a sub-step. As a side-effect of this, if I hit the “next” button at the bottom of a long step, the next step appears already scrolled to the bottom so I have to manually scroll up to read it.

      Update: Showing this to macOS users at work, the first question I got was if the git from homebrew has git-send-email or not, since that’s the preferred way to install tools.

      1. 5

        It’s actually not JavaScript - there’s no JavaScript on this page. That was a deliberate design decision, and the tradeoff is that I can’t scroll the page up when you switch through the steps.

        1. 4

          Could it just be normal links instead of whatever it is?

          1. 3

            Maybe… I’ll look into updating that later. Would also accept a patch.

            1. 3

              You can use :target selectors to control visibility depending on what the document fragment is.

      2. 4

        git-send-email is all kinds of broken on OSX because the Perl dependencies get very messed up, and the problems change over time (I’ve had multiple problems with it before, each time new ones).

    15. 75

      Cargo is mandatory. On a similar line of thought, Rust’s compiler flags are not stable

      This is factually false. Everything in rustc that is not behind -Z (unstable flags) is considered public interface and is stable. Cargo uses only that stable interface (nothing behind -Z), so it can be replaced.

      I also don’t agree with the rest of the statement: integrating cargo into other build systems is a problem that would get worse if it were solved badly, and it is terribly hard to find an interface that helps even “most” of the consumers of such a feature. Yes, it always looks like “not caring” from the consumers’ side, but we have a ton of people to talk to about this, so please give it time. There’s the unstable build-plan feature which allows exporting data from cargo, so please use it and add your feedback.

      A lot of the arguments come down to “not yet mature enough” (which I can easily live with, given that the 4th birthday of the 1.0 release is in 1.5 months) or - and I don’t say that easily - some bad faith. For example, Rust doesn’t have a (finalized!) spec, yes, but it should also be said that lots of time is poured into formally proving the stuff that is there. And yes, we’re writing a spec. Yet again, there is almost no practical language today that had a formalized and complete spec matching the implementation out of the door!

      I also don’t agree with the statement that Rust code from last year looks old; code churn around the 2018 edition was rather low, except that you could now kill a lot of noisy lines, which a lot of projects just do as they go.

      I’m completely fine with accepting a lot of the points in the post - please, have your rant - but I can’t help feeling like someone wanted to grind an axe and highlight their Mastodon posts instead.

      Finally, I’d like to highlight how much effort Federico from Meson has put into exploring exactly the build system space around Rust in a much better fashion. https://people.gnome.org/~federico/blog/index.html

      1. 3

        Yet again, there is almost no practical language today that had a formalized and complete spec matching the implementation out of the door!

        This is factually false? JavaScript has a superb spec and also has a formalized spec. Practically speaking, formalized spec is not very useful yet, so if we restrict to complete spec, all of C, C++, Java, C#, JavaScript have complete spec supported by multiple independent implementations. Rust’s spec as it exists is considerably less complete and useful compared to those specs.

        1. 16

          My point is: Did all of those have it out of the door?

          Yes, the current spec is not useful for reimplementing Rust and that has to change. My point is that it’s rare to see languages that have such a spec 3 years out of the door.

          1. 25

            Java was released in 1996 together with the Java Language Specification, written by Guy Steele and co (zero delay). C# was released in 2002 and ECMA-334 was standardized in 2003 (1 year delay). Compared to Java and C#, Rust very much neglected work on a specification, primarily due to scarce resources. My point is that even after 3 years, unlike Java and C#, there is no useful spec of Rust.

            1. 4

              Why did Steele write the Java spec? Usually there is little value in writing a spec if there is only one implementation. Did they write the Java spec because Microsoft made its own Java?

              Also, Python has no spec although it has multiple implementations and it is certainly a useful and successful language.

              1. 3

                I believe Python does have a spec. “Don’t rely on dict ordering” was a consequence of saying “the Python spec doesn’t specify this even if CPython in fact orders it”, though this has changed. Not closing files explicitly is considered incorrect from a spec perspective even though CPython will close files on file object destruction.

                It’s not the C++ language spec, but there is a good number of declarations about “standard Python behavior”.

                1. 3

                  By that logic, so does Rust. They both follow almost identical processes of accepting RFCs and documenting behavior.

                  1. 1

                    I’m agnostic on the “Rust having a spec” question; I hadn’t thought about it before today.

                    Python has the reality of having multiple mature implementations (I’m not sure if this is true of Rust?) so there’s actually a good amount of space for a difference between spec and impl.

                    I also think there’s actually an ongoing project to define a Rust spec? It feels like a “Rust spec” is pretty close to existing, at least in a diffuse form.

              2. 2

                Usually there is little value in writing a spec if there is only one implementation.

                There is a lot of value in writing down the conclusion of a discussion. When the conclusions are about formalization, it adds value to write it down as formally as reasonable. That enables other humans to check it for logical errors, functional problems, etc. and catch those before they are discovered while coding or even later.

            2. 1

              You’re right. Coming from a background of more dynamic languages (Ruby/Python/etc.), I’m more used to their pace of speccing.

            3. 0

              Hm - I was against you until this comment.

              That’s a good point - perhaps Mozilla wants hegemony over the language and wants to prevent other Rust implementations. I wonder if any other serious implementations even exist currently?

              1. 15

                I don’t think that’s the case. Spec writing is a very specific skill, and you pretty much need to hire a spec writer to write a specification. Mozilla didn’t invest in hiring a Rust spec writer. (They did hire a doc writer to produce the book.) Since Java and C# did invest in specification, it is right and proper to judge Rust on this point, but then Mozilla is not as rich as Sun and Microsoft were.

              2. 13

                Rust is independently governed from Mozilla; while there are Mozilla employees on the teams, there was a deliberate attempt to make Rust its own project a bit before 1.0.

                There are active attempts to specify parts of Rust: we have a group of people attempting to pin down the formal semantics of unsafe code so that it can be specified better (we need to figure this out before we specify the rest of the language).

                Specifying a language is a huge endeavor, it’s going to take time, and Rust doesn’t have that many resources.

              3. 6

                Equally likely that Mozilla doesn’t want hegemony over Rust, and so doesn’t put a lot of effort into the things that don’t benefit them directly as much. Java and C# were both made by large companies that needed a standard written down so that a) they could coordinate large (bureaucratic) teams of people, and b) they could keep control over what the language included.

                There’s already one alternative Rust implementation: https://github.com/thepowersgang/mrustc . AFAIK it’s partial, but complete enough to bootstrap rustc.

                1. 20

                  (Yes, and…) Having worked not on but nearby the Microsoft JavaScript and C# teams I can tell you that in both cases the push for rapid standardization was to a significant degree a result of large-corporation politics. For JavaScript, Netscape wanted to control the language and Microsoft put on a crash effort to participate so it wouldn’t be a rubber stamp of whatever Netscape had. For C#, Microsoft wanted to avoid the appearance of a proprietary language, so introduced it with an open standards process to start with. In both cases somebody had to write a spec for a standards process to happen.

                  BTW, the MS developers had some “hilarious” times trying to write the JavaScript spec. The only available definition was “what does Netscape do”, and pretty often when they tried to test the edge cases to refine the spec, Netscape crashed! Not helpful.

              4. 3

                I wonder if any other serious implementations even exist currently?

                There is mrustc, although I haven’t followed development of it lately, so I’m unsure of the exact roadmap.

                1. 2

                  mrustc doesn’t do lifetime checking at all; it is notoriously unspecified how exactly that works (as in: what must be accepted, and what must not?).

        2. 14

          This debate is Rust vs C. Rust had a good design imitating the strengths of various languages, with a spec to come later. C was a slightly extended variant of B and BCPL, which were close to the bare minimum of what would compile on the machines of the day. Unlike Wirth’s languages, it wasn’t designed for safety, fast compiles, or an easy spec. Pascal/P was also more portable, with folks putting it on 80 architectures in a few years. Even amateurs.

          As far as a spec goes, we got a C semantics with undefined behavior decades after C, since the “design” was so rough. I can’t recall if it covers absolutely everything yet. People started on safety and language specs for Rust within a few years of its release. So Rust is literally moving decades faster than C on these issues. I’m not sure it matters to most programmers, since they’ll just use the Rust compiler.

          C is still ahead, though, if combined with a strict coding style and every tool we can throw at it. Most C coders don’t do that. I’m sure the author isn’t, given what he stated in the article.

          EDIT: Post was a hurried one on my phone. Fixed organization a bit. Same content.

          1. 2

            C is still ahead, though, if combined with a strict coding style and every tool we can throw at it. Most C coders don’t do that. I’m sure the author isn’t, given what he stated in the article.

            This is something I’m always quite surprised by. I can’t understand why some don’t even use the minimum of valgrind and the sanitizers that come with the compiler they use; also, cppcheck, scan-build, and a bunch of other free C++ tools work wonders with C as well.

    16. 10

      I’ve suggested the rust tag for this post because as far as I can tell the issue is internal to Rust.

      1. 3

        I posted this as an explanation, from the Fuchsia team, of how Fuchsia is not Unix, but okay. To summarize: no fork and exec, no child processes, no signals, no uid and gid, no Unix filesystem permissions, no global filesystem, no file descriptors, no C ABI, etc.

        1. 3

          Having ported C software to Fuchsia I can promise you it’s not UNIX.

        2. 1

          An issue on a specific language’s build-process tracker is not the best forum for informing people about an operating system’s internals.

          Your title editorializes the issue title.

          In the future: write a blog post summarizing the content (like in your reply to me), cite the issue as a source (ideally, look up more information about Fuchsia and its goals), and submit that.

          1. 2

            Your title editorializes the issue title.

            No, the issue title was edited, it was a joke initially but someone complained about the title not being descriptive enough :)

            1. 1

              So it has, I apologize to @sanxiyn for accusing them of editorializing the title.

    17. 4

      I don’t think the addition of Rust was at any point touted as a replacement for XUL; they do very different things in Firefox.

      What happened was that the big release – Firefox Quantum – contained both the introduction of a lot of key Rust code and the disabling of external XUL as a first step to getting rid of it entirely. People definitely misinterpreted these two things as “Rust is replacing XUL”, but that’s not the case. Rust replaced a bunch of C++ code.

      If anything, the dependency chain is the other way around: Completely ripping out the C++ code was blocked on a bunch of XUL stuff going away (or the Rust code getting support for it); until then the old C++ code still had to be around for some special cases.

      The XUL code is being replaced by more normal HTML/JS, slowly.

      FWIW, if you don’t want to actually build Firefox, most of the Firefox XUL stuff is on your disk in plaintext form, just in a weird zip file that also contains cached “compiled” versions. I’ve edited this and had it work in the past; I think the advice there should still be applicable, but I’m not sure.

      1. 2

        When I started, I didn’t know that the changes would all be in XUL. I didn’t want to learn anything more than necessary to get my goal done. At the time, it was easier to spend a few hours recompiling than figuring out how not to.

        Now I wonder, since apparently the only changes I made are in omni.ja, could I just save that file and keep it around for future Firefox updates until they finally rip out XUL?

        1. 2

          Could I just save that file and keep it around for future Firefox updates

          No, that would almost certainly break: omni.ja contains most of the browser UI code and is tied tightly to browser internals.

          But you can keep around a patch and write a script that lets you repatch omni.ja on each update.

          until they finally rip out XUL?

          It doesn’t really matter if they rip out XUL; the code would then be replaced with HTML/JS still living in omni.ja, and you’d be able to adjust your patch to work with that when it changes.

        2. 2

          could I just save that file and keep it around for future Firefox updates

          In fact, that was the spirit behind why these things were put in a JAR in the first place—so you could do almost exactly that. The choice to build on XPCOM was deeply tied into that philosophy. In practice, this failed for reasons similar to why semver can fail. People kind of do what they want to do instead of satisfying the constraints that are laid out. And of course, after acknowledging that it never really worked, Mozilla doesn’t pretend to follow that architecture anymore.

          If you haven’t read Yegge’s old “Pinocchio Problem” yet, it’s a great read in its own right. But extremely relevant to this topic are the parallels he draws wrt systems like Emacs and systems like Firefox.

    18. 5

      If I’m not mistaken, the XUL pages should be editable as content of omni.ja within a release build directory of Firefox, which could spare you all the building of binary files.

      Again I might be wrong, but I think you could “just” extract the XUL files from omni.ja, patch them, and then repack the archive.

      Hopefully someone in irc.mozilla.org #developers or #build can confirm this.

      1. 4

        omni.ja is not quite a ZIP file, although it’s close enough that some unmodified ZIP tools can extract files from it.

        Apparently you can use unmodified ZIP tools to repack omni.ja, if you’re careful.

      2. 4

        This is true; however, it’s not so simple: you also need to purge some caches.

        http://inpursuitoflaziness.blogspot.com/2014/01/editing-files-from-omnija-in-firefox-20.html

        1. 3

          There should be more threads about arcane and obscure Firefox trickery. Thanks! 👍

    19. 3

      +Semi-related work

      +Replacements (refs/replace) are superficially similar to obsolescences in that they describe that one commit should be replaced by another.

      git replace is new to me. When did it appear? Is anyone using it?

      I’ve also been meaning to try git absorb which looks thoughtfully designed.

      1. 4

        If I remember correctly, git replace was useful to me in a case where an old CVS repository had been imported into git without history and then worked on for a long, long time. Much later, the old CVS history was recovered using a decent cvs2git kind of utility, creating another git repository. git replace then allowed me to stitch these two repositories together, effectively prepending history (something which would be impossible to do with a normal git parent commit ref without completely cracking SHA-1).

      2. 3

        Absorb is really nice if you’re using a fixup-heavy workflow. Fast, too.

        1. 1

          Are you an actual user of git absorb? Or are you talking about the hg original? I’ve wondered how usable it is for git already.

          1. 1

            Yes I use git absorb. Haven’t run into any problems yet 🙂

      3. 2

        I’m hoping to eventually integrate git absorb into git revise, which is basically a faster in-memory version of git rebase for rebases that don’t change the final file contents, just reorder when changes happen (i.e. 90% of my use cases for git rebase)

        1. 3

          Since that’s in Python you could probably take advantage of the linelog implementation that’s underneath hg absorb to make the work easier. I recommend looking at the hg absorb code. Linelog makes it easy and slightly more robust than just moving patch hunks around.

    20. 2

      This seems a very strange definition of “undefined behavior”. At first it seems to be referring to the same concept as C, but there’s nothing undefined about accessing a uint8_t variable through a uint8_t pointer.

      1. 5

        What do you mean?

        C pointers, as far as I understand them (I’m not an expert C standard rules lawyer), can be chopped up into about three categories:

        • restrict pointers, which definitely allow you to invoke UB by accessing a uint8_t variable through a uint8_t pointer, if you dereference two different aliasing pointers, one of which is restrict.

        • “normal” pointers, which are still subject to the strict aliasing rules in standard C. This definitely means you can access a uint8_t variable through a uint8_t pointer, but you can’t access the first byte of an int through one unless you do an appropriate dance through union. Otherwise, it’s undefined behavior.

        • “generic” pointers, namely void* and char*, have no aliasing rules. About the only way to invoke undefined behavior is to alias them with a restrict pointer, or dereference them when they dangle.

        You’ll notice that this article describes two different types of pointer in Rust:

        • References, which are a bit like restrict pointers in C.

        • Raw pointers, which are a bit like “generic” pointers in C, but can exist in any type, not just c_char.

        “a bit like restrict” is a complete oversimplification, though. The real deal of how any of these things work is the spec. The C spec is at least as bad as this article is for Rust.

        1. 2

          The first code example, in C:

          uint8_t x;
          uint8_t *y = &x;  /* y aliases x */
          x = 7;            /* write through the variable */
          *y = 5;           /* write through the alias; well-defined in C */
          

          Or whatever. There’s nothing undefined about that.

          1. 5

            That’s not a completely accurate translation. An accurate translation would make the pointer a restrict pointer, and then it would be undefined behavior in C too.

            An easier way to look at it is this: some things that would be UB in C (e.g. strict aliasing violations) are not UB in Rust; however, other things that are not UB in C (writing through multiple aliases that are not wrapped in UnsafeCell) are UB in Rust.

            The final nature of undefined behavior in Rust will likely not map cleanly to C concepts. We can refine the concept of restrict pointers a lot, for one.

      2. 4

        Yes, Rust has its own cases of undefined behaviour. In particular, &mut references are not allowed to alias and are always unique, so getting hold of a second one to the same value while one is already active is undefined behaviour. unsafe allows you to create that case.

        The nomicon has an example why breaking this rule does matter, as the compiler will rely on the uniqueness of &mut references. https://doc.rust-lang.org/nomicon/aliasing.html

        The example is trivial, but &mut needs to be globally unique (it’s one of the core guarantees of Rust), so some care is necessary when handing them out from Rust’s unsafe blocks.
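
        As a minimal sketch of that rule being broken (this compiles, but it is undefined behaviour under the aliasing model that a checker like miri enforces):

        fn main() {
            let mut x: u8 = 0;
            let p = &mut x as *mut u8;

            // Two live &mut to the same value: the uniqueness guarantee is gone.
            let r1 = unsafe { &mut *p };
            let r2 = unsafe { &mut *p };

            *r1 = 7; // UB: creating r2 from the same pointer invalidated r1
            *r2 = 5;

            println!("{}", x);
        }

        Safe Rust rejects this shape at compile time; unsafe lets you write it, and it’s then on you to uphold the uniqueness rule.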

        Ralfj has more writing on this (that’s his research topic).

        1. 2

          Thanks. I missed some context.

          cc @notriddle @Manishearth

          1. 1

            You’re welcome.