1. 7

    Most of the marketing done by Firefox is precisely of the form you describe here. I’m not sure why you feel that’s not the case.

    1. 6

      Yeah, I didn’t mean Firefox’s own marketing, but more the community word-of-mouth message.

      Added a footnote saying as much; thanks for making the ambiguity clear to me.

    1. 12

      I made a long thread about this (and other properties of voting systems) a couple weeks ago

      A very important property of voting systems is secrecy. Once you drop in your vote, nobody should be able to tell who you voted for. This includes yourself – you should not be able to prove who you voted for.

      This protects against candidates paying for votes, as well as people forcing you to vote a certain way. Once you’re out of the polling place, you’re free to lie about who you voted for and nobody – not even someone with power in the government – can tell if you’re lying.

      Coercion is absolutely a problem in the United States. Often families are forced to vote the way the patriarch does. Many polling places in the South will even help families get adjacent voting booths (this is bad).

      The secret ballot is a property of voting systems that is quite universal – most countries have it.

      Alameda County – the county in which I was helping run a polling place – does give you ballot stubs that you can take home. These don’t have your vote on them (they do have a unique ID), but you can use them to prove you voted (e.g. if you need to prove to your employer that you voted, so you can justify taking the 2 hours of paid leave California requires employers to give you on election day).

        1. 2

          Reading your thread about ID, and about secure elections (no personally identifying paper trail), made me realize it’s actually quite easy to be ineligible to vote and still vote, and there is no way to track this. A certain someone keeps harping on illegal voters, and I drank the Kool-Aid that this is all overblown, but now I realize that anyone with any kind of ID can just vote and we can’t track legality. We can only, after the fact, identify people who registered to vote illegally, and only after systematically going through the whole voter roll, tracking down everyone, and checking their citizenship. In the polling station I went to in Massachusetts they don’t need any signature, so one can claim someone else voted in their name and so on. They took my ID, but I can’t remember if that was just because they initially couldn’t find me on the rolls, so I think you just need a name and address.

          1. 12

            You sign the voter roster under penalty of perjury, and if you’re voting provisionally that all gets dealt with later.

            If you are voting for the first time they often need ID because of HAVA (the Help America Vote Act), but otherwise there is no ID requirement in many states (California too).

            A lot of things in this country operate under trust that you’re not lying in a situation where lying is illegal. It works out.

            There’s plenty of research showing that the threat of illegal voting is extremely low. Illegal voting is very hard to scale, and if you’d like to flip an election you’d need a lot of illegal voters. The chances of getting caught go up dramatically as you try to scale this. It’s not worth it, and very few people do it.

            Your argument is that you can game the system. That is true, but that doesn’t mean people do game the system, and that doesn’t mean that it’s worth it to game the system.

            OTOH a lot of people don’t have photo ID. The cons of requiring ID outweigh the pros. Disenfranchising a large segment of our poorer population is totally not worth it to catch a couple cases of voter fraud.

            1. 3

              Don’t want to start this discussion on lobste.rs, but that makes me worry – because now there is an incentive for candidates to treat illegal voters as a voting bloc and cater to them, just like any other voting bloc. This creates a market for it. Maybe I should try to understand more from you via message.

              I recall telling someone canvassing for votes a few years ago (local election) that I couldn’t vote because I wasn’t a citizen (at that time), and she just shrugged in a strange way. I always puzzled over that. It wasn’t “Oh, yes, you can’t vote, bye” – it was almost a wink-wink.

              1. 10

                That could also be because non-citizens can still be politically active – in fact, IIRC, non-citizens are often over-represented amongst campaigners because that’s all they can do to affect the election.

                I know non-citizens who have been canvassed and asked to help phone bank or whatever when they explain they’re not citizens.


                Again, scaling a process of catering to illegal voters is hard. Every single vote you try this for is an opportunity to get caught; you can’t do it in bulk. And a wink-and-nudge isn’t enough since you still have to explain how to impersonate a different voter or whatever – most people don’t know how voting works.

                It is totally possible for a single person to vote illegally. This process is very hard to scale without getting caught. Furthermore, it has not historically been a problem, and still isn’t.

                Voter fraud fearmongering is typically used to enact hurdles to voting that end up disenfranchising legitimate voters.

                1. 6

                  One of the most salient political issues in the US right now is the presence of tens of millions of illegal immigrants on US soil, and the question of what, if anything, should be done about it (anything from “national borders are inherently illegitimate” to “greatly expand the size and power of the government’s law enforcement apparatus in order to deport them all”). Many illegal immigrants have some kinds of official documentation, because not all parts of the government are the ones that check for citizenship/legal residency, and because deliberately not checking for citizenship/legal residency when interacting with government services is a politically-popular pro-immigrant position in many jurisdictions (of course, it’s also a massively unpopular position in other jurisdictions).

                  If someone’s presence in the country at all is illegal, but they are part of a group of tens of millions with similar status, know that enforcing the law (i.e. deporting them) is logistically difficult for law enforcement and very politically contentious, and in general feel like they are rightfully Americans, just without documentation, I find it very plausible that they might decide to cast a vote, and that the mechanisms to detect illegal voting wouldn’t detect them doing so. I don’t think that doing something under penalty of perjury is a significant deterrent to someone who is already subject to deportation if the parts of the government that enforce immigration law learn about it.

                  1. 6

                    I find it very plausible that they might decide to cast a vote

                    They can’t cast a vote under their own name though; they have to be registered.

                    And as the OP mentioned it’s much easier to be caught during the registration process.

                    What they have to do is turn up at a voting place, and impersonate someone else. This is very much an actively malicious act, not a passive “I feel like I’m american, i’ll vote” act where there’s more misunderstanding than malice.

                    1. 2

                      Hah, I just brought up a case where that happened to my great-grandfather – the misunderstanding option, though. He thought he had done all the proper paperwork, but he had not. I don’t have the full story; he may have gotten a visa confused with citizenship or something – the world will never know.

                      1. 2

                        You don’t need proof of citizenship to register. I did it online.

                        1. 4

                          Sure, but once done it’s something they can look for and catch at any time they want. Unlike voting under someone else’s name – if not caught that day (e.g. if the person being impersonated comes in and tries to vote later), it won’t be caught at all (but this is fine because it doesn’t scale).

                          When you register online you’ll provide an SSN or state ID number, both of which can be traced to citizenship status. The state may not be interested in helping the federal government deal with illegal immigrants, and may not care about citizenship status in general; however, the registrar of voters definitely will care about these things.

                          1. 1

                            I gave my driver’s license, I think. Don’t recall if that is tied to my SSN. If registration is linked to SSN then it’s less scary, because automated scans can be done re: eligibility.

                            1. 2

                              I’m registered in California; I registered through my state ID (you can autoregister when you apply for an ID). When you register online you either provide an ID number or SSN.

                              When I want to access my voter settings (change vote-by-mail preference, check if my VBM ballot was counted, check my polling place, etc.) it asks me for an ID number or SSN. Being too lazy to fish out my ID, I just use my SSN, which I know. It still works, despite having registered through my state ID.

                              This stuff can be linked if they want to, usually.

                              And again, evidence shows that none of this is actually a problem.

                      2. 5

                        Yeah, except all research on this issue shows that voter fraud is exceptionally rare. Some of the most recent examples were conservatives who, with this exact mindset, thought voter fraud was easy – and got caught. My great-grandfather found out he wasn’t actually a citizen when he went to vote: they told him he couldn’t because he wasn’t a citizen, and he then went to Mexico and applied for proper citizenship in the US.

                        The reality is that voter fraud, intentional or accidental, is deceptively difficult. There are actually many layers at every step of the process that end up preventing it from being a problem. Voting-machine-based voter fraud may be a real thing, and we’ll probably never know how much. Humans walking in to commit voter fraud, accidentally or purposefully, is statistically not a thing.

                        Even Trump’s voter fraud investigation turned up dust.

                        1. 5

                          I don’t think that doing something under penalty of perjury is a significant deterrent to someone who is already subject to deportation if the parts of the government that enforce immigration law learn about it.

                          But the threat of deportation definitely is - have you met anyone who’s undocumented? The ones I know are terrified of every interaction with law enforcement, DMVs, employers, etc. Go to any restaurant kitchen anywhere in the country, any farm anywhere in the country, and see if you can even get them to tell you their full name without knowing why you’re asking.

                          I sense you’re not close to any of these people. You would be subjecting yourself to an immense personal risk of losing access to all personal property, friends and family, etc. just by putting yourself on a voting roll when you aren’t a citizen. I would never risk losing access to my children because of my desire to vote on anything.

                          This is outside any discussion as to what we should do about the fact that large portions of our economy depend on labor that is undocumented – but their voting power is nil.

                          1. 4

                            yeah I found that part of the argument absurd, but it seemed very subjective so I left it alone

                            I’ve known some illegal immigrants, all of them are very careful about this.

                            1. 2

                              I sense you’re not close to any of these people.

                              That’s painfully clear.

                              My wife works with a community organization that serves undocumented migrants. The list of services, public or private, that they avoid in order to dodge any interaction with government officials who might question their immigration status would amaze you.

                              The thought that an organized voting fraud bloc would arise around them is positively risible.

                              As noted in the thread, the evidence clearly shows in-person fraud is a non-issue; in reality, strict voter ID laws are the real problem, as they serve to disenfranchise the poor and those underserved by government while providing no real benefits.

                      3. 3

                        Way too many unsourced assertions here. And I hope I’m not the only Lobster for whom “just trust, don’t verify” rings hollow.

                        1. 4

                          here’s a whole bunch of sources from a non-partisan org: https://www.brennancenter.org/analysis/debunking-voter-fraud-myth

                  1. 24

                    I think the arguments around coercion and bribery for votes are quite compelling. Any system that proves to me who I voted for can also prove to someone else who I voted for; this feels extremely risky.

                    And offering a sweepstakes as an incentive seems interesting, but doesn’t seem to drive a politically engaged populace. I guess it would force the government to ensure that adequate voting sites are available which is a net positive, but I’d rather drive people to the polls by having candidates that push policies that improve their material conditions.

                    Voting in my county works in a way that addresses your concerns: a voter makes selections on an electronic machine that prints a paper ballot. The ballot contains the names of the candidates you voted for, as well as a “Scantron” representation. Once your ballot is printed by the machine, you run it through an optical scanner that records the votes, and you then seal it in an envelope and put it in a locked box. Certifying the vote involves taking random samples from across the county and comparing the recorded optical vote against the printed paper vote. And all paper ballots are preserved for recounts/full audits.

                    In my mind, this appears to be a fairly tamper-resistant system: an attacker would need to effectively change two counts – the electronic count and the paper ballots. Any attack I thought of had many moving pieces.
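                    To make the certification step concrete, here’s a minimal sketch in Rust of the sample-and-compare audit (the names and data shapes are mine, not a description of any county’s actual tooling):

                    use std::collections::HashMap;

                    /// For a random sample of ballot IDs, compare the scanner’s record
                    /// against the paper ballot. Any disagreement escalates to a wider
                    /// audit or a full hand recount.
                    fn audit_sample(
                        electronic: &HashMap<u64, String>, // ballot ID -> candidate (scanner)
                        paper: &HashMap<u64, String>,      // ballot ID -> candidate (paper)
                        sample_ids: &[u64],                // randomly drawn ballot IDs
                    ) -> Vec<u64> {
                        sample_ids
                            .iter()
                            .filter(|&&id| electronic.get(&id) != paper.get(&id))
                            .cloned()
                            .collect()
                    }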

                    1. 4

                      The biggest attack on paper + electronic systems is to not routinely count the paper ballots. As we saw this week, it also makes it easier to restrict voting opportunities by doing a crappy job at deploying the machines.

                      Your comments on coercion are valid and at the heart of secret voting.

                      1. 4

                        This is decent, but a problem with such systems (and similar systems that use a VVPAT printer for the paper trail) is that this stuff isn’t obvious. Consider the latest Texas goof-up, where Texas machines were switching votes and many people didn’t really think to verify before submitting. There’s a risk the machine messes up and people neglect to check the paper ballot.

                        The system we have locally is a paper ballot you mark, which gets scanned in (scanner can detect problems and tell you, too). Scanner keeps an immediate internal tally (printed out by the end of the day), and also keeps the paper ballots in an internal receptacle. The scanner printout, the scanner’s memory bank, and the contents of the internal receptacle all get sent out to the registrar of voters at the end of the day.

                        Marking the ballot is easy and hard to mess up (and you don’t have to check anything for machine-caused mistakes), but there’s still a paper trail.

                        So this system is okay, but you can make it better by removing machines from the ballot-marking stage of the process entirely.

                        1. 1

                          Instead of offering rewards, like a lottery, we could make voting mandatory. That would help enforce adequate voting sites.

                          I think San Francisco is getting something like the process you mention, in 2019.

                        1. 24

                          As far as I can tell, at no point has it been suggested that Firefox’s plans for the future involve moving everything to Cloudflare. AIUI Cloudflare was the testbed, nothing more, and Mozilla has explicitly stated that they’re going to look into having a choice of providers.

                          (I’m a bit annoyed by the amount of FUD on this coming from the PowerDNS folks, there’s been a bunch on Twitter too)

                          1. 2

                            I remember reading the blog posts when this was announced and I felt it really wasn’t clear. Maybe I should go read it again.

                            I’m still a little concerned. Will there be a big list in Firefox of name servers, similar to SSL roots? Do the browser vendors then get to decide the list of authorized DNS providers?

                            I wonder how viable it would be to add a layer of DNS-over-HTTPS root servers? Companies who are serious about privacy could contribute to ICANN to see this happen.

                          1. 1

                            Does this change require any changes from web developers, or is this something the browser can do in the background to speed up rendering any page? When I looked it up I saw some stuff in Servo but nothing on MDN.

                            1. 2

                              No, this is an implementation change, so webdevs do not need to change anything.

                            1. 2

                              Cool project 👍🏻. I’m wondering: is it “correct” to say that “now we can write safer C” if the C code is transpiled to Rust?

                              1. 13

                                The resulting Rust code is only slightly safer. Some things, like array bounds, that were not previously checked will be checked. For the most part this translation is just the first step in enabling more substantial refactoring, through which the benefits of Rust can start to shine.

                                1. 2

                                  Ah okay, thanks 👍🏻

                                  1. 1

                                    Why is the resulting Rust code only slightly safer? Rust as a language is a lot more memory-safe than C. If you’re talking about current transcompilers, then improving those should lead to improvements in C.

                                    1. 6

                                      It’s translating to mostly-unsafe Rust (so does corrode, the other project that does this)

                                      This means you still have the same burden of checking most of the invariants involved.

                                      One use case for tools like these is an easy way to start converting a codebase from C to Rust, doing away with a bunch of the tedium.
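                                      To illustrate, here’s a hand-written sketch (not actual output from either tool) of the difference between a direct, C-shaped translation and the refactored version:

                                      // C original:
                                      //   int sum(const int *arr, int n) {
                                      //       int total = 0;
                                      //       for (int i = 0; i < n; i++) total += arr[i];
                                      //       return total;
                                      //   }

                                      // Direct translation: raw pointer, invariants still on the caller.
                                      unsafe fn sum(arr: *const i32, n: i32) -> i32 {
                                          let mut total = 0;
                                          let mut i = 0;
                                          while i < n {
                                              total += *arr.offset(i as isize);
                                              i += 1;
                                          }
                                          total
                                      }

                                      // After refactoring, the invariants live in the type and are checked:
                                      fn sum_refactored(arr: &[i32]) -> i32 {
                                          arr.iter().sum()
                                      }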

                                      1. 2

                                        Ah, I’ve misread that. I was referring to Rust -> C compilers which are useful to create if only to understand the domain well enough to bring improvements to C.

                                1. 7

                                  One wonders why not just, say, use JSON for this.

                                  1. 24

                                    I believe the prefs format predates JSON (or at least JSON’s popularity), and changing it now is a non-starter as it would break everyone’s user preferences. Even this changeset whose only backwards incompatible changes were fixing some glaringly obvious bugs caused some reports of failing CI around the web.

                                    We could try to migrate prefs files to a new format, but that would be a high risk/low reward operation.

                                    1. 5

                                      We could try to migrate prefs files to a new format, but that would be a high risk/low reward operation.

                                      I wish you folks would do that. :(

                                      1. 19

                                        Would you volunteer to respond to all the breakage reports it might cause?

                                        I may sound bitter now, but when a maintainer says something is too hard/risky, and a random user replies with “yeah, you should do it anyway”, disregarding who it is that’s going to deal with the problems, it’s just utterly disrespectful.

                                        1. 17

                                          Respect doesn’t enter into it – hell, I even agree with the assessment that the work is high-risk/low-reward… then again, I feel the proposed fix has some of the same issues. Like, the parser has worked well enough that refactoring is maybe not a good use of time.

                                          If the decision is made “We must refactor it”, then it makes sense to go one step further and fix the underlying file format anyways. Then again, Mozilla has a history of derping on file formats.

                                          As for “all of the breakage reports it might cause”, given that the docs themselves discourage direct editing of files, it would seem that there probably isn’t a huge amount of breakage to be concerned about. Further, if the folks are clever enough to write a neat parser for the existing format, I’m quite sure they’re clever enough to write a tool that can correctly convert legacy config files into a new thing.

                                          (And again, it’s common advice that there are no user-serviceable parts inside good chunks of it, because it’s a derpy file format.)

                                          Like, just to hammer this home, here is the format of a prefs.js file:

                                          # Mozilla User Preferences
                                          
                                          /* Do not edit this file.
                                           *
                                           * If you make changes to this file while the application is running,
                                           * the changes will be overwritten when the application exits.
                                           *
                                           * To make a manual change to preferences, you can visit the URL about:config
                                           */
                                          
                                          user_pref("accessibility.typeaheadfind.flashBar", 0);
                                          user_pref("app.update.lastUpdateTime.addon-background-update-timer", 1520626265);
                                          user_pref("app.update.lastUpdateTime.blocklist-background-update-timer", 1520626385);
                                          user_pref("app.update.lastUpdateTime.browser-cleanup-thumbnails", 1520640065);
                                          user_pref("app.update.lastUpdateTime.experiments-update-timer", 1520626145);
                                          user_pref("app.update.lastUpdateTime.recipe-client-addon-run", 1520626025);
                                          user_pref("app.update.lastUpdateTime.search-engine-update-timer", 1520625785);
                                          user_pref("app.update.lastUpdateTime.telemetry_modules_ping", 1520625905);
                                          user_pref("app.update.lastUpdateTime.xpi-signature-verification", 1520626505);
                                          
                                          <snip>
                                          

                                          There is no reason that this shouldn’t be in a sane file format (read: JSON). This could be accomplished with a conversion tool, and gracefully deprecated.
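                                          A rough sketch of the kind of one-shot converter I have in mind, assuming only the simple user_pref("key", value); shape shown above (a real tool would reuse the existing parser rather than this string surgery):

                                          /// Turn lines of the form `user_pref("key", value);` into a JSON object.
                                          /// Keys are already quoted, and values are already JSON-ish tokens
                                          /// (numbers, booleans, quoted strings), so this is mostly reshuffling.
                                          fn prefs_to_json(prefs: &str) -> String {
                                              let mut pairs = Vec::new();
                                              for line in prefs.lines().map(str::trim) {
                                                  if !line.starts_with("user_pref(") || !line.ends_with(");") {
                                                      continue; // skip comments, blanks, and anything exotic
                                                  }
                                                  let inner = &line["user_pref(".len()..line.len() - 2];
                                                  // Split on the first comma: quoted key on the left, value on the right.
                                                  if let Some(comma) = inner.find(',') {
                                                      let key = inner[..comma].trim();
                                                      let value = inner[comma + 1..].trim();
                                                      pairs.push(format!("  {}: {}", key, value));
                                                  }
                                              }
                                              format!("{{\n{}\n}}", pairs.join(",\n"))
                                          }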

                                          Edit:

                                          It even already contains JSON!

                                          user_pref("browser.onboarding.tour.onboarding-tour-performance.completed", true);
                                          user_pref("browser.pageActions.persistedActions", "{\"version\":1,\"ids\":[\"bookmark\",\"bookmarkSeparator\",\"copyURL\",\"emailLink\",\"sendToDevice\",\"pocket\",\"screenshots\"],\"idsInUrlbar\":[\"pocket\",\"bookmark\"]}");
                                          user_pref("browser.pagethumbnails.storage_version", 3);
                                          
                                          1. 7

                                            No disrespect taken :)

                                            For the record, I agree a standard format would be better. Also for the record I’ve never even looked at the prefs code before, so my statement was coming more from experience knowing how much the tiniest changes can blow up on the scale of the web.

                                            You never know, maybe we’ll support JSON and the legacy format at some point, but that smells like it might be unnecessary complexity to me.

                                            1. 2

                                              You said unnecessary complexity. Normally, I’d either say that’s a good thing or suggest a simple subset if it’s something like JSON. If Firefox already supports JSON, wouldn’t there already be a component included that could be called to handle it? Is that inaccessible? Or does it suck so much that it’s worth rolling and including one’s own parser that’s not a cleaned-up subset of JSON? Just curious, given Firefox is an older, big project.

                                              1. 5

                                                The pref parser is a small isolated module, so I don’t think it would be technically difficult to implement (bear in mind I’m not familiar with it at all).

                                                The complexity I’m referring to was more around UX, maintenance, and support that come with providing two different ways of doing the same thing.

                                          2. 2

                                            “yeah, you should do it anyway” disregarding who it is that’s going to deal with the problems, it’s just utterly disrespectful.

                                            Bringing up respect and morals when a FOSS project uses non-standard formats instead of standard ones that already existed with tooling people could’ve used? And that definitely would need extra work or fixes later? I doubt they were thinking of morality when they did it. More like “Let’s implement this feature the way I feel like doing it with my preferences and constraints right now.” Kind of a similar mindset to many people asking them for changes.

                                            A better question would be, “Is replacing non-standard stuff in the browser with well-supported, standardized stuff worth the effort to fix the breakage?” In this case, I’m not sure without knowing more specifics. The general answer for file formats is “Yes wherever possible for interoperability and ecosystem benefits.”

                                            1. 6

                                              non-standard formats instead of standard ones that already existed with tooling people could’ve used

                                              That’s untrue; the grandparent comment mentions this probably predates JSON’s popularity.

                                              Edit: Yeah, the bug itself is 17 years old, and the prefs format is probably older. Wikipedia says “Douglas Crockford originally specified the JSON format in the early 2000s”, which means that at best the prefs format came around the same time Crockford first specified it, and at worst it probably came into being a couple of years earlier.

                                              1. 1

                                                Good thinking on the history. I did say “standard formats,” not JSON. Before JSON, the formats I used included LISP-style sexprs for easy parsing, Sun’s XDR, ASN.1, and XML. I also hoped simpler ones gaining popularity would lead to secure or verified implementations. That was effortless for LISP-based syntax, with Galois doing a verified ASN.1 later. Most went with the overcomplicated formats or hand-rolled their own, each with problems. For XML, I found I could just use a subset of it close to basic HTML tags, which made it easier for someone to convert later with standard or custom tooling.

                                                So, those were among alternative approaches back in those days that many projects were taking. Except LISP syntax which only LISPers were using. ;)

                                      2. 3

                                        Or TOML, since that’s Rust’s go-to data markup language.

                                        1. 4

                                          That’d be just a little too cute.

                                      1. 2

                                        Is there a good resource for understanding Epochs as overlays to the release/versioning system in Rust (if that’s even the right way to think about them)? I use and follow Rust development but haven’t really wrapped my head around the significance of Epochs.

                                        1. 7

                                          Epochs are a way for the Rust team to make breaking changes to the syntax of Rust. For example, reserving new keywords, turning warn-by-default checks into error-by-default, that kind of thing.

                                          Importantly, crates designed for different epochs can be used together, so a Rust epoch won’t fracture the ecosystem like the infamous Python 2/3 split. Even if you want to call a function in an old library whose name is a reserved word in the new epoch, Rust provides a “raw identifier” syntax so you can use anything (within reason) as an identifier.
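                                          For example, something like this (a hypothetical old library; the exact raw-identifier syntax was still being settled last I checked):

                                          mod legacy {
                                              // Imagine this was a plain `fn catch()`, written before `catch`
                                              // became a keyword in a newer epoch.
                                              pub fn r#catch() -> u32 {
                                                  42
                                              }
                                          }

                                          fn main() {
                                              // New-epoch code can still call it via the raw-identifier syntax.
                                              println!("{}", legacy::r#catch());
                                          }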

                                          More pragmatically, epochs are an excuse for more publicity. With a release every six weeks, you rarely get more than one or two new features at a time, which isn’t worth making a fuss about, so people outside the Rust community rarely hear about all the stuff that’s going on. An epoch is an excuse to list all the features of the past year or two and show people just how much progress has been made.

                                          1. 1

                                            Understanding this as presented, then, it is a breaking change that exceeds the threshold for even major version changes.

                                            Does that mean an Epoch change will always occur simultaneously with a major version change? And that an earlier Epoch is always incompatible with a later major version?

                                            I think this has been the confusing part to me.

                                            1. 10

                                              No, because you have to explicitly opt in to the new epoch.

                                              Code on newer epochs using your code does not break.

                                              People compiling your code with rustc or cargo do not break (rustc and cargo continue to use the same epoch to compile your code).

                                              The only time your code breaks is when you explicitly do rustc yourcode.rs --epoch=2018 or cargo build after modifying the epoch key in Cargo.toml.

                                              Which you’ll only do when upgrading to the epoch.

                                              Rust does not deprecate programming within the old epoch, so it continues to work. Five epochs from now the “2015 epoch” and “2018 epoch” should still work with the latest compiler.

                                              Also, Rust does not automatically choose the latest epoch for you: it defaults to the 2015 epoch when unspecified. The only time it uses the latest epoch is when you create a new project with cargo.
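                                              Concretely, the opt-in is a one-line change in Cargo.toml, something like this (the key name may still change before it ships):

                                              [package]
                                              name = "mycrate"
                                              version = "0.1.0"
                                              # Opt in to the new epoch; omitting the key keeps the 2015 default.
                                              epoch = "2018"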

                                              Does this make sense?

                                              1. 5

                                                it is a breaking change that exceeds the threshold for even major version changes.

                                                Not really. The breaking changes we’re talking about are “turn some warnings into errors” and “add new keywords”. Fundamentally, the changes possible are very limited. It’s a core constraint of epochs. Also, as was said elsewhere, your code shouldn’t break without an explicit action on your part. And the whole thing is interoperable.

                                                The closest analogy is the C++ and Java systems. Each new release does technically break some things, but they’re very minor, and so to most people they’re languages that “never break things.”

                                                1. 3

                                                  Major version changes would include things like removing language features, or adding a mandatory garbage collecting runtime, or anything else that would prevent older code and newer code from working together.

                                                  An epoch explicitly cannot introduce such incompatible changes; it must always be possible for old-epoch crates and new-epoch crates to be used together. It might not be possible to copy/paste code between them, i.e. they may not be source compatible, but they should compile down to the same ABI, effectively.

                                                  This is not exactly the same thing as ABI stability; a crate compiled by an old-epoch compiler and a crate compiled by a new-epoch compiler probably won’t be compatible. However, the Rust project has promised that every 1.x Rust compiler will always support every previous epoch, so you can take your new-epoch compiler, compile one crate in old-epoch mode and another crate in new-epoch mode, and then they’ll be compatible.

                                              2. 5

                                                The canonical text is https://github.com/rust-lang/rfcs/blob/master/text/2052-epochs.md . It should explain it reasonably well, if you’re the kind of person who follows Rust development.

                                                That said, we haven’t actually done one yet, so it’s kinda hard to point to an example, it’s all theory at the moment. They also may not even be called “epochs” when we actually ship one. The name is… okay. I like it for its bad qualities, which probably means we should change it :p

                                              1. 5

                                                Stable (even if “preview”) rustfmt!

                                                1. 1

                                                  Is this a rewrite of the old rustfmt?

                                                  1. 6

                                                    Nah, it’s the same one.

                                                    What happened was that rustfmt was moved into the distribution, but this process made it nightly-only for a while. It’s back now.

                                                1. 5

                                                  Interesting proposal, though it makes me wonder about something. Full Disclosure up front: my primary languages are C, Haskell and Lua; I’ve tried Rust numerous times and found it annoying/unusable enough that it will likely be a while before I take another serious look at programming in it. Having said that, programming language and library design is still something I find quite interesting.

                                                  I remember early on in Rust’s creation, many of the proponents of the language wanted to clean up some of C++ (and, by extension, C)’s mistakes. One of the things that I heard talked about a lot was #ifdef soup for platform/os support.

                                                  Recognizing that I do not have a proposal for a better way of doing platform portability, is this proposal not just a reversal of that stance and an adoption of the equivalent mechanic?

                                                  If not, then what am I missing? Is Rust’s cfg somehow inherently better than the CPP’s #ifdef (more than just being in Rust’s macro system rather than a preprocessor)? If so, was this digression just a mistake/red herring (there’s nothing wrong with that, by the way – everyone makes mistakes); or something else?

                                                  1. 11

                                                    I don’t think it’s a reversal of Rust’s previous stance. The current way of achieving portability in Rust is by putting platform-agnostic things in the stdlib, etc. This doesn’t change that.

                                                    I’m not very clear what “digression” you’re talking about.

                                                    Rust has always had cfg, and Rust libs have always used it for portability.

                                                    What this is proposing is that the stdlib should:

                                                    • work well if you turn off certain components (e.g. threads)
                                                    • turn off components which aren’t supported on a given platform.

                                                    Before, you simply couldn’t use the stdlib for these platforms.
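                                                    For reference, the cfg gating in question looks like this – an item only exists when the target matches:

                                                    // The same mechanism libraries already use for portability; the
                                                    // proposal extends this kind of gating to pieces of the stdlib.
                                                    #[cfg(unix)]
                                                    fn null_device() -> &'static str {
                                                        "/dev/null"
                                                    }

                                                    #[cfg(windows)]
                                                    fn null_device() -> &'static str {
                                                        "NUL"
                                                    }

                                                    fn main() {
                                                        // On a target that is neither unix nor windows this fails to
                                                        // compile, which is the point: unsupported items just don't exist.
                                                        println!("{}", null_device());
                                                    }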

                                                  1. 6

                                                    I really need to get around to writing my sum type proposal for Go.

                                                    Instead of introducing an entirely new feature, the idea is to tweak the existing features to support it.

                                                    The bare idea is simple: “closed” interfaces. If you declare an interface as closed, you pre-declare all the types that belong to it, and that’s it. The syntax could be something like

                                                    type Variant1 struct {..}
                                                    type Variant2 struct {..}
                                                    type Foo interface {
                                                        // methods
                                                        for Variant1, Variant2
                                                    }
                                                    

                                                    You continue to use type switches (I love type switches) with these interfaces, except that the default case can’t be used for exhaustive switches (you can also enforce that in non-exhaustive switches).

                                                    It would also be nice to lift the restriction on implementing methods on interfaces, and to make it possible to run interface methods on explicitly-interface (not downcasted) types. There was a proposal for that too.

                                                    Under the hood, these could possibly be implemented as stack-based discriminated unions instead of vtable’d pointers, though there might be tricky GC interaction there.

                                                    I haven’t really written this up properly, but I suspect it might “fit well” in Go and be nicer than directly adding sum types as a new thing.

                                                    1. 4

                                                      I encourage you to make this suggestion on https://github.com/golang/go/issues/19412 which was recently marked as “For Investigation” which suggests someone is collating ideas.

                                                      1. 2

                                                        +1, that’s an excellent thread with a lot of interesting insights about the constraints that the core language devs are battling with when introducing a new feature like this. It’s super long but I’ve found the discussion to be really informative.

                                                        1. 1

                                                          I actually just wrote https://manishearth.github.io/blog/2018/02/01/a-rough-proposal-for-sum-types-in-go/ yesterday

                                                          But I don’t have the time/desire to really push for this. Feel free to use this proposal if you would like to push for it!

                                                      1. 2

                                                        This is huge! It’d be so cool to see the JS community get into Rust! Maybe someone needs to write a Rust-based jQuery or React?

                                                        1. 9

                                                          WASM isn’t really great for interfacing with the DOM. You can farm out expensive computations to WASM (“calculate this” / “parse this” / “do some string operations”), but tight DOM work will not help performance much (if at all; I’d expect it to slow down), and will have questionable benefits as to type safety (TypeScript is a better way to get that working well).

                                                          DOM manipulation isn’t slow because JS is slow, it’s slow because it’s inherently slow – it retriggers restyles and relayouts and a lot of other computations in the browser. Rust can’t help with that.
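                                                          The sweet spot is exporting pure computations and leaving the DOM to JS. A minimal sketch, assuming the wasm32-unknown-unknown target:

                                                          // A pure computation exported for JS to call; no DOM access here.
                                                          // (Illustrative only – overflows u64 past n = 93.)
                                                          #[no_mangle]
                                                          pub extern "C" fn nth_fib(n: u32) -> u64 {
                                                              let (mut a, mut b) = (0u64, 1u64);
                                                              for _ in 0..n {
                                                                  let next = a + b;
                                                                  a = b;
                                                                  b = next;
                                                              }
                                                              a
                                                          }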

                                                        1. 0

                                                          I have this horrible, horrible feeling that Rust is becoming the new Perl. This all reminds me of when Perl added “object orientation” and things became more confusing and hard to understand for passers-by like myself.

                                                          1. 5

                                                            Every serious new language needs to be able to solve the c10k problem; we knew Rust would need async I/O sooner or later.

                                                            What is ugly is the proliferation of macros when a proper general-purpose solution is possible. If you look at the final example from the link, async! is fulfilling exactly the same role as the notorious try!; if the language adopted HKT, it could build a single reusable standard form of “do notation” into the language and reuse it for Result, async, Option, and many other things: https://philipnilsson.github.io/Badness10k/escaping-hell-with-monads/
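                                                            For anyone unfamiliar, try! is roughly this much sugar, and async!/await! fills the same early-exit slot for futures – exactly the shape do notation abstracts over. A simplified sketch of the expansion:

                                                            // Roughly what `try!(expr)` expands to: unwrap Ok, or early-return
                                                            // the error (converted via From).
                                                            macro_rules! my_try {
                                                                ($e:expr) => {
                                                                    match $e {
                                                                        Ok(val) => val,
                                                                        Err(err) => return Err(From::from(err)),
                                                                    }
                                                                };
                                                            }

                                                            fn parse_two(a: &str, b: &str) -> Result<(i32, i32), std::num::ParseIntError> {
                                                                let x = my_try!(a.parse::<i32>());
                                                                let y = my_try!(b.parse::<i32>());
                                                                Ok((x, y))
                                                            }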

                                                            1. 6

                                                              It is extremely unclear that a strongly-typed do notation is possible in Rust. It’s also not clear if we’ll ever get HKT directly; GAT gives us equivalent power, but fits in with the rest of stuff more cleanly.

                                                              1. 3

                                                                What is GAT?

                                                                1. 2

                                                                  generic associated types

                                                              2. 1

                                                                I think async! will eventually be made into a language feature (a couple of community members have proposals for this); it’s just that we’re experimenting with it as a proc macro, because we can. It’s way more annoying to experiment with features baked into the language.

                                                              3. 1

                                                                The only language feature added here is generators (and possibly async/await sugar); everything else will be a library. Both things are quite common amongst languages, and shouldn’t be too confusing.

                                                                Everything else listed here is a library that you only deal with when you are doing async stuff. And the complexities listed are largely internal complexities; tokio should be pretty pleasant to work with.

                                                              1. 3

                                                                 I’m very inexperienced and new to Rust, but the times I tried to do something, I too found it painful that the good stuff is nightly-only.

                                                                1. 2

                                                                  What things do you run into?

                                                                  1. 2

                                                                    One example: Clippy requires nightly. I noticed I could run clippy+nightly on my codebase that aims at stable, but it’s still odd :-)

                                                                    Also, tokio

                                                                    1. 4

                                                                      Cool, thanks!

                                                                      Yeah, developer tools are still on nightly. rustfmt will be on stable as of the next release, with the rls following closely, and clippy at some point.

                                                                       Tokio is in a weird spot; it doesn’t require nightly, but some nightly features make it more ergonomic – impl Trait is almost here!

                                                                      Thanks again for taking the time.

                                                                      1. 3

                                                                        Clippy is a tool, so it’s kinda been low priority. Using it doesn’t force your library to use nightly, it just means you have to locally use nightly when running clippy. But we’re working on making it stable!

                                                                        tokio is pretty new and experimental and some of the stuff relies on experimental new features in the compiler built for tokio (i.e. generators). It’s getting there.

                                                                        There will always be new shiny stuff on nightly :)

                                                                  1. -1

                                                                     I think there’s a question that should be asked: would this be found if Firefox were a GPL project, and should we be primarily contributing to GPL projects, since ALL of it must be shared?

                                                                    1. 8

                                                                       That is irrelevant. The Linux kernel is GPL and yet you don’t get immediate access to all development done by companies around it. Most will throw you a tarball of the source code over the wall once in a while (see Google Android). They can develop an auto-install feature, use it to distribute a payload, and show you the code months later; heck, they don’t even have to if the payload is a loadable Linux kernel module.

                                                                       In this specific case, the extension is actually shared and open source. So was the code used to deploy the plugin/shield study. However, that doesn’t prevent a valid use case (deploying opt-in user studies) from being misused as an advertising channel (a TV show tie-in piggybacking on your consent to help with user studies).

                                                                      1. 2

                                                                         I guess then the answer is: don’t contribute to corporate-maintained repositories, and we should be using a non-corporate browser.

                                                                        1. 12

                                                                          Firefox is the closest to a non-corporate browser you can get. Essentially there are only 4 serious web rendering engines still in active development:

                                                                          • WebKit (derived from KHTML) maintained & developed mainly by Apple
                                                                           • Blink, forked off of WebKit by Google
                                                                          • Gecko maintained & developed by Mozilla
                                                                          • Trident/Edge developed by Microsoft

                                                                           Those companies have the resources to push development and keep up with security updates. Developing a web browser rendering engine is a very resource-intensive process. If you switch to a browser that just consumes one of those, then you are really not changing anything – that browser is at the mercy of the upstream vendor and will lag with security updates. If you find a browser that actually forks one of the above, then you run the risk of them not keeping up with security & development.

                                                                          1. 5

                                                                            This is true, but it’s very important to note that if you install Firefox or Chromium from a distro like Debian, they will do the work of stripping out the tracking misfeatures while still applying critical security updates from upstream. The whole job of the Debian maintainers in this case is to protect users from exactly this situation, and they do a good job at it.

                                                                            1. 2

                                                                               Yes, I guess this is the heart of the problem. There really should be a community-driven browser, just as there is a community-driven operating system.

                                                                        2. 2

                                                                           The code was open source (https://github.com/mozilla/addon-wr), and even if it weren’t, addon code is shipped in source form, so you can inspect it on your end.

                                                                        1. 22

                                                                           Compiling Firefox on an 8-core Ryzen, targeting the host arch, takes between 10 and 15 minutes.

                                                                          1. 10

                                                                             Wow, that is fast – it takes ~2h on a build server that takes 7 hours for Chromium. All the recent Rust stuff really slowed it down.

                                                                            1. 6

                                                                               All the recent Rust stuff really slowed it down.

                                                                              Oof, yeah, I bet. Rust is notoriously slow to compile. Luckily, incremental compilation is in the nightly compiler right now. I’ve been using it wherever I can and it really does make a difference. But I suppose it wouldn’t do much for an initial compilation of a project. :(
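                                                                               If memory serves, the nightly opt-in is just an environment variable, something like:

                                                                               # opt in to incremental compilation on a nightly toolchain
                                                                               CARGO_INCREMENTAL=1 cargo build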

                                                                              1. 4

                                                                                 In this case a large chunk of this is just bindgen; we need to generate bindings, so we throw a libclang-based bindings generator at all of the header files. Twice (complicated reasons for this).

                                                                                 It’s also pretty single-threaded (codegen units will help with this, but currently don’t, and I have to investigate why).

                                                                                Incremental compilation and working codegen units and cleaning up the bindgen situation will help a lot. Going to take a while, though.

                                                                                1. 3

                                                                                  But I suppose it wouldn’t do much for an initial compilation of a project.

                                                                                  Also not going to help packagers who use only temporary compilation environments which are discarded after a package is built.

                                                                                  1. 6

                                                                                    Package managers (and regular builds also) should not be starting from scratch every time. Even if we insist on doing all source editing in ASCII, we need to be delivering modules as fully-typed, parsed ASTs.

                                                                                     This insistence on going back to plain source code every chance we get and starting over is easily wasting more energy than the Bitcoin bubble.

                                                                                    1. 9

                                                                                      Package managers (and regular builds also) should not be starting from scratch every time.

                                                                                      They should if you want reproducible builds.

                                                                                      1. 3

                                                                                         These are completely orthogonal. There’s nothing stopping reproducible builds where you run the entire pipeline, if you insist on comparing the hash of the source and the hash of the output. And you would still get the same benefit by comparing source <-> AST and AST <-> binary.

                                                                                        1. 1

                                                                                          Yeah, ideally a compiler would take a difference in source to a difference in output. Compiler differentiation?

                                                                                        2. 2

                                                                                         I believe you can do a decent amount of caching in this space? Like MSFT and co. have compile servers that will store incrementally compiled stuff for a lot of projects, so you’re only compiling changes.

                                                                                    2. 7

                                                                                       My non-beefy Lenovo X-series laptop takes ~45 minutes for a complete recompile, ~12min for average changes w/ ccache etc., and ~3min for JS-only changes (which you can do as artifact builds, so they’re always 3min unless you need to build C++/Rust components).

                                                                                    1. 32

                                                                                      I wasn’t implying. I was stating a fact.

                                                                                      And he’s wrong about that.

                                                                                      https://github.com/uutils/coreutils is a rewrite of a large chunk of coreutils in Rust. POSIX-compatible.

                                                                                      1. 12

                                                                                        So on OpenBSD amd64 (the only arch rust runs on… there are at least 9 others, 8, 7, or 6 of which rust doesn’t even support!)… this fails to build:

                                                                                        error: aborting due to 19 previous errors
                                                                                        
                                                                                        error: Could not compile `nix`.
                                                                                        warning: build failed, waiting for other jobs to finish...
                                                                                        error: build failed
                                                                                        
                                                                                        1. 8

                                                                                          Yep. The nix crate only supports FreeBSD currently.

                                                                                          https://github.com/nix-rust/nix#supported-platforms

                                                                                        2. 8

                                                                                          The openbsd guys are stubborn of course, though they might have a point. tbh somebody could just fork a BSD OS to make this happen. rutsybsd or whatever you want to call it.

                                                                                          edit: just tried to build what you linked; does cargo pin versions and verify the downloads? Fetching so many dependencies at build time makes me super nervous. Are all those dependencies BSD licensed? It didn’t even compile on my machine; maybe the nixos version of rust is too old. I don’t know if the rust ecosystem is stable enough to base an OS on yet without constantly fixing broken builds.

                                                                                          1. 10

                                                                                            just tried to build what you linked, does cargo pin versions and verify the downloads?

                                                                                            Cargo pins versions in Cargo.lock, and coreutils has one https://github.com/uutils/coreutils/blob/master/Cargo.lock.

                                                                                            Cargo checks download integrity against the registry.

                                                                                            For offline builds, you can vendor the dependencies: https://github.com/alexcrichton/cargo-vendor, downloading them all and working from them.

                                                                                            Are all those dependencies BSD licensed?

                                                                                            Yes. Using: https://github.com/onur/cargo-license

                                                                                            Apache-2.0/MIT (50): bit-set, bit-vec, bitflags, bitflags, block-buffer, byte-tools, cc, cfg-if, chrono, cmake, digest, either, fake-simd, filetime, fnv, getopts, glob, half, itertools, lazy_static, libc, md5, nodrop, num, num-integer, num-iter, num-traits, num_cpus, pkg-config, quick-error, rand, regex, regex-syntax, remove_dir_all, semver, semver-parser, sha2, sha3, tempdir, tempfile, thread_local, time, typenum, unicode-width, unindent, unix_socket, unreachable, vec_map, walker, xattr

                                                                                            BSD-3-Clause (3): fuchsia-zircon, fuchsia-zircon-sys, sha1

                                                                                            MIT (21): advapi32-sys, ansi_term, atty, clap, data-encoding, generic-array, kernel32-sys, nix, onig, onig_sys, pretty-bytes, redox_syscall, redox_termios, strsim, term_grid, termion, termsize, textwrap, void, winapi, winapi-build

                                                                                            MIT OR Apache-2.0 (2): hex, ioctl-sys

                                                                                            MIT/Unlicense (7): aho-corasick, byteorder, memchr, same-file, utf8-ranges, walkdir, walkdir

                                                                                            It didn’t even compile on my machine, maybe the nixos version of rust is too old - i don’t know if the rust ecosystem is stable enough to base an OS on yet without constantly fixing broken builds.

                                                                                            This is one of my frequent outstanding annoyances with Rust currently: I don’t have a problem with people using the newest version of the language as long as their software is not being shipped on something with constraints, but they should at least document and test the minimum version of rustc they support.

                                                                                            coreutils just checks against “stable”, which moves every 6 weeks: https://github.com/uutils/coreutils/blob/master/.travis.yml

                                                                                            Can you give me the output of rustc --version?

                                                                                            Still, “commitment to stability” is a function of adoption. If, say, Ubuntu starts shipping a Rust version in an LTS release, more and more people will try to stay backward compatible with that.

                                                                                            1. 2

                                                                                              rustc 1.17.0 cargo 0.18.0

                                                                                              1. 11

                                                                                                You’re probably hitting https://github.com/uutils/coreutils/issues/1064 then.

                                                                                                Also, looking at it, they do indeed use combinator functionality that only became available in Rust 1.19.0. std::cmp::Reverse can easily be dropped and replaced by other code if 1.17.0 support were needed.
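
                                                                                                For illustration, a minimal sketch (mine, not the actual uutils code) of what that replacement could look like:

                                                                                                    fn main() {
                                                                                                        let mut v = vec![3, 1, 2];

                                                                                                        // Rust 1.19+: sort in descending order via std::cmp::Reverse.
                                                                                                        v.sort_by_key(|&x| std::cmp::Reverse(x));
                                                                                                        assert_eq!(v, [3, 2, 1]);

                                                                                                        // On older compilers such as 1.17: flip the comparator instead.
                                                                                                        v.sort_by(|a, b| b.cmp(a));
                                                                                                        assert_eq!(v, [3, 2, 1]);
                                                                                                    }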

                                                                                                Thanks, I filed https://github.com/uutils/coreutils/issues/1100, asking for better docs.

                                                                                                1. 1

                                                                                                  thanks for doing that, great community outreach :P

                                                                                            2. 5

                                                                                              Rust is “stable” in the sense that it is backwards compatible. However, it is evolving rapidly, so new crates or updates to crates may require the latest compiler. This doesn’t mean you’ll have to constantly fix broken builds; just that pulling in new crates may require you to update to the latest compiler.

                                                                                              1. 4

                                                                                                Yes, Cargo writes a Cargo.lock file with versions and hashes. Application developers are encouraged to commit it into version control.

                                                                                                Dependencies are mostly MIT/Apache in the Rust world. You can use cargo-license to quickly look at the licenses of all dependencies.

                                                                                                Redox OS is fully based on Rust :)

                                                                                              2. 4

                                                                                                Although you’re right to point out that project, one of Theo’s arguments had to do with compilation speeds:

                                                                                                By the way, this is how long it takes to compile our grep:

                                                                                                0m00.62s real 0m00.63s user 0m00.53s system

                                                                                                … which is currently quite undoable for any Rust project, I believe. Cannot say if he’s exaggerating how important this is, though.

                                                                                                1. 10

                                                                                                  Now, at least for GNU coreutils, ./configure alone runs for a good chunk of the time the Rust coreutils need to compile (2min for a full Rust release build, vs 1m20s just for configure). The GNU build step itself is faster, though (coreutils takes a minute).

                                                                                                  Sure, this is comparing apples and oranges a little. Different software, different development states, different support. The Rust compiler uses 4 cores during all that (especially due to cargo running parallel builds); GNU coreutils doesn’t do that by default (with -j4 it only takes 17s). On the other hand: all the crates that cargo builds can be shared. That means, on a build farm, you have nice small pieces that you know you can cache - obviously just once per rustc/crate pairing.

                                                                                                  Also, obviously, build farms will pull all kinds of stunts to accelerate things and the Rust community still has to grow a lot of that tooling, but I don’t perceive the problem as fundamental.

                                                                                                  EDIT: heh, forgot --release. And that for me. Adjusted the wording and the times.

                                                                                                  1. 5

                                                                                                    OpenBSD doesn’t use GNU coreutils, either; they have their own implementation of the base utils in their tree (here’s the implementation of ls, for example). As I understand it, there’s lots of reasons they don’t use GNU coreutils, but complexity (of the code, the tooling, and the utils themselves) is near the top of the list.

                                                                                                    1. 6

                                                                                                      Probably because most (all?) of the OpenBSD versions of the core utilities existed before GNU did, let alone GNU coreutils. OpenBSD is a direct descendant of Berkeley’s BSD. Not to mention the licensing problem: GNU is all about the GPL; OpenBSD is all about the BSD (and its friends) license. Not that your reason isn’t also probably true.

                                                                                                    2. 2

                                                                                                      That means, on a build farm, you have nice small pieces that you know you can cache - obviously just once per rustc/crate pairing.

                                                                                                      FWIW sccache does this I think

                                                                                                    3. 7

                                                                                                      I think it would be more fair to look at how long it takes the average developer to knock out code-level safety issues + compiles on a modern machine. I think Rust might be faster per module of code. From there, incremental builds and caching will help a lot. This is another strawman excuse, though, since the Wirth-like languages could’ve been easily modified to output C, input C, turn safety off when needed, and so on. They compile faster than C on about any CPU. They’re safe-by-default. The runtime code is acceptable with it improving even better if outputting C to leverage their compilers.

                                                                                                      Many defenses of not using safe languages are that easy to discount. And OpenBSD is special because someone will point out that porting a Wirth-like compiler is a bit of work. It’s not even a fraction of the work and expertise required for their C-based mitigations. Even those might have been easier to do in a less-messy language. They’re motivated more by their culture and preferences than any technical argument about a language.

                                                                                                      1. 3

                                                                                                        It’s a show stopper.

                                                                                                        Slow compile times are a massive problem for C++, honestly I would say it’s one of the biggest problems with the language, and rustc is 1-2 orders of magnitude slower still.

                                                                                                        1. 12

                                                                                                          It’s a show stopper.

                                                                                                          Hm, yet, last time I checked, C++ was relatively popular, Java (also not the fastest in compilation) is doing fine and scalac is still around. There are people working on alternatives, but show stopper?

                                                                                                          Sure, it’s a huge annoyance for “build-the-world” approaches, but well…

                                                                                                          Slow compile times are a massive problem for C++, honestly I would say it’s one of the biggest problems with the language, and rustc is 1-2 orders of magnitude slower still.

                                                                                                          This heavily depends on the workload. rustc is quite fast when talking about rather non-generic code. The advantage of Rust over C++ is that coding in mostly non-generic Rust is a viable C alternative (and the language is built with that in mind), while a lot of C++ just isn’t very useful over C if you don’t rely on templates very much.

                                                                                                          Also, stable rustc is a little over 2 years old, whereas C/C++ compilers had an ample head start there.

                                                                                                          I’m not saying the problem isn’t there, it has to be seen in context.

                                                                                                          1. 9

                                                                                                            C++ was relatively popular, Java (also not the fastest in compilation) is doing fine and scalac is still around.

                                                                                                            Indeed, outside of gamedev most people place zero value on fast iteration times (which unfortunately also implies they place zero value on product quality).

                                                                                                            rustc is quite fast when talking about rather non-generic code.

                                                                                                            That’s not even remotely true.

                                                                                                            I don’t have specific benchmarks because I haven’t used rust for years, but see this post from 6 months ago that says it takes 15 seconds to build 8k lines of code. The sqlite amalgamated build is 200k lines of code and has to compile on a single core because it’s one compilation unit, and still only takes a few seconds. My C++ game engine is something like 80k lines if you include all the libraries, and builds in like 4 seconds with almost no effort spent making it compile fast.

                                                                                                            edit: from your coreutils example above, rustc takes 2 minutes to build 43k LOC, gcc takes 17 seconds to build 270k, which makes rustc 44x slower…

                                                                                                            The last company I worked at had C++ builds that took many hours and to my knowledge that’s pretty standard. Even if you (very) conservatively say rustc is only 10x slower, they would be looking at compile times measured in days.

                                                                                                            while a lot of C++ just isn’t very useful over C if you don’t rely on templates very much.

                                                                                                            That’s also not true at all. Only small parts of a C++ codebase need templates, and you can easily make those templates simple enough that it has little to no effect on compile times.

                                                                                                            Also, stable rustc is a little over 2 years old, whereas C/C++ compilers had an ample head start there.

                                                                                                            gcc has gotten slower over the years…

                                                                                                            1. 6

                                                                                                              Even if you (very) conservatively say rustc is only 10x slower,

                                                                                                              Rustc isn’t slower to compile than C++. Depends on the amount of generics you use, but the same argument goes for C++ and templates. Rust does lend itself to more usage of generics which leads to more compact but slower-compiling code, which does mean that your time-per-LOC is higher for Rust, but that’s not a very useful metric. Dividing LOCs is not going to get you a useful measure of how fast the compiler is. I say this as someone who has worked on both a huge Rust and a huge C++ codebase and know what the compile times are like. Perhaps slightly worse for Rust but not like a 2x+ factor.
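
                                                                                                              To make the generics point concrete, here is a minimal sketch (my own illustration, from neither codebase) of why generic-heavy code costs more compile time: the compiler monomorphizes and optimizes every instantiation separately, whereas a trait object is compiled once:

                                                                                                                  use std::fmt::Display;

                                                                                                                  // Monomorphized: the compiler generates and optimizes one copy of
                                                                                                                  // `largest` for every concrete T it is instantiated with.
                                                                                                                  fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
                                                                                                                      let mut max = items[0];
                                                                                                                      for &it in &items[1..] {
                                                                                                                          if it > max {
                                                                                                                              max = it;
                                                                                                                          }
                                                                                                                      }
                                                                                                                      max
                                                                                                                  }

                                                                                                                  // Dynamic dispatch: exactly one compiled copy, at the cost of a vtable call.
                                                                                                                  fn print_all(items: &[&dyn Display]) {
                                                                                                                      for it in items {
                                                                                                                          println!("{}", it);
                                                                                                                      }
                                                                                                                  }

                                                                                                                  fn main() {
                                                                                                                      println!("{}", largest(&[1, 5, 3]));  // instantiates largest::<i32>
                                                                                                                      println!("{}", largest(&[1.0, 0.5])); // instantiates largest::<f64>
                                                                                                                      let a: &dyn Display = &42;
                                                                                                                      let b: &dyn Display = "hi";
                                                                                                                      print_all(&[a, b]); // one compiled copy serves both types
                                                                                                                  }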

                                                                                                              The main compilation speed problem of Rust vs C++ is that it’s harder to parallelize Rust compilations (large compilation units) which kind of leads to bottleneck crates. Incremental compilation helps here, and codegen-units already works.

                                                                                                              Rust vs C is a whole other ball game though. The same ball game as C++ vs C.

                                                                                                              1. 2

                                                                                                                That post, this post, my experience, lines, seconds… very scientific :) Hardware can be wildly different, lines of code can be wildly different (especially in the amount of generics used), and the amount of lines necessary to do something can be a lot smaller in Rust, especially vs. plain C.

                                                                                                                To add another unscientific comparison :) Servo release build from scratch on my machine (Ryzen 7 1700 @ 3.9GHz, SATA SSD) takes about 30 minutes. Firefox release build takes a bit more. Chromium… even more, closer to an hour. These are all different codebases, but they all implement a web browser, and the compile times are all in the same ballpark. So rustc is certainly not that much slower than clang++.

                                                                                                                Only small parts of a C++ codebase need templates

                                                                                                                Maybe you write templates rarely, but typical modern C++ uses them all over the place. As in, every STL container/smart pointer/algorithm/whatever is a template.

                                                                                                                1. 2

                                                                                                                  To add another unscientific comparison :) Servo release build from scratch on my machine (Ryzen 7 1700 @ 3.9GHz, SATA SSD) takes about 30 minutes. Firefox release build takes a bit more. Chromium… even more, closer to an hour. These are all different codebases, but they all implement a web browser, and the compile times are all in the same ballpark. So rustc is certainly not that much slower than clang++.

                                                                                                                  • Firefox 35.9M lines of code
                                                                                                                  • Chromium 18.1M lines of code
                                                                                                                  • Servo 2.25M lines of code

                                                                                                                  You’re saying that compiling 2.25M lines of code for a not-feature-complete browser in 30 minutes is comparable to compiling 18–35M lines of code in “a bit more”?

                                                                                                                  1. 4

                                                                                                                    Line counters like this one are entirely wrong.

                                                                                                                    This thing only counted https://github.com/servo/servo. Servo code is actually split among many many repositories.

                                                                                                                    HTML parser, CSS parser, URL parser, WebRender, animation, font sanitizer, IPC, sandbox, SpiderMonkey JS engine (C++), Firefox’s media playback (C++), Firefox’s canvas thingy with Skia (C++), HarfBuzz text shaping (C++) and more other stuff — all of this is included in the 30 minutes!

                                                                                                                    plus,

                                                                                                                    the amount of lines necessary to do something can be a lot smaller in Rust

                                                                                                                    1. 2

                                                                                                                      Agreed, it grossly underestimates how much code Chromium contains. You are aware of the horrible depot_tools and the amount of stuff they pull in?

                                                                                                                      My point was, you are comparing a feature-incomplete browser whose code base is smaller by at least an order of magnitude, and which takes 30 minutes, to the “closer to an hour” of Chromium. I think your argument doesn’t hold; you are free to provide data to prove me wrong.

                                                                                                                    2. 3

                                                                                                                      Servo’s not a monolithic codebase. Firefox is monolithic. It’s a bad comparison.

                                                                                                                      Chromium is also mostly monolithic IIRC.

                                                                                                            2. 2

                                                                                                              Free- and OpenBSD can compile their userland from source, so decent compile times are of the essence, especially if you are targeting multiple architectures.

                                                                                                            3. 6

                                                                                                              Well, ls is listed as only semi done, so he’s only semi wrong. :)

                                                                                                              1. 11

                                                                                                                The magic words being “There has been no attempt”. With that, especially by saying “attempt”, he’s completely wrong. There have been attempts. At everything he lists. (he lists more here: https://www.youtube.com/watch?v=fYgG0ds2_UQ&feature=youtu.be&t=2112 all of what Theo mentions has been written, in Rust, some even have multiple projects, and very serious ones at that)

                                                                                                                For a more direct approach at BSD utils, there’s the redox core utils, which are BSD-util based. https://github.com/redox-os/coreutils

                                                                                                                1. 2

                                                                                                                  Other magic words are “POSIX compatible”. Neither redox-os nor the uutils linked by @Manishearth seem to care particularly about this. I haven’t looked all that closely, but picking some random utils shows that none of them is fully compliant. It’s not even close, so surely they can’t be considered valid replacements of the C originals.

                                                                                                                  For example (assuming that I read the source code correctly) both implementations of cat lack the only POSIX-required option -u and the implementations of pwd lack both -L and -P. These are very simple tools and are considered done at least by uutils…
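
                                                                                                                  For context, -u only requires cat to write output without delay between reading and writing; a minimal sketch (mine, not code from either project) of what supporting it could look like:

                                                                                                                      use std::io::{self, Read, Write};

                                                                                                                      fn main() -> io::Result<()> {
                                                                                                                          // POSIX cat's only required option: -u, i.e. unbuffered output.
                                                                                                                          let unbuffered = std::env::args().any(|a| a == "-u");

                                                                                                                          let stdin = io::stdin();
                                                                                                                          let stdout = io::stdout();
                                                                                                                          let mut input = stdin.lock();
                                                                                                                          let mut out = stdout.lock();
                                                                                                                          let mut buf = [0u8; 4096];
                                                                                                                          loop {
                                                                                                                              let n = input.read(&mut buf)?;
                                                                                                                              if n == 0 {
                                                                                                                                  break; // EOF
                                                                                                                              }
                                                                                                                              out.write_all(&buf[..n])?;
                                                                                                                              if unbuffered {
                                                                                                                                  out.flush()?; // -u: no delay between reading and writing
                                                                                                                              }
                                                                                                                          }
                                                                                                                          Ok(())
                                                                                                                      }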

                                                                                                                  So, Theo may be wrong by saying that no attempts have been made, but I believe a whole lot of rather hard work still needs to be done before he will acknowledge serious efforts.

                                                                                                                  1. 5

                                                                                                                    This rapidly will devolve into a no true scotsman argument.

                                                                                                                    https://github.com/uutils/coreutils#run-busybox-tests

                                                                                                                    uutils is running the busybox tests. Which admittedly test for something other than POSIX compliance, but neither the GNU nor the BSD coreutils are POSIX-compliant anyway.

                                                                                                                    uutils is based on the GNU coreutils, redox’s ones are based on the BSD ones, which is certainly a step in the right direction and can certainly be counted as an attempt.

                                                                                                                    For example (assuming that I read the source code correctly) both implementations of cat lack the only POSIX-required option -u and the implementations of pwd lack both -L and -P.

                                                                                                                    Nobody said they were complete.

                                                                                                                    All we’re talking about is Theo’s rather strong point that “there has been no attempt”. There has.

                                                                                                              2. 1

                                                                                                                I’m curious about this statement by TdR in the linked email:

                                                                                                                For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.

                                                                                                                Is this true?

                                                                                                                1. 15

                                                                                                                  As always with these complaints, I can’t find any reference to exact issues. What’s true is that LLVM uses quite a bit of memory to compile, and rustc builds tend not to be the smallest themselves. But not that big. Also, recent improvements have definitely helped here.

                                                                                                                  I do regularly build the full chain on an ACER C720P running FreeBSD, which has a Celeron and 2 GB of RAM. I have to shut down the X server and everything beforehand, but it works.

                                                                                                                  As usual, this is probably an issue of the kind “please report actual problems, and we’ll work on fixing them”. “We want to provide a build environment for OpenBSD and X, Y, Z is missing” is something we’d be happy to support; some fuzzy notion of “this doesn’t fulfill our (somewhat fuzzy) criteria” isn’t actionable.

                                                                                                                  Rust for Haiku does ship Rust with i386 binaries and bootstrapping compilers (stage0): http://rust-on-haiku.com/downloads

                                                                                                                  1. 10

                                                                                                                    As always with these complaints, I can’t find any reference to exact issues.

                                                                                                                    Only because it’s a thread on the OpenBSD mailing lists, people reading that list have the full context of the recent issues with Firefox and Rust.

                                                                                                                    I’ll assume you just don’t follow the list, so here is the relevant thread: lang/rust: update to 1.22.1

                                                                                                                    • For this release, I had lot of problem for updating i386 to 1.22.1 (too much memory pressure when compiling 1.22 with 1.21 version). So the bootstrap was initially regenerated by crosscompiling it from amd64, and next I regenerate a proper 1.22 bootstrap from i386. Build 1.22 with 1.22 seems to fit in memory.

                                                                                                                    As I do all this work with a dedicated host, it is possible that ENOMEM will come back in bulk.

                                                                                                                    And if the required memory still grows, rustc will be marked BROKEN on i386 (and firefox will not be available anymore on i386)

                                                                                                                    1. 7

                                                                                                                      Only because it’s a thread on the OpenBSD mailing lists, people reading that list have the full context of the recent issues with Firefox and Rust.

                                                                                                                      Sure, but has this:

                                                                                                                      And if the required memory still grows, rustc will be marked BROKEN on i386 (and firefox will not be available anymore on i386).

                                                                                                                      Reached the Rust maintainers? (thread on the internals mailing list, issue on rust-lang/rust?)

                                                                                                                      I’m happy to be corrected.

                                                                                                                      1. 7

                                                                                                                        Reached the Rust maintainers? (thread on the internals mailing list, issue on rust-lang/rust?)

                                                                                                                        I don’t know. I don’t follow rust development; however, the author of that email is a rust contributor, as I mentioned to you in the past, so I assume it’s known to people working on the project. Perhaps you should check on that internals mailing list; I checked rust-lang/rust on github but didn’t find anything relevant :)

                                                                                                                        1. 7

                                                                                                                          I checked IRLO (https://internals.rust-lang.org/) and also found nothing. (“internals”, by the way, refers to the compiler internals; we have no closed mailing list.) The problem on projects of this scale seems to be that information travel is a huge issue, and that leads to aggravation. The reason I’m asking is not that I want to disprove you; I just want to ensure that I don’t open a discussion that’s already happening somewhere just because something is going through social media currently.

                                                                                                                          Thanks for pointing that out, I will ensure there’s some discussion.

                                                                                                                          Reading the linked post, it seems to mostly be a regression when doing the jump between 1.21 to 1.22, so that should probably be a thing to keep an eye out for.

                                                                                                                        2. 2

                                                                                                                          Here’s a current Rust bug that makes life hard for people trying to work on newer platforms.

                                                                                                                    2. 2

                                                                                                                      I’m skeptical; this has certainly worked for me in the past.

                                                                                                                      I used 32 bit lab machines as a place to delegate builds to back when I was a student.

                                                                                                                      1. 4

                                                                                                                        Note that different operating systems will have different address space layout policies and limits. Your effective space can vary from possibly more than 3GB to possibly less than 2GB.

                                                                                                                  1. 4

                                                                                                                    “For a developer, the hardening effort could be a great boon, in that it could show nasty bugs early, it could make them easier to report, and it could add a lot of useful information to that report that makes them easier to fix too.”

                                                                                                                    This is actually one of the points fans of Design-by-Contract have been making, since it takes you right to the bug. Memory-safe languages can prevent them. You don’t see Linus adopting many things like that in this quest to squash all the bugs. I say he’s mostly talking.

                                                                                                                    Now, let’s say I tried to commit something with hardening. He wants it to show the bug with a report. It can sometimes be obvious where something was hit, but not always. So, an app gets hit with a non-obvious one, eventually triggering some containment code. I’m guessing the Linux kernel already has support for pulling the app’s code and data from memory to analyze it in a way that shows where the attack is? Or does he expect me to dump all of that into a file to pull off the machine for manual analysis? Or just the writable parts in memory? I’m just wondering what’s standard in terms of support infrastructure for those doing it his way. There could even be opportunities to design mitigations around it.

                                                                                                                    1. 6

                                                                                                                      You don’t see Linus adopting many things like that in this quest to squash all the bugs. I say he’s mostly talking.

                                                                                                                      I say this a lot whenever the new userspace rant crops up.

                                                                                                                      And not even in the context of memory safe languages. It’s far more basic than that. Linux doesn’t really have an extensive set of API/regression tests or a test infrastructure.

                                                                                                                      Without any of that, “don’t break userspace” is completely hollow. It’s really “don’t let me see you breaking userspace”; if folks actually cared about that that much then they would test for it.

                                                                                                                      This is also why I mostly consider attempts to rewrite linux in a safer language premature; without good testing it’s just not going to be doable.

                                                                                                                      Browsers are quite similar to operating systems in many ways (specifically, that they expose a large API/ecosystem within which you can program, and have a huge base of programs written for them). Browsers have extensive tests which go everywhere from testing the basic behavior of a feature to its million edge cases, including “nobody should write code that relies on this but we’re going to test it anyway” edge cases. When we did the Stylo work for Firefox a large, possibly majority, component of the work was just getting all these tests to pass, because we had lots of edge cases we missed. I can’t even begin to imagine how we’d do it without tests. I can’t even begin to imagine how a project like Linux would do it without tests.

                                                                                                                      1. 3

                                                                                                                        I didn’t know they were lacking a test infrastructure. Yeah, that’s even worse than what I was saying. I especially like your characterization here:

                                                                                                                        “Without any of that, “don’t break userspace” is completely hollow. It’s really “don’t let me see you breaking userspace”; if folks actually cared about that that much then they would test for it.”

                                                                                                                        Yeah, this stuff is Linus’ ego until they get tests or contracts helping ensure that behavior. I also remember the CompSci people bug-hunting the APIs had problems due to under- or unspecified components. They had to reverse engineer things a bit while they wrote the formal specs. They all found bugs, too.

                                                                                                                        1. 2

                                                                                                                          It’s not like the kernel doesn’t get tested, though: https://stackoverflow.com/a/3180642/942130

                                                                                                                          1. 2

                                                                                                                            I expected a little testing like that. Manishearth’s point, and mine, is that this is a huge, critical project with more contributors than most, whose leader is supposedly all about protecting the stability of userspace. Yet there’s no testing infrastructure for doing that, while smaller projects and startups routinely pull that off for their growing codebases.

                                                                                                                            So, Linus is a hypocrite for not doing what he can on the testing side. There’s also a benefit to submitters, who could run the tests to spot breakage before submitting.

                                                                                                                    1. 5

                                                                                                                      The post in question Big-O: how code slows as data grows

                                                                                                                      The comment by ‘pyon’:

                                                                                                                      You should be ashamed of this post. How dare you mislead your readers? In amortized analysis, earlier cheap operations pay the cost of later expensive ones. By the time you need to perform an expensive operation, you will have performed enough cheap ones, so that the cost of the entire sequence of operations is bounded above by the sum of their amortized costs. To fix your list example: a sequence of cheap list inserts pays the cost of the expensive one that comes next.

                                                                                                                      If you discard the emotion, he offers a fairly interesting additional note about what amortized analysis means. Instead of giving the information value, Ned reacts to the part that questions his authority. It takes a brittle ego to write a small novel’s worth of rhetoric instead of shrugging it off. Childish.
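
                                                                                                                      Stripped of the vitriol, the technical point is easy to check. Here is a minimal sketch (my own, from neither post) counting the work a capacity-doubling dynamic array does across n inserts:

                                                                                                                          // With capacity doubling, n pushes cost fewer than 3n element
                                                                                                                          // writes/copies in total: the cheap pushes pay for the occasional
                                                                                                                          // expensive reallocation, which is what "amortized O(1)" means.
                                                                                                                          fn main() {
                                                                                                                              let n: u64 = 1_000_000;
                                                                                                                              let (mut cap, mut len, mut moves) = (1u64, 0u64, 0u64);
                                                                                                                              for _ in 0..n {
                                                                                                                                  if len == cap {
                                                                                                                                      moves += len; // expensive operation: copy every element over
                                                                                                                                      cap *= 2;
                                                                                                                                  }
                                                                                                                                  moves += 1; // cheap operation: write the new element
                                                                                                                                  len += 1;
                                                                                                                              }
                                                                                                                              println!("{} pushes, {} moves, {:.2} moves/push", n, moves, moves as f64 / n as f64);
                                                                                                                          }

                                                                                                                      The occasional expensive copy is prepaid by the cheap pushes before it, so the average stays a small constant (about 2 moves per push here) however large n grows.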

                                                                                                                      1. 52

                                                                                                                        If @pyon had just phrased the first part of the comment like “You’re making a number of simplifications regarding ‘amortization’ here that I believe are important…”, this would probably not have escalated. This is what Ned means by being toxic: being correct, and being a douche about it.

                                                                                                                        1. 10

                                                                                                                          Indeed; the original article appeared on Lobsters and featured a thoughtful discussion on amortization.

                                                                                                                          1. 2

                                                                                                                            I wonder whether better word choice without changing the meaning would help one step earlier: the original post did include «you may see the word “amortized” thrown around. That’s a fancy word for “average”», which sounds a bit dismissive towards the actual theory. Something like «Notions of ‘‘amortized’’ and ‘‘average’’ complexity are close enough for most applications» would sound much more friendly.

                                                                                                                            (And then the follow-up paints the previous post as if it was a decision to omit a detail, instead of a minor incorrectness in the text as written, which can be (maybe unconsciously) used to paint the situation as «correctness versus politeness», and then options get represented as if they were mutually exclusive)

                                                                                                                            1. 4

                                                                                                                              I feel like that would have put the author in a more defensible position on this specific point, yes. Being clear about where additional nuance exists and where it doesn’t is something that anyone writing about technical subjects should strive for, simply because it’s useful to the reader.

                                                                                                                              I don’t think it’s likely that that clarification would have much of an effect on most readers, since the hypothetical reader who’s misled would have to study complexity theory for some years to get to the point where it’s relevant, and by that time they’ll probably have figured it out some other way. We should all be so lucky as to write things that need several years of study before their imperfections become clear. :)

                                                                                                                              But more to the point, while I can’t know anything about this specific commenter’s intent, somebody who’s determined to find fault can always do so. Nobody is perfect, and any piece of writing can be nit-picked.

                                                                                                                              1. 1

                                                                                                                                Several years sounds like an upper bound for an eventually successful attempt. A couple of months can be enough to reach the point in a good algorithms textbook where this difference becomes relevant and clear (and I do not mean that someone would do nothing but read the textbook).

                                                                                                                                I would hope that the best-case effect on the readers could be a strong hint that there is something to go find in a textbook. If someone has just found out that big-O notation exists and liked how it allows to explain the practical difference between some algorithms, it is exactly the time to tell them «there is much more of this topic to learn».

                                                                                                                                These two posts together theoretically could — as a background to the things actually discussed in them — create an opposite impression, but hopefully it is just my view as a person who already knows the actual details and no newbie will actually get the feeling that the details of the theory are useless and not interesting.

                                                                                                                                As for finding something to nitpick: my question was whether the tone of the original paragraph could have made it not «finding» but «noticing the obvious», and whether the tone may have turned the desire to post a «well, actually…» comment into the desire to complain. But probably nobody will ever know, not even the participants of the exchange.

                                                                                                                                1. 3

                                                                                                                                  Not having previous familiarity with this subject matter, I was guessing at how advanced the material was. :)

                                                                                                                                  I agree about your best case, and that it’s worth trying for whenever we write.

                                                                                                                                  I’ve never found anything that avoids the occasional “well, actually”, and not for want of trying. This is not an invitation to tell me how to; I think it’s best for everyone if we leave the topic there. :)

                                                                                                                                  1. 1

                                                                                                                                    I consider a polite «well, actually» a positive outcome… (Anything starting with a personal attack is not that, of course)

                                                                                                                          2. 25

                                                                                                                            It’s possible to share a fairly interesting additional note without also yelling at people. Regardless of what Pyon had to say, he was saying it in a very toxic manner. That’s also childish.

                                                                                                                            1. 5

                                                                                                                              Correct. But I don’t just care about the emotion. I care about the message.

                                                                                                                              Instead of trying to change the web into a safe haven of some kind, why not admire it in all its colors? Colors of mud and excrement among the colors of flowers and warmth, madness and clarity. You have very little power over whether people get angry or aggressive about petty things. But you can change a lot in yourself and not get worked up about everything that’s said. Teaching your community this skill is also pretty valuable in life overall.

                                                                                                                              1. 32

                                                                                                                                I don’t want my community to be defined by anger and aggression. I want beginners to feel like they can openly ask questions without being raged or laughed at. I want people to be able to share their knowledge without being told they don’t deserve to program. I want things to be better than they currently are.

                                                                                                                                Maintaining a welcoming, respectful community is hard work and depends on every member being committed to it. Part of that hard work is calling out toxic behavior.

                                                                                                                                1. 5

                                                                                                                                  I want beginners to feel like they can openly ask questions without being raged or laughed at.

                                                                                                                                  While I agree this is critically important, it’s not entirely fair to conflate “beginners asking questions” and “people writing authoritative blog posts”.

                                                                                                                                2. 10

                                                                                                                                  Yeah. That kind of self-regulation and dedication to finding signal in noise are endlessly rewarding traits worth practicing. And to extend your metaphor, we weed the garden because otherwise they’ll choke out some of the flowers.

                                                                                                                                  1. 5

                                                                                                                                    But I don’t just care about the emotion. I care about the message.

                                                                                                                    I’m with you unless the message includes clear harm. I’ll try to resist its effect on me, but I’ll advocate that such messages be removed. That commenter was being an asshole on top of delivering some useful information. Discouraging personal attacks increases the number of people who will want to participate and share information. As Ned notes, such comment sections or forums also get more beginner-friendly. I’m always fine with a general rule of civility in comments, given such proven benefits.

                                                                                                                    Edit: While this is about a toxic @pyon comment, I think I should also illustrate the kind of comment I’m advocating for: one that delivers great information without any attacks. pyon has delivered quite a lot of them in discussions on programming language theory. Here’s one on hypergraphs:

                                                                                                                                    https://lobste.rs/s/cfugqa/modelling_data_with_hypergraphs#c_bovmhr

                                                                                                                                    1. 5

                                                                                                                      I personally always care about the emotion (as an individual, not as a site moderator); it’s an important component of any communication between humans. But I understand your perspective as well.

                                                                                                                                      1. 3

                                                                                                                        I may have been unclear. I do too. I was just looking at it from other commenters’ perspective: how I’d think if I didn’t care about the emotion but wanted good information and opportunities in the programming sphere. I’d still have to reduce harm/toxicity to other people via ground rules, to foster good discussion and bring more people in.

                                                                                                                        So, whether one is emotional or not, we still can’t discount the emotional effect of comments on others. We should still put some thought into that, with reducing personal attacks being among the easiest compromises, since they add nothing to discussions.

                                                                                                                                        1. 2

                                                                                                                                          Ah! Okay. I misunderstood then, and it sounds like we’re in agreement.

                                                                                                                                    2. 4

                                                                                                                      It’s ridiculous to say that someone who cannot ignore personal attacks has a brittle ego and is childish, while also defending personal attacks and vitriol as the thing we should celebrate about the internet. Rather, we should critique people for being assholes. pyon’s comment critiqued the manner and tone in which Ned explained amortized analysis, but Ned isn’t allowed to say that the comment’s manner and tone was bad? It’s ridiculous. The comment was bad not because of the point it made, but because it made the point badly.

                                                                                                                                  2. 22

                                                                                                                                    Compare this approach:

                                                                                                                                    I believe this post simplifies the idea incorrectly. In amortized analysis, earlier (cheap) operations pay the cost of later (expensive) ones. When you need to perform an expensive operation, you will have performed enough cheap ones that the cost of the entire sequence of operations is bounded by the sum of their amortized costs. In the context of your list example, a sequence of cheap list inserts would pay the cost of the expensive one that comes next.

                                                                                                                    This is the same content, free of “shame” and accusations of “misleading.” The original comment is a perfect example of the terrible tone people take, as discussed in this post and in my previous post of Simon Peyton Jones’s email.
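
                                                                                                                    For readers who want to see the bookkeeping, here is a minimal sketch (my own illustration, not from either comment) of the dynamic-array version of this example: the array doubles its capacity when full, so the many cheap appends between resizes pay for each expensive copy. The class and counter names are hypothetical.

                                                                                                                    class DynamicArray:
                                                                                                                        """Append-only array that doubles its capacity when full."""
                                                                                                                        def __init__(self):
                                                                                                                            self.capacity = 1
                                                                                                                            self.size = 0
                                                                                                                            self.items = [None]
                                                                                                                            self.writes = 0  # element writes, including copies during resizes

                                                                                                                        def append(self, value):
                                                                                                                            if self.size == self.capacity:
                                                                                                                                # Expensive operation: copy every element into a larger buffer.
                                                                                                                                self.capacity *= 2
                                                                                                                                new_items = [None] * self.capacity
                                                                                                                                for i in range(self.size):
                                                                                                                                    new_items[i] = self.items[i]
                                                                                                                                    self.writes += 1
                                                                                                                                self.items = new_items
                                                                                                                            # Cheap operation: write a single element.
                                                                                                                            self.items[self.size] = value
                                                                                                                            self.size += 1
                                                                                                                            self.writes += 1

                                                                                                                    arr = DynamicArray()
                                                                                                                    n = 1_000_000
                                                                                                                    for i in range(n):
                                                                                                                        arr.append(i)

                                                                                                                    # Individual resizes cost O(n), but total work for n appends stays
                                                                                                                    # under 3n writes, so each append is amortized O(1).
                                                                                                                    print(arr.writes / n)  # ~2.05 for this n; always below 3

                                                                                                                    A sequence of n appends performs fewer than 3n element writes in total (n direct writes plus at most 2n copies), which is exactly the “cheap operations pay for the expensive one” bound described above.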

                                                                                                                                    1. 4

                                                                                                                                      Instead of giving the information value, Ned reacts on the part that questions his authority.

                                                                                                                                      The author does give it value. You’ve missed the point. The author isn’t saying it’s incorrect or not valuable; he’s saying that this attitude from experts (who use their expertise as a tool to put others down) is highly toxic.

                                                                                                                                      1. 4

                                                                                                                                        If you discard the emotion, he gives out a fairly interesting additional note about what amortized analysis means. Instead of giving the information value, Ned reacts on the part that questions his authority.

                                                                                                                        It’s not clear that Ned interprets pyon as questioning his authority. His criticism is of pyon’s tone, which is histrionic. The comment isn’t bad if we discard the cutting intro; but what is its effect if we include it? It would be more balanced for Ned to discuss the details and value of pyon’s post, but that doesn’t invalidate Ned’s point.

                                                                                                                                      1. 4

                                                                                                                        This speaks to the importance of the Test Pilot program for experimental add-ons. It’s a smart way for Mozilla to gather feedback on new UI features like Multi-Account Containers, so they can make sure those features are polished (or, conversely, learn they aren’t worth releasing at all). I’m happy to see Containers graduate into a full-fledged add-on on AMO.

                                                                                                                                        I thought that they might build a feature like this into the browser itself, rather than an add-on, but I think it’s smart to have some power user features relegated to add-ons, even if they are official Mozilla ones.

                                                                                                                                        1. 2

                                                                                                                          It’s actually built in: it’s in Nightly (maybe Beta? I don’t know) and will probably be there in the 57 release (maybe 56). They prototyped it as an add-on and are sharing that add-on since the feature isn’t built in to the current release, but in a future release it will be.