1. 12

    Currently, I’m sick in bed with tonsils so swollen they’re bulging out of my neck (yeah, gross, sorry).

    Hopefully I’ll get better by tomorrow, because I have a loooong day of critical analysis ahead of me - the joys of college!

    Aside from that, I’ll be writing about error handling in Rust and working on my tiny modal text editor.

    1.  

      Upvote for cool sounding Rust and modal text editor! Not the swollen tonsils! Feel better!

      Is your tiny modal editor written in Rust as well?

      1.  

        Yes! It’s based on another minimalist editor project, but I’m slowly throwing away bits of that code and I may end up Ship of Theseus-ing myself into a wholly original codebase.

      2. 2

        Ouch. Take care with the tonsils, let them heal in their own time, because if you don’t there’s no limit to the problems they can cause.

        1. 1

          Hope you get well soon.

          1. 2

            Thanks!

        1. 3

          You ought to write the code to suit its purpose. Normally code is written to be run, but in your case you’re writing code to be read in a fairly narrow context. Adapt your line length to that. If you insist that the code have very long lines you make your problem more difficult than it needs to be.

          You have to use a pair of fonts that work. The code font should be different enough from the text font, but not too different. This is difficult, really difficult. Courier IMO never works with any other font. If your code and text fonts are very close in appearance (e.g. from the same font family), you can increase the contrast a little by reducing the code font size by 5%.

          If your design is lively, you can use a different background colour for the code, but if you only have one background colour that won’t work, again in my opinion. Look at Stack Overflow, which uses five different background colours, one of them for code. That works. If code is important, you might even consider using a little more background colour for other elements, so you can use it for code without skewing the design. It’s also possible to use just a border around the code blocks.

          Some people add line numbers for the code, which gives you additional contrast. I find that using e.g. Documenta for the text and Documenta Sans for code works if you have line numbers. (I’ve used that combination on paper, never on the web.)

          1. 1

            Thank you for the reply, it’s exactly what I was asking for.

            In particular, the comment about a background colour only working when there are other background colours, as on Stack Overflow, is I think right to the point. I tried removing the background entirely in my case; it looks somewhat better, but with longer code blocks it becomes harder to track where the code starts and where it ends.

            And I hadn’t considered adding line numbers, good point.

            1. 1

              I didn’t mention box shadows, but they are another possibility, onscreen at least. And I’ve never seen it used for code, but of course animation exists and could be used. For example by using a discreet box shadow, and making the box shadow a little more pronounced when the cursor hovers over the code.

              1. 2

                It’s been a few days now, and I took your advice in the first sentence to heart:

                You ought to write the code to suit its purpose

                I think it has to be said that for some types of problems, making things more readable means changing the content instead of the style. If you are curious, HERE is how it looks now. I no longer use code blocks to mean exactly runnable code. The code blocks on that page, for example, often show two distinct commands side by side for comparison. I found this much more readable than having them come one after the other. And it serves the purpose of the article really well.

                So thanks a lot for your reply, it really helped.

                1. 1

                  No, thank you.

                  It’s so rare that someone a) pays attention when I speechify about code and writing and documentation and blah b) acts on it c) achieves a good result (I think your output is now exemplary) and d) says thanks. You made my day.

          1. 14

            While it might be true that 1500 bytes is now the de facto MTU standard on the Internet (minus whatever overhead you throw at it), all is not lost. The problem is not that we lack the link-layer capability to offer larger MTUs; the problem is that the transport protocol has to be AWARE of it. One mechanism for finding out what MTU size is supported by a path over the Internet is an algorithm called DPLPMTUD. It is currently being standardized by the IETF and is more or less complete: https://tools.ietf.org/html/draft-ietf-tsvwg-datagram-plpmtud-14. There are even plans for QUIC to implement this algorithm, so if we end up with a transport that is widely deployed and also supports detection of MTUs > 1500, we might actually have a chance to change the link-layer defaults. Fun fact: all of the 4G networking gear actually supports jumbo frames; most of the providers just haven’t enabled it since they are not aware of the issue.
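
            To make the idea concrete, here is a toy Python sketch of endpoint-only probing, nothing like the draft’s full state machine: it assumes a cooperating UDP echo service at a placeholder address and uses Linux-only socket constants to set the DF bit.

                import socket

                ECHO_HOST, ECHO_PORT = "probe.example.net", 7   # placeholder echo service
                TIMEOUT = 1.0                                   # seconds per probe

                def probe_payload_size(lo=1200, hi=8900, tries=3):
                    """Binary-search the largest UDP payload that comes back from the
                    echo service. Add 28 bytes of IPv4+UDP header to compare the result
                    with a link MTU."""
                    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                    sock.settimeout(TIMEOUT)
                    # Set the DF bit (Linux) so routers drop, rather than fragment, probes.
                    if hasattr(socket, "IP_MTU_DISCOVER") and hasattr(socket, "IP_PMTUDISC_DO"):
                        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER,
                                        socket.IP_PMTUDISC_DO)
                    while lo < hi:
                        size = (lo + hi + 1) // 2
                        ok = False
                        for _ in range(tries):
                            try:
                                sock.sendto(b"\x00" * size, (ECHO_HOST, ECHO_PORT))
                                sock.recvfrom(size + 64)
                                ok = True
                                break
                            except OSError:        # EMSGSIZE, timeout, ICMP error, ...
                                pass
                        if ok:
                            lo = size              # probe survived: raise the floor
                        else:
                            hi = size - 1          # probe lost: lower the ceiling
                    return lo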

            1. 6

              Wow, it might even work.

              I can hardly believe it… but if speedtest.net were able to send jumbo frames and most users’ browsers supported receiving them, it might get deployed by ISPs as they look for benchmark karma. Amazing. I thought 1500 was as invariant as π.

              1. 5

                I was the maintainer of an AS at a previous job and set up a few BGP peers with jumbo frames (4470). I would have made this available on the customer links as well, except none of them would have been able to receive the frames. They were all configured for 1500, as was the default in any OS then and still is today. Many of their NICs couldn’t handle 4470 either, though I suppose that has improved now.

                Even if a customer had configured their NIC to handle jumbo frames, they would have had problems with the other equipment on their local network. How do you change the MTU of your smartphone, your media box or your printer? If you set the MTU on your Ethernet interface to 4470 then your network stack is going to think it can send such large frames to any node on the same link. Path MTU discovery doesn’t fix this because there is no router in between that can send ICMP packets back to you, only L2 switches.

                It is easy to test. Try to ping your gateway with ping -s 4000 192.168.0.1 (or whatever your gateway is). Then change your MTU with something like ip link set eth0 mtu 4470 and see if you can still ping your gateway. Remember to run ip link set eth0 mtu 1500 afterwards (or reboot).

                I don’t think that DPLPMTUD will fix this situation and let everyone have jumbo frames. Reading the following paragraph as a former network administrator, it basically says that jumbo frames would break my network in subtle and hard-to-diagnose ways:

                   A PL that does not acknowledge data reception (e.g., UDP and UDP-
                   Lite) is unable itself to detect when the packets that it sends are
                   discarded because their size is greater than the actual PMTU.  These
                   PLs need to rely on an application protocol to detect this loss.
                

                So you’re going to have people complain that their browser is working but nothing else is. Even if DPLPMTUD were everything that is promised as a fix, I wouldn’t enable jumbo frames. That said, it looks like DPLPMTUD will be good for the Internet as a whole, but it does not really help the argument for jumbo frames.

                And I don’t know if it has changed recently, but the main argument for jumbo frames at the time was actually that they would lead to fewer interrupts per second. There is some overhead per processed packet, but this has mostly been fixed in hardware now. The big routers use custom hardware that handles routing at wire speed and even consumer network cards have UDP and TCP segmentation offloading, and the drivers are not limited to one packet per interrupt. So it’s not that much of a problem anymore.

                Would have been cool though and I really wanted to use it, just like I wanted to get us on the Mbone. But at least we got IPv6. Sorta. :)

                1. 3

                  If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                  Back when I tried running an email server on there, I actually did run into trouble with this, because some bank’s firewall blocked ICMP packets, so… I thought you’d like to know, neither of us used “jumbo” datagrams, but we still had MTU trouble, because their mail server tried to send 1500 octet packets and couldn’t detect that the DSL link couldn’t carry them. The connection timed out every time.

                  If your application can’t track a window of viable datagram sizes, then your application is simply wrong.

                  1. 2

                    If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                    It’s even worse: in the current situation[1], your system’s MTU won’t matter at all. Most network operators are straight-up MSS-clamping your TCP packets downstream, effectively overriding your system’s MTU.

                    I’m very excited by this draft! Not only will it fix the UDP situation we currently have, it will also make tunneling connections much easier. That said, it also means that if we want to benefit from it, network administrators will need to quit MSS clamping. I suspect this will take quite some time :(

                    [1] PMTU won’t work in many cases. Currently, you need ICMP to perform a PMTU discovery, which is sadly filtered out by some poorly-configured endpoints. Try to ping netflix.com for instance ;)

                    1. 2

                      If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                      Very true, one can’t assume an MTU of 1500 on the Internet. I disagree that it’s on the application to handle it:

                      If your application can’t track a window of viable datagram sizes, then your application is simply wrong.

                      The network stack is responsible for PMTUD, not the application. One can’t expect every application to track the datagram size on a TCP connection. Applications that use BSD sockets simply don’t do that: they send() and recv() and let the network stack figure out the segment size. There’s nothing wrong with that. For UDP the situation is a little different, but IP can actually fragment large UDP datagrams and PMTUD works there too (unless, again, broken by bad configurations, hence DPLPMTUD).
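
                      As an aside, on Linux an application can even ask the stack what path MTU it has discovered instead of tracking anything itself. A rough sketch (Linux-only socket option, placeholder host):

                          import socket

                          def cached_path_mtu(host, port=443):
                              """Return the kernel's current path MTU estimate for a destination."""
                              s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                              s.connect((host, port))   # picks a route; no packets are sent for UDP
                              # IP_MTU is 14 on Linux; older Pythons may not export the constant.
                              return s.getsockopt(socket.IPPROTO_IP, getattr(socket, "IP_MTU", 14))

                          print(cached_path_mtu("example.org"))   # typically 1500 on Ethernet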

                      1. 3

                        I disagree that it’s on the application to handle it

                        Sure, fine. It’s the transport layer’s job to handle it. Just as long as it’s detected at the endpoints.

                        For UDP the situation is a little different, but IP can actually fragment large UDP datagrams and PMTUD works there too

                        It doesn’t seem like anyone likes IP fragmentation.

                        • If you’re doing a teleconferencing app, or something similarly latency-sensitive, then you cannot afford the overhead of reconstructing fragmented packets; your whole purpose in using UDP was to avoid overhead.

                        • If you’re building your own reliable transport layer, like uTP or QUIC, then you already have a sliding size window facility; IP fragmentation is just a redundant mechanism that adds overhead.

                        • Even DNS, which seems like it ought to be a perfect use case for packet fragmentation, doesn’t seem to work well with it in practice, and it’s being phased out in favour of just running DNS over TCP whenever the payload is too big. Something about it acting as a DDoS amplification mechanism, and being super-unreliable on top of that.

                        If you’re using TCP, or any of its clones, of course this ought to be handled by the underlying stack. They promised reliable delivery with some overhead, they should deliver on it. I kind of assumed that the “subtle breakage” that @weinholt was talking about was specifically for applications that used raw packets (like the given example of ping).

                        1. 1

                          You list good reasons to avoid IP fragmentation with UDP, and in practice people don’t use or advocate IP fragmentation for UDP. Broken PMTUD affects everyone… ever had an SSH session that works fine until you try to list a large directory? Chances are the packets were small enough to fit in the MTU until you listed that directory. As breakages go, that one’s not too hard to figure out. The nice thing about the suggested MTU discovery method is that it will not rely on other types of packets than those already used by the application, so it should be immune to the kind of operator who filters everything he does not understand. But it does mean some applications will need to help the network layer prevent breakage, so IMHO it doesn’t make jumbo frames more likely to become a thing. It’s also a band-aid on an otherwise broken configuration, so I think we’ll see more broken configurations in the future, with fewer arguments to use on the operators, who can now point to how everything is “working”.

                1. 8

                  This is one of those things that seems so obvious now that I’ve read it, but for some reason have never thought of or seen before.

                  One idea from the linked article by Martin Fowler is that we should design systems so that they’re easier to “strangle” in the future. What kinds of design decisions support that?

                  1. 9

                    Narrower interfaces, explicit versioning.

                    I’ve found narrower interfaces to really help. Specifically, making it difficult for users to rely on unintentional effects, side effects or postconditions. Eliminating accidental postconditions is often quite simple: document what the intended interface is, then look at the output/result/…, see if there’s more, and remove it, randomise it, or think about how rare unusual behaviour can be made more common, so that the documented behaviour is simple and the rest appears complicated.
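
                    A contrived sketch of the “randomise what you don’t promise” part, with made-up data and a made-up function:

                        import random

                        _USERS = [
                            {"name": "alice", "admin": True},
                            {"name": "bob", "admin": False},
                            {"name": "carol", "admin": True},
                        ]

                        def find_users(admin=None):
                            """Return the matching users. The documented contract says nothing
                            about order, so shuffle deliberately: callers can't quietly start
                            depending on the accidental postcondition of insertion order."""
                            matches = [u for u in _USERS if admin is None or u["admin"] == admin]
                            random.shuffle(matches)
                            return matches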

                    1. 4

                      Anything I can come up with in that direction will also reduce the necessity of a rewrite in the first place.

                    1. 2

                      Relevantly… I’m working ~100% on my java compiler, doing things that are necessary in order to compile and run lambdas. I found something today that should boost full-recompile speed by a large factor, perhaps a hundred or so, which will please me greatly, because this lambda work is rich in full recompiles, and waiting for half an hour is awful.

                      Writing these is good for me; I should do it more.

                      1. 3

                        I am concerned about how this affects JS crypto libraries… Does anyone have any insight into this? /cc @nickpsecurity

                        1. 1

                          You mean apart from “tests that break for bad reasons tend to be skipped sooner or later”?

                          1. 1

                            If you’re interested in an answer from @dchest, check out this issue: https://github.com/dchest/tweetnacl-js/issues/190

                          1. 4
                            • HTML is the most universal if you have a solution for the backend
                            • Qt, if the licensing is acceptable and you’re up for a huge learning curve
                            • wxWidgets
                            1. 8

                              Re. HTML, the overhead of a browser engine is pretty high. This would only really make sense to run as a standalone desktop app, with some native glue for talking to the OS, so the universality isn’t much of an asset.

                              How much of a learning curve are we talking with Qt? Is the python API (or any of the others) meaningfully different from the c++ API in context of that?

                              1. 6

                                How much of a learning curve are we talking with Qt? Is the python API (or any of the others) meaningfully different from the c++ API in context of that?

                                Qt is much more than just a “GUI library”, it’s more of an “application framework”. Whether that’s a good or bad thing depends on your tastes and what you want to do.

                                I always used the C++ docs when writing Qt in Python, the API is pretty similar and AFAIK there are no good docs for just Python. It works but you do need to make a mental translation.

                                Personally I liked working with Python/Qt (more than e.g. GTK), but I found it does have a bit of a learning curve.
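
                                For a feel of how direct the mapping is, here’s a minimal PyQt5 sketch (assuming PyQt5 is installed); every name below corresponds one-to-one to a C++ class or method, which is why the C++ docs remain usable:

                                    import sys
                                    from PyQt5.QtCore import Qt
                                    from PyQt5.QtWidgets import QApplication, QLabel

                                    # QLabel::setAlignment(Qt::AlignCenter) in C++ becomes the call below.
                                    app = QApplication(sys.argv)
                                    label = QLabel("Hello from Qt")
                                    label.setAlignment(Qt.AlignCenter)
                                    label.resize(240, 80)
                                    label.show()
                                    sys.exit(app.exec_())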

                                1. 1

                                  What is Python’s added value then? Is it only a binding (i.e. if I already have Python code, I can easily add a GUI using the same language)? I’ve only used Qt directly from C++ and it was relatively comfortable (including the memory management).

                                  1. 4

                                    For many people programming in Python instead of C++ is a huge added value. I’m not interested in starting a “C++ vs. Python”-debate, but a lot of people simply prefer Python over C++ for all sorts of reasons (whereas others prefer C++, so YMMV).

                                    1. 1

                                      Then I understand it just as a binding. Which is perfectly OK. My intention was not to start another useless flamewar. I just wanted to know whether the Qt+Python combination can offer something more than Qt itself when the developer is not particularly interested in either Python or C++, and both languages are, for him, just a way to create the Qt GUI layer of an application.

                                      1. 1

                                        Yah, I know you didn’t want to start a flamewar, just wanted to make it explicit :-)

                                        And yeah, PyQt doesn’t really offer anything over C++ Qt other than the Python environment. So if you’re happy with that, then there’s little reason to switch (other than, perhaps, making external contributions a bit easier).

                                        1. 1

                                          Yep, that’s about what I was interested in knowing too. They said there’s a high learning curve, and I wanted to know if that only applied to the c++ api.

                                  2. 1

                                    For how much of a learning curve, read one of the tutorials and see; people’s opinions differ. There’s much more to Qt (for example, including an HTML displayer with the properties and JavaScript integration you want is an orthogonal part), but reading a tutorial will give you a feeling for the learning curve.

                                    In my opinion, the other APIs are different but not meaningfully so.

                                1. 6

                                  Publishers and other site owners feel forced to use AMP as they fear that they’ll lose Google visibility and traffic without it.

                                  By the way, is this fear justified? I mean, did they conduct serious studies to come to the conclusion that they must use AMP?

                                  I suspect there is a lot of superstition here, but I may be wrong.

                                  1. 12

                                    Yes.

                                    AMP-enabled pages can be featured in the ‘carousel’ above the regular results.

                                    If you think Google isn’t doing everything they can to push every man and his dog to adopt AMP, or that they’re doing it for anything other than gaining more control over the web, I’m sorry but you’re either wrong or naive about Google and its tactics.

                                    1. 1

                                      Frankly, I don’t know. That’s why I ask. I believe some press publishers do not need to rely heavily on SEO or Google-compliance because that’s not part of their core strategy. They have other (and better qualified) channels. In this situation, I wonder why it is required to implement AMP. Warning: I may be biased here, because I usually don’t rely on a search engine to find press content.

                                      1. 2

                                        The way I’ve heard it is: AMP is a bundle of requirements to get fast, mobile-friendly pages, and the key word is bundle.

                                        If you make a proposal to implement the bundle, you have to win through one meeting. After that, there’ll be 17 subtasks on jira to implement each of the 17 requirements, but no more management discussion. (Or if not 17, then however many requirements apply to you, which may be higher or lower than the list in the AMP spec. The precise number doesn’t matter.)

                                        If you want to get the same speed without the bundle, you have to propose each of those 17 tasks. Now you have 17 meetings and have to justify each of 17 tasks separately. Management will be sick and tired before you reach 10.

                                        A consequence of this view is that the two ways to get rid of AMP are:

                                        • make most people accept slowness and bloat
                                        • define another bundle, with a buzzword-worthy name, that provides the same single-meeting advantage

                                        I personally don’t think the latter is doable. The ship has sailed. The name for that other bundle now would be AMUNIH, short for Accelerated Mobile Uhm, NIH.

                                        1. 3

                                          The idea of putting more thought into building fast and efficient sites is absolutely something web developers should embrace. But that’s not the only thing AMP is about, and I’m not even sure AMP is good at that part.

                                          The way Google imagines AMP is that everyone uses a specified subset and format of HTML alongside a fixed set of JS libraries. AMP HTML is pretty much standard HTML, but with lots of added WebComponents and some JS libraries that are supposed to make resource loading more efficient.

                                          Now, the thing is that if you want to use AMP right (and get the gentle preferred treatment in some cases), WebDevs are supposed to load all those AMP WebComponents from Google-run CDNs (cdn.ampproject.org). This alone puts Google and the people developing AMP into a position where they have total control, and where they are also the single point of failure on all websites. It’s not like a library that you embed for a specific feature that can fail gracefully - if the AMP JS libraries fail to load, your AMP website is literally a blank page. There is no room for graceful degradation.

                                          And even if we talk about AMP’s actual effectiveness, I’m… honestly unsure if this works. I wanted to pick a good article, so I searched Reddit for posts that have “amp” in the URL and used the one with the largest number of upvotes. This AMPified CNBC article came up (note that clicking the link might forward you to the desktop version if you don’t have a Chrome-mobile-y user agent string). Looking at the devtools, it does a total of 24 requests for CSS and JS files, totalling 1.91MB (they were compressed, so only 571KB had to be transferred, but that’s still a lot). 14 of those requests ended up at the AMP CDN, but there are also requests to CNBC’s servers, as well as a couple of external services, including ad and tracking scripts. Document loading finished in 874ms, and that’s on a super powerful laptop. That’s not a slim, fast website.

                                          It gets much worse if you look at the way Google handles AMP links inside their products. If you access an AMP search result, for example, you don’t even access the original servers. The article I shared was linked in this Reddit post, and if you look closely, they don’t even link to the CNBC article itself. The link goes to https://www.google.com/amp/s/www.cnbc.com/.... Google is proxying that request, and AMP-site owners have no control over what Google is doing. Right now, it looks like Google is only adding a top bar that brings you back to the search results and offers a “share” button, but… who knows. Stuff like that puts even more control into Google’s hand.

                                          I certainly have lots of issues with supporting a technology that requires everyone using it to depend on a single point of failure. And I also don’t like the fact that AMP decision-makers can make completely arbitrary decisions just because.

                                          That’s not really how the web is supposed to work.

                                          (Disclaimer: I’m a Mozilla-employee, and even though this is my private opinion, I’m obviously biased. And sorry for my rant. :))

                                          1. 1

                                            I’ve been involved in only one AMP project, and in that, all of the nontrivial work tickets were about killing go-slow shit (my opinion, naturally). One is a small sample size, maybe others are different.

                                            There were some trivial changes that didn’t seem particularly clueful, but who cares about trivial stuff. And there was the Google integration, which I thought was upgefucked and vile, but I couldn’t see any better way. Still can’t, can you? It should be an absolute defence against arguments that just one additional tracker or blah-blah integration won’t hurt and getting rid of it requires {renegotiating a contract,finding another way,a lot of work}. AMP leverages Google’s famous stonewall. No one can discuss with Google, right? So if the site has to be accepted by code running on Google’s servers, that’s it, end of discussion.

                                            EDIT: thought about my contradiction here. Why do I think AMP’s upgefucked and vile, but still sort of approve? I think it’s because I personally want the web to work decentralised et cetera, while the key idea of AMP is about orgchart compatibility and needing little management attention. That’s a realism about corporate life that I cannot really like, even though I think I should.

                                      2. 1

                                        The first search I tried now didn’t give me a carousel at all; the second gave me a carousel with links to three different sites, none of them with AMP.

                                        All three sites were pleasantly fast, BTW, so perhaps someone has confused “fast” with AMP?

                                    1. 1

                                      Alternative suggestion:

                                      Let lobste.rs users point to “their” rss/atom feed(s) as part of the user settings. Have the server check for new blog postings once per day. When longtime users open the lobste.rs home page, show them the first 50 words from a blog posting from the pool and ask: “Do you think this should be posted to lobste.rs?” If a majority of five asked users click yes, post it.
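
                                      The voting step itself is only a few lines; the names and thresholds below are just placeholders:

                                          import random

                                          VOTERS_PER_ITEM = 5
                                          VOTES_TO_POST = 3          # a simple majority of five

                                          def should_post(posting_text, longtime_users, ask):
                                              """ask(user, excerpt) -> True/False, e.g. a yes/no prompt on the home page."""
                                              excerpt = " ".join(posting_text.split()[:50])   # first 50 words
                                              voters = random.sample(longtime_users, VOTERS_PER_ITEM)
                                              yes_votes = sum(1 for user in voters if ask(user, excerpt))
                                              return yes_votes >= VOTES_TO_POST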

                                      1. 1

                                        That surprised me. I thought I remembered that the difference was surprisingly small.

                                        It’s weird that the article doesn’t mention the compiler.

                                        1. 2

                                          It’s more a question of runtime and ABI than of compiler. If you compile to linux/ELF and your runtime uses libunwind there isn’t that much the compiler can do to affect the performance.

                                          Many compiler people talk as though the performance of exceptions is the performance of the code when no exception is thrown. Look at the phrasing about zero-cost exceptions, for example. (BTW, zero-cost exceptions either have a small cost or none, depending on how you count. It’s quite fascinating if you’re fascinated by that sort of thing.)

                                        1. 7

                                          Allow me to summarise: If you make the exception into the common case, the result is awful.
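
                                          A quick way to see it in any language (a Python toy here, not the article’s benchmark): make the miss the common case and compare raising with returning.

                                              import timeit

                                              def lookup_return(d, k):
                                                  return d.get(k)    # a miss is the normal, cheap path

                                              def lookup_raise(d, k):
                                                  try:
                                                      return d[k]    # a miss raises KeyError every time
                                                  except KeyError:
                                                      return None

                                              d = {}                 # every lookup misses, i.e. the exception is the common case
                                              print(timeit.timeit(lambda: lookup_return(d, "x"), number=100_000))
                                              print(timeit.timeit(lambda: lookup_raise(d, "x"), number=100_000))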

                                          1. 2

                                            Two things:

                                             1. The appropriate code style for tests differs deeply from that for code that’s shipped, even if the indentation rules are the same. The reason: if tests break (as opposed to tests that work and show that the code is broken), only the development team is bothered, while if the shipping/deployed code breaks, customers are bothered. Therefore, many (but not all) kinds of laxness are acceptable in tests that would be out of the question in deployed code. It’s better to write more tests and get higher coverage than to spend time following the strict coding rules that are appropriate for shipped code.

                                            2. How to shape the code and interfaces such that the number of untested paths is small. Can’t really say much about it, but it is a skill that one can practise and learn.

                                            1. 1

                                              IME, #2 is one of those things you get for free if you’re following the Single Responsibility Principle.

                                              1. 1

                                                It took me only two seconds to think of something I’ve written where I didn’t get that for free from the SRP.

                                                In that case, I improved testability by not implementing some very simple optimisations. These optimisations would have increased the number of code paths to test (and doubled the code’s performance, maybe much more).
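
                                                A made-up illustration of the trade-off (not the real code): the “optimised” version has an extra fast path, which is one more behaviour tests must keep covered.

                                                    from collections import namedtuple

                                                    Item = namedtuple("Item", "price quantity")   # stand-in for the real type

                                                    def total_price(items):
                                                        # one code path: easy to cover exhaustively
                                                        return sum(i.price * i.quantity for i in items)

                                                    def total_price_fast(items):
                                                        # fast path for the common single-item case: quicker, but now
                                                        # there are two behaviours that tests must keep equivalent
                                                        if len(items) == 1:
                                                            return items[0].price * items[0].quantity
                                                        return sum(i.price * i.quantity for i in items)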

                                                1. 1

                                                  I of course should’ve said that it’s something I find I mostly get for free from following the SRP.

                                                  1. 1

                                                    Did I find an exception straight away by luck, or because I’m oh so brilliant? I’d love to be either lucky or brilliant. And rich too, and favoured by the ladies.

                                                    The strongest factor in my experience is a certain sinking feeling: “oh ████████ this will be hard to test” followed by either a design/implementation change or some procrastination, or both. But how can one learn to feel that sinking feeling?

                                            1. 3

                                              Compiling java lambdas is hard work. The JDK contains what one might describe as a JIT compiler for lambdas, that is, a separate JIT compiler that runs in addition to the JIT compiler most people use in the JVM, and there are n levels of indirection. I’m really quite amazed. Even many quite simple java programs a) parse bits of java source at startup and b) compile to bytecode at startup. No wonder java servers often take half a minute to start listening for requests.

                                              I am implementing lambdas in my java compiler (mentioned twice before here; its chief feature is drastically lower memory usage). Lambdas have taken >1 week so far and I’m not done.

                                              1. 3

                                                multi-path TCP bits are finally going mainline

                                                Does this mean MPTCP is fully supported, or is it just partial bits and pieces?

                                                1. 3

                                                  Very large bits and pieces. AIUI mptcp over v4 is feature-complete and stable now, but the v6 code is not.

                                                1. 11

                                                  Something about that spammer-spammer conversation makes my eyes hurt.

                                                  I couldn’t possibly violate an NDA, not about spam. But I could, in the most general terms, describe a spam filter I once implemented for a different site, a very successful filter. This spam filter did not deal with spam: It recognised some kinds of postings as not spam according to site-specific rules, and asked a human for all other postings.

                                                  The key to its success (there’s a rough sketch in code after this list) was that

                                                  • the site-specific rules were actually site-specific and not even nearly true in general. They were only true for the kinds of things talked about on that site. Translating to lobste.rs, “postings by @calvin are okay” might be a candidate, since it’s true within the context of lobste.rs, but does not apply to other calvins at other sites.
                                                  • spammers would have to really get to know the site’s audience to learn the filter’s rules.
                                                  • almost all postings were handled by the rules, so the humans weren’t worn out.
                                                  • spam was taken down somewhere between a minute and three hours after being posted, not at once. Taking down spam at once made the spammers adapt.
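
                                                  In code, the shape of it was roughly this (the rules here are invented stand-ins, not the real site-specific ones):

                                                      KNOWN_GOOD_AUTHORS = {"calvin"}                 # example rule from above
                                                      ON_TOPIC_WORDS = {"compiler", "kernel", "tcp"}  # invented placeholder rule

                                                      def classify(author, text):
                                                          """Return 'ok' or 'ask-a-human'; nothing is ever auto-flagged as spam."""
                                                          if author in KNOWN_GOOD_AUTHORS:
                                                              return "ok"
                                                          if any(word in text.lower() for word in ON_TOPIC_WORDS):
                                                              return "ok"
                                                          return "ask-a-human"                        # humans see only the rest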
                                                  1. 2

                                                    That last point seems like a low-effort, high-power action! (The rest of your description is also interesting.)

                                                  1. 6

                                                    Other answers go into “other people may have something to hide”; I want to take a different angle, namely what they want to have the right to keep to themselves.

                                                    Talking to someone who has nothing to hide, you can say: which things do you think you should get to decide are private or not? Where’s the boundary? Would you accept losing a right to privacy because I don’t care? For example, would you give up the right to privacy about sex because some/many other people post nude selfies or more? Which things do you think you should be permitted to keep to yourself, even if you don’t actually care about keeping them to yourself?

                                                    That is, you turn privacy into a right that they care to have, even if they may not care to exercise it, and try to make them describe what the right to privacy spans, in their opinion.

                                                    EDIT: Rereading, I think I’m saying: You can ask them to describe the right to privacy they want to have, the zone of privacy they want to have, instead of letting them describe examples of privacy they don’t want to exercise.

                                                    1. 4

                                                      I liked that. It’s like talking about the right of free speech. Even if I don’t have anything to say I still think it should be protected. Same for the freedom of movement. No one really cares about it until it’s taken away.

                                                    1. 14

                                                      An important point that’s getting downplayed here is that this isn’t just a Firefox feature. It’s a library that you can pull into any Cargo-based project and that provides header files and bindings to C. We’ve already seen how this library-ification of “Firefox” features can help other projects when librsvg adopted Mozilla’s CSS implementation, and in the nearly ubiquitous* use of Servo-owned libraries like url.

                                                      If you like using apps that interoperate with the web without being implemented as browser apps, then this should be really good news in general. It gives you access to web standards without having to write your app in JavaScript and without having to implement it all yourself.

                                                      * Across the ecosystem of Rust applications, of course. Firefox is about the only C++ application I know of that directly depends on rust-url.

                                                      1. -2

                                                        Of course that also means that anything that depends on librsvg now depends on Rust, which is a massive and complicated dependency that is difficult to package and maintain, and isn’t very portable. There are platforms that major GNU+Linux distros support that LLVM doesn’t, and as a result librsvg is now being held back to the last non-Rust version on those platforms. As much as I like Rust as a language, it’s a bad citizen when it comes to integrating well into the rest of the ecosystem. If Rust people want what they seem to want, which is to replace C with Rust as the go-to systems programming language, they need to play nicely with the rest of the world.

                                                        LLVM has been a blessing and a curse for Rust, I think. On one hand, it’s made developing Rust itself much easier. But it’s also been a major contributing factor towards Rust’s biggest problems: low portability, abysmal compile times, and a bit too much abstraction (because you can layer a lot of abstractions on top of each other in Rust without significantly hurting runtime performance).

                                                        1. 9

                                                          This is a pretty uncharitable reading. Indeed, we work together with all package maintainers that actually get in touch with us and want help. We have very good experiences with the Debian folks, for example.

                                                          LLVM is a major block in this, but there’s only so much we can do - work on new backends is underway, which may help the pain. When Rust started, GCC was also not in a state that you would want to write a frontend for it the way you could do it for LLVM. I would love to see a rustc version backed by GCC or cranelift to come to a state that makes writing codegen backends easier (this is an explicit goal of the project).

                                                          “Bad citizen” implies that we don’t appreciate those problems, but we’ve all got limited hands and improvements are gradual. Indeed, Rust has frequently been the driver behind new LLVM backends and has driven LLVM to support targets outside of the Apple/Google/mobile spectrum that LLVM is traditionally aimed at. It’s not like people who can write a good-quality LLVM backend are easy to find and to motivate to do that in their free time. A lot of backends need vendor support to be properly implemented. We actively speak to those vendors, but hey, the week has a limited number of hours and it’s not like vendor negotiation is something you want to do in your free time either.

                                                          librsvg did weigh these pros and cons and decided that the modularity and the ability to use third-party packages are worth making the jump. These moves are important, because no one will put effort behind porting a programming language without some form of pain. Projects like this become the motivation.

                                                          1. 0

                                                            This is a pretty uncharitable reading. Indeed, we work together with all package maintainers that actually get in touch with us and want help. We have very good experiences with the Debian folks, for example.

                                                            I never said or implied that Rust’s developers or users have anything but good intentions. Bad citizen doesn’t mean that you don’t appreciate the problems, it means that the problems exist.

                                                            Like it or not, Rust doesn’t play well with system package management. Firefox, for example, requires the latest stable Rust to be compiled, which means that systems either need to upgrade their Rust regularly or not update Firefox regularly. Neither are good options. Upgrading Rust regularly means having to test and verify every Rust program in a distro’s repositories every 6 weeks, which becomes a bigger and bigger effort as more and more Rust packages are updated. What happens if one of them no longer works with newer stable Rusts? It’s not like Rust is committed to 100% backwards compatibility. And not upgrading Firefox means missing out on critical security fixes.

                                                            Rust needs a proper distinction between its package manager and its build system. Conflating them both into a Cargo-shaped amorphous blob is harmful.

                                                            LLVM is a major block in this, but there’s only so much we can do

                                                            Just don’t depend on LLVM in the first place. Most compiled languages have their own backends. It’s not like Rust has saved any real effort in the long term anyway, as they’re having to reimplement a whole lot of optimisation at the MIR level to avoid generating so much LLVM IR.

                                                            These moves are important, because no one will put an effort behind porting a programming language without some form of pain. Projects like this become the motivation.

                                                            That’s just arrogant imo.

                                                            1. 3

                                                              Just don’t depend on LLVM in the first place.

                                                              Because rust would definitely support more CPU architectures if they had to build out the entire backend themselves. Just like DMD and the Go reference implementation, both of which lack support for architectures like m68k (the one that caused so much drama at Debian to begin with) and embedded stuff like avr and pic.

                                                              I’d be more interested in a compile-to-C version, or a GCC version. Those might actually solve the problem.

                                                          2. 1

                                                            @milesrout could you point to some examples that back up your comments on librsvg being held back to the non-rust version on those platforms and perhaps some commentary / posts from package maintainers on the subject? Not agreeing or disagreeing here, I just want to get some insight from package maintainers before I form an opinion on the subject.


                                                            On the topic of compile times I agree that LLVM is a blessing and curse. Clang is my preferred C/C++ compiler, but I have noticed that compiling with -O2 or -O3 on projects even as small as ~10kloc takes substantial time when compared to a compiler such as TCC. Sure the generated machine code is much more performant, but I do not think the run-time performance is always worth the build-time cost (for my use cases at least).

                                                            I haven’t written enough Rust to know how this translates over to rustc, but I imagine that a lot of the same slowness in optimizing C++ template instantiations would appear with Rust generics.

                                                            1. 3

                                                              FYI (I’m familiar with the internals there) what you want for quick compiles is to not run -O2 or -O3.

                                                              Clang (and LLVM) emphasise speed very much, but do include some very expensive analysis and transformation code, and some even more expensive sanity checks (I’ve seen one weeklong compile). If you want quick compiles, don’t run the passes that do the slow work. TCC doesn’t run such extra slow things. If you want fast output (or various other things) rather than quick compiles, then do run the appropriate extra passes.

                                                              If you want both, you’re out of luck, because doing over 30 kinds of analysis isn’t quick.

                                                              1. 1

                                                                Hey @arnt, thanks for the reply.

                                                                I understand that -O2 and -O3 trade off compilation speed for additional optimization passes. Perhaps TCC was actually a bad example because it doesn’t have options for higher levels of optimization.

                                                                My issue is more that the actual time to perform such additional passes is extraordinarily high, period, and that if one wants to make meaningful changes to performance-sensitive code, the rebuild process leads to a less-than-stellar user experience. The LLVM team has done amazing work that has led to absolutely astounding performance in modern C/C++ code. There are much smarter people than I who have evaluated the trade-offs involved in the higher optimization levels, and I trust their judgement. It is just that for me the huge bump in compilation time from -O0 or -O1 to -O2, -O3, or -Ofast is painful, and I wish there was an easier path to getting middle-of-the-road performance for release builds with good compilation times.

                                                                1. 2

                                                                  You have performance-sensitive code in .h files, so very many files need to be recompiled? BTDT and I feel your pain.

                                                                  In my own compiler (not C++) I’m taking some care to minimise the problem by working on a more fine-grained level, to the degree that is possible, which is nonzero but…

                                                                  One of the major LLVM teams uses a shared build farm: A file is generally only recompiled by one person. The rest of the team will just get the compiled output, because the shared cache maps the input file to the correct output. This makes a lot of sense to me — large programs are typically maintained by large teams, so the solution matches the problem quite well.

                                                        1. 15

                                                          Basically Chrome becoming the even more evil version of IE.

                                                            I wonder if the issue is a de facto standard being the product of a huge company, or anything becoming a de facto standard at all.

                                                            What if people around the world start ditching Chrome for Firefox? Should we still be worried that Mozilla might turn on us in the near future?

                                                          Currently, as in the IE-era, I am forced to also use Chrome as certain websites just don’t load properly on Firefox.

                                                          1. 9

                                                              Can you help us and report which websites don’t work in Firefox at https://webcompat.com/ ? Just in case we don’t have it on file.

                                                            1. 1

                                                              For me mainly the Unity website and forums, really awful.

                                                            2. 10

                                                              Basically Chrome becoming the even more evil version of IE.

                                                              … which was sort of predictable because Microsoft (in the past, at least) was never an adtech company. Google’s existence hinges on its ability to monetise its users. That’s going to lead to way more user-hostile behaviour than a straightforward browser war.

                                                              1. 9

                                                                These concerns were raised when Chrome was launched.

                                                                1. 6

                                                                  Indeed. I find it somewhat disheartening when a well-informed, vocal minority of technologists predict doom, are ignored, and then a few years later the doom is in the news and everyone is diving for the fainting couches.

                                                                  Not that I burned a bunch of time and energy arguing against EME or anything, too (sigh).

                                                                  1. 8

                                                                    I think many technologists eagerly jumped on Chrome as soon as it was released… just as they jumped on Gmail.

                                                                    Even today, “Google does it” can be used as a sales pitch for all manner of products and processes.

                                                                    1. 3

                                                                      Even today, “Google does it” can be used as a sales pitch for all manner of products and processes.

                                                                      “No one ever got fired for buying IBM.”

                                                                    2. 2

                                                                      being well-informed and arguing online never makes a significant difference. maybe it’s enough to be able to say “i told you so,” but changing things requires a coordinated mass effort.

                                                                      1. 1

                                                                        Well, hindsight 20/20.

                                                                        You certainly cannot call them “well-informed”, as this is not based on information but rather on intuition.

                                                                        You cannot call them vocal: if their voices don’t reach even tech-savvy people, can we say they really said anything at all? (tree in the forest, lalala)

                                                                        And predicting doom is absolutely definitely not a good reason to listen to anyone as there are tons of those everywhere.

                                                                        Back in the day, the main push was to move people away from IE and that was done on an individual level by many. People were installing Firefox right away and touching IE only when they needed to update their system (focusing on the Windows lot).

                                                                        Then Firefox bloated up and Chrome was as light as a feather, running well even on an older computer (I remember how it felt to switch).

                                                                        So the tech-savvy people in the family would just install Chrome on the freshly installed machine instead of Firefox.

                                                                        In my experience, it was never about trusting Google or liking it as a company. They just made a better product and people voted with their installs.

                                                                        Same for Gmail and all the rest, they made good stuff, let’s not forget that.

                                                                        But now we need to trade more and more of our privacy to get less and less benefits, that’s why switching to Firefox is the best option right now.

                                                                        1. 1

                                                                          You certainly cannot call them “well-informed”, as this is not based on information but rather on intuition.

                                                                          Not at all. It’s not intuition to conclude that, if an ad-tech company produces a Web browser, it’s going to be used as a tool to monetize user behavior. That’s straightforward deduction.

                                                                          In my experience, it was never about trusting Google or liking it as a company. They just made a better product and people voted with their installs.

                                                                          Only if you consider a narrow definition of ‘better’. People did, initially. Now the needle is swinging back: people are increasingly valuing privacy, and casting a skeptical eye at Chrome and Android as a result.

                                                                    3. 4

                                                                      Please don’t assume malevolence.

                                                                      You can also explain the same thing by assuming something much less evil: 1. Google depends critically on users being able to reach itself and its advertising customers. 2. Google assumes that it has better technical skill than its adtech competitors.

                                                                      If you assume those, then if someone else dominates the browser development and deployment, and can somehow intercept or discriminate against Google customers, then that’s an existential risk for Google. Funding a neutral browser, or even two, would be an effective defense against that risk.

                                                                      I know nothing about why Google chose to fund Firefox and Chrome. But I like to look for explanations that don’t assume malevolence.

                                                                      1. 2

                                                                        i don’t think they assumed malevolence. just that google depends on monetizing its users, which you seem to agree with.

                                                                        1. 1

                                                                          I might be wrong, but I interpreted some of the upstream comments as saying “Google says that Chrome does … but it actually does …” which is accusing Google of lying, and in my book liars are malevolent.

                                                                          1. 1

                                                                            i’m not seeing what part of the upstream comments could be interpreted that way.

                                                                    4. 5

                                                                      The core difference with Mozilla is that it’s a non-profit organization, meaning that there is no ulterior motive as there is with companies like Google or MS. Running it as a non-profit means that it’s just a way to organize sustained, funded development around an open source project.

                                                                      1. 6

                                                                        So I like Firefox and Mozilla, but it’s important to note that almost all their income comes from adtech companies (almost all from Google, actually).

                                                                        When you search Google from Firefox, that adds to Mozilla revenue.

                                                                        1. 6

                                                                          So I like Firefox and Mozilla, but it’s important to note that almost all their income comes from adtech companies (almost all from Google, actually).

I often think about the Microsoft-Apple investment in the ’90s. Apple was in dire straits; there was a very real concern that it would cease to exist as a company.

Then Microsoft swooped in and invested $150 million when Apple was just weeks away from bankruptcy. The official reason was to encourage the use of new versions of Microsoft Office and IE on the Mac, but I often think it was really a hedge against future anti-monopoly actions against Microsoft: “see, we’re not a monopoly, here’s Apple…”

                                                                          I often wonder if Google does this with Mozilla.

                                                                          1. 2

Just guessing, but my guess is that Google paid Mozilla because people having access to a few fine web browsers is a requirement for Google. What would happen to Adwords if 1%, 10%, 20% of web users were to open a Facebook app instead of opening Facebook in a web browser, and that app were to shield those users from Adwords? Even funding a second or third independent web browser team might make sense, just for robustness: €100m/year is only 0.1% of Adwords.

                                                                            1. 1

I think one key difference is that Apple wasn’t compelled in any way (either internally or externally) to include Microsoft bits in its products, whereas Mozilla is (or feels) compelled to include Google tracking software in its products. IANAL, but it would seem the “we aren’t a monopoly, look at these folks” argument is weakened when the other folks (Mozilla) are essentially an extension of your (Google’s ad) core business.

                                                                              1. 2

                                                                                IIRC they were required to have MSIE as the default browser on Macs.

                                                                                1. 1

                                                                                  Oh! I didn’t realize that..

                                                                            2. 4

                                                                              Similarly, Apple gets billions from Google to keep it as the default search engine in Safari.

                                                                              1. 3

                                                                                That’s true, but the only way to change that is for more people to start donating to Mozilla so that they don’t have to rely on the likes of Google for revenue. I started donating last year myself because I think that Firefox is an incredibly important project. If Firefox goes away Chrome will be the only game in town, and the internet is just too important for Google to become the gatekeepers of.

                                                                                1. 2

I fully agree with your point about allowing Google to take over.

But I’m torn. On one hand I want Mozilla to succeed (for the reasons you stated), but on the other hand I don’t want to encourage a company that openly advertises ‘privacy’ in its products while including Google tracking software. If Mozilla were genuine about it (e.g. disabled Google tracking by default and made it opt-in), then I would definitely feel obligated to donate. But it’s hard for me to open my wallet and support a company that is misleading users, even if its survival is critical.

                                                                                  1. 3

                                                                                    On one hand I want Mozilla to succeed (for reasons you stated), […]

                                                                                    Me, too, but there are just so many of Mozilla’s decisions that cause me to go “wait … what?” whenever I’m revisiting the issue.

                                                                                    1. 3

I appreciate your idealistic view here. However, from a pragmatic perspective Mozilla is all we’ve got, and if they do go under we’ll all be worse off. So, while it’s not perfect, it’s still worth supporting, because the alternative is much worse.

The really important part here is that Firefox is pretty much the only widely used alternative to Chrome now. If it goes away, Chromium will effectively be the only game in town. That would allow Google to completely ignore the W3C and put whatever features and behaviors it wants into the browser. And browsers are so complex nowadays that creating an alternative implementation from the ground up would take a herculean effort.

                                                                                      Having at least two independent implementations of the browser engine ensures a minimal common standard is followed.

                                                                                      1. 1

But even if we ‘just have Mozilla’, Google controls their financial future, so Google could ‘kill’ Mozilla and take over now if it wanted to, right?

Well, there’s always webkit2. While not nearly as popular as Blink and Gecko anymore, there are some browsers that use it. Maybe it’s time for WebKit (in the form of the maintained webkit2) to make a resurgence!

                                                                                        1. 1

As I’ve already said, if we don’t want Google to control Mozilla’s future, then people need to start donating to Mozilla to make it financially independent. And I think we have the same problem with webkit2 as we do with Mozilla: the development has to get funded somehow. Browser engines are incredibly complex nowadays, and it takes a lot of effort to stay competitive.

                                                                              2. 3

                                                                                Currently, as in the IE-era, I am forced to also use Chrome as certain websites just don’t load properly on Firefox.

                                                                                I’ve experienced this on a few sites. My approach has been to a) find alternatives, and b) file detailed bug reports. In some cases (b) bears fruit; e.g. beautiful.ai recently let me know that they now fully support Firefox.

                                                                                So far, I’ve managed to avoid the situation where I have to use Chrome.

                                                                              1. 11

I’m very skeptical of the numbers. A fully charged iPhone has a battery capacity of 10-12 Wh (not kWh), depending on the model. You can download more than one GB without fully depleting the battery (in fact, way more than that). The 2.9 kWh per GB is totally crazy… Sure, there are towers and other elements to deliver the data to the phone. Still.

The referenced study doesn’t show those numbers, and even its estimate of 0.1 kWh/GB (page 6 of the study) takes into account a lot of old infrastructure. On the same page they talk about numbers from 2010, but even then the consumption for broadband was estimated at 0.08 kWh/GB, and the 2.9 kWh/GB figure was only for 3G access. Again, in 2010.

Taking that as the consumption for 2020 is totally unrealistic to me… it’s probably lower by a factor of at least 30. Of course, this number will keep going down as more efficient transfer technology is rolled out, which already seems to be happening, at an exponential rate.

                                                                                So don’t think that shaving a few kbytes here and there is going to make a significant change…
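If you want to sanity-check the orders of magnitude yourself, here is a quick back-of-the-envelope in TypeScript. The ~11 Wh battery figure and the 2.9 / 0.1 kWh-per-GB estimates are the numbers quoted above (and those estimates cover the whole delivery chain, not just the handset); the rest is plain unit conversion.

```typescript
// Back-of-the-envelope check of the kWh-per-GB figures quoted above.
const phoneBatteryWh = 11;     // typical iPhone battery, roughly 10-12 Wh
const estimate2010for3G = 2.9; // kWh per GB, the study's 2010 figure for 3G access
const estimate2020 = 0.1;      // kWh per GB, the study's 2020 estimate

// Expressed as "full phone charges per GB" to make the scale tangible:
console.log((estimate2010for3G * 1000) / phoneBatteryWh); // ≈ 264 charges per GB
console.log((estimate2020 * 1000) / phoneBatteryWh);      // ≈ 9 charges per GB

// Ratio between the 2010 3G figure and the 2020 estimate:
console.log(estimate2010for3G / estimate2020);            // ≈ 29x
```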

                                                                                1. 7

I don’t know whether the numbers are right or wrong, but I’m very happy with the alternative direction here, and with another pushback against the bloat that the web has become today.

It takes several seconds on my machine to load the website of my bank, a major national bank used by millions of folks in the US (Chase). I looked at the source, and it’s some sort of encoded (base64-style, not merely minified) JavaScript gibberish that appears to burn several seconds of my CPU time each time it runs. It makes the site and my whole browser unbearably slow, keeps triggering the slow-site warning, and often fails to work at all, requiring a reload of the whole page. (No, I haven’t restarted my browser in a while, and, yes, I do have a bunch of tabs open; but many other sites still work fine as-is, and Chase doesn’t.)

                                                                                  I’m kind of amazed how all these global warming people think it’s OK to waste so many of my CPU cycles on their useless fonts and megabytes of JavaScript on their websites to present a KB worth of text and an image or two. We need folks to start taking this seriously.

The biggest cost might not be the actual transmission, but rather the wasted cycles spent rerendering complex designs that add nothing to the user experience. Far from it: they make things slow for lots of people who don’t have the latest and greatest gadgets and don’t devote their whole machine to running a single website in a freshly reloaded browser. A side effect is that people need to upgrade their equipment on a regular basis, even though the amount of information they actually need to access (just a list of a few dozen transactions from their bank) hasn’t changed much over the years.

                                                                                  Someone should do some math on how much a popular bank contributes to global warming with its megabyte-sized website that requires several seconds of CPU cycles to see a few dozen transactions or make a payment. I’m pretty sure the number would be rather significant. Add to that the amount of wasted man-hours of folks having to wait several seconds for the pages to load. But mah design and front-end skillz!
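Nobody in this thread has the real figures, but just to show the shape of that math, here is a sketch with entirely made-up inputs (page loads per day, extra CPU time per load, incremental power draw); none of these are measured Chase numbers.

```typescript
// Entirely illustrative inputs: swap in real measurements if you ever get them.
const loadsPerDay = 10_000_000; // assumed daily page loads of the bank's site
const extraCpuSeconds = 5;      // assumed extra CPU time per load vs. a lean page
const extraWatts = 20;          // assumed incremental draw while the CPU is busy

const extraJoulesPerDay = loadsPerDay * extraCpuSeconds * extraWatts; // watt-seconds
console.log(extraJoulesPerDay / 3_600_000);         // ≈ 278 kWh per day on the client side
console.log((extraJoulesPerDay / 3_600_000) * 365); // ≈ 100,000 kWh per year

// The wasted human time is arguably the bigger number:
console.log((loadsPerDay * extraCpuSeconds) / 3600); // ≈ 13,900 person-hours per day
```

With these made-up inputs the client-side waste alone lands in the hundreds of kWh per day, before counting transmission or servers; the point is only that a few seconds of waste per visit multiplies quickly.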

                                                                                  1. 3

Chase’s website was one of two reasons I closed my credit card with them after 12 years. I was traveling and needed to dispute a charge, and it took tens of minutes of waiting for various pages to load on my smartphone (a Nexus 5X connected to a fast ISP via WiFi).

                                                                                    1. 2

The problem is that Chase and AmEx effectively have a duopoly on premium credit cards and travel rewards. It’s very easy to avoid them as banks otherwise, because credit unions often provide a much better product and still have outdated-enough websites that simply do the job without whistling at you all the time. But if you want to get the most out of your travel, dealing with the subpar, CPU-hungry websites of AmEx and Chase is often a requirement for getting certain things done.

                                                                                      (However, I did stop using Chase Ink for many of my actual business transactions, because the decline rate was unbearable, and Chase customer service leaves a lot to be desired.)

                                                                                      What’s upsetting is that with every single redesign, they make things worse, yet the majority of bloggers and reviewers only see the visual “improvements” in graphics, and completely ignore the functional and usability deficiencies and extra CPU requirements of each redesign.

                                                                                  2. 9

                                                                                    Sure, there are towers and other elements to deliver the data to the phone. Still.

                                                                                    Still what? If you’re trying to count the total amount of power required to deliver a GB, then it seems like you should count all the computers involved, not just the endpoint.

                                                                                    1. 4

“Still” as in: it’s still too big of a difference. Of course you’re right ;-)

The study estimates the consumption at 0.1 kWh/GB for 2020. The 2.9 kWh/GB is an estimate for 2010.

                                                                                      1. 2

I see these arguments all the time about the “accuracy” of which study’s predictions are “correct”, but keep in mind that these studies predict the average consumption for transport alone, and very old equipment is still in service in many, many places in the world; depending on where your data hops around, some requests could easily be hitting that equipment. An average includes plenty of outliers, and the average case may well be less common than the others. In any case, wireless is not the answer! We can start trusting numbers once someone develops the energy-usage equivalent of dig.

                                                                                      2. 3

                                                                                        Yes. Let’s count a couple.

I have a switch (an ordinary cheap switch) here that’ll receive and forward 8Gbps on 5W, so it can forward roughly 720,000 gigabytes per kWh, or about 0.0000014kWh/GB. That’s the power supply rating, so it’ll be higher than the peak power requirement, which in turn will be higher than the sustained draw, and big switches tend to be more efficient than this small one, so the real number may have another zero. Routers are like switches with respect to power (even big, fast routers tend to have low-power CPUs and do most routing in a switch-like way, since that’s how you get a long MTBF), so if you assume that the sender needs a third of that 0.1kWh/GB, the receiver a third, and the networking a third, then… dumdelidum… the average path between sender and receiver must contain over 20,000 routers and switches. This doesn’t make sense.

The numbers don’t make sense for servers either. Netflix recently announced getting ~200Gbps out of its new hardware. That’s about 25 GB/s per box, so at 0.03kWh/GB the sender’s share alone would be roughly 2,700kW sustained for that one machine. Have you ever seen such a power supply? A whole rack of those would need tens of megawatts.
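Spelling that arithmetic out (same assumptions as above: 0.1 kWh/GB split evenly between sender, network, and receiver; a 5 W switch forwarding 8 Gbps; one box pushing 200 Gbps):

```typescript
// Per-hop cost of a small 5 W switch forwarding 8 Gbps (= 1 GB/s).
const switchWatts = 5;
const switchGBps = 1; // 8 Gbps / 8 bits per byte
const kWhPerGBPerHop = switchWatts / switchGBps / 3_600_000;
console.log(kWhPerGBPerHop); // ≈ 0.0000014 kWh/GB per switch

// If the network's share is a third of 0.1 kWh/GB, how many such hops does that imply?
const networkShareKWhPerGB = 0.1 / 3;
console.log(networkShareKWhPerGB / kWhPerGBPerHop); // ≈ 24,000 hops per path

// Sender side: one box pushing 200 Gbps at 0.03 kWh/GB.
const senderGBps = 200 / 8; // 25 GB/s
const senderKWhPerGB = 0.03;
console.log(senderGBps * senderKWhPerGB * 3600); // ≈ 2,700 kW sustained
```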

                                                                                        1. 1

There was a study that laid out the numbers, but the link seems to have died recently. It stated that about 50% of the energy cost for data transfer was datacenter costs, with the rest spread out thinly over the network on the way to its destination. Note that datacenter costs don’t just cover the power supply for the server itself, but also all related power consumption such as cooling.

                                                                                          1. 2

ACEEE, 2012… I seem to remember reading that study… I think I read it when it was new, and when I multiplied its numbers by Google’s size and by a local ISP’s size, I found that both of them should have electricity bills far above 100% of their total revenue.

Anyway, if you change the composition that way, then there must be tens of thousands of routers/switches on the way, or else some of the switches must use vastly more energy than the ones I’ve dealt with.

And on the server side, >95% of the power must go towards auxiliary services. AIUI cooling isn’t the major auxiliary service; preparing data to transfer costs more than cooling. Netflix needs to encode films, Google needs to run Googlebot, et cetera. Everyone who transfers a lot must prepare data to transfer.

                                                                                      3. 4

                                                                                        I ran a server at Coloclue for a few years, and the pricing is based on power usage.

                                                                                        I stopped in 2013, but I checked my old invoices and monthly power usage fluctuated between 23.58kWh and 18.3kWh, with one outlier at 14kWh. That’s quite a difference! This is all on the same machine (little Supermicro Intel Atom 330) with the same system (FreeBSD).

This is from 2009-2014, and I can’t go back and correlate it with what the machine was doing, but fluctuating activity seems the most likely explanation? Would be interesting if I had better numbers on this.
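For a sense of what those invoice figures mean as continuous draw, here is the conversion, assuming ~30-day billing months (the assumption is the only thing added here):

```typescript
// Convert a monthly kWh figure into average continuous draw in watts (30-day month assumed).
const hoursPerMonth = 30 * 24;
const toAverageWatts = (kWhPerMonth: number): number => (kWhPerMonth * 1000) / hoursPerMonth;

console.log(toAverageWatts(23.58)); // ≈ 33 W
console.log(toAverageWatts(18.3));  // ≈ 25 W
console.log(toAverageWatts(14));    // ≈ 19 W
```

So the swing is roughly 8-14 W of average draw on a small Atom box, which is at least consistent with the fluctuating-activity explanation.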

                                                                                        1. 2

With you on the skeptic train: would love to see where this estimate comes from:

                                                                                          Let’s assume the average website receives about 10.000 unique visitors per month

It seems way too high. We’re probably looking at a Pareto distribution, and maybe my intuition is wrong, but I have the feeling that your average WordPress site sees far fewer visitors than that.

Very curious about this now; totally worth some more digging.
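To make the Pareto intuition concrete, here is a tiny simulation; the tail index and minimum are invented purely for illustration. The point is only that with a heavy-tailed distribution the arithmetic mean sits far above what a typical (median) site sees, so an “average visitors per month” figure says little about the site you are likely to land on.

```typescript
// Illustrative only: draw monthly visitor counts from a Pareto distribution
// via inverse-transform sampling. alpha and xMin are made-up parameters.
const alpha = 1.2; // tail index; heavier tail => bigger gap between mean and median
const xMin = 200;  // assumed minimum monthly visitors for the smallest sites

const sample = (): number => xMin / Math.pow(1 - Math.random(), 1 / alpha);

const sites = Array.from({ length: 100_000 }, sample).sort((a, b) => a - b);
const mean = sites.reduce((sum, x) => sum + x, 0) / sites.length;
const median = sites[sites.length / 2];

console.log(Math.round(median)); // ≈ 350: what a "typical" site sees
console.log(Math.round(mean));   // usually several times the median, dragged up by a few huge sites
```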

                                                                                        1. 1

Sadly the prefers-color-scheme CSS query isn’t too well supported (yet), so I opted for a simple JS snippet to toggle stylesheets instead. I wrote a little article about it a while back: https://timvisee.com/blog/dark-mode-toggle-on-static-website/
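Not the snippet from that article, just a minimal sketch of the general approach: honor prefers-color-scheme where it exists, and fall back to a manual toggle. The `dark` class, the `theme` storage key, and the `theme-toggle` button id are made up for the example.

```typescript
// Respect prefers-color-scheme when available, with a manual override stored in localStorage.
const query = window.matchMedia("(prefers-color-scheme: dark)");

function applyTheme(dark: boolean): void {
  document.documentElement.classList.toggle("dark", dark);
}

// Initial state: an explicit user choice wins, otherwise the OS/browser preference.
const stored = localStorage.getItem("theme");
applyTheme(stored !== null ? stored === "dark" : query.matches);

// Follow OS-level changes as long as the user hasn't chosen manually.
query.addEventListener("change", (e) => {
  if (localStorage.getItem("theme") === null) applyTheme(e.matches);
});

// Manual toggle, e.g. a button with id="theme-toggle".
document.getElementById("theme-toggle")?.addEventListener("click", () => {
  const dark = !document.documentElement.classList.contains("dark");
  localStorage.setItem("theme", dark ? "dark" : "light");
  applyTheme(dark);
});
```

Older browsers expose only addListener on the MediaQueryList object, so a real implementation might need to feature-detect that; it’s the kind of wrinkle that makes the pure-CSS route attractive once support catches up.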

                                                                                          1. 2

                                                                                            If you want to know whether your browser supports it or not I wrote this page some time ago: https://prefers-color-scheme.bejarano.io

                                                                                            1. 2

                                                                                              It’s supported by Firefox, Chrome and Edge. That’s like 99.999% of the browser market. :)

                                                                                              1. 1

Isn’t the browser market roughly 50% Android, 25% Windows, and 25% miscellaneous, with prefers-color-scheme working on only one of those “miscellaneous” varieties?

                                                                                                1. 1

                                                                                                  It really just depends on whether you’re including China or mobile browsers.

                                                                                                  1. 1

                                                                                                    Android and Windows aren’t browsers…

                                                                                                    Firefox, Chrome, Safari and Edge all support this.

                                                                                                    1. 1

                                                                                                      I tested Chrome earlier today, and found no way to actually use this functionality on that particular device. If it can’t be used, does it really work (GP’s word) or is it really supported (your word)?

The JavaScript-based alternative mentioned upthread delivers the functionality all the way to actual users, today. Personally, I chose to use prefers-color-scheme because it’s cleaner and the device support will get there eventually.
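For what it’s worth, that “supported vs. actually usable” distinction is checkable from script. A small sketch (matchMedia reports a media value of “not all” for queries the browser can’t parse):

```typescript
// Does the browser understand the media feature at all?
const dark = window.matchMedia("(prefers-color-scheme: dark)");
const light = window.matchMedia("(prefers-color-scheme: light)");
const parsed = dark.media !== "not all";

// Does a preference actually reach the browser right now?
// A browser can parse the query while the OS/device delivers no preference at all.
console.log({ parsed, dark: dark.matches, light: light.matches });
```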

                                                                                                      1. 1

https://caniuse.com/#feat=prefers-color-scheme: filtered by relative usage of tracked desktop browsers it’s at 79%, and Chrome is marked as supporting it (✔).

After that, it depends on how your device chooses to deliver your preference. It works fine for me on Windows, but, for example, I can’t imagine 99% of Linux systems managing to cobble together the sequence of inter-process interfaces needed to communicate a color-scheme preference to a web browser. I honestly don’t know about Mac OS. Obviously it’s a progressive enhancement anyway, so it’s not like we’re talking about CSS Grid or Flexbox.

                                                                                                2. 1

                                                                                                  caniuse reports it works on around 76% globally.

                                                                                                  1. 1

                                                                                                    If you look at how many browsers parse that CSS and handle it, the figure is 76%. If you look at how many users have a light/dark UI toggle and a browser that uses the toggle to handle the CSS, the figure is much lower.

                                                                                                    Still.