1. 1

    Huh, ::part became fully enabled in release Firefox almost exactly a year ago, wow. Meanwhile, Constructable (constructible? why does everyone disagree on spelling :D) Stylesheets are still behind a flag in nightly :(

    1. 1

      Looks like there’s been disagreement over the spelling. 🙂 I changed my spelling to match the one in the spec.

    1. 2

      Sad to see. I’m glad to have gotten around to visiting this before it closed. It’s a great demonstration of computer history, and amazing that they managed to keep all those old machines working.

      1. 4

        That’s very cool, thanks for sharing the nitty-gritty!

        I wasn’t aware of Chrome’s measureMemory(), that seems useful. Firefox has instilled in me a sort of skepticism towards IndexedDB, but otherwise it sounds like a neat solution to free up some memory.

        A bit tangential, but re: Svelte as the framework of choice — how did it pan out for a component like this? I’m curious if you found that the implementation flows naturally, or whether you had to jump through hoops or resort to its escape hatches (e.g. managing event listeners out-of-band).

        1. 3

          Thanks for the feedback! The more worrying Firefox IDB issue to me is that it’s totally broken in private browsing mode. But I’ve already worked around this in my application (i.e. show the user an error), so I’m a bit less concerned about dealing with it in the picker. Also hopefully Firefox will fix this someday. (If not, it’s a privacy leak! This is how news websites know you’re in private browsing! 😛)

          Svelte is a great framework for this kind of thing. The only awkward part was some of the issues with building it as a custom element, e.g. I had to build it as one single component for perf reasons, and the destroy logic actually has a memory leak unless you do some odd workarounds.
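
          For the curious, the destroy workaround is roughly this shape (a simplified sketch with stand-in names, not the actual picker code):

          // Wrap the Svelte 3 component in a custom element and destroy it explicitly.
          // EmojiPicker is a stand-in name; the real component is more involved.
          import EmojiPicker from './EmojiPicker.svelte'

          customElements.define('emoji-picker', class extends HTMLElement {
            connectedCallback () {
              this._component = new EmojiPicker({ target: this })
            }
            disconnectedCallback () {
              // without an explicit $destroy(), lingering listeners can keep
              // the detached element (and its subtree) alive
              this._component.$destroy()
              this._component = undefined
            }
          })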

          Also I’m pretty sure it could have a lower bundle size footprint if I built it fully custom, but frankly that would just be a lot of code, whereas with Svelte the source code is quite terse.

        1. 13

          “Quickly” using the JavaScript tooling is somewhat akin to watching one or two episodes of the 6th Game of Thrones season without watching anything else. You really need to watch seasons 1-5 to understand what’s going on (I stole this analogy from someone describing Ruby on Rails a few years ago).

          A few days ago I wanted to publish a little “progressive enhancement”-type script I’ve been using for years as a module, so other people can easily use it with WebPack and whatnot. That’s much easier said than done, as there is a lot of conflicting and confusing information out there. Publishing to npm isn’t too hard, but making it available as a module while keeping window.imgzoom for non-WebPack users proved to be hard; I’m not entirely sure what the “best” way to do this is, or what exactly I need to support; are ES2015 modules alone enough?

          Just searching “publish JavaScript module” and trying to make sense of it all is rather hard; it is for me anyway, as I’m joining half-way through season 6. I have no strong opinions on the tooling as such – I lack the experience to have an informed opinion – but it sure is confusing to “quickly” do something for people who are not heavily involved in it (i.e. me).

          Several people in the HN thread were quick to point out “but language X also has problems!” Aside from being a boring whataboutism, in my experience “quickly doing something” in other languages I’m not familiar with is typically quite a bit easier.

          1. 6

            This is absolutely a fair criticism. I’ve spent a lot of time in the JS ecosystem and I find that usually the best way is to completely ignore all the hype and just stick with seasons 1-2 of the show. Mostly I just assume the new shiny is unnecessary and move on with it. (Note though that I basically do JS on my own time, so if you’re in a company with people who try to keep up with all the new technology, this might not work out.)

            Another way to think of this is: the vast majority of the JavaScript ecosystem is composed of early adopters. Not being one will save you a lot of energy. There’s a difference between keeping up with the times and being an early adopter that jumps on unproven technology and gets burned/has to work to solve problems that didn’t exist in the old stack, but the JavaScript ecosystem is largely unaware of that difference.

            As an example, take your question: what format do I publish a module in? You’ll find lots of excited blog posts and documentation telling you to use Babel and ES6 modules and the new syntax, and here are the 5 different modes Babel has to compile ES6 module syntax to, and by the way you probably should be using TypeScript too. All of that is noise.

            As far as I know, the only really compelling reason to switch to ES6 module syntax is that “it’s in the spec and it’s the future,” and that is no reason at all. It doesn’t really have any practical benefit. Traditional Node modules that use require() are Good Enough™, and all the new stuff works with them because there’s so much existing code. ES6 definitely has lots of useful new syntax, and maybe it’ll benefit you to learn some of it. But don’t feel obligated to rewrite everything in the new syntax, because ES5 is Good Enough™. All you’re doing is creating a problem for yourself that has to be solved by learning how to use Babel.
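
            To be concrete, “boring” here means something as plain as this (a sketch with illustrative names):

            // math.js: a plain CommonJS module; runs in Node and in any bundler
            'use strict';
            exports.add = function add (a, b) {
              return a + b;
            };

            // consumer.js: plain old require(), statically analyzable
            var math = require('./math');
            console.log(math.add(2, 3)); // 5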

            tl;dr:

            • Ship “boring” ES5 code, maybe with some extra ES6 stuff sprinkled in if it actually helps you (but not too new, because then you’ll have to get Babel involved, and definitely not simply for the sake of it being new)
            • Ship traditional Node modules, because it’s the “lowest” common denominator, but honestly it isn’t really lower if you only look at practical benefits
            • If someone wants to use new module syntax or a new loader or whatever, that’s their problem, not yours. Everything should be compatible with require() anyway so why bother catering to someone else’s need for novelty? Let them figure out the problems they’ve created.

            edit: honestly I’m not sure if this will help you. But I hope it does! <3

            1. 2

              Thanks; I’m not really fussed about the “latest and greatest”, and would like a simple method which provides:

              1. Simply setting window.imgzoom = function() { [..] } if loaded via plain ol’ <script src="imgzoom.js">
              2. Exporting as a module if loaded via WebPack or whatnot, without polluting the global namespace.

              I think I got it correct now; but with the plethora of different systems and whatnot it’s hard to be sure (never mind that WebPack on its own is not exactly easy to “just get started” with).
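
              For reference, the classic UMD-style wrapper covers both cases; a simplified sketch (not necessarily exactly what imgzoom ships):

              // Simplified UMD-style wrapper: export as a CommonJS module when a
              // module system is present, otherwise fall back to window.imgzoom.
              (function (root, factory) {
                if (typeof module === 'object' && module.exports) {
                  module.exports = factory() // Node, WebPack, Browserify
                } else {
                  root.imgzoom = factory()   // plain <script src="imgzoom.js">
                }
              }(typeof self !== 'undefined' ? self : this, function () {
                return function imgzoom (img) {
                  // ... actual zooming logic ...
                }
              }))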

              1. 1

                Well the practical benefit of ES6 modules is shipping less code to your users so their page loads are faster.

                1. 4

                  How so? What does it get you that tree shaking or dead code elimination doesn’t? Those techniques have been standard in any frontend build pipeline for a long time - before ES6 modules IIRC. I know one of the supposed benefits of the syntax is that it makes these things easier but I mean… Browserify and Webpack parsed require()s out of the AST just fine before Harmony modules.

                  (If your goal is maximal page load speed, you already need to be using a build system to uglify your JS and, more importantly, bundle packages together - request-response cycles cost a lot unless you want to go all in on HTTP/2, which has its own issues…)

                  1. 2

                    Because ES6 imports are static, while CommonJS requires are dynamic and thus harder (if not impossible) to tree shake properly.

                    1. 1

                      Sorry for the late reply. If the problem is that require() is dynamic, then the solution is don’t use require() dynamically. I don’t use WebPack but I know Browserify has never supported require() calls that couldn’t be statically parsed. You just made sure not to do that, even if it would work at runtime in Node. This worked just fine before ES6 modules and did not require (ha!) new syntax.

                      1. 0

                        Well I don’t know much about Browserify but if it didn’t support something that Node did, that would make it a non-starter in my opinion.

                        1. 1

                          You should read my post again - you’re misunderstanding and missing the point. If you have, for example:

                          var foo = require('foo');
                          

                          Browserify will parse this just fine. That’s what I mean by “using require() statically”. On the other hand if you do something more dynamic like:

                          var modname = 'foo'; // This could be dynamically set conditionally, from config, etc.
                          var foo = require(modname);
                          

                          this will not work, because (at least IIUC) the problem of deciding which module the program is going to load is undecidable. (WebPack does not do this either, so is WebPack also a non-starter in your opinion? There’s a lot of misinformation about Browserify, but it can do everything WebPack can do.)

                          The main point that I was making though, which you ignored to focus on Browserify, is that the first example above is just as good as the new module syntax, and it does not require the entire ecosystem to move to brand-new syntax for no reason. It is a strict subset of already-widespread syntax/semantics, statically parseable, and worked just fine in Browserify and WebPack long before ES6 modules. It is, AFAIK, entirely semantically equivalent, meaning that there is no expressive power or ease of use gained or lost by moving to the new module syntax.

                          Of course, you can’t use the second example without some tricks. If you want to make your module compatible with module bundlers, don’t do things like the second example. You might have to rewrite the few modules that work like that so they’re compatible, but that’s not a valid argument, because the alternative is to rewrite every module. And for what?

                          New JavaScript syntax is not magical, and it’s not automatically good just because it’s new or because it’s in the core of the language. Languages are designed by regular people, and people make mistakes or have tunnel vision or any number of other things. ES6 modules create problems for the ecosystem and largely don’t do anything that require() didn’t already.

                          1. 1

                            I understood your point. My point is that ‘Just don’t do XYZ and you’ll be OK’ is basically the same as ‘You’re holding it wrong’. There is a reason a new syntax was introduced: it was to rule out any possibility of dynamic requires, no matter what tooling is being used. I agree with you that ES6 modules are not exactly a utopian solution, but they are a step forward for efficient bundling.

                            The reason I focused on Browserify is because you did–as an argument that the old require syntax would work statically as long as you used a tool which enforced using a dynamic syntax (require) statically. But like I said, that is a non-starter for a language ecosystem.

              2. 2

                I’ve found Pika Pack (https://github.com/pikapkg/pack) to be the best tool for “I just want to publish this bit of JavaScript and not think about CommonJS, ES modules, UMD, or the 5 billion other JS module formats”.

                Here’s an example library I published. You can see that it’s published in multiple formats, but the package.json configuration is pretty minimal. Pika isn’t perfect, but it’s better than writing your own Rollup/Webpack config.
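
                If it helps, the package.json config ends up along these lines (from memory, so double-check against the Pika docs):

                {
                  "name": "my-library",
                  "version": "1.0.0",
                  "@pika/pack": {
                    "pipeline": [
                      ["@pika/plugin-standard-pkg"],
                      ["@pika/plugin-build-node"],
                      ["@pika/plugin-build-web"]
                    ]
                  }
                }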

              1. 1

                Thank you for this very informative and helpful article :)

                Out of curiosity, did you use the memory tool in Firefox desktop or the profiler extension (https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler)?

                1. 2

                  Thanks! I used the built-in Memory tool in the DevTools. I haven’t tried the new Profiler yet for memory analysis.

                  1. 2
                    <button type="button" aria-pressed="false">
                      Unpressed
                    </button>
                    <button type="button" aria-pressed="true">
                      Pressed
                    </button>
                    

                    I didn’t know about the ARIA attributes; compare MDN.

                    But what’s the advantage of encoding a toggle button like this instead of as an <input type="checkbox">?

                    1. 2

                      I don’t know of any hard-and-fast rules around this, but I think they just have slightly different semantics, same as the visual representation of a checkbox compared to a toggle. The ARIA Practices document describes a mute button as a good example of a toggle button: http://w3c.github.io/aria-practices/#button

                      Incidentally there is also a proposal for a “switch” element, which would have its own slightly different semantics: https://github.com/tkent-google/std-switch/blob/master/README.md
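
                      As a sketch, the toggle-button pattern from that document boils down to flipping aria-pressed on activation:

                      // Minimal sketch of a toggle (e.g. mute) button:
                      // the pressed state lives in the aria-pressed attribute
                      const button = document.querySelector('button[aria-pressed]')
                      button.addEventListener('click', () => {
                        const pressed = button.getAttribute('aria-pressed') === 'true'
                        button.setAttribute('aria-pressed', String(!pressed))
                        // ...actually mute/unmute here
                      })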

                      1. 2

                        https://lobste.rs/s/yvs2xp/don_t_use_checkboxes_2007 pretty much answers why the author (and I) recommend not using checkboxes.

                        Though, most of the time, you’d probably be better off with radio buttons or a <select> menu.

                        1. 1

                          Hmm, but don’t those arguments apply equally to toggle buttons? I don’t see a fundamental difference between toggle buttons and checkboxes.

                      1. 10

                        There’s still one problem with SPAs in particular that doesn’t have a good solution: There’s no standard way to indicate to a screen reader that a new pseudo-page has loaded. An app can work around this with a hidden ARIA live region, but that doesn’t allow a screen reader to provide a standardized user experience whenever a new page loads, whether it was a real browser page load or not, and it also doesn’t allow the screen reader to do something smart like automatically read the content of the new page.

                        To see (or rather, hear) this in action with Pinafore, go to the Pinafore home page in a browser that’s not already logged in, and activate the “Log in” link. The screen reader says nothing.

                        I think the best solution to this would be a new DOM event that the SPA can fire. The browser should then translate that into the appropriate event in the host platform’s accessibility API.
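
                        For reference, the live-region workaround mentioned above looks roughly like this (simplified):

                        // Simplified sketch: a visually hidden live region whose text
                        // changes announce each pseudo-navigation to the screen reader
                        const announcer = document.createElement('div')
                        announcer.setAttribute('aria-live', 'assertive')
                        announcer.className = 'visually-hidden' // hidden via CSS, but still in the accessibility tree
                        document.body.appendChild(announcer)

                        function onRouteChange (pageTitle) {
                          announcer.textContent = pageTitle // e.g. the new pseudo-page's title
                        }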

                        1. 1

                          Thanks for the feedback! I agree that, absent any better solution, there should be a DOM event to announce page changes in SPAs. What’s a bit odd to me, though, is that screen readers don’t already do this when the URL changes? Maybe the History API is just not a reliable enough signal for an SPA page change.

                          1. 2

                            I tried that solution at my previous job, where I developed a Windows screen reader. It’s better than nothing, but not perfect. A couple of problems:

                            • If the URL changes before the SPA finishes loading and rendering the new page, the screen reader may start reading too soon. I experienced this sometimes on Medium.
                            • Some applications change the URL without doing the equivalent of a page load. Discourse’s implementation of infinite scroll is one example.

                            Furthermore, no other screen reader is watching for the URL to change, and the one that I developed at my previous job is hardly a market leader (which is partly why I no longer work there). Not naming names because I don’t want this comment to show up in searches, or search engine alerts, for that company or product.

                        1. 8

                          You shouldn’t have to enter an elite tech priesthood just to preserve your privacy, security, and safety online.

                          I disagree with this attitude. There’s nothing special or elite about basic infosec and use of devices. There’s no special magic to “don’t post personal information online unless you’re okay with bad people potentially knowing it.” There’s no one special trick to avoiding toxic online communities: don’t join, or leave.

                          Like, the author even hints at the problem in the preceding paragraph:

                          Does anyone under 21 actually care whether the code on their phone is open-source and whether, Stallman-style, they can dive into an Objective-C file and change something? Probably not many.

                          It seems to me that if we don’t raise people with the expectation that they learn some mastery over basic skills and technology, we do them (and ourselves, when they’re inevitably exploited) a great disservice. If we don’t stop the trend towards digital sharecropping we can’t be surprised when we end up with a bunch of serfs and slaves.

                          1. 16

                            I agree with you that a little tech literacy can go a long way. But I think the bar is far too high, especially for a “full tech vegan” lifestyle. One question I commonly ask myself is, “Could my mother do this? My grandfather? How about someone who has chosen not to work with computers for a living, and to spend their time on something else?”

                            I can drive a car without learning how the internal combustion engine works. I can eat food without knowing about pasteurization or food safety laws. Why is it that with computers, suddenly the only way I can be safe, secure, and private is to be an expert?

                            1. 8

                              One question I commonly ask myself is, “Could my mother do this? My grandfather?

                              My mother is nearly 85 years old, a retired history teacher, and a daily user of Red Hat Enterprise Linux — has been since it was first available, which is almost 20 years now.

                              She never really used Windows systems or PCs outside of exposure to them at work.

                              She had a Cromemco back in the 1980s (I had a Commodore then) but otherwise she always used the computers I either handed down to her or bought for her use - I recall she used a SPARCstation-1 from (around) late 1990 until about 1993, a NeXTstation until 1998 or 1999, and began to use Linux after that, so she’s been using UNIX as an end-user for about 30 years.

                              She was always just an end-user and was never interested in programming or chasing the latest technology; that was always me, back then.

                              Every now and then, she will email me for help with something, and I’m surprised at what she’s doing. She doesn’t have a deep understanding of the inner workings or theory of how the computer works, but when it comes to software, UNIX is all she knows and all she’s ever known.

                              She does use Facebook, but only on an older iPad dedicated to just that task, and she is very skeptical of providing any information, posting pictures, etc.

                              When she first signed up she provided a false name, address, DOB, etc, and only recently updated it to real info - this, she explained to me, was because she wanted to retire “in peace” and not have to feel obligated to respond to former students and coworkers.

                              I guess the point is that I have more faith in mothers and grandmothers than I do in millennials and Gen Z kids.

                              We are now at the point where it’s our grandmothers and grandfathers (or our parents) who are the ones who grew up during the computer revolution. They were the ones working on mainframes and minicomputers in school in the 1960s and witnessing the entire computer and Internet revolution from then through today.

                              “Kids” on the other hand, don’t know a world without smartphones and the Internet, and never experienced the progression of the technology or used the older systems.

                              A 24-year-old college student today doesn’t know a world without modern PCs - when they were babies in the cradle, we were using Pentium-class computers running Windows 95 with TCP/IP and connecting to the Internet, and using web browsers. They grew up in a world where sharing information with online services was the norm.

                              I worry a lot more for the newest generations than the older ones - the average grandmother or mother has better basic computer literacy and information hygiene than their grandkids.

                              To so many kids, computers and online services are just magical boxes.

                              1. 6

                                I agree with you that a little tech literacy can go a long way.

                                I think a really good starting point for “non-technical people” would be “tech vegetarianism” – that is, a watered down version of “tech veganism”.

                                A good starting point might be deleting Facebook, since they’ve demonstrated beyond any doubt that user abuse is in their DNA, they’re not going to meaningfully stop the abuse, and they have virtually no moral compass to speak of.

                                So I guess I’d recommend tech-savvy folks encourage non-technical folks to start their journey by deleting Facebook.

                                1. 6

                                  Be glad that you can trust the consumer protection laws in your country to keep your food and your car above some minimum safety threshold, but don’t just take them for granted. They have evolved over many decades of genuine struggle in courts and legislatures. This is still early days for information technology in the general public.

                                  1. 2

                                    And even with those laws, if somebody gets sick from eating raw hamburger meat or eggs without cooking them, our reaction is not to call up every cook in the world and demand that we make eating still more foolproof!

                                    1. 3

                                      In many jurisdictions in the US, it’s illegal to sell raw, unpasteurized milk, for food safety reasons. Since there are some people who deliberately want raw milk, a black market in raw milk has sprung up. A black market, of course, means an illegal market, and the only reason the police haven’t (yet) shut this down and thrown the people involved into the criminal justice system is because no one cares all that much about enforcing raw milk laws and the cops don’t want to harass the people who participate in that market for other reasons, using the milk thing as a pretext.

                                      I don’t personally care about drinking raw milk myself, and I do think that it’s good that people in the west can in general assume that the food they buy is safe, but I am opposed to laws that prevent people who do want to drink raw milk from doing so.

                                      1. 5

                                        They do it openly, too, with that being a local example. The trick is they say it’s “for pets only.” Sure, they’re all buying $11 a gallon milk for their cats. Haha.

                                        On a serious note, there’s lots of info popping up connecting gut bacteria to preventing or causing various conditions. Raw milk might end up being beneficial, harmful, or some mix. Just saying I’d rather it be on the market in case there’s potential benefit. Worst case, have sellers inform people of the low risks and have buyers sign a waiver. Then, uh, be careful about the supplier.

                                        1. 4

                                          It seems there was some progress being made toward allowing these types of raw-milk cheese products to be made (or imported and sold here), but momentum seems to have stalled and such French staples are still illegal black-market goods in the US.

                                          Interestingly, the artisanal and traditionally produced “high-risk” food products are often safer than the same product mass produced, due to the method of production.

                                          For example, here in the South, fresh-squeezed (unpasteurized) orange juice is a staple, but purchasing it from someone else is often either outright illegal or, where legal, highly regulated (with alcohol- or cigarette-style warning labeling), due to the risk of death and illness from contamination.

                                          This isn’t theoretical.

                                          (But, yes, you can taste the difference between pasteurized and fresh squeezed juice. Blind tastings at my house!)

                                          At the risk of oversimplification, if you squeeze a glass of orange juice at home, you’ll wash the fruit and squeeze a glass worth, and then (hopefully!) clean the juicer and the glass.

                                          In a factory setting, you are rinsing off, squeezing, and storing the juice (of tens of thousands to millions of oranges) on an industrial scale, and any bacteria on even a few of the oranges might contaminate entire vats of product. You are then bottling, transporting, and disseminating this (potentially contaminated) product. Refrigeration doesn’t kill the bacteria either (though it does make it grow more slowly, depending on the type).

                                          There are analogies to software and “tech veganism” here. When all your data is with one provider (Facebook for example) or distributed but still a monoculture of software and hardware implementations, then it only takes one crack in the defenses for a potentially devastating breach.

                                          Of course, in this analogy to handmade traditional products, the individual instances/implementations are still subject to the same threats, but the risk of catastrophic loss is spread out so it’s only affecting individuals or small groups (vs. everyone).

                                          This is where the debate comes in!

                                          Does this mean that everyone needs to be “an expert” in best practices to meaningfully protect their federated instances from mass exploitation, or will such best practices eventually become a natural part of the process of implementation?

                                          Does the distributed nature of millions of decentralized federated instances make your data safer from actual exploitation than with a single large, usually corporate, “expert entity”? And we haven’t even contemplated that the “expert entity” is one that may or may not align with your morals and values.

                                          I wish there was more literature that explored these lines of inquiry.

                                          Edit: About the fresh orange juice example and traditional vs. factory methods of production, this is why you can legally buy fresh squeezed, unpasteurized, ‘raw’ orange juice by the glass from a roadside stand or farmers’ market but not in the form of a bottle on a store shelf.

                                          1. 2

                                            Re cheese

                                            I’ve seen cheese at my local grocer that said it was made from raw milk. It was mild cheddar, though. We buy sharp or just more interesting milds like gouda. It got discontinued due to nobody buying it. A lot of the fancy cheeses get marked down, though. That’s how I get real parmesan. ;)

                                            Re OJ

                                            Don’t forget the flavor packs from perfume companies and stuff. Most OJ companies admitted to using them. So, there’s definitely a taste difference if they add flavor back in using chemists.

                                            The organic ones are usually stored frozen. Better comparison. I’d still wonder if the difference was pasteurization, freezing, or both.

                                            EDIT: We’re looking at about 100 deaths that might be juice-related, looking at three of your links. I know what I’m supposed to say but… uh… just 100 out of 300+ million a year to make all our juice taste better? Something similar for dairy products? Statistically speaking, it doesn’t look that risky. ;)

                                            1. 1

                                              I took a quick look, and it seems that US states may allow the sale of raw milk products, but at the federal level, the FDA bans all interstate sale or distribution of raw milk products, so the importation of my cheese is illegal. I guess the raw milk cheese is fine in your state, as long as it never moves across state lines.

                                              Here in Florida, raw milk products can be sold when labeled as “pet food” and not for human consumption - but that doesn’t help me get my cheese from France.

                                              I’ve never seen super fancy brie cheese marketed to cats - at least not yet!

                                              I’m going to end this here since I can’t imagine a way to bring this back on topic. ;-)

                                              1. 1

                                                I just found some online at Whole Foods. I agree about the thread; I’ll send it privately.

                                        2. 4

                                          The very first time I ever actually used cryptocurrency, it was to buy illegal cheese from France on a Tor-based black market website.

                                        3. 2

                                          I think this is sort of a poor analogy, if only because I eat raw eggs every morning. The rate of salmonella in eggs is known to be somewhere between 1:20000 and 1:50000.

                                          Worst case, if you eat a raw egg every weekday from the age of 18 on, eating 260 raw eggs a year, you could reasonably expect to be exposed to salmonella maybe once in your lifetime; and not every exposure will cause illness.

                                          Eating raw eggs is exceedingly safe - and eating pasteurized raw eggs has essentially no risk at all and is sanctioned by the US government.

                                          Raw ground beef is commonly consumed with raw eggs, and is quite tasty! I’ve been eating raw foods essentially my entire life.

                                          The risk of adverse effects from poor data hygiene and subsequent data exposure is much higher than the risk of adverse effects from eating raw foods, and potentially much more damaging to your quality of life, yet people are paranoid about their food but careless with their data!

                                        4. 2

                                          And user education from many of us playing watchdog. These companies try to pull stuff non-stop. Especially trying to redefine artificial or questionable stuff as “natural.” Or just hide ingredients or make them non-obvious to the consumer. The pink slime situation was a nice example.

                                        5. 5

                                          Why is it that with computers, suddenly the only way I can be safe, secure, and private is to be an expert?

                                          That’s emphatically not the case, though.

                                          If you want to be private, don’t put personal information on the internet. This was how we handled things in the chat/BBS days…you don’t put up private info unless you’re damned sure what’s going to happen to it. If you want to be secure, use easy-to-remember long passphrases and don’t re-use them across services.

                                          None of those things requires anything other than a healthy suspicion of a magical box and a willingness to ask “okay, but what if X should happen?” and to work through the consequences. People don’t need to be compiler designers, system engineers, programmers, or even particularly technical.

                                          This meme that this is somehow complicated, or solely the ken of experts, absolves users of the responsibility of learning and us of the responsibility of teaching.

                                          1. 1

                                            Why is it that with computers, suddenly the only way I can be safe, secure, and private is to be an expert?

                                            The first part is user demand: people almost never buy stuff that’s actually secure when it’s available, due to its tradeoffs or their own apathy. Second, there are no regulations preventing, and no liability for, suppliers damaging customers with preventable vulnerabilities. So, everyone makes things insecure by design, externalizing the problems onto others. Then, the things people use often interact with each other in ways that create even more problems. The result is a massive pile of externalized problems that each person or group of them must deal with until they address the root cause.

                                            For proof: the market immediately started producing secure systems after TCSEC was implemented, and safer software after the DO-178B standard for aerospace kicked in. With TCSEC withdrawn, suppliers went right back to the insecure stuff the market was buying the most. The DO-178B standard stayed, got updated to DO-178C, and the market continued supplying both certified components for cost reductions and tools to make software safety easier (esp. static analysis and test generators). We just need something like that for general, commercial software, with a minimum set of practices that make sense.

                                            1. 2

                                              I’d argue that (DO-178) life-critical and mission-critical avionics and aerospace software is a special market segment, due to the very high stakes of failure.

                                              There is a distinct difference when we are talking about security from the standpoint of this “tech veganism” discussion — we are referring to the likes of Twitter and Google and Facebook and social media companies and information aggregators.

                                              When it comes to motivations, these companies have a motivation first and foremost to their shareholders and investors, by selling their product, which is, when you distill it, the personal information of their “users” (or the product of user surveillance). The shareholders and the advertisers are the customers and the users are the product.

                                              I’d argue that the only reason they care about the safety of “user” data at all is to maintain their position of obtaining a continuous stream of it to sell.

                                              They don’t want to lose it all in a breach, and they don’t want “users” to stop “giving” them this data to sell. “Users” aren’t “giving” their data for free either - the cost to the company to “buy” personal data is the expense of research, development, and maintenance of the end-user (‘free’) services used by these “users”, and the internal surveillance and analysis frameworks they use to distill consumer interactions with these services into a product for sale.

                                              If, breach after breach, the “users” keep coming back, and all they have to do is apologize and not actually change anything, it follows that they shouldn’t invest in better security, because it’s simply not needed — not until users begin to change their behavior. Words without action have essentially zero cost.

                                              (Edit: “Wasting” money on better security for the users when it’s demonstrably not needed to continue the business - especially when excuses suffice - and not directly translatable to profitability could even be considered mismanagement, or worse, criminal behavior - defrauding the investors. They are, after all, the highest priority. This lack of caring about end-user/consumer privacy and security isn’t corporate apathy - it’s calculated and intentional.)

                                              Once the consumer begins to consider their privacy as mission-critical, then we will see changes, but until then, I’d argue that no standards or regulations will have meaningful effect.

                                              You’ll never even get to the point of passing meaningful, legally binding regulation in the first place when consumers are apathetic about privacy and security, or prioritize cost.

                                          2. 6

                                            Reading Objective-C is not a basic skill any more than disassembling and reassembling an automobile engine and having it still work is a basic skill of operating a car. Not saying that there can’t be some expected skill in operating a computer, but there’s a whole range of skill sets between uninformed button-mashing and being able to read and comprehend source code. End-user autonomy really doesn’t have anything to do with the availability of source code.

                                            1. 4

                                              End-user autonomy really doesn’t have anything to do with the availability of source code.

                                              One of the things that makes the GPL special is it actually tries to address this: not only is the source code available, but you - the end user - are also able to shop around for modified versions other people made, so you don’t have to yourself.

                                              I’d argue the freest aspect of the GPL is not the source available to developers, any source-available license does that, but rather forks being available to end users so they can find less offensive versions.

                                              (I understand that in practice actually finding and evaluating other versions is easier said than done, but still, the benefit of it does go beyond people who can modify code themselves.)

                                          1. 18

                                            The original WebKit blog post is worth reading: https://webkit.org/blog/8821/link-click-analytics-and-privacy/

                                            Disabling ping does not make your browser more private. It just spurs websites to switch to methods that are less user-friendly but more reliable (and undetectable by the UA).

                                            Arguably UAs could disable ping but make it undetectable by JavaScript. My hunch though is that websites would just UA-sniff that particular browser and always do the user-unfriendly method, which would be much worse because then that browser would be guaranteed a worse experience, permanently. Then users would complain that it’s “slow” and switch to the “fast” browser.

                                            In short, there are lots of different considerations that a browser needs to weigh when making a decision like this, and there are no easy answers. Browsers and websites are caught in an eternal cat-and-mouse game over things like privacy and performance.
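
                                              To make the tradeoff concrete, these two links are roughly equivalent from the site’s point of view (URLs are illustrative):

                                              <!-- Declarative: the UA fires a background POST to the ping URL on click -->
                                              <a href="https://example.com/article"
                                                 ping="https://tracker.example/click">Read the article</a>

                                              <!-- The fallback sites switch to: a synchronous redirect hop, which is
                                                   slower for the user and undetectable as analytics by the UA -->
                                              <a href="https://tracker.example/click?dest=https%3A%2F%2Fexample.com%2Farticle">Read the article</a>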

                                            1. 4

                                                Why not let publishers that are unfriendly to users’ privacy deliver bad user experiences? I don’t think a browser that cares about privacy should be providing features to let bad actors be bad actors more efficiently. Let them destroy UX and reap the consequences, however minor. This strikes me as giving up and sweeping the problem under the rug.

                                              The market share preserving argument doesn’t hold because if in order to retain market share you have to do the same stuff that browsers that don’t respect privacy do, well then you’ve joined the race to the bottom.

                                              1. 3

                                                  From a game-theoretic analysis, the current situation of sites using redirects is a Nash equilibrium: https://en.wikipedia.org/wiki/Nash_equilibrium

                                                                        sites: ping | sites: redirect
                                                  browsers: ping             (1, 2) |          (0, 1)
                                                  browsers: no ping          (2, 0) |          (0, 1)

                                                  (payoffs listed as: browser payoff, site payoff)

                                                  The browser/user benefits most from the bottom left corner; however, if they choose the “no ping” strategy, then the sites are incentivized to switch to redirects, which improves the sites’ payoff but gives the worst payoff to the users. The sites’ best payoff is the top left corner; the ping is faster and gets them the data they want, but the user would prefer to have their privacy preserved by turning off pings.

                                                Having the browsers implement pings is an attempt to avoid a prisoners’ dilemma-type situation where everyone is worse off (the bottom right quadrant); anticipating the responses to your own moves is just playing smart.

                                            1. 1

                                                    WordPress.com, since 2011. It’s fine for my needs, no big complaints.

                                              1. 1

                                                Thanks for sharing. And thanks for paying attention to accessibility.

                                                Speaking of which, I wonder if the buttons to scroll to particular items should be hidden from assistive technologies with aria-hidden. A blind person using a screen reader would probably regard them as just clutter. But I wonder if having them in the accessibility tree is useful for other ATs.

                                                1. 1

                                                  That’s a good question. 🙂 I tested the Pinafore implementation with VoiceOver, and I can certainly navigate the horizontal list the same as I would any horizontal list, so the buttons are not strictly necessary.

                                                  I noticed though that, despite the snap points, VoiceOver doesn’t focus the image to the center when you navigate with Ctrl-Option-Left/Right. So for someone with low vision, the buttons may still be useful, because they correctly “snap” to the right offset. (Ditto for the left/right keyboard shortcuts.)

                                                  But at the very least, the buttons are after the list in the tree, so users can ignore them if they want.

                                                1. 3

                                                  I’m at https://toot.cafe/@nolan, mostly talking about web dev and OSS. I also created https://pinafore.social which is a web-based Mastodon client; I toot releases from https://mastodon.technology/@pinafore/

                                                  1. 4

                                                    Warning:

                                                    This trick can break fragment scrolling (like https://nolanlawson.com/2018/11/18/scrolling-the-main-document-is-better-for-performance-accessibility-and-usability/#respond, though this page doesn’t have a fixed header, so it doesn’t manifest the problem). Sure, it’ll still scroll to it, but the fixed header will end up covering up the top of the section that you scrolled to.

                                                    1. 6

                                                              A thing that I’ve seen succeed pretty well before is to make the page anchor which is intended to be the target of a URL fragment start one fixed-header-height higher than where it looks like it starts, i.e. each fragment target has a hidden invisible bit that pokes up by a fixed header’s height in order to cancel out the fixed header when scrolling to it.
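
                                                              Something like this, say with a 60px fixed header (a sketch):

                                                              <style>
                                                                /* Each fragment target gets an invisible spacer that pokes up by the
                                                                   fixed header's height, so scrolling to the anchor lands the content
                                                                   just below the header instead of underneath it */
                                                                .anchor-target::before {
                                                                  content: '';
                                                                  display: block;
                                                                  height: 60px;      /* fixed header height */
                                                                  margin-top: -60px; /* cancel it out so layout is unchanged */
                                                                  visibility: hidden;
                                                                  pointer-events: none;
                                                                }
                                                              </style>
                                                              <h2 id="respond" class="anchor-target">Leave a comment</h2>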

                                                      TBH fixed headers are kind of… ewww? I suppose things like floating table headers exist as benign examples, but these days the majority of the fixed onscreen elements I see on a day to day basis on sites are dickbars.

                                                      (Aside: isn’t it just atrocious how ugly Medium is to look at? especially for a site which claims to be all about design and typography.)

                                                      1. 1

                                                        Want to know what’s really annoying? Medium’s top bar switches to “Chrome-for-Android-style” auto-hiding, and the bottom bar goes away entirely, when you are logged in. They are persistent if you are logged out.

                                                        They are trying to incentivize you to make an account. Which, now that I think about it, is a dirty tactic and I should probably delete my Medium account now that I know about it.

                                                        1. 4

                                                          I didn’t know this because it has never, ever for even one nanosecond occurred to me to actually make a Medium account. I’d rather step on a Lego brick barefoot.

                                                      2. 3

                                                        Good point, yes. In fact this is true for that link if you’re logged in to WordPress, because WordPress uses a fixed header. (I always wondered why that happened…)

                                                        In my own app this wasn’t such a big deal, but it did make element.scrollIntoView() a bit challenging. I ultimately had to call scrollIntoView(true) and then immediately update the scrollTop to account for the height of the nav header. Not unworkable, but not ideal either.
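
                                                              i.e. something like this (simplified; headerHeight stands for the fixed nav’s pixel height):

                                                              // Simplified: snap the element into view, then nudge the scroll
                                                              // position so the fixed nav header doesn't cover the element
                                                              element.scrollIntoView(true)
                                                              document.scrollingElement.scrollTop -= headerHeight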

                                                      1. 7

                                                        This also fixes tap top of screen to return to top, which breaks when content is scrolled inside an element. (Glaring at you, amp.)

                                                        1. 3

                                                          Great point, yes, this also improves usability on iOS. :) I added a note to the bottom of the post.