Threads for Seirdy

  1. 7

    This article has it backwards. The user controls their device, and how they want to view content, not the author of the website. To put it another way, the device and app are a user-agent, not a website-agent. The author of the website has no more right to demand that I view their page in a “normal” browser than they have to demand that I do jumping jacks while reading their website, or that I don’t use an adblocker (which is to say that they could make it a contractual requirement, but they would have to get me to sign that contract before giving me the content, and they have no right to impinge on everyone’s devices to ensure that people are following their strange contract).

    There is something to be a bit concerned about here, but it’s not that apps are somehow unfairly hurting websites. It’s that apps are abusing their position as a source of links to other content, coercing users into viewing that content through them when the users might prefer another user agent. Probably the appropriate medium for resolving this is regulation - since Google does this themselves, it seems unlikely to be resolved by the platform simply deciding to ban such apps from its app store.

    1. 2

      I have the freedom to set the terms on which I will offer access to a website of mine.

      If you do not like those terms, you may reject them and not access it. If you reject them and then attempt to access it anyway, you are the one violating my freedom: the freedom to decide how I will run my site and by whom and on which terms it will be accessed.

      1. 4

        The Web is not built around advance informed consent; there’s no agreement to terms before downloading a public file (besides basic protocol negotiations). This is one reason why “by using this site, you agree to our cookies, privacy policy, kidney harvesting, etc” notices won’t fly under the GDPR.

        A website admin can’t set terms for downloading a linked document; the user-agent just makes a request and the server works with that data to deny or accept it. There’s no obligation for the UA to be honest or accurate.

        Ultimately, nobody is forcing you to run a Web server; however, plenty of people have to use the Web. Respect for the UA is part of the agreement you make when joining a UA-centric network.

        Should you disagree with the precedent set by the HTML Living Standard, nearly every Web Accessibility Initiative standard (users must be able to override and replace stylesheets, colors, distracting elements), the exceptions to e.g. the Content Security Policy in Webappsec standards to allow UA-initiated script injection, etc.: you’re always free to build your own alternative to the Web with your own server-centric standards.

        POSSE note from https://seirdy.one/notes/2022/08/12/user-agents-set-the-terms/

        1. 2

          Who said anything about advance consent? I can put up a splash page laying out terms and tell you to either accept them and continue, or reject them and leave. Or I can login-wall things. And if you try to work around it and access anyway, I have every right to use both technical and legal-system measures to try to prevent you, or to hold you accountable afterward for the violation.

          Or plenty of other low-level tricks and techniques are fair game, too; for example, I believe Jamie Zawinski at least used to (I don’t know if he still does) serve a famous obscene image to any inbound request with a referer from Hacker News.

          But before you go too far into citing standards and accessibility at me, do keep in mind that what we’re discussing here is whether sites should be able to object to Instagram literally MITM’ing users and injecting potentially malicious script. And the original parent comment’s suggestion of regulating this away is actually contradictory to the absolutist “browser is a user agent” moral stance, since that stance requires rejection of any imposed limitation on what the “user agent” may do. After all, some person out there might actively want an “agent” to MITM and inject Instagram trackers for them, so banning the practice by law is as hostile to user freedom as is any technical measure which attempts to prevent it.

          Also, the absolutist “user agent” stance is still hostile to the freedom of a site owner to decide who to offer access to, as I originally pointed out, and that has nothing to do with accessibility or usability or any of the other things you tried to steer the argument off-topic to. If I want to make a secret online club and decide who I do and don’t let in and on what terms, I can do that and you don’t get to tell me otherwise.

          1. 1

            And the original parent comment’s suggestion of regulating this away is actually contradictory to the absolutist “browser is a user agent” moral stance, since that stance requires rejection of any imposed limitation on what the “user agent” may do

            It does not. There are all sorts of restrictions on what one may make available to consumers, whether that’s baby toys covered in lead, or products that abuse their monopoly position to gain monopolies in other unrelated markets (anti-trust law, which is the closest analogy to the regulation I proposed, IMO).

            It merely means that such restrictions should be made to benefit the user, not some third party with no rights to the user’s device whatsoever.

            hostile to the freedom of a site owner to decide who to offer access to

            The site owner has the freedom to do whatever he likes, such as your examples of serving the user with a contractual agreement that they must accept before the site owner serves them the actual content. The site owner has no right to have every user attempting to access his site (prior to agreeing to any contract) do so in any particular manner, though; it is up to him not to give content away to people who come asking for it if he wants to require them to agree to contractual limitations before they get the content.

            1. 1

              So if a government were to pass an enforceable law saying that any site which sends an X-Frame-Options header with a “deny” value must be opened in the user’s default browser rather than an app-embedded one, would you be OK with that? There are user-centric reasons for doing so, after all, so it would be a law with benefit to the user.

              But it’s also exactly the thing you previously attacked.

              1. 1

                No. Nor did I say so. Rather I have continuously been attacking that idea, and will continue to do so below.

                It is in the users’ interest to be able to view websites however they want. Rather, your suggestion would be the government gifting control over how users view documents on devices they lawfully own (the actual instance of the bits, not the copyright, same as owning a book) to website owners. To the extent that there is user harm resulting from the current app ecosystem, it is extremely minimal compared to the utterly draconian measure you are proposing.

                Moreover, there is the much less invasive, well-tested and well-understood method of requiring that users be given the choice of how to open links (see, for example, the similar laws for payment providers that are cropping up, and the much older consent decree related to Internet Explorer). I don’t think the harm is great enough that the government necessarily even needs to do something about this, but I wouldn’t mind if it did, because that would really just be a slight extension of existing anti-trust law and, unlike your suggestion, does minimal harm to users’ freedom to use their devices how they want.

                Edit:

                I think you generally misunderstand the nature of the relationships here. In order from “who should have the most control over how the content is viewed” to “who should have the least control”, it goes

                User > Creator of The App that the User chose to install on their device and view the website in > Website Owner

                Not as you seem to have it, Website Owner > User > App Creator, or even the unreasonably charitable reading of your posts of User > Website Owner > App Creator.

                The website owner bears no special relationship to the user, is not trusted, and did nothing but supply some data to which they no longer have any relevant rights once the transfer is complete (they continue to own the copyright if they did in the first place, but nothing restricted by copyright is being done to the data). The app creator supplied software that the user chose to run on their device, in a relatively privileged manner, and is far more trusted to act in the user’s interest.

                1. 1

                  I want to be absolutely crystal clear here. I posed a hypothetical where sending a certain header would be required to use “the user’s default browser rather than an app-embedded one”, and your description of this is “utterly draconian”.

                  How, exactly, is it “utterly draconian” to use the user’s default browser?

                  1. 1

                    Because your hypothetical has just given website owners the ability to legally require that users view their website only through their default browser, when they have absolutely no right to demand users do anything of the sort.

                    It has decided that users aren’t entitled to view news articles in their news app and social media sites in their social media app.

                    It has made it next to impossible to make a huge variety of tools from simple ones like curl and youtube-dl to complex ones like citation managers and privacy respecting replacement apps for YouTube and Facebook without either the cooperation of website owners or breaking the law.

                    It is fundamentally seizing a fairly significant degree of control of the device from the users, and handing it to the people who serve the content.

                    Maybe you only have the users’ best interests at heart (I sort of doubt it, given that we’re discussing this under an article whose whole premise is that users aren’t entitled to view websites how they choose because it violates some supposed right of the website owners), but the policy you’re proposing is not going to be used only for good.

                    1. 1

                      This is an inconsistent position, though. The status quo is user-hostile. Any technical solution would also be user-hostile by your definition. And so too would any regulatory solution – no matter how it’s implemented, it will place restrictions on what a “user agent” is allowed to do, or which “user agents” are allowed, and that appears to be anathema to you.

                      Even something like “app must ask” can be turned user-hostile and anticompetitive, as in the case of the iOS Gmail app, which – I don’t know if it still does, but I know it did, once upon a time – would “helpfully” ask if you wanted to open a link in your default browser, or install Chrome. With a “remember my choice” that only “remembered” for that single link in that single email message, and would prompt again for the next link it encountered, all in hopes you’d finally give in to its badgering and install Chrome.

                      So I simply don’t see how any position, consistent with the moral values about the user that you keep citing, can be built which would also allow any type of regulation to solve this. All solutions will, by your definitions, end up taking away some freedom from the user, which is something you seem absolutely unwilling to budge even the slightest bit on, and regulatory solutions will do so by force.

    1. 8

      The reason C is performant today is that there exist giant optimizing compilers with reasonable freedom to mess around with the code. Try compiling with tcc instead and see what you give up. A language can be faster than C by making more guarantees than C does. If there is no risk of pointer aliasing, for example, the compiler can go further. Another way to be faster than C is to expose operations that the compiler would otherwise have to infer from the C code, which doesn’t always work - for example, explicit vector operations. A third way to be faster than C is to have generics with specialization for particular datatypes, which is where C++ shines. Another way is to make it easy to evaluate stuff at compile time.

      And with LLVM you can get half of that giant optimizing compiler for your pet language for free.
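      C itself offers an opt-in version of the aliasing guarantee mentioned above: the `restrict` qualifier. A minimal sketch (the function and array names are illustrative, not from the source):

      ```c
      #include <stddef.h>
      #include <stdio.h>

      /* With "restrict", the compiler may assume dst and src never alias,
       * so it can keep src values in registers or vectorize the loop freely.
       * Without it, every store to dst[i] could in principle modify src[i],
       * forcing a reload on each iteration. Languages whose type systems
       * rule out aliasing get this guarantee everywhere, for free. */
      static void scale(double *restrict dst, const double *restrict src,
                        double factor, size_t n) {
          for (size_t i = 0; i < n; i++)
              dst[i] = src[i] * factor;
      }

      int main(void) {
          double in[4] = {1.0, 2.0, 3.0, 4.0};
          double out[4];
          scale(out, in, 2.0, 4);
          printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
          return 0;
      }
      ```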

      1. 1

        Nowadays, most code that needs to be “faster than C” is written in assembly. LuaJIT, video decoders, some video encoders, image decoders, etc.

      1. 6

        I’m a big fan of the stronger push away from treating CDNs as a best practice that I keep seeing.

        I really wish prefers-reduced-data had much better support so it was easier to plan around the connection issue the author brings up. The <picture> tag lets you use media attributes, so you could load the image only if the user wants it. The problem comes up when trying to support only the reduced case, because using CSS alone you can’t really prove the negative of “if reduced or the browser doesn’t support the feature”. I burnt a lot of time on this recently to no avail.
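        For reference, a sketch of the <picture> approach described above (file names are hypothetical). Note the caveat from the paragraph: a browser that doesn’t support the media query skips the <source> and falls back to the plain <img>, so you can’t distinguish “reduced” from “unsupported”:

        ```html
        <!-- Only load the heavy image when the user has NOT asked for
             reduced data. Unsupporting browsers treat the media query as
             never-matching and use the <img> fallback instead. -->
        <picture>
          <source srcset="photo-full.avif"
                  media="(prefers-reduced-data: no-preference)">
          <img src="photo-placeholder.svg" alt="Description of the photo">
        </picture>
        ```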

        Another thumbs up for focusing on users changing the fonts in their browser and respecting that, instead of ignoring it with a big font stack that assumes I’d ever want to see Ubuntu just because I’m on Linux. Related to the above paragraph: when I’m on a stable connection, however, I do still prefer a self-hosted font for branding’s sake, because I value the typography if it’s well designed–which is subjective, and some websites just throw in Roboto because “I dunno, pick something that’s kinda inoffensive to read”.

        I never get tired of suggesting against px either. I set my font size a bit larger in some contexts (like my media server, where I browse the web on the couch). Sites using rem and % have no trouble. I do still prefer px specifically for border, as a lot of the time I don’t want a rounding error to make things too thick or thin.
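        A minimal CSS sketch of the convention described above (selectors are illustrative):

        ```css
        /* rem scales with the user's browser font-size setting; px does not. */
        body  { font-size: 1rem; }
        h1    { font-size: 2rem; }          /* grows if the user bumps their default */
        aside { font-size: 87.5%; }         /* % also tracks the inherited size */
        .card { border: 1px solid #888; }   /* px for borders avoids rounding artifacts */
        ```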

        I appreciate calling it “small viewports” instead of “mobile” because “small viewport” makes only one assertion about the user agent: its viewport is small.

        More controversially, I’ll also disagree with the advice against black for prefers-color-scheme: dark. Because #000 consumes the least amount of energy, it is the best choice for dark environments and the planet. With a true dark and good contrast, I can really crank down the brightness on my laptop and phone (both OLED), which saves battery too. Folks that say it’s unnatural don’t seem to account for the device itself not being a black hole for light. Not everyone, but I do think #000 complainers might have meh monitors with the brightness turned up higher than it needs to be (but just a guess). I’m pretty bummed that the Lobsters team got bullied out of the #000 background for its dark theme (luckily the CSS vars made my user styles a breeze).
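        For what it’s worth, a sketch of the custom-property setup alluded to above (the variable names and exact colors are illustrative):

        ```css
        /* True-black dark theme via CSS custom properties, so user styles
           can swap the whole palette by overriding two variables. */
        :root {
          --bg: #fff;
          --fg: #222;
        }
        @media (prefers-color-scheme: dark) {
          :root {
            --bg: #000; /* pure black: lowest OLED power draw */
            --fg: #ddd; /* slightly dimmed text to soften halation */
          }
        }
        body {
          background: var(--bg);
          color: var(--fg);
        }
        ```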

        I hard disagree with the SVGO usage though. SVGO’s defaults are far too aggressive, including stripping out license metadata, which is categorized the same as editor attributes—this could put you in violation of CC BY and is generally just not nice to the artist. No separation means you can’t get rid of just your Inkscape attributes that help when editing (like your grid values, etc.). The SVGO maintainer is adamant that all users should be reading the manuals, and while that’s somewhat true, a) pick better defaults, and b) many tools have it in the toolchain and you cannot configure it (think create-react-app, et al.). You can see projects like SVGR where most of their issues are SVGO-related. My suggestion: use scour, where aggressive options are opt-in.
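        If you haven’t used scour before, its basic invocation is roughly (the file names are illustrative, and this is an external tool, so check `scour --help` for the flags your version supports):

        ```shell
        # Conservative defaults: optimizes without stripping metadata.
        # More aggressive optimizations are separate, explicit opt-in flags.
        scour -i icon.svg -o icon.min.svg
        ```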

        1. 6

          Because #000 consumes the least amount of energy, it is the best choice for dark environments and the planet.

          The energy usage difference has been shown to be negligible: https://www.xda-developers.com/amoled-black-vs-gray-dark-mode/

          Not everyone, but I do think #000 complainers might have meh monitors with the brightness turned up higher than it needs to be (but just a guess).

          Personally, I like to keep my phone brightness quite low, and I find the contrast between text and #000 backgrounds to be rather…painful.

          1. 2

            I’m aware that the difference is small, but #000 is still the lowest. The internet has a lot of devices connected to it, so it does add up.

            1. 1

              I find the contrast between text and #000 backgrounds to be rather…painful

              Lowering brightness reduces the contrast. Simple as that.

              1. 5

                Contrast is a little more complex than that.

                The Helmholtz–Kohlrausch effect and the APCA’s perceptual contrast research show that the mathematical difference between two colors is quite different from the perceptual contrast we experience.

                There’s more than one type of contrast at play here. Lowering brightness until halation, overstimulation, etc. become non-issues will likely compromise legibility. You can get a much higher contrast that doesn’t trigger halation as easily by giving the background some extra lightness.

                If you still want a solid-black (or almost-solid-black) background, look into the prefers-contrast media query. You can use it to specify a reduced or increased contrast preference. I try to keep a default that balances the different needs, but offer alternative palettes for reduced/increased contrast preferences.
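                A sketch of the prefers-contrast approach described above (the palette values are illustrative):

                ```css
                /* A balanced default, with alternative palettes for users who
                   explicitly prefer more or less contrast. */
                :root { --bg: #111; --fg: #ccc; }      /* default dark palette */
                @media (prefers-contrast: more) {
                  :root { --bg: #000; --fg: #fff; }    /* solid black, maximum contrast */
                }
                @media (prefers-contrast: less) {
                  :root { --bg: #222; --fg: #aaa; }    /* gentler for halation-prone eyes */
                }
                body { background: var(--bg); color: var(--fg); }
                ```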

            2. 5

              The main issue I’ve run into with svgo is that it defaults to dropping the viewbox attribute on the svg element in cases where it is not redundant, i.e. does affect rendering.

              1. 1

                Yep. I mentioned SVGR; they have 149 issues related to SVGO, which makes it ≅⅓ of their issues, and they span the whole spectrum. It’s so flawed that I now ban its usage in teams I work with, to save them the headaches it can cause as well as the potential legal trouble you could run into on the licensing front.

            1. 7

              I’ve mostly re-written this article since the last time it was submitted (the canonical URL changed but a redirect is in place).

              I’ve shifted much of its focus to accessibility. Accessibility guidance tends to be generic rather than specific, and any information more specific or detailed than WCAG is scattered across various places.

              Feedback welcome. I’m always adding more.

              1. 4

                I’ve quickly skimmed through your article, stopping mainly at the sections that interest me, and I would have liked it to be split into a series of more focused articles / pages. Right now it’s hard to see where one section ends and another begins.

                All in all, I’ve found quite a bit of good advice in there. Thanks for writing it!

                1. 5

                  Given that the article touches on many non-mainstream browsers, I think a special consideration should have also been given to console browsers like lynx, w3m, and others. I know almost nobody uses one of these to browse the internet these days, but they might be used by some automated tools to ingest your contents for archival or quick preview.

                  From my own experience it’s quite hard to get a site to look “good” in all of these, as each has its own quirks. Each renders headings, lists, and other elements in quite different ways. (In my view w3m is closer to a “readable” output, meanwhile lynx plays a strange game with colors and indentation…)

                  For example I’ve found that using <hr/>s is almost a requirement to properly separate various sections, especially the body of an article from the rest of the navigation header / footer. (In fact I’ve used two consecutive <hr/>s for this purpose, because the text might include a proper <hr/> of its own.)


                  On a related topic, also a note regarding how the page “looks” without any CSS / JS might be useful. (One can simulate this in browser by choosing the View -> Page Style -> No Style option.)

                  As with console browsers, I’ve observed that sometimes including some <hr/>s makes things much more readable (Obviously these <hr/> can be given a class and hidden with CSS in a “proper” browser.)
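                  A sketch of that technique (the class name is made up):

                  ```html
                  <!-- Separators that aid lynx/w3m and no-CSS rendering,
                       hidden whenever the stylesheet is applied. -->
                  <style>.tui-only { display: none; }</style>
                  <hr class="tui-only">
                  <main>…article body…</main>
                  <hr class="tui-only">
                  ```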

                  1. 4

                    I know almost nobody uses one of these to browse the internet these days

                    I find them essential when on a broken/new machine which doesn’t have X11 set up correctly yet. Or on extremely low-power machines where Firefox is too much of a resource hog. Especially mostly-textual websites should definitely be viewable using just these browsers, as they may contain just the information needed to get a proper browser working.

                    1. 4

                      I was actually recently showing other members of the team that they will write better markup and CSS if they always test with a TUI browser and/or with styles disabled in Firefox, after doing it myself for a few years now. It will often lead to better SEO too, since non-Google crawlers will not be running that JS you wrote.

                      Netsurf is still a browser to consider too.

                      1. 4

                        Er, sort of. There are lots of great reasons to test in a textual browser, but “accessibility” is lower on that list than most people realize. It’s easy for sighted users to visually skip over blocks of content in a TUI or GUI, but the content needs to be semantic for assistive technologies to do the same.

                        I consider textual browsers a “sniff test” for accessibility. They’re neither necessary nor sufficient, but they’re a quick and simple test that can expose some issues.

                        I do absolutely advocate for testing with CSS disabled; CSS should be a progressive enhancement.

                    1. 2

                      Well, it taught me how to set the Firefox color theme when my compositor doesn’t do it for me.

                      1. 2

                        Unfortunately, setting it adds another fingerprintable value.

                        1. 2

                          Ah well, in for a penny as they say

                          1. 1

                            And most websites ignore it, so I need to use Dark Reader anyway, at which point it doesn’t have that much value.

                            1. 1

                              You don’t need to inject scripts/styles from a privileged extension to do this in Firefox.

                              Go to about:preferences and scroll to the “Colors” section. Select “Manage Colors”. Then pick your favorite palette and set the override preference to “Always”.

                              1. 1

                                Use AI to automatically generate a compatible dark theme for an existing site.

                          1. 2

                            As someone who has not looked at the state of the art for stylometric fingerprinting since early in our eternal September, I wonder:

                            1. How strong is imitation?
                            2. What if I attempt to transform a machine translation so that it imitates the style of someone well known?

                            Naively, that would seem like a strong approach. The translation would remove signal, and the imitation would add noise.

                            1. 2

                              The first study I linked indicates that obfuscation is a little more effective than imitation, and both beat naive machine-translation output.

                              What if I attempt to transform a machine translation so that it imitates the style of someone well known?

                              I’d rather place the original work, machine translation, and a style-guide side-by-side and transform the original work to match the style-guide while also re-phrasing anything that tripped up the machine translation. That should cover both bases.

                            1. 2

                              This technique should be obsolete as of Go 1.19, which introduces support for setting a soft memory limit.
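                              For reference, the Go 1.19 knob in question: the `GOMEMLIMIT` environment variable (e.g. `GOMEMLIMIT=512MiB`) or, equivalently, `runtime/debug.SetMemoryLimit`. A minimal sketch:

                              ```go
                              package main

                              import (
                              	"fmt"
                              	"runtime/debug"
                              )

                              func main() {
                              	// Set a soft memory limit of 512 MiB (Go 1.19+). The GC works
                              	// harder as the heap approaches this limit, instead of the
                              	// process ballooning until the OS kills it.
                              	prev := debug.SetMemoryLimit(512 << 20)
                              	fmt.Println("previous limit:", prev)
                              }
                              ```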

                              1. 3

                                Most of these are pages that blur the line between “document” and “app”, containing many interactive controls. Being concerned about them is valid; however, I think the concern is misplaced at this stage.

                                For an independent engine, I’m more interested in simple “web documents”. Those need to work well before tackling “Web 2.0” territory. Specifically: articles progressively enhanced with images, stylesheets, and maybe a script or two. Understanding how well Web 2.0 sites render isn’t really useful to me without first understanding how well documents render.

                                When testing my site, my main pain points are: a lack of support for <details>, misplaced <figcaption> elements, my SVG profile photo not rendering (it renders when I open it in a new tab), and occasional overlapping text. The only non-mainstream independent engine I know of that supports <details> is Servo.

                                POSSE note from https://seirdy.one/notes/2022/07/08/re-trying-real-websites-in-the-serenityos-browser/
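                                For context, the element in question is tiny but rarely supported by independent engines; this is the whole pattern:

                                ```html
                                <!-- A native disclosure widget: collapsible with no
                                     JavaScript, collapsed by default. -->
                                <details>
                                  <summary>Image transcript</summary>
                                  <p>A longer description, revealed only when the user
                                     expands the widget.</p>
                                </details>
                                ```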

                                1. 3

                                  So in order to make your site slightly more accessible to screen readers, you’ll make it completely inaccessible to browsers without JavaScript?

                                  1. 6

                                    i was born without javascript and life has been so hard for me

                                    1. 2

                                      Accessibility isn’t just about disorders.

                                      1. 6

                                        I think @river’s point is that it’s most important to accommodate limitations due to circumstances beyond the user’s control. And these are limitations that can prevent people from getting or keeping a job, pursuing an education, and doing other really important things. In all cases that I’m aware of, at least within the past 15 years or so, complete lack of JavaScript is a user choice, primarily made by very techy people who can easily reverse that choice when needed. The same is obviously not the case for blindness or other disabilities. Of course, excessive use of JavaScript hurts poor people, but that’s not what we’re talking about here.

                                        1. 1

                                          If using <details> made the site impossible to use for blind people, that would obviously be much more important, but here the complaint is that… the screen reader reads it slightly wrong? Is that even a fault of the website?

                                          1. 4

                                            Fair point. Personally, I wouldn’t demand, or even request, that web developers use JavaScript to work around this issue, which is probably a browser bug, particularly since it doesn’t actually block access to the site.

                                            On the other hand, if a web developer decides to use JavaScript to improve the experience of blind people, I wouldn’t hold it against them. IMO, making things easier for a group of people who, as @river pointed out, do have it more difficult due to circumstances beyond their control, is more important than not annoying the kind of nerd who chooses to disable JS.

                                            1. 1

                                              Well, disabling JS is not always a choice. Some browsers, such as Lynx or NetSurf, don’t support it. But yeah, I generally agree.

                                              1. 3

                                                I suppose it’s possible that some people have no choice but to use Lynx or Netsurf because they’re stuck with a very old computer. But for the most part, it seems to me that these browsers are mostly used by tech-savvy people who can, and perhaps sometimes do, choose to use something else.

                                                1. 3

                                                  I suppose it’s possible that some people have no choice but to use Lynx or Netsurf because they’re stuck with a very old computer. But for the most part, it seems to me that these browsers are mostly used by tech-savvy people who can, and perhaps sometimes do, choose to use something else.

                                                  And what percentage of those lynx users is tech-savvy blind people? Or blind people who are old and have no fucks left to give about chasing the latest tech? There are, for instance, blind people out there who still use NetTamer with DOS. DOS, in 2022. I’m totally on board with their choice to do that. Some of these folks aren’t particularly tech savvy either. They learned a thing and learned it well, and so that’s what they use.

                                                  1. 1

                                                    Many users who need a significant degree of privacy will also be excluded, as JavaScript is a major fingerprinting vector. Users of the Tor Browser are encouraged to stick to the “Safest” security level. That security level disables dangerous features such as:

                                                    • Just-in-time compilation
                                                    • JavaScript
                                                    • SVG
                                                    • MathML
                                                    • Graphite font rendering
                                                    • automatic media playback

                                                    Even if it were purely a choice in user hands, I’d still feel inclined to respect it. Of course, accommodating needs should come before accommodation of wants; that doesn’t mean we should ignore the latter.

                                                    Personally, I’d rather treat any features that disadvantage a marginalized group as a last resort. I prefer selectively using <details> as it was intended—as a disclosure widget—and would rather come up with other creative alternatives to accordion patterns. Only when there’s no other option would I try a progressively-enhanced JS-enabled option. I’m actually a little ambivalent about <details> since I try to support alternative browser engines (beyond Blink, Gecko, and WebKit). Out of all the independent engines I’ve tried, the only one that supports <details> seems to be Servo.

                                                    JavaScript, CSS, and—where sensible—images are optional enhancements to pages. For “apps”, progressive enhancement still applies: something informative (e.g. a skeleton with an error message explaining why JS is required) should be shown and overridden with JS.

                                                    (POSSE from https://seirdy.one/notes/2022/06/27/user-choice-progressive-enhancement/)

                                      2. 2

                                        I mean, not for nothing, but I’m fairly certain you can constrain what can be executed in your browser from the website.

                                        I’m certainly okay with a little more JS if it means folks without sight or poorer sight can use the sites more easily.

                                        1. 5

                                          I’m certainly okay with a little more JS if it means folks without sight or poorer sight can use the sites more easily.

                                          In my experience (the abuse of) JavaScript is what often leads to poor accessibility with screen readers. Like, why can I not upvote a story or comment on this site with either Firefox or Chromium? ISTR I can do it in Edge, but I don’t care enough to spin up a Windows VM and test my memory.

                                          We need a bigger HTML, maybe with a richer set of elements or something. But declarative over imperative!

                                          1. 2

                                            Like, why can I not upvote a story or comment on this site with either Firefox or Chromium?

                                            I use Firefox on desktop and have never had a problem voting or commenting here.

                                            We need a bigger HTML, maybe with a richer set of elements or something. But declarative over imperative!

                                            The fallback is always full-page reloads. If you want interactivity without that, you need a general-purpose programming language capable of capturing and expressing the logic you want; any attempt to make it fully declarative runs into a mess of similar-but-slightly-different one-off declaration types to handle all the variations on “send values from this element to that URL and update the page in this way based on what comes back”.

                                            1. 5

                                              I use Firefox on desktop and have never had a problem voting or commenting here.

                                              Yes, but do you use a screenreader? I do.

                                              The fallback is always full-page reloads. If you want interactivity without that, you need a general-purpose programming language capable of capturing and expressing the logic you want;

                                              Sure, but most web applications are not and do not need to be fully interactive. Like with this details tag we’re talking about here? It’s literally the R in CRUD and the kind of thing that could be dealt with by means of a “richer HTML”.

                                        2. 1

On modern browsers, the <details> element works out of the box, without JS.

                                          In fact that’s the entire point of adding the element.
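For reference, a minimal disclosure widget needs no script at all; in engines that don’t support the element, the contents simply render expanded, so it degrades gracefully:

```html
<details>
  <summary>More details</summary>
  <p>Extra content, hidden until the user expands the widget.</p>
</details>
```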

                                          1. 1

                                            Yes, and the article recommends against using <details>.

                                        1. 1

                                          This is a very comprehensive list!

                                          Anyone else getting a MOZILLA_PKIX_ERROR_REQUIRED_TLS_FEATURE_MISSING on Firefox 101.0.1 on this site? Works fine in Safari, though.

                                          1. 1

                                            101.0.1 (64-bit)

                                            Works fine

                                            1. 1

                                              Thanks!

                                              Regarding the error: can you share more details? Which OS is this happening on?

                                              If you open the “Network” panel in DevTools, what does it say when you click the main/top request?

                                              This looks to me like an OCSP issue, which is odd since I use certbot-ocsp-fetcher.

                                              1. 1

                                                I’m not sure what to look for, but some more poking around tells me that it has to do with OCSP Must Staple. I was able to load the site after disabling security.ssl.enable_ocsp_must_staple. Nothing looks awry on the SSLLabs report but this is not my area of expertise, haha.

                                                1. 1

                                                  Which OS does this happen on?

                                                  1. 1

                                                    I’m going to take this to DM!

                                            1. 3

                                              Started as a response to another article which was discussed here earlier this week: https://lobste.rs/s/av7f4o/ux_patterns_for_cli_tools

                                              1. 1

                                                Oh, and if anyone has more ideas to add, please share them! I plan on updating the article indefinitely.

                                                1. 1

                                                  I like it when CLI tools make me feel safe experimenting.

                                                  • Don’t do anything destructive without a confirmation (and make the safe option the default).

                                                  It would also be really interesting to see more localisation in CLI tools. They could support multiple languages at once, easily, unlike GUIs.

                                                  • Let me do both tool commit and tool skrásetja.
                                                  • Make synonyms work (can be done manually with alias but would be nice.)

                                                  In general I feel like there’s a lot of untapped potential in CLI tools.

                                                  1. 2

I updated the article to mention --dry-run. Diff. (The article looks a bit different now, since I later split the recommendations into subsections.)

                                                    I don’t think non-destructive should usually be the default; if I run delete-thing foo, I expect foo to be deleted. This is less of an issue if you follow the advice to make functionality obvious from the command line invocation.

                                                    Exceptions exist. I’ll have to think about this.
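For what it’s worth, a sketch of how those two ideas can coexist (a hypothetical delete-thing tool in Python: destructive by default, but with a --dry-run escape hatch and a confirmation that --force skips):

```python
import argparse

def main(argv=None):
    parser = argparse.ArgumentParser(prog="delete-thing")
    parser.add_argument("name", help="the thing to delete")
    parser.add_argument("--dry-run", action="store_true",
                        help="print what would be deleted, then exit")
    parser.add_argument("--force", action="store_true",
                        help="skip the confirmation prompt")
    args = parser.parse_args(argv)

    if args.dry_run:
        # Non-destructive preview: say what would happen, touch nothing.
        print(f"would delete {args.name}")
        return 0
    if not args.force:
        answer = input(f"delete {args.name}? [y/N] ")
        if answer.strip().lower() != "y":
            print("aborted")
            return 1
    print(f"deleted {args.name}")  # the actual deletion would happen here
    return 0
```

The tool name, flags, and prompt wording are invented for illustration; the point is that the default run still deletes (as the command name promises), while previews and confirmations are one flag away.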

                                                    1. 1

                                                      I think rm is pretty obvious.

                                                      One time I had accidentally created a directory named ~ somewhere, so standing in the parent directory I issued rm -rf ~ :)

So I learned the hard way about tilde expansion, which isn’t really the fault of rm so much as my own, but maybe wiping your home directory should be a bit harder.

                                                      But it also would be really annoying if rm asked you every time, so I don’t know what a good solution is. Maybe it’s actually to teach people about a different tool that moves things to “trash” instead of rm, as they are getting started.

                                                      But even with such a tool, you probably wouldn’t want to send ~ to the trash.

                                                      Edit: undo would be nice to have in more tools.

                                                  2. 1

                                                    I have a request: please could you elaborate on point 6 in the first list, about common formats for --help output? I’m just not sure what good practice looks like here and could do with a specific example. Do you mean like using an off the shelf library like Python’s argparse that automatically generates the --help message in a structured way from the tables which specify what options should exist?

                                                    1. 1

                                                      Thanks for the feedback; I just cited Busybox as a good example. Diff.

                                                      Not nearly as verbose as a full man page, but good enough for a quick reference.

                                                      1. 1

                                                        Thank you

                                                  3. 1

                                                    I noticed I cannot search this story by using Search on the menu bar.

                                                    1. 1

                                                      Are you referring to browser built-in page search? If so, what browser/OS are you using, and does it work without browser extensions?

                                                  1. 7

                                                    Common Crawl is the basis of most (or all?) of the index used by the Alexandria search engine. It’s also used by the Web Data Commons which extracts, publishes, and measures usage of structured data across the Web.

                                                    It’s a great resource.

                                                    1. 9

Is there any proof that the telemetry data is NOT put to good use to improve VSCode?

                                                      1. 28

I think there’s a tinge of paranoia that runs through the anti-telemetry movement (for lack of a better term; I’m not sure it’s really a movement). Product usage telemetry can be incredibly valuable to teams trying to decide how best to allocate their resources. It isn’t inherently abusive or malignant. VSCode is a fantastic tool that I get to use for free to make myself money. If they say they need telemetry to help make it better, then I am okay with that.

                                                        1. 9

                                                          I think the overly generic name does not help the situation. When people are exposed to telemetry like “we’ll monitor everything and sell your data”, I’m disappointed but not surprised when they block everything including (for example) rollbar, newrelic, etc.

                                                          But MS shot itself in the foot by making telemetry mysterious and impossible to inspect or disable. They made people allergic to the very idea.

                                                          1. 12

                                                            I think the overly generic name does not help the situation. When people are exposed to telemetry like “we’ll monitor everything and sell your data”, I’m disappointed but not surprised when they block everything including (for example) rollbar, newrelic, etc

                                                            It’s a bit uncharitable to read “they blocked my crash reporting service” as “they must have some kind of misunderstanding about what telemetry means” (if that’s what you’re implying when you say you’re disappointed but not surprised that people block them).

                                                            I know exactly what services like rollbar do and what kinds of info they transmit, and I choose to block them anyways.

One of the big takeaways from the Snowden (I think?) disclosures was that the NSA found crash reporting data to be an invaluable source of information they could then use to help them penetrate a target. Anybody who’s concerned about nation-state (or other privileged-network-position actor) surveillance, or the ability of law enforcement or malicious actors impersonating law enforcement to get these services to divulge this data (now or at any point in the foreseeable future), might well want to consider blocking these services for perfectly informed reasons.

                                                            1. 5

                                                              I believe that’s actually correct - people in general don’t understand what different types of telemetry do. A few tech people making informed choices don’t contradict this. You can see that for example through adblock blocking rollbar, datadog, newrelic, elastic and others. You can also see it on bug trackers where people start talking about pii in telemetry reports, where the app simply does version/license checks. You can see people thinking that Windows does keylogger level reporting back to MS.

                                                              So no, I don’t believe the general public understands how many things are lumped into the telemetry idea and they don’t have tools to make informed decisions.

                                                              Side-topic: MS security actually does aggregate analysis of crash reports to spot exploit attempts in the wild. So how that works out for security is a complex case… I lean towards report early, fix early.

                                                              1. 7

                                                                You can see that for example through adblock blocking rollbar, datadog, newrelic, elastic and others.

                                                                I’m not following this argument. People install adblockers because they care about their privacy, and dislike ads and the related harms associated with the tracking industry – which includes the possibility of data related to their machines being used against them.

Adblocker developers (correctly!) recognize that datadog/rollbar/etc are vectors for some of those harms. That not every person who installs an adblocker could tell you which specific harm rollbar.com corresponds to vs which adclick.track corresponds to does not imply that, if properly informed about what rollbar.com tracks and how that data could be exploited, they wouldn’t still choose to block it. After all, they’re users who are voluntarily installing software to prevent just such harms. I think a number of these people understand just fine that some of that telemetry data is “my computer is vulnerable and this data could help someone harm it” and not just “Bob has a diaper fetish” stuff.

                                                                It’s kind of infantilizing to imagine that most people “would really want to” give you their crash data but they’re just too stupid to know it, given how widely reported stuff like Snowden was.

                                                                You can also see it on bug trackers where people start talking about pii in telemetry reports, where the app simply does version/license checks. You can see people thinking that Windows does keylogger level reporting back to MS.

                                                                That some incorrect people are vocal does not tell us anything, really.

                                                                1. 3

                                                                  It’s kind of infantilizing to imagine that most people “would really want to” give you their crash data but they’re just too stupid to know it, given how widely reported stuff like Snowden was.

Counterpoint: Every time my app crashed, people not only gave me all the data I asked for, they just left me with a remote session to their desktop. At some point I switched to rollbar, and they were happy when I emailed them about an update before they got around to reporting the issue to me. So yeah, based on my experience, people are very happy to give crash data in exchange for better support. In a small pool of customers, not a single one even asked about it (and due to the industry they had to sign a separate agreement about it).

                                                                  That some incorrect people are vocal does not tell us anything, really.

The bad part is not that they’re vocal, but that they cannot learn the truth themselves, and even if I wanted to tell them it’s not true, I cannot be 100% sure, because a lot of current telemetry is opaque.

                                                                  1. 3

                                                                    I don’t know how many customers you have or how directly they come in contact with you, but I would hazard a guess that your business is not a faceless megacorp like Microsoft. This makes all the difference; I would much more readily trust a human I can talk to directly than some automated code that sends god-knows-what information off to who-knows-where, with the possibility of it being “monetized” to earn something extra on the side.

                                                                  2. 3

                                                                    People install adblockers because they care about their privacy, and dislike ads and the related harms associated with the tracking industry

                                                                    ooof that’s reading way too much into it. I just don’t want to watch ads. And as for telemetry, I just don’t want the bloat it introduces.

                                                            2. 7

                                                              The onus is not on users to justify disabling telemetry. The ones receiving and using the data must be able to make a case for enabling it.

Obviously, you need to be GDPR-compliant too; that should go without saying, but it’s such a low bar.

                                                              Copy-pasting my thoughts on why opt-out telemetry is unethical:

                                                              Being enrolled in a study should require prior informed consent. Terms of the data collection, including what data can be collected and how that data will be used, must be presented to all participants in language they can understand. Only then can they provide informed consent.

                                                              Harvesting data without permission is just exploitation. Software improvements and user engagement are not more important than basic respect for user agency.

                                                              Moreover, not everyone is like you. People who do have reason to care about data collection should not have their critical needs outweighed for the mere convenience of the majority. This type of rhetoric is often used to dismiss accessibility concerns, which is why we have to turn to legislation.

                                                              If you make all your decisions based on telemetry, your decisions will be biased towards the type of user who forgot to turn it off.

                                                            3. 9

                                                              This presumes that both:

                                                              a) using data obtained from monitoring my actions to “improve VSCode” (Meaning what? Along what metrics is improvement defined? For whose benefit do these improvements exist? Mine, or the corporation’s KPIs? When these goals conflict, whose improvements will be given preference?) is something I consider a good use in any case

                                                              b) that if this data is not being misused right now (along any definition of misuse) it will never in the future cross that line (however you choose to define it)

                                                              1. 2

                                                                Along what metrics is improvement defined?

First step would be to get data about usage. If MS finds out a large number of VSCode users often use the JSON formatter (just an example), I assume they will try to improve it: make it faster, add more options, etc.

                                                                Mine, or the corporation’s KPIs

It’s an OSS project which is not commercialized in any way by the “corporation”. There are no commercial licenses to sell; with VSCode, all they earn is goodwill.

                                                                will never in the future cross that line

Honest question: in what way could VSCode usage data be misused?

                                                                1. 12

I assume they will try to improve it: make it faster, add more options, etc.

You assume. I assume that some day, now or in the future, some PM’s KPI will be “how do we increase conversion spend of VSCode customers on azure” or similar. I’ve been in too many meetings with goals just like that to imagine otherwise.

                                                                  It’s an OSS project which is not commercialized in any way by the “corporation”

                                                                  I promise you that the multibillion dollar corporation is not doing this out of the goodness of their heart. If it is not monetized now (doubtful – all those nudges towards azure integrations aren’t coincidental), it certainly will be at some point.

Honest question: in what way could VSCode usage data be misused?

                                                                  Well, first and most obviously, advertising. It does not take much of anything to connect me back to an ad network profile and start connecting my tools usage data to that profile – things like “uses AWS-related plugins” would be a decent signal to advertisers that I’m in the loop on an organization’s cloud-spend decisions, and ads targeted at me to influence those decisions would then make sense.

Beyond that, crash telemetry data is ripe for exploitation, like I mentioned in another comment here. Even if you assume the NSA-or-local-gov-equivalent isn’t interested in you, J Random ransomware group is just one forged law-enforcement subpoena away (which, as we discovered this year, most orgs are doing very little to prevent) from vscode-remote-instance crash data from servers people were SSH’d into. Paths recorded in backtraces tend to have usernames, server names, etc.

                                                                  “This data collected about me is harmless” speaks more to a lack of imagination than to the safety of data about you or your organization’s equipment.

                                                              2. 4

That point is irrelevant, since it’s impossible to prove that Microsoft is NOT misusing it now and that they will NOT misuse it in the future.

                                                                1. 3

                                                                  No, so should we blindly trust Microsoft with our data, or be cautious?

                                                                1. 4

I’ve been a fish shell user for more than a decade. I cannot for the life of me understand this fixation on zsh. Why is it more popular than fish?

The author suggests writing autocompletions and installing a plugin that depends on an external program. If you write a manpage (which is also suggested), you get those three things for free with fish. Out of the box.

                                                                  1. 9

                                                                    Why is it more popular than fish?

                                                                    zsh is mostly bash compatible and fish is not really. I don’t want to learn a new shell language right now. The few times I need to write something bashy interactively (not a script I can call bash on) it normally works inside zsh.

                                                                    1. 5

                                                                      Having tried both, I think fish is nicer but I couldn’t use it. I already had muscle memory for bourne shell syntax and switching to fish made that stop working - I was constantly trying to type things that did not work, and then having to go manually look up the fish equivalent.

                                                                      I would go so far as to say that there are a lot of places where fish is gratuitously different from sh, in that the syntax is different but not meaningfully better, just different.

                                                                      1. 3

                                                                        Off topic: I’m a full-time fish user now, and I had no problem with it, although I’m rather used to using multiple languages at the same time. I contributed new completions for fish—something I never got to do with bash. However, these days I also avoid shell scripting like a plague, so I may be starting to forget how to write in Bourne shell already, but it doesn’t really bother me anymore. ;)

On topic: I agree that confusing a CLI with a TUI is a grave sin. Borders and other decorations have no place in CLIs at all. All CLI tools must be capable of detecting non-interactive use and automatically stopping all output except actual data (and should provide a way to switch to that mode by hand). In a TUI, of course, anything goes, although accessibility considerations still exist there.
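That detection is just an isatty(3) check on stdout. A minimal Python sketch (the emit() helper and its decorate override are hypothetical names, not from any particular tool):

```python
import sys

def emit(data_line, *, decorate=None):
    """Print one line of output; add decoration only for interactive use.

    decorate=None means auto-detect via isatty(); True/False is the
    manual switch for forcing either mode.
    """
    interactive = sys.stdout.isatty() if decorate is None else decorate
    if interactive:
        print(f"==> {data_line}")  # human-facing embellishment
    else:
        print(data_line)           # bare data for pipes and scripts
```

With this shape, `tool | grep …` sees only the data, while a human at a terminal still gets the friendly version.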

                                                                        1. 2

                                                                          From what I remember, completion scripts are one of the places where fish is vastly easier to understand than bash or zsh.

                                                                          However, these days I also avoid shell scripting like a plague

                                                                          It wasn’t that I was trying to write shell scripts, it’s that at that point in my life I was doing a lot of ad-hoc for X in $(y); do ... done and xyz | while read X; do ... done one-off one-liners. For scripts worth reusing I had Python and liked it.

                                                                        2. 1

                                                                          This comment reflects my experience and opinions on fish as well. In summary: in theory, I like it more than zsh, but in practice its uniqueness gets in the way too much.

                                                                        3. 1

                                                                          Bash compatibility really seems to be the primary reason. Unix refuses to die, and so does Bash (I blame POSIX sh).

                                                                          1. 1

                                                                            I’ve personally found fzf-tab to be miles better than Fish tab-completion, especially alongside “z” or others like it (z.lua, zoxide), fzf history search, etc. Being able to fuzzy-search directories and tab-completions with the same interface, often with preview windows, is an absolute game-changer.

                                                                            I just used fzf-tab as an example of the benefit of adding simple, short, busybox-style help text alongside a more comprehensive manpage. Manpage explanations of CLI flags tend to be too long for ideal shell-completion; I’d be less likely to get the most relevant fuzzy matches first.

                                                                          1. 6

                                                                            This is fine for simple things that won’t be scraped but if you’re building something that might be scraped, please, from someone who spent years writing scrapers and crawlers, write standards-compliant, validating HTML5. It’s easier to introduce syntax errors and other problems when doing shorthand stuff. If you want to be lazy, consider a preprocessor like HAML or Jade that can emit good HTML that satisfies human eyes, browsers, and scrapers alike.

                                                                            1. 19

                                                                              This is standards-compliant, valid HTML. That’s part of what’s so great about it :)

                                                                              Check it out: https://validator.w3.org/nu/?doc=http%3A%2F%2Flofi.limo%2Fblog%2Fwrite-html-right

                                                                              1. 9

                                                                                Validators and most browsers will certainly handle this correctly, since it is, after all, valid. But I wouldn’t be surprised if most other HTML parsers will (incorrectly) not handle it.

                                                                                Oh, did you know there’s an even better shoelace knot than tying the bow as a square knot? It’s more secure, and much harder to mess up.

                                                                                1. 3

                                                                                  Oh no! I just got the hang of tying them this way… but thank you!

                                                                                  I think my coworkers thought I was joking, but I’ve often commented that shoes and socks that don’t let you down are more important than we realize for leading a happy life.

                                                                                  1. 6

                                                                                    Sam Vimes would agree 100%.

                                                                              2. 14

                                                                                I’ve written my fair share of scrapers as well, and my feeling is as a scraper author it’s your responsibility to handle the content you’re ingesting. Generally when scraping, your gain >>> their gain (if their gain is even positive, which it often is not), so asking them to do extra work to make your scraping effort easier feels unfair.

                                                                                Also, a standards-compliant parsing library should handle this fine. For example, bs4 does:

                                                                                >>> from bs4 import BeautifulSoup
                                                                                >>> BeautifulSoup(...) # snippet from article with unclosed <p>, etc.
                                                                                <!DOCTYPE html>
                                                                                <html><head><title>Building a Streaming Music Service with Phoenix and Elixir</title>
                                                                                </head><body><h1>Building a Streaming Music Service with Phoenix and Elixir</h1>
                                                                                <p>
                                                                                I thought it would be nice to make a streaming music service focused on
                                                                                bringing lo-fi artists and listeners together.
                                                                                Early on, I built a series of prototypes to explore with a small group of
                                                                                listeners and artists.
                                                                                Since this is a technical article, I'll jump right into the requirements we
                                                                                arrived at, though I'd love to also write an article on the strategies
                                                                                and principles that guided our exploration.
                                                                                
                                                                                </p><h2>Requirements</h2>
                                                                                <p>
                                                                                We liked a loose retro-computing aesthetic with a looping background that
                                                                                changed from time to time.
                                                                                We preferred having every listener hear the same song and see the same
                                                                                background at the same time.
                                                                                And we liked the idea of sprinkling some "bumpers" or other DJ announcements
                                                                                between the songs.</p></body></html>
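
                                                                                To make the pitfall concrete with a self-contained sketch (the HTML fragment below is invented, not the article’s actual markup): Python’s stdlib html.parser reports tags exactly as written and never synthesizes the implied end tags, which is what trips up hand-rolled start/end pairing; a spec-compliant tree builder like bs4 handles the implication for you.

                                                                                ```python
                                                                                from html.parser import HTMLParser

                                                                                # Event-based parsing of valid HTML with omitted optional tags.
                                                                                # The implied </p> before the second <p> and before <h2> never
                                                                                # fires as an end-tag event, so a scraper that pairs starts with
                                                                                # ends by hand sees "unbalanced" markup even though the input is
                                                                                # perfectly valid.
                                                                                class TagCounter(HTMLParser):
                                                                                    def __init__(self):
                                                                                        super().__init__()
                                                                                        self.starts, self.ends = [], []

                                                                                    def handle_starttag(self, tag, attrs):
                                                                                        self.starts.append(tag)

                                                                                    def handle_endtag(self, tag):
                                                                                        self.ends.append(tag)

                                                                                counter = TagCounter()
                                                                                counter.feed("<p>First paragraph<p>Second paragraph<h2>Requirements</h2>")
                                                                                print(counter.starts)  # ['p', 'p', 'h2']
                                                                                print(counter.ends)    # ['h2'] -- no implied </p> events
                                                                                ```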
                                                                                
                                                                                1. 1

                                                                                  This is a great demonstration of the evolution of communal tooling over time. BeautifulSoup was unavailable to me in the environment I was using at the time (2009–2013; we did spectacularly questionable, amazing things with mostly just XSLT 1.0). I speculate that BS was not quite so robust back then, either. For really complex stuff we could sub out to Selenium, but doing so was very expensive for our crawling timelines; the alternative was switching to a JVM stack, with an extended project scope and cost. I had the privilege of asking some government website authors things like, “Could you fix this one broken tag?” and, after a 15–30 day wait, it was done, saving the taxpayer some money.

                                                                                2. 3

                                                                                  Yeah, I wanted my site to be really easy to scrape, but even Bing (which powers DuckDuckGo, Ecosia, Yahoo, and most other alternative engines) gets tripped up when you eliminate optional tags.

                                                                                  I ended up going the other way: I write well-formed polyglot XHTML5/HTML5 markup and validate all my pages with both xmllint and the Nu HTML Checker before each push.
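
                                                                                  A rough stdlib-only sketch of the xmllint half of that check (the markup below is illustrative, not my actual pipeline): polyglot XHTML5 must be well-formed XML, so any XML parser works as a smoke test.

                                                                                  ```python
                                                                                  import xml.etree.ElementTree as ET

                                                                                  # Polyglot markup: every element explicitly closed, void elements
                                                                                  # self-closed, attributes quoted -- parses as XML *and* as HTML.
                                                                                  polyglot = (
                                                                                      '<html xmlns="http://www.w3.org/1999/xhtml">'
                                                                                      '<head><title>Example</title></head>'
                                                                                      '<body><p>Every tag closed, even voids: <br/></p></body>'
                                                                                      '</html>'
                                                                                  )

                                                                                  ET.fromstring(polyglot)  # raises ParseError on malformed markup

                                                                                  # HTML-only shorthand (implied </p>, bare <br>) fails the XML check:
                                                                                  try:
                                                                                      ET.fromstring('<html><body><p>open<p>tags<br></body></html>')
                                                                                  except ET.ParseError as e:
                                                                                      print('not well-formed:', e)
                                                                                  ```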

                                                                                1. 8

                                                                                  As usual with accessibility, going really hard in one direction is often not great for everyone. Here’s another article about why you really want to keep grey / lower-contrast text for accessibility reasons: https://blog.tiia.rocks/web-apps-why-offering-a-low-contrast-mode-makes-you-more-accessible-not-less

                                                                                  1. 18

                                                                                    If only there were some kind of style sheet that could cascade in priority depending on where it was defined.

                                                                                    There’s no reason for this to be handled by the developers of every single website. This should be handled by the browser. If a user wants high contrast mode, there’s absolutely no reason there can’t be a setting on the client that forces text to a high contrast setting. It’s just numbers in a configuration file. Those numbers can be changed, automatically.

                                                                                    1. 12

                                                                                      I wish there was some way of saying to browsers, “use modern defaults; I don’t care what they are, or if they change over time; I won’t touch the style; just make the page look good based on the semantic markup.”

                                                                                      1. 1

                                                                                        could this be accomplished with a stylus style sheet or something similar? maybe an addon that just removes all style tags and links to style tags?

                                                                                        1. 4

                                                                                          There are generic Firefox addons like this but they are generally quite CPU hungry.

                                                                                          I end up just doing:

                                                                                          • disabling all custom or web fonts
                                                                                          • setting minimum font size to 12
                                                                                          • setting default zoom level to 120 %

                                                                                          For web apps that I need to use for work (Outlook web app, Jira, &c.) that have bad contrast, I do try to add custom style sheets to fix some text that’s still unreadable to me after all this.

                                                                                          For articles, I can generally use “reader mode” that does switch to black-on-white, since, you know, that’s the best for reading, but that’s generally not helpful on web apps.

                                                                                          In short, I’d be very happy if it was practical to make a stylesheet or plugin to do this, but currently I would say it’s not, or someone would have made one already.

                                                                                          It seems unlikely that ranting at web developers will help with this, so I think it would need to be fixed in browsers. However, I see that as an unlikely development, given that, e.g., disabling custom fonts is becoming harder and harder, with Firefox for Android removing the ability to do so.

                                                                                          1. 3

                                                                                            https://github.com/jayesh-bhoot/enforce-browser-fonts is an add-on that disables custom fonts, and it works quite nicely on Firefox for Android if you’re using Nightly or the F-Droid build, which support custom add-ons.

                                                                                      2. 2

                                                                                        This is the reality. Firefox has allowed, and continues to allow, forced colors. Go to about:preferences -> Colors, and activate “Manage Colors”. In the menu that pops up, set your preferred colors and set the “Override the colors specified by the page with your selections above” pref to “Always”. This feature is a lifesaver for me, as I deal with overstimulation and can’t stand having a new palette thrown at me every time I open a new page. It’s also replaced my dark mode addon; anything that gets rid of privileged addons is a win in my book.

                                                                                        On Windows, you can enable this system-wide with High Contrast mode. Contrary to the name, WHCM isn’t necessarily for high-contrast themes; you can set any palette you want. Every decent program will then receive a forced palette.

                                                                                        1. 1

                                                                                          I would like to use this, but it’s regrettably not practical. For example, it prevents me from seeing the upvote arrow on your comment, and whether I’ve voted already.

                                                                                          1. 2

                                                                                            This is something that browsers (all 3? 4? of them) can improve, though, rather than asking a billion web content creators to behave nicely towards any of hundreds of access concerns.

                                                                                            1. 2

                                                                                              Yeaaah…I used to be a fan of the lobste.rs interface before I started learning about accessibility. This is far from the only a11y issue on this site.

                                                                                              I try to avoid complaining about things before filing issues properly and leaving constructive feedback, so here are two I just filed:

                                                                                        2. 13

                                                                                          Why not just lower the screen brightness?

                                                                                          1. 4

                                                                                            You could, but then you get extra-low brightness for other apps/pages which don’t buy into the white/black idea. I don’t think we’ll get a perfect solution either way.

                                                                                            And even if you adjust brightness, it can be a bad experience. I don’t get any actual health/sight issues from brightness or high contrast, but even with everything turned down to minimum on my phone, that medium post is tiring to read because of the white background.

                                                                                            1. 5

                                                                                              So it actually is low contrast that causes the problem.

                                                                                              1. 6

                                                                                                Only if you can adjust all screens both to go down low enough in brightness and to do so without destroying the colour accuracy.

                                                                                                We’ve got a system with several interdependent elements (defaults, preferences, design ideas, hardware capabilities, accessibility limits, …) - you can’t just point to one of them and say that it causes all the problems. (Well, you can, but that’s an oversimplification and doesn’t solve any issue.)

                                                                                          2. 10

                                                                                            …and I immediately had to flip that article into reader mode in order to read it. Which is not to say it’s wrong, but the contrast between the author’s experience and mine is illustrative, and one-size-fits-all probably just isn’t going to work here. As she points out, there are media queries, but:

                                                                                            @fly suggested in another comment here that this really should be the browser’s responsibility, not the page’s, and I agree. For articles I tend to flip into reader mode at the first sign of trouble, but for apps I don’t really have that option. Desktop apps mostly don’t have different styles of buttons, though; developers just use the OS’s widget toolkit and accept what the OS vendor has decided. (Of course, that’s changing as everything seems to be Electron these days anyway…)

                                                                                            1. 10

                                                                                              This is one reason why the APCA (next generation contrast algorithm) recommends against excessively high contrast, especially for dark themes.

                                                                                              It’s not just halation and migrations: overstimulation is another issue that I personally experience quite a bit. Foreground colors that have excellent contrast against dark backgrounds, like yellow, can cause overstimulate if they’re not appropriately de-saturated.

                                                                                              Special palettes that respond to media queries requesting dark/light schemes and more/less contrast are good, but I believe that defaults should also be as accommodating as we can make them; not everyone is okay with the fingerprinting potential of all these media queries. An APCA contrast of ~90 LcP seems to do the trick. You can go lower if you bump up the font size to compensate.

                                                                                              1. 2

                                                                                                Typo: s/migrations/migraines/

                                                                                                s/cause overstimulate/overstimulate/

                                                                                              2. 9

                                                                                                The goal of accessibility design is not making things “great [for] everyone”. It is ALLOWING people to make things great FOR THEM. Some users will have needs that require high contrast. Some will have needs that require low contrast. Others won’t particularly care at all, and just want things to be pretty. Others don’t care about the contrast because they’re using a screen reader.

                                                                                                You can’t make things accessible by choosing colors. We have to enable users to configure their interfaces with the colors that they, personally, need or want. This has to be done largely at the browser level, although of course stuff like clear, semantic HTML that doesn’t use clever tricks to do things purely visually is a big part of the ask. But developers shouldn’t be forced to worry about colors. We should be forced to worry about allowing browsers to configure those colors.

                                                                                                1. 1

                                                                                                  Agreed. I think a better message here would be “use grey text responsibly”.

                                                                                                  I often set body text to 65–75% opacity and reserve 100% opacity for headings, etc. It helps build visual hierarchy while retaining a good amount of contrast, producing both a nice appearance and readable content.
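
                                                                                                  For instance, a minimal sketch of that approach (the selectors and exact values here are mine, not a prescription):

                                                                                                  ```css
                                                                                                  /* Body text at ~70% opacity; headings at full strength.
                                                                                                     Using an alpha channel rather than a fixed grey keeps the
                                                                                                     text's hue tied to the theme's foreground color. */
                                                                                                  body {
                                                                                                    color: rgb(0 0 0 / 70%);
                                                                                                  }

                                                                                                  h1, h2, h3 {
                                                                                                    color: rgb(0 0 0 / 100%);
                                                                                                  }
                                                                                                  ```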

                                                                                                1. 2

                                                                                                    There are definitely times when px is the right value: usually with a max-width, and usually when a raster image is involved.

                                                                                                    Most of the examples of px in this article are better served by some combination of %, em, and vh/vw.

                                                                                                  1. 1

                                                                                                    px is great for setting minimum margin sizes. You don’t want your margins to scale with zoom, or text will get progressively narrower. They’re also good for border widths.

                                                                                                    1. 1

                                                                                                        Usually you want margins in % or vh/vw, to scale with the amount of space available. Anything in px makes assumptions about viewport size, etc.

                                                                                                      1. 1

                                                                                                        I was referring to minimum margin sizes, for narrow screens. Just big enough to keep clear of tap targets like overlay scrollbars.

                                                                                                        They work well in combination with a maximum content width for wide screens.
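
                                                                                                          A small sketch combining both points (the class name and the exact values are invented for illustration):

                                                                                                          ```css
                                                                                                          /* Minimum breathing room in px: keeps text clear of edge tap
                                                                                                             targets and overlay scrollbars without ballooning when the
                                                                                                             user bumps up the text size. The em-based max-width caps
                                                                                                             line length on wide screens instead. */
                                                                                                          .content {
                                                                                                            max-width: 36em;   /* scales with the reader's font size */
                                                                                                            margin: 0 auto;    /* centered on wide screens */
                                                                                                            padding: 0 12px;   /* px minimum margin on narrow screens */
                                                                                                          }
                                                                                                          ```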

                                                                                                  1. 8

                                                                                                    Agree. There should be one for “shell” if there’s already one for Python, Go, Rust, Swift, C, Elixir, and others.