The link doesn’t explain how the project is different from MoonScript, which also compiles to Lua, or in what way it’s a “dialect”. Does anyone know?
Past the “import” statement, the “An Overview of YueScript” section (the first sub-section of the Introduction) shows the new language additions. I compared the two recently in my microblog entry “Yuescript first impressions”:
I just discovered Yuescript, which is like MoonScript with more features. I have mixed feelings.
I like features like pipelines (much cleaner than repeated assignment or nested parentheses in function calls) and compile-time macros. The sugar for multiple and destructuring assignment is handy.
I find the additional operators unnecessary, and not worth their cognitive overhead. The ? operator was already used as sugar for a parameter-free function call. The [] operator could easily have been a function in a library instead.

One of the trade-offs for this much syntactic sugar is some syntactic ambiguity. An opinionated formatter could resolve some of this.
A friend suggested we do a side project that does exactly this type of overlay to help with accessibility.
It is a very powerful pitch to be able to say “install our turbo 3000 power a11y plugin and your website will magically be WCAG 3 compliant,” so why not give it a shot? The market is definitely there.
I spent a week or so reading up on WCAG and doing some prototyping, and came to the conclusion that this is a very hard (impossible?) thing to do right; ultimately I felt like we would be doing people a disservice by even trying. We would absolve the website owner from having to do the right thing. In the end I backed out.
I would happily help clients make their website accessible. But this quick-fix thing, offered because you decided accessibility wasn’t worth your time, does not sit right with me.
If anything, an overlay creates new accessibility barriers. It’s why ad-blocker filter-lists for overlays are getting some popularity right now.
An alternative would be to downgrade to OpenSSL 1.1.x and then upgrade back once a fix is published and stabilized.
Although BoringSSL is an open source project, it is not intended for general use, as OpenSSL is. We don’t recommend that third parties depend upon it. Doing so is likely to be frustrating because there are no guarantees of API or ABI stability.
Despite BoringSSL’s “not intended for general use” warning, it’s used by many projects:
I use nginx-quic with BoringSSL without issue, although I did have to use a separate script to manage the OCSP cache. The script manages the cache better than Nginx ever did, so I recommend it; it should be trivial to switch it from OpenSSL to LibreSSL.
POSSE note from https://seirdy.one/notes/2022/10/30/using-boringssl/
I’m surprised to hear that Apple would be using it for secure transport, when their “OpenSSL” has been LibreSSL for several years.
Huh, I didn’t know they used LibreSSL. A quick look seems to reveal that their LibreSSL is really bare-bones, and doesn’t seem to include any engines.
Interestingly, Windows uses LibreSSL in programs like the pre-included OpenSSH.
Just to confirm:
> which openssl; ls -l `which openssl`; openssl version
/usr/bin/openssl
-rwxr-xr-x 1 root wheel 1064768 13 Oct 17:06 /usr/bin/openssl
LibreSSL 2.8.3
Given that they stated one shouldn’t use BoringSSL, LibreSSL might be the better option.
It also comes with a nice API called libtls, intended to be more foolproof, which you can use instead of the OpenSSL one.
I think that Internet.nl is probably my new favorite security scanner for web and mail servers. Its TLS checks are much more strict than SSL Labs, and it includes all the checks from Hardenize. It’s also the only service I know of that checks a server’s RPKI.
I’m especially curious about existing implementations anyone uses here on Lobsters. My Lobsters-based POSSEs have been manual so far; the only automated POSSE-ing I do is via a shell script that calls toot to post to the Fediverse.
I am using my own[1] to cross-post to mastodon[2], pleroma[3], pinboard[4] and twitter[5] accounts. The last via https://crossposter.masto.donte.com.br/.
Currently rewriting it to be an ActivityPub instance on its own [6].
[1] http://mro.name/shaarligo
[2] https://digitalcourage.social/@mro
[3] https://pleroma.tilde.zone/@mro
[4] https://pinboard.in/u:mro/
[5] https://twitter.com/mrohrmoser
[6] https://seppo.app/en/
I use this for my blog. I post to my blog and then have it poke another service that spreads out notifications. I wrote about it in my most recent talk.
I personally self-host an espial instance (https://github.com/jonschoning/espial). Then I also self-host a node-red instance (https://nodered.org) where I created 4 workflows:
You get the idea: the workflows look for RSS feeds from my blog, my public espial bookmarks, and my public espial notes. Any new item is published to Twitter (and the bookmarks are copied to Pinboard).
More details:
I did some manual POSSE to Lobsters in the past for comments, and when I’ve posted blog posts I’ve made sure to record the syndication / cross-post to Lobsters. But I’d prefer to move to more of a backfeed approach, where I can write posts in the Lobsters UI and they’ll later automagically sync back to my site.
See also: https://indieweb.org/PESOS
The problem with PESOS is that it prevents original-post-discovery, undermining the “own your content” goal of the IndieWeb. POSSE-over-PESOS also fits nicely into the logic behind POSSE before sending Webmentions.
Backfeeding does make sense for aggregating responses, though. It’d be cool if Lobsters gave replies permalinks (rather than anchor-links) so we could enable that functionality…maybe I’ll file a ticket.
I can’t say it would be truly automatic, but if I had to spend a lot of time copying and pasting text across multiple sites, that’s when I’d start using some level of copy/paste and browser tab automation. Nothing fancier than echo $text | xclip -sel clipboard and using a firefox $url invocation to spawn each site’s URL to make a new post on. This is what I did for a job with not-so-great automation aspects.
Q: Why bother? You can’t make a new browser engine without billions of dollars and hundreds of staff.
Sure you can. Don’t listen to armchair defeatists who never worked on a browser.
Armchair defeatist here 👋 I don’t believe it takes “billions of dollars” to create a new basic browser engine (i.e. HTML, CSS, JS); after all, there are already multiple projects doing exactly that (e.g. Netsurf and Dillo). I’m unsure, however, whether newer technologies like WebGL, WebDRM, WASM, etc. can be implemented completely in a feasible timeframe. You’d wind up with a browser that’s nice for reading news sites and maybe watching YouTube, but anything more complex would be at least partially broken. Maybe someone more knowledgeable can correct me on this.
You’d wind up with a browser that’s nice for reading news sites and maybe watching Youtube, but anything more complex would be at least partially broken.
Sounds great to me.
Worth noting that the SerenityOS browser has some support for JavaScript, WebAssembly, WebGL, websockets, and other “modern” Web features. They plan to eventually support web apps like Discord, since that’s where they chose to host their community (/me sighs).
Wrote my thoughts over at https://seirdy.one/notes/2022/07/08/re-trying-real-websites-in-the-serenityos-browser/
Sounds great to me.
Indeed. Sites that qualify for 1MB Club would probably work well.
Case in point: My own site is generated via Hugo. The markup is very very simple. I’ve added a splash of (ready-made) CSS, but that’s mostly to get a nice typeface and neat margins – the stylesheet is not at all required to read the text, and there’s no JavaScript in use.
And I’m far from alone in building sites like this.
While it may sound great to you, it’s going to kill adoption if a new browser doesn’t have sufficient parity. And given how much Google is driving the specs these days and forcing everyone else to play catch-up, I’m not really sure that independent browser engines can maintain meaningful parity.
I also worry that the final Chrome/Chromium monoculture will arrive pretty soon regardless of what anyone does at this point.
I highly doubt their goals or expectations are mass adoption. And if, like you say, there is no way to beat Google anyway, they might as well not worry about it and just make whatever they enjoy.
I also worry that the final Chrome/Chromium monoculture will arrive pretty soon regardless of what anyone does at this point.
Ya, me too. But efforts like these will do one of three things: 1) nothing at all to slow the march towards a Chrome/Chromium monoculture, 2) delay it, or 3) provide a viable alternative and a way out from it.
#2 and #3 seem highly unlikely, but I’d rather not give up all hope and accept #1 as our fate. But I’m one of those crazy people who would rather use/promote webkit, even if it’s not perfect, since its survival is absolutely necessary to reach #2 or #3 (even if #1 is much more likely at this point…). Ya, it’s a sad situation out there.
The author worked on WebKit / Safari for a long time, so I’d trust his judgement a lot more than mine on the amount of work. I wonder how many of the older web technologies can be implemented in the newer ones. Firefox, for example, decided not to implement a native PDF renderer and instead built a PDF viewer in JavaScript (which had the added advantage that it was memory safe). It would be very interesting to see if you could implement the whole of the DOM in JavaScript, for example.
You could have said the same thing about Linux. How is it possible for a hobbyist who had never had a real job to create an operating system that’s fast and portable? That’s for companies like Sun, IBM, and HP, which were huge Unix vendors at the time.
I also found it funny that as recently as ~2009 there were knowledgeable people saying that Clang/LLVM were impossible. You could never re-do all the work that GCC had built up over decades.
That’s completely alright. Ladybird is a system made by its developers, for its developers. It does not intend to compete with other web browsers, and that’s okay. It’s the epitome of the https://justforfunnoreally.dev/ mindset.
I also have a defeatist stance here. Various streaming services such as Netflix and Co are a hard wall since the web was made non-free and gatekeepers like Widevine (Google) don’t even grant pretty successful browser projects entry.
But then maybe it’s time to just leave that stuff behind us anyways.
While this is certainly true, using a browser as a user agent for hypertext documents and not as a cross-platform application runtime is a worthy exercise on its own. IMO, of course.
WebDRM is likely the killer because it’s stupid :-/
But Kling spent many years working on webkit and khtml, so the layout and rendering of the bulk of html and css shouldn’t be a problem for him alone. Bigger issues I suspect will be xml, xslt, and xpath :-D
Overlooking the one-letter name, q is now my favorite DNS client.

Good portability and support for basically every client-server DNS protocol in use make it feel like “cURL for DNS”. And little QoL features like human-readable durations and the option for color add up.
I’m especially interested in using it to test ODoH.
This article has it backwards. The user controls their device, and how they want to view content, not the author of the website. To say it another way: the device and app are a user agent, not a website agent. The author of the website has no more right to demand that I view their page in a “normal” browser than they have to demand that I do jumping jacks while reading their website, or that I don’t use an adblocker (which is to say that they could make it a contractual requirement, but they would have to get me to sign that contract before giving me the content, and they have no right to impinge on everyone’s devices to ensure that people are following their strange contract).
There is something to be a bit concerned about here, but it’s not that apps are somehow unfairly hurting websites. It’s that apps are abusing their position as a source of links to other content to coerce users into viewing that content in them, when users might prefer another user agent. Probably the appropriate medium for resolving this is regulation - since Google does this themselves, it seems unlikely that it will be resolved by the platform just deciding to ban apps that do it from their app store.
I have the freedom to set the terms on which I will offer access to a website of mine.
If you do not like those terms, you may reject them and not access it. If you reject them and then attempt to access it anyway, you are the one violating my freedom: the freedom to decide how I will run my site and by whom and on which terms it will be accessed.
The Web is not built around advance informed consent; there’s no agreement to terms before downloading a public file (besides basic protocol negotiations). This is one reason why “by using this site, you agree to our cookies, privacy policy, kidney harvesting, etc” notices won’t fly under the GDPR.
A website admin can’t set terms for downloading a linked document; the user-agent just makes a request and the server works with that data to deny or accept it. There’s no obligation for the UA to be honest or accurate.
Ultimately, nobody is forcing you to run a Web server; however, plenty of people have to use the Web. Respect for the UA is part of the agreement you make when joining a UA-centric network.
Should you disagree with the precedent set by the HTML Living Standard, nearly every Web Accessibility Initiative standard (users must be able to override and replace stylesheets, colors, distracting elements), the exceptions to e.g. the Content Security Policy in Webappsec standards to allow UA-initiated script injection, etc.: you’re always free to build your own alternative to the Web with your own server-centric standards.
POSSE note from https://seirdy.one/notes/2022/08/12/user-agents-set-the-terms/
Who said anything about advance consent? I can put up a splash page laying out terms and tell you to either accept them and continue, or reject them and leave. Or I can login-wall things. And if you try to work around it and access anyway, I have every right to use both technical and legal-system measures to try to prevent you, or to hold you accountable afterward for the violation.
Or plenty of other low-level tricks and techniques are fair game, too; for example, I believe Jamie Zawinski at least used to (I don’t know if he still does) serve a famous obscene image to any inbound request with a referer from Hacker News.
But before you go too far into citing standards and accessibility at me, do keep in mind that what we’re discussing here is whether sites should be able to object to Instagram literally MITM’ing users and injecting potentially malicious script. And the original parent comment’s suggestion of regulating this away is actually contradictory to the absolutist “browser is a user agent” moral stance, since that stance requires rejection of any imposed limitation on what the “user agent” may do. After all, some person out there might actively want an “agent” to MITM and inject Instagram trackers for them, so banning the practice by law is as hostile to user freedom as is any technical measure which attempts to prevent it.
Also, the absolutist “user agent” stance is still hostile to the freedom of a site owner to decide who to offer access to, as I originally pointed out, and that has nothing to do with accessibility or usability or any of the other things you tried to steer the argument off-topic to. If I want to make a secret online club and decide who I do and don’t let in and on what terms, I can do that and you don’t get to tell me otherwise.
And the original parent comment’s suggestion of regulating this away is actually contradictory to the absolutist “browser is a user agent” moral stance, since that stance requires rejection of any imposed limitation on what the “user agent” may do
It does not. There are all sorts of restrictions on what one may make available to consumers. Whether that’s baby toys covered in lead, or products that abuse their monopoly position to gain monopolies in other unrelated markets (anti-trust law, which is the closest analogy to the regulation I proposed, IMO).

It merely means that you should be making such restrictions to benefit the user, not some third party with no rights to the user’s device whatsoever.
hostile to the freedom of a site owner to decide who to offer access to
The site owner has the freedom to do whatever he likes, such as your examples of serving the user with a contractual agreement that they must agree to before the site owner serves them the actual content. The site owner has no right to have every user attempting to access his site (prior to agreeing to any contract) do so in any particular manner, though; it is up to him not to give content away to people who come asking for it if he wants to require them to agree to contractual limitations before they get the content.
So if a government were to pass an enforceable law saying that any site which sends an X-Frame-Options header with a “deny” value must be opened in the user’s default browser rather than an app-embedded one, would you be OK with that? There are user-centric reasons for doing so, after all, so it would be a law with benefit to the user.
But it’s also exactly the thing you previously attacked.
No. Nor did I say so. Rather I have continuously been attacking that idea, and will continue to do so below.
It is in the users’ interest to be able to view websites however they want. Rather, your suggestion would be the government gifting control over how users view documents on devices that they lawfully own (the actual instance of the bits, not the copyright, same as owning a book) to website owners. To the extent that there is user harm resulting from the current app ecosystem, it is extremely minimal compared to the utterly draconian measure you are proposing.
Moreover, there is the much less invasive, well-tested and understood method of requiring that users be given the choice of how to open links (see, for example, the similar laws for payment providers that are cropping up, and the much older consent decree related to Internet Explorer). I don’t think the harm is great enough that the government necessarily even needs to do something about this, but I wouldn’t mind if they did, because it is really just a slight extension to existing anti-trust law and, unlike your suggestion, does very minimal harm to users’ freedoms to use their devices how they want to.
Edit:
I think you generally misunderstand the nature of the relationships here. In order from who should have the most control over how the content is viewed to who should have the least, it goes:
User > Creator of The App that the User chose to install on their device and view the website in > Website Owner
Not as you seem to have it, Website Owner > User > App Creator, or even the unreasonably charitable reading of your posts of User > Website Owner > App Creator.
The website owner bears no special relationship to the user, is not trusted, and did nothing but supply some data to which they no longer have any relevant rights once the transfer is complete (they continue to own the copyright if they did in the first place, but nothing restricted by copyright is being done to the data). The app creator supplied software that the user chose to run on their device, in a relatively privileged manner, and is far more trusted to act in the user’s interest.
I want to be absolutely crystal clear here. I posed a hypothetical where sending a certain header would require content to be opened in “the user’s default browser rather than an app-embedded one”, and your description of this is “utterly draconian”.
How, exactly, is it “utterly draconian” to use the user’s default browser?
Because your hypothetical has just given website owners the ability to legally require that users only view their website through their default browser when they have absolutely no right to demand users do anything of the sort.
It has made the decision that users aren’t entitled to view news articles in their news app and social media sites in their social media app.
It has made it next to impossible to make a huge variety of tools from simple ones like curl and youtube-dl to complex ones like citation managers and privacy respecting replacement apps for YouTube and Facebook without either the cooperation of website owners or breaking the law.
It is fundamentally seizing a fairly significant degree of control of the device from the users, and handing it to the people who serve the content.
Maybe you only have the users’ best interests at heart (I sort of doubt it, given that we’re discussing this under an article whose whole premise is that users aren’t entitled to view websites how they choose because it violates some supposed right of the website owners), but the policy you’re proposing is not going to be used only for good.
This is an inconsistent position, though. The status quo is user-hostile. Any technical solution would also be user-hostile by your definition. And so too would any regulatory solution – no matter how it’s implemented, it will place restrictions on what a “user agent” is allowed to do, or which “user agents” are allowed, and that appears to be anathema to you.
Even something like “app must ask” can be turned user-hostile and anticompetitive, as in the case of the iOS Gmail app, which – I don’t know if it still does, but I know it did, once upon a time – would “helpfully” ask if you wanted to open a link in your default browser, or install Chrome. With a “remember my choice” that only “remembered” for that single link in that single email message, and would prompt again for the next link it encountered, all in hopes you’d finally give in to its badgering and install Chrome.
So I simply don’t see how any position, consistent with the moral values about the user that you keep citing, can be built which would also allow any type of regulation to solve this. All solutions will, by your definitions, end up taking away some freedom from the user, which is something you seem absolutely unwilling to budge even the slightest bit on, and regulatory solutions will do so by force.
The reason C is performant today is that there exist giant optimizing compilers with reasonable freedom to mess around with the code. Try compiling with tcc instead and see what you give up. A language can be faster than C by making more guarantees than C does. If there is no risk of pointer aliasing, for example, the compiler can go further. Another way to be faster than C is to expose operations that the compiler would otherwise have to infer from the C code (which doesn’t always work), such as explicit vector operations. A third way to be faster than C is to have generics with specialization for particular datatypes, which is where C++ shines. Another way is to make it easy to evaluate stuff at compile time.
And with LLVM you can get half of that giant optimizing compiler for your pet language for free.
Nowadays, most code that needs to be “faster than C” is written in assembly. LuaJIT, video decoders, some video encoders, image decoders, etc.
I’m a big fan of the stronger push I keep seeing away from CDNs being considered a best practice.
I really wish prefers-reduced-data had much better support, so it was easier to plan around the connection issue the author brings up. The <picture> tag lets you use media attributes, so you could load the image only if the user wants it. The problem comes up when trying to support only the reduce preference, because using CSS alone you can’t really prove the negative of “if reduced or the browser doesn’t support the feature”. I burnt a lot of time on this recently to no avail.
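For what it’s worth, a minimal sketch of the <picture> media-attribute pattern I mean (the file names are placeholders): serve a smaller image when the user has opted into reduced data, and the full image otherwise.

<picture>
  <!-- Served only when the user has asked for reduced data (placeholder file name). -->
  <source media="(prefers-reduced-data: reduce)" srcset="photo-small.jpg">
  <!-- Fallback for "no-preference" and for browsers without support (placeholder file name). -->
  <img src="photo-full.jpg" alt="Description of the photo">
</picture>

The catch is exactly the negative-proof problem above: a browser that doesn’t understand prefers-reduced-data falls through to the full-size fallback, so you can’t distinguish “reduce” from “unsupported” without scripting.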
Another thumbs up for focusing on users changing the fonts in their browser and respecting that, instead of ignoring it with a big font stack that assumes I’d ever want to see Ubuntu just because I’m on Linux. Related to the above paragraph: when I’m on a stable connection, however, I do still prefer a self-hosted font for branding’s sake, because I value the typography if well designed – which is subjective, and some websites just throw in Roboto because “I dunno, pick something that’s kinda inoffensive to read”.
I never get tired of suggesting against px either. I set my font size a bit larger in some contexts (like my media server, where I’ll browse the web on the couch). Sites using rem and % have no trouble. I still do prefer px specifically for border, as a lot of the time I didn’t want a rounding error to make things too thick or thin.
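As a rough illustration (the selectors and values are arbitrary), that boils down to sizing text relative to the user’s browser setting while keeping hairline borders fixed:

/* Scales with whatever base font size the user configured in their browser. */
body {
  font-size: 1rem;
  line-height: 1.5;
}

h1 {
  font-size: 2rem;
}

/* A fixed 1px rule avoids fractional rounding making lines look uneven. */
blockquote {
  border-left: 1px solid currentColor;
  padding-left: 1rem;
}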
I appreciate calling it “small viewports” instead of “mobile”, because “small viewport” makes only one assertion about the user agent: its viewport is small.
More controversially, I’ll also disagree about black for prefers-color-scheme: dark. Because #000 consumes the least amount of energy, it is the best choice for dark environments and the planet. With a true dark and good contrast, I can really crank down the brightness on my laptop and phone (both OLED), which saves battery too. Folks that say it’s unnatural don’t seem to account for the device itself not being a black hole for light. Not everyone, but I do think #000 complainers might have meh monitors with the brightness turned up higher than it needs to be (but just a guess). I’m pretty bummed that the Lobsters team got bullied out of the #000 background for its dark theme (luckily the CSS vars made my user styles a breeze).
I hard disagree with SVGO usage though. SVGO’s defaults are far too aggressive, including stripping out licence metadata, which is categorized the same as editor attributes—which could put you in violation of CC BY and is generally just not nice to the artist. No separation means you can’t get rid of just your Inkscape attributes that help when editing (like your grid values, etc.). The SVGO maintainer is adamant that all users should be reading the manuals, and while that’s somewhat true, a) pick better defaults and b) many tools have it in the toolchain and you cannot configure it (think create-react-app, et al.). You can see some projects like SVGR where most of their issues are SVGO-related. My suggestion: use scour instead; its aggressive options are opt-in.
Because #000 consumes the least amount of energy, it is the best choice for dark environments and the planet.
The energy usage difference has been shown to be negligible: https://www.xda-developers.com/amoled-black-vs-gray-dark-mode/
Not everyone, but I do think #000 complainers might have meh monitors with the brightness turned up higher than it needs to be (but just a guess).
Personally, I like to keep my phone brightness quite low, and I find the contrast between text and #000 backgrounds to be rather…painful.
I’m aware that it’s a small difference, but #000 is still the lowest. The internet has a lot of devices connected to it, so it does add up.
I find the contrast between text and #000 backgrounds to be rather…painful
Lowering brightness reduces the contrast. Simple as that.
Contrast is a little more complex than that.
The Helmholtz–Kohlrausch effect and the APCA’s perceptual contrast research show that the mathematical difference between two colors is quite different from the perceptual contrast we experience.
There’s more than one type of contrast at play here. Lowering brightness until halation, overstimulation, etc. become non-issues will likely compromise legibility. You can get a much higher contrast that doesn’t trigger halation as easily by giving the background some extra lightness.
If you still want a solid-black (or almost-solid-black) background, look into the prefers-contrast media query. You can use it to specify a reduced or increased contrast preference. I try to keep a default that balances the different needs, but offer alternative palettes for reduced/increased contrast preferences.
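A sketch of what that can look like (the custom property names are made up): a balanced default palette, with overrides for increased or decreased contrast preferences.

:root {
  /* --bg and --fg are arbitrary custom property names. */
  --bg: #0d0d0d; /* near-black default to limit halation */
  --fg: #dcdcdc;
}

@media (prefers-contrast: more) {
  :root {
    --bg: #000; /* solid black for people who want maximum contrast */
    --fg: #fff;
  }
}

@media (prefers-contrast: less) {
  :root {
    --bg: #1c1c1c; /* softer palette for people sensitive to high contrast */
    --fg: #c0c0c0;
  }
}

body {
  background: var(--bg);
  color: var(--fg);
}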
The main issue I’ve run into with svgo is that it defaults to dropping the viewbox attribute on the svg element in cases where it is not redundant, i.e. does affect rendering.
Yep. I mentioned SVGR; they have 149 issues related to SVGO, which makes up roughly ⅓ of their issues, and they span the whole spectrum. It’s so flawed that I ban its usage on teams I work with now, to save them the headaches it can cause as well as the potential legal trouble you could run into on the licensing front.
I’ve mostly re-written this article since the last time it was submitted (the canonical URL changed but a redirect is in place).
I’ve shifted much of its focus to accessibility. Accessibility guidance tends to be generic rather than specific, and any information more specific or detailed than WCAG is scattered across various places.
Feedback welcome. I’m always adding more.
I’ve quickly skimmed through your article, stopping mainly at the sections that interest me, and I would have liked it to be split into a series of more focused articles / pages. Right now it’s hard to see where one section ends and another begins.
All in all, I found quite a bit of good advice in there. Thanks for writing it!
Given that the article touches on many non-mainstream browsers, I think special consideration should also have been given to console browsers like lynx, w3m, and others. I know almost nobody uses these to browse the internet these days, but they might be used by some automated tools to ingest your content for archival or a quick preview.
From my own experience it’s quite hard to get a site to look “good” in all of these, as each has its own quirks. Each renders headings, lists, and other elements in quite different ways. (In my view w3m is closer to a “readable” output, meanwhile lynx plays a strange game with colors and indentation…)
For example, I’ve found that using <hr/> elements is almost a requirement to properly separate the various sections, especially the body of an article from the rest of the navigation header / footer. (In fact I’ve used two consecutive <hr/>s for this purpose, because the text might include a proper <hr/> of its own.)
On a related topic, a note regarding how the page “looks” without any CSS / JS might also be useful. (One can simulate this in the browser by choosing the View -> Page Style -> No Style option.)
As with console browsers, I’ve observed that sometimes including some <hr/>s makes things much more readable. (Obviously these <hr/>s can be given a class and hidden with CSS in a “proper” browser.)
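Concretely, something like this (the class name is arbitrary): the separators show up in lynx and w3m, while CSS-capable browsers hide them.

<style>
  /* "textual-only" is an arbitrary class name. Hidden in graphical browsers;
     console browsers ignore CSS and still render the horizontal rules. */
  hr.textual-only { display: none; }
</style>

<header>…site navigation…</header>
<hr class="textual-only">
<hr class="textual-only">
<article>…the article body, which may contain its own <hr>…</article>
<hr class="textual-only">
<hr class="textual-only">
<footer>…footer links…</footer>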
I know almost nobody uses one of these to browse the internet these days
I find them essential when on a broken/new machine which doesn’t have X11 set up correctly yet. Or on extremely low-power machines where Firefox is too much of a resource hog. Especially mostly-textual websites should definitely be viewable using just these browsers, as they may contain just the information needed to get a proper browser working.
I was actually recently showing other members of the team that they will write better markup and CSS if they always test with a TUI browser and/or with styles disabled in Fx, having done it myself for a few years now. It will often lead to better SEO too, since non-Google crawlers will not be running that JS you wrote.
Netsurf is still a browser to consider too.
Er, sort of. There are lots of great reasons to test in a textual browser, but “accessibility” is lower on that list than most people realize. It’s easy for sighted users to visually skip over blocks of content in a TUI or GUI, but the content needs to be semantic for assistive technologies to do the same.
I consider textual browsers a “sniff test” for accessibility. They’re neither necessary nor sufficient, but they’re a quick and simple test that can expose some issues.
I do absolutely advocate for testing with CSS disabled; CSS should be a progressive enhancement.
And most websites ignore it, so I need to use Dark Reader anyway, at which point it doesn’t have that much value.
You don’t need to inject scripts/styles from a privileged extension to do this in Firefox.

Go to about:preferences and scroll to the “Colors” section. Select “Manage Colors”. Then pick your favorite palette and set the override preference to “Always”.
As someone who has not looked at the state of the art for stylometric fingerprinting since early in our eternal September, I wonder:
Naively, that would seem like a strong approach. The translation would remove signal, and the imitation would add noise.
The first study I linked indicates that obfuscation is a little more effective than imitation, and both beat naive machine-translation output.
What if I attempt to transform a machine translation so that it imitates the style of someone well known?
I’d rather place the original work, machine translation, and a style-guide side-by-side and transform the original work to match the style-guide while also re-phrasing anything that tripped up the machine translation. That should cover both bases.
Most of these are pages that blur the line between “document” and “app”, containing many interactive controls. Being concerned about them is valid; however, I think the concern is misplaced at this stage.
For an independent engine, I’m more interested in simple “web documents”. Those need to work well before tackling “Web 2.0” territory. Specifically: articles progressively enhanced with images, stylesheets, and maybe a script or two. Understanding how well Web 2.0 sites render isn’t really useful to me without first understanding how well documents render.
When testing my site, my main pain points are: a lack of support for <details>, misplaced <figcaption> elements, my SVG profile photo not rendering (it renders when I open it in a new tab), and occasional overlapping text. The only non-mainstream independent engine I know of that supports <details> is Servo.
POSSE note from https://seirdy.one/notes/2022/07/08/re-trying-real-websites-in-the-serenityos-browser/
So in order to make your site slightly more accessible to screen readers, you’ll make it completely inaccessible to browsers without JavaScript?
I think @river’s point is that it’s most important to accommodate limitations due to circumstances beyond the user’s control. And these are limitations that can prevent people from getting or keeping a job, pursuing an education, and doing other really important things. In all cases that I’m aware of, at least within the past 15 years or so, complete lack of JavaScript is a user choice, primarily made by very techy people who can easily reverse that choice when needed. The same is obviously not the case for blindness or other disabilities. Of course, excessive use of JavaScript hurts poor people, but that’s not what we’re talking about here.
If using <details> made the site impossible to use for blind people, that would obviously be much more important, but here the complaint is that… the screen reader reads it slightly wrong? Is that even a fault of the website?
Fair point. Personally, I wouldn’t demand, or even request, that web developers use JavaScript to work around this issue, which is probably a browser bug, particularly since it doesn’t actually block access to the site.
On the other hand, if a web developer decides to use JavaScript to improve the experience of blind people, I wouldn’t hold it against them. IMO, making things easier for a group of people who, as @river pointed out, do have it more difficult due to circumstances beyond their control, is more important than not annoying the kind of nerd who chooses to disable JS.
Well, disabling JS is not always a choice. Some browsers, such as Lynx or NetSurf, don’t support it. But yeah, I generally agree.
I suppose it’s possible that some people have no choice but to use Lynx or Netsurf because they’re stuck with a very old computer. But for the most part, it seems to me that these browsers are mostly used by tech-savvy people who can, and perhaps sometimes do, choose to use something else.
I suppose it’s possible that some people have no choice but to use Lynx or Netsurf because they’re stuck with a very old computer. But for the most part, it seems to me that these browsers are mostly used by tech-savvy people who can, and perhaps sometimes do, choose to use something else.
And what percentage of those lynx users is tech-savvy blind people? Or blind people who are old and have no fucks left to give about chasing the latest tech? There are, for instance, blind people out there who still use NetTamer with DOS. DOS, in 2022. I’m totally on board with their choice to do that. Some of these folks aren’t particularly tech savvy either. They learned a thing and learned it well, and so that’s what they use.
Many users who need a significant degree of privacy will also be excluded, as JavaScript is a major fingerprinting vector. Users of the Tor Browser are encouraged to stick to the “Safest” security level, which disables JavaScript along with other risky features.
Even if it were purely a choice in user hands, I’d still feel inclined to respect it. Of course, accommodating needs should come before accommodation of wants; that doesn’t mean we should ignore the latter.
Personally, I’d rather treat any features that disadvantage a marginalized group as a last resort. I prefer selectively using <details> as it was intended—as a disclosure widget—and would rather come up with other creative alternatives to accordion patterns. Only when there’s no other option would I try a progressively-enhanced JS-enabled option. I’m actually a little ambivalent about <details> since I try to support alternative browser engines (beyond Blink, Gecko, and WebKit). Out of all the independent engines I’ve tried, the only one that supports <details> seems to be Servo.
JavaScript, CSS, and—where sensible—images are optional enhancements to pages. For “apps”, progressive enhancement still applies: something informative (e.g. a skeleton with an error message explaining why JS is required) should be shown and overridden with JS.
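A minimal sketch of that pattern (the id, link, and wording are my own placeholders): the static message is what no-JS visitors see, and the script replaces it only when it actually runs.

<div id="app">
  <!-- "app" and "/data.csv" are placeholder names for this example. -->
  <p>This interactive demo needs JavaScript. You can still
  <a href="/data.csv">download the raw data</a> instead.</p>
</div>

<script>
  // Runs only when JS is available; swaps the fallback for the real UI.
  const app = document.getElementById("app");
  app.textContent = "";
  const button = document.createElement("button");
  button.textContent = "Start demo";
  app.append(button);
</script>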
(POSSE from https://seirdy.one/notes/2022/06/27/user-choice-progressive-enhancement/)
I mean, not for nothing, but I’m fairly certain you can constrain what can be executed in your browser from the website.
I’m certainly okay with a little more JS if it means folks without sight or poorer sight can use the sites more easily.
I’m certainly okay with a little more JS if it means folks without sight or poorer sight can use the sites more easily.
In my experience (the abuse of) JavaScript is what often leads to poor accessibility with screen readers. Like, why can I not upvote a story or comment on this site with either Firefox or Chromium? ISTR I can do it in Edge, but I don’t care enough to spin up a Windows VM and test my memory.
We need a bigger HTML, maybe with a richer set of elements or something. But declarative over imperative!
Like, why can I not upvote a story or comment on this site with either Firefox or Chromium?
I use Firefox on desktop and have never had a problem voting or commenting here.
We need a bigger HTML, maybe with a richer set of elements or something. But declarative over imperative!
The fallback is always full-page reloads. If you want interactivity without that, you need a general-purpose programming language capable of capturing and expressing the logic you want; any attempt to make it fully declarative runs into a mess of similar-but-slightly-different one-off declaration types to handle all the variations on “send values from this element to that URL and update the page in this way based on what comes back”.
I use Firefox on desktop and have never had a problem voting or commenting here.
Yes, but do you use a screenreader? I do.
The fallback is always full-page reloads. If you want interactivity without that, you need a general-purpose programming language capable of capturing and expressing the logic you want;
Sure, but most web applications are not and do not need to be fully interactive. Like with this details tag we’re talking about here? It’s literally the R in CRUD and the kind of thing that could be dealt with by means of a “richer HTML”.
On modern browsers, the <details> element works natively, without JS.

In fact, that’s the entire point of adding the element.
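For reference, the no-JS disclosure widget being discussed is just this (the label and text are made up for the example):

<details>
  <!-- Example content only; the summary text is invented. -->
  <summary>Show the full changelog</summary>
  <p>Everything inside stays hidden until the user activates the summary;
  the browser handles the toggling itself, no script required.</p>
</details>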
This is a very comprehensive list!
Anyone else getting a MOZILLA_PKIX_ERROR_REQUIRED_TLS_FEATURE_MISSING on Firefox 101.0.1 on this site? Works fine in Safari, though.
Thanks!
Regarding the error: can you share more details? Which OS is this happening on?
If you open the “Network” panel in DevTools, what does it say when you click the main/top request?
This looks to me like an OCSP issue, which is odd since I use certbot-ocsp-fetcher.
I’m not sure what to look for, but some more poking around tells me that it has to do with OCSP Must Staple. I was able to load the site after disabling security.ssl.enable_ocsp_must_staple. Nothing looks awry on the SSLLabs report but this is not my area of expertise, haha.
How does Warp stack against other toolkits when it comes to accessibility and system integration?
In my system settings I set colors, default fonts (with fallback and hinting settings), animation preferences (reduce/eliminate animations), disable overlay scrollbars, set buttons to include text where possible, enable dark mode, configure keyboard shortcuts, and sometimes enable a screen reader. Windows users can enable High Contrast Mode to force their preferred palettes. To what degree will this toolkit respect these settings?
On Linux: the only options I know of with decent system integration, accessibility, and some presence outside the Freedesktop.org ecosystem are Qt, GTK, and the Web. Flutter falls flat, with outstanding WCAG level A blockers like functional keyboard navigation and basic levels of customization (e.g. disabling animation); relevant issues typically get de-prioritized. This is despite its massive funding and development efforts, so I’m not optimistic about other contenders.
AccessKit looks like a start for cross-platform interoperability between accessibility APIs. Until it’s ready, support for each platform’s accessibility APIs and screen readers will need to be implemented and tested. It’s a monumental task. I worry that releasing yet another inaccessible toolkit will merely increase the disability gap.
POSSE note from https://seirdy.one/notes/2023/02/16/ui-toolkits-accessibility-gap/