Web developers really are a parody of themselves. Everything they create must be a Rube Goldberg machine.
Those of you who really can’t stand web development can always filter. Unsupported categorical statements like this don’t reflect well on the lobsters community. I ask that people who upvoted it reconsider. The author may have left themselves open to misinterpretation by not providing context in the article. However, looking at their other blog posts, it seems they’re working on some non-trivial graphics, which might warrant some of the technical choices they describe.
Web developers are the only category of programmer who have to endure this level of condescension frequently. I feel like I’m seeing posts shaming developers for their supposedly bloated JS or mocking users of JS/frameworks/electron on a daily basis on the lobste.rs front page nowadays.
This may be because the ecosystem is deeply fucked, many developers are complicit in this, and all of us are victims of the previous two points to one degree or another.
I’ve been doing web development, front-end and back-end, for a while now, and I agree with the root comment. That said, the author probably has a reason for tackling things this way, and we got to see some neat notes on using <template>.
I stopped reading when I reached:
“No lib is going to be used for routing. Routing is going to consist of simple manipulations of the browser history with popstate and pushstate. The basic approach is: [four increasingly complex steps that don’t even do error handling]”
…if only there was some sort of semantic element that allowed us to tell the browser to do the navigation for us…
I’m assuming your sarcasm is aimed at what you presume to be overlooked anchor tags. Depending on what the author is building, it may be necessary to support in-app transitions without loading a whole new page. However, the article’s lack of context makes it difficult to assess the appropriateness of their technology choices.
But you could still do <a href="#some_id"> to transition on the same page, right? And use CSS to dictate what section is visible to make it “app-like” rather than just one long page. But yeah, it’s hard to tell if the author is against this for some reason.
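Concretely, the hash-plus-CSS approach could look something like this (a minimal sketch; the section ids are invented, and note that nothing is shown until a fragment is set, so landing links should include one, e.g. /#home):

    <style>
      /* Hide every section except the one whose id matches the
         URL fragment. No JS involved; the back button works too. */
      section { display: none; }
      section:target { display: block; }
    </style>

    <nav>
      <a href="#home">Home</a>
      <a href="#about">About</a>
    </nav>
    <section id="home">Home content…</section>
    <section id="about">About content…</section>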
That’s an interesting idea, but it assumes the routing is directly related to DOM content. If the routing is in relation to canvas content, some kind of JS-based routing will be necessary. But again, I don’t think any meaningful assessments can be made without more context.
So you stopped right before the recommendation not to use that technique for the most part, because ordinarily you just need a link and no JS.
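For reference, the no-lib routing the quoted passage describes boils down to something like this (a minimal sketch; the data- attribute names are invented, and it still does no error handling):

    <script>
    // Show the element whose data-view matches the current path.
    function render(path) {
      document.querySelectorAll("[data-view]").forEach(el => {
        el.hidden = el.dataset.view !== path;
      });
    }

    // Intercept clicks on in-app links: push a history entry
    // instead of letting the browser do a full page load.
    document.addEventListener("click", e => {
      const link = e.target.closest("a[data-internal]");
      if (!link) return;
      e.preventDefault();
      history.pushState({}, "", link.getAttribute("href"));
      render(location.pathname);
    });

    // Back/forward buttons fire popstate; re-render for the URL
    // the browser restored.
    window.addEventListener("popstate", () => render(location.pathname));

    render(location.pathname);
    </script>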
I recently had a taste of this on my own site; I vented about it a bit in the commit log. Even standards for seemingly simple things like icons have grown to include way too many things.
This also goes beyond standards. The growth of huge frameworks and prioritization of developer speed and convenience over the quality of the software itself has been awful for progressive enhancement. CSS, images, JS, etc. all may or may not load; they are optional enhancements that pages should work well without. Exceptions exist, but they shouldn’t be common.
Most sites don’t even need to be complex enough to warrant advanced cross-browser testing; they just need to be marked-up text documents. I posted a blog post to lobsters a few weeks ago that summarized my views.
what are you then, a non-web developer making desktop applications using non-web protocols?
It boggles my mind that there are more and more websites that just contain text and images, but are completely broken, blank, or even outright block you if you disable JavaScript. There can be great value in interactive demos and things like MathJax, but there is no excuse to ever use JavaScript for buttons, menus, text, and images, which should be done in HTML/CSS, as mentioned in the blog post. Additionally, the website should degrade gracefully if JavaScript is missing, e.g. interactive examples revert to images or stop rendering, but the text and images remain in place.
I wonder how we can combat this “JavaScript for everything” trend. Maybe there should be a website that names and shames offending frameworks and websites (like https://plaintextoffenders.com/ but for bloat), but by now there would probably be more websites that belong on this list than websites that don’t. The web has basically become unbrowsable without JavaScript. Google CAPTCHAs make things even worse. Frankly, I doubt that the situation is even salvageable at this point.
I feel like we’re witnessing the Adobe Flash story all over again, but this time with HTML5/JS/Browser bloat and with the blessing of the major players like Apple. It’ll be interesting to see how the web evolves in the coming decades.
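The graceful degradation described above can be done by enhancing in one direction only: ship the static version, then upgrade it if the script actually runs. A sketch (the file and element names are invented):

    <figure id="demo">
      <img src="pendulum.png" alt="Phase plot of a driven pendulum">
      <figcaption>Driven pendulum (static rendering)</figcaption>
    </figure>

    <script>
    // If this runs, swap the static image for an interactive
    // canvas; if it never runs, the image and caption remain.
    const fig = document.getElementById("demo");
    const canvas = document.createElement("canvas");
    canvas.width = 640;
    canvas.height = 480;
    fig.querySelector("img").replaceWith(canvas);
    // ...draw and animate the demo on the canvas here...
    </script>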
Rendering math on the server/static site build host with KaTeX is much easier than one might have thought: https://soap.coffee/~lthms/cleopatra/soupault.html#org97bbcd3
Of course this won’t work for interactive demos, but most pages aren’t interactive demos.
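As a sketch of what that looks like on the build host (assuming Node and an "npm install katex"; renderToString is KaTeX’s documented server-side API):

    // Build-time rendering: runs on the host, not in the browser.
    const katex = require("katex");

    const html = katex.renderToString("c = \\pm\\sqrt{a^2 + b^2}", {
      throwOnError: false,
    });

    // Embed `html` into the generated page. Visitors only need the
    // KaTeX stylesheet and fonts; no client-side JS at all.
    console.log(html);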
If I am making a website, there is virtually no incentive to care about people not allowing javascript.
The fact is the web runs on javascript. The extra effort does not really give any tangible benefits.
You just proved my point. That is precisely the mechanism by which bloat finds its way into every crevice of software. It’s all about incentives, and the incentives are often stacked against the user’s best interest, particularly if minorities are affected. It is easier to write popular software than it is to write good software.
Every advance in computers and UI has been called bloat at one time or another.
The fact of the matter is that web browsers “ship” with javascript enabled. A very small minority actually disable it. It is not worth the effort in time or expense to cater to a group that disables stuff and expects everything to still work.
Am I using a framework?
Most of the time, yes I am. To deliver what I need to deliver it is the most economical method.
The only thing I am willing to spend extra time on is reasonable accommodation for disabilities. But most of the solutions for web accessibility (like screenreaders) have javascript enabled anyhow.
You might get some of what you want with server side rendering.
Good software is software that serves the end user’s needs. If there is interactivity, such as an app, obviously it is going to have javascript. Most things I tend to make these days are web apps. So no, Good Software doesn’t always require javascript.
I actually block javascript to help me filter bad sites. If you are writing a blog and I land there, and it doesn’t work with noscript on, I will check what domains are being blocked. If it is just the one I am accessing I will temp unblock and read on. If it is more than a couple of domains, or if any of them are unclear as to why they need to be loaded, you just lost a reader. It is not about privacy so much as keeping things neat and tidy and simple.
People like me are probably a small enough subset that you don’t need our business.
Ah, the No-Script Index!
How many times does one have to click “Set all this page to temporarily trusted” to get a working website? (i.e. you get the content you came for)
Anything above zero, but definitely everything above one is too much.
The absolute worst offender is microsoft. Not only is their average No-Script index around 3, but you also get multiple cross-site scripting attack warnings. Additionally, when it fails to load a site because of JS not working, it quite often redirects you to another page, so “set temp trusted” doesn’t even catch the one that caused the failure. Often you have to disable no-script altogether before you can log in, and then once you are logged in you can re-enable it and set the domains to trusted for next time.
That is about 3% of my total rant about why microsoft websites are the worst. I cbf typing up the rest.
i do this too, and i have no regrets, only gratitude. i’ve saved myself countless hours once i realized js-only correlates heavily with low quality content.
i’ve also stopped using medium, twitter, instagram, reddit. youtube and gmaps, i still allow for now. facebook has spectacular accessibility, ages ahead of others, and i still use it, after years away.
My guess is that a lot of people who use JS for everything, especially their personal blogs and other static projects, are either lazy or very new to web development and programming in general. You can expect such people to be less willing or less able to put the effort into making worthwhile content.
that’s exactly how i think it works, and why i’m happy to skip the content on js-only sites.
Why do you care more about disabled people than the privacy conscious? What makes you willing to spend time for accommodations for one group, but not the other? What if privacy consciousness were a mental health issue, would you spend time on accommodations then?
Being blind is not a choice: disabling JavaScript is. And using JavaScript doesn’t mean it’s not privacy-friendly.
It might be a “choice” if your ability to have a normal life, avoid prison, or not be executed depends on less surveillance. Increasingly, that choice is made for people if they want to use any digital device at all. And in many places, not using a digital device itself stands out.
This bears no relation at all to anything that’s being discussed here. This moving of goalposts from “a bit of unnecessary JavaScript on websites” to “you will be executed by a dictatorship” is just weird.
You framed privacy as an optional choice people might not need as compared to the need for eyesight. I’d say people need sight more than privacy in most situations. It’s more critical. However, for many people, privacy is also a need that supports them having a normal, comfortable life by avoiding others causing them harm. The harm ranges from social ostracism upon learning specific facts about them to government action against them.
So, I countered that privacy doesn’t seem like a meaningless choice for those people any more than wanting to see does. It is a necessity for their life not being miserable. In rarer cases, it’s necessary for them to even be alive. Defaulting to privacy as a baseline increases the number of people who get to live with less suffering.
No, I didn’t. Not even close. Not even remotely close. I just said “using JavaScript doesn’t mean it’s not privacy-friendly”. I don’t know what kind of assumptions you’re making here, but they’re just plain wrong.
You also said:
“Being blind is not a choice: disabling JavaScript is.”
My impression was that you thought disabling Javascript was a meaningless choice vs accessibility instead of another type of necessity for many folks. I apologize if I misunderstood what you meant by that statement.
My replies don’t apply to you then: just any other readers that believed no JS was a personal preference instead of a necessity for a lot of people.
The question isn’t about whether it’s privacy-friendly, though. The question is about whether you can guarantee friendliness when visiting any arbitrary site.
If JS is enabled then you can’t. Even most sites with no intention of harming users are equipped to do exactly that.
When you can get a quad core raspberry pi for $30 and similar hardware in a $50 phone, I really doubt there’s anyone who has a device of some sort but can’t afford one that can run most JS sites.
What devices do you see people using which can’t run JS?
The bigger question in terms of people being disadvantaged is network speed, where some sites downloading 1MB of scripts makes them inaccessible - but that’s an entirely separate discussion.
how is that a separate discussion? it’s just one more scenario when js reduces accessibility.
as for devices, try any device over 5 years old.
I literally have the cheapest phone you can buy in Indonesia (~€60) and I have the almost-cheapest laptop you can buy in Indonesia (~€250). So yeah, I’d say I’m “disadvantaged”.
Turns out that many JavaScript sites work just fine. Yeah, Slack and Twitter don’t always – I don’t know how they even manage to give their inputs such input latency – but Lobsters works just fine (which uses JavaScript), my site works just fine as well (which uses JavaScript), and my product works great on low-end devices (which requires JavaScript), etc. etc. etc.
You know, I actually tried very hard to make my product work 100% without JavaScript? It was a horrible experience for both JS and non-JS users and a lot more code. Guess I’m just too lazy to make it work correctly 🤷♂️
So yeah, please, knock it off with this attitude. This isn’t bloody Reddit.
“I literally have the cheapest phone you can buy in Indonesia (~€60) and I have the almost-cheapest laptop you can buy in Indonesia (~€250). So yeah, I’d say I’m “disadvantaged”. Turns out, that many JavaScript sites work just fine.”
I’ve met lots of people in America who live dollar to dollar having to keep slow devices for a long time until better hand-me-downs show up on Craigslist or just clearance sales. Many folks in the poor or lower classes do have capable devices because they would rather spend money on that than other things. Don’t let anyone fool you that being poor always equals bad devices.
That said, the ones taking care of their families, doing proper budgeting, not having a car for finding deals, living in rural areas, etc. often get stuck with bad devices and/or connections. I don’t have survey data on how many are in the U.S. I know the poor and rural populations are huge, though. It makes sense that some people push for a baseline that includes them, when the non-inclusive alternative isn’t actually even necessary in many cases. And when it is necessary, lighter alternatives often went unused out of apathy. I’ve rarely seen situations where what they couldn’t easily use was actually necessary.
The real argument behind most of these sites is that the builders didn’t care. The ones that didn’t know often also didn’t care, because they didn’t pay enough attention to people, esp. low-income people, to find out. If they say that, the conversations get more productive, because we start from their actual position. Then strategies can be formed to address the issue in an environment where most suppliers don’t care, much like we had to do in a lot of other sectors where suppliers didn’t care about the human cost of their actions. We got a lot of progress by starting with the truth. The web has many dark truths to expose and address.
thank you for writing this out. the cheapest new phone in indonesia is probably much faster than your typical “obamaphone” or 3-year-old average device.
The Obama phones are actually Android devices that also have pre-installed government malware that can’t be removed. They have Chrome and run JS fine.
They have Chrome, and they run JS very slowly.
Are you going to cite any devices here? Which JS do they run slowly?
My guess is that the issue is with specific documents. The fact that JS is so often used in ways that don’t perform well is a much larger issue than this one; sites using JS in slow ways is a completely different debate to be had, in my opinion. Although giving someone a version of the page without JS seems like a solution, it ignores the entire concept of progressive web apps and the history of the web that got us to them.
E.g., would you prefer the 2008 style of having a separate m.somesite.com that works without JS, but tends to be made for small devices, and tends to let corporations feel okay about removing necessary functionality to simplify the “mobile experience”? Generally, that’s what that solution got us.
The fact that even JS-enabled documents like https://m.uber.com let you view a JS map and get a car to come pick you up with reasonable performance on even the cheapest burner phones shows just how much bad programming, rather than JS itself, is behind what you’re describing.
It’s also worth noting that I’m strongly interested in people doing less JS and the web being JS-less, but this isn’t the hill to die on in that battle, if you ask me. Not only are you generally going to find people who aren’t sympathetic to disadvantaged users (because most programmers tend to not give any fucks, unfortunately), but the devices that run JS are generally not going to be too slow for decent JS to run. If we introduce some new standard that replaces HTML, it’ll likely still be read by browsers that also support HTML/JS, which means the issue remains: people aren’t going to maintain a separate markup for their entire site depending on the device, which is exactly why most companies stopped doing m.example.com. The exception to this rule seems to be bank & travel companies, in my experience.
Here is an example device I test with regularly:
iPad 528LL/A, iOS 9.3.5
This iPad is less than 10 years old, and still works well on most sites with JS disabled. With JS enabled, even many text-based sites slow it down to the point of being unresponsive.
This version of iOS and Safari are gracious enough to include a JavaScript on/off toggle under Advanced, but no fine-grained control. This means that every time I want to toggle JS, I have to exit Safari, open Settings, scroll down to Safari, scroll down to Advanced, toggle JS, and then return to Safari.
Or are you going to tell me that my device is too old to visit your website? I’ll be on my way, then.
I think this is changing for the better, slowly but faster more recently.
I think with some feature checking and progressive enhancement, you can do a lot. For example, my demo offers basic forum functionality in Mosaic, Netscape, Opera 3.x, IE 3.x, and modern browsers with and without JS. If you have JS, you get some extra features like client-side encryption and voting buttons which update in-place instead of loading a new page.
I think it’s totally doable, with a little bit of effort, to live up to the original dream of HTML which works in any browser.
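The in-place voting enhancement can follow the classic pattern: a form that works with a full page load everywhere, plus a script that upgrades it when it happens to run. A sketch (the endpoint and field names are invented):

    <form method="post" action="/vote" class="vote">
      <input type="hidden" name="post_id" value="42">
      <input type="hidden" name="dir" value="up">
      <button>▲</button>
    </form>

    <script>
    // With JS, submit the same data with fetch and stay on the
    // page; without JS, the form posts normally and the server
    // responds with a full page.
    document.querySelectorAll("form.vote").forEach(form => {
      form.addEventListener("submit", async e => {
        e.preventDefault();
        const res = await fetch(form.action, {
          method: "POST",
          body: new FormData(form),
        });
        if (!res.ok) form.submit(); // fall back to the full round trip
      });
    });
    </script>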
Facebook (ok, mbasic.facebook.com)
MetaFilter
Lobste.rs (for reading)
old.reddit.com (for reading)
Most blogs posted to lobsters and hn are actually nojs-friendly
Aside from devices without a real browser, JavaScript should run fine on any device people are going to get in 2020 - even through hand-me-downs.
I’m going to try to replace my grandmother’s laptop soon. I’ve verified it runs unbearably slowly in general, but especially on the JS-heavy sites she uses. It’s a Toshiba Satellite with a Sempron SI-42, 2GB of RAM, and Windows 7. She got it from a friend as a gift, presumably replacing her previous setup. Eventually, despite many reinstalls to clear malware, the sites she uses became unbearably slow.
“When you can get a quad core raspberry pi for $30 and similar hardware in a $50 phone,”
She won’t use a geeky setup. She has a usable Android phone. She leaves it in her office, occasionally checking the messages. In her case, she wants a nice-looking laptop she can set on her antique-looking desk. Big on appearances.
An inexpensive, decent-looking, Windows laptop seems like the best idea if I can’t get her on a Chromebook or something. I’ll probably scour eBay eventually like I did for my current one ($240 Thinkpad T420). If that’s $240, there’s gotta be some deals out there in the sub-Core i7 range. :)
Sure, but just to clarify - we are talking about people who may need to save money to get the $30 for something like a raspberry pi. Not someone who can drop $240 on a new laptop.
Oh yeah. I was just giving you the device example you asked for. She’s in the category of people who would need to save money: she’s on Social Security. These people still usually won’t go with a geeky rig even if their finances justify it. Psychology in action.
I do actually have a Pi 3 I could try to give her. I’d have to get her some kind of nice monitor, keyboard, and mouse for it. I’m predicting, esp. with the monitor, that the sum of the components might cost the same as or more than a refurbished laptop for web browsing. I mentioned my refurbished Core i7 for $240 on eBay as an example suggesting that lower-end laptops with good performance might be much cheaper. I’ll find out soon.
But what about a device people got in 2015 or 2010? Or, dare I say, older devices, which still work fine, and may be kept around for any number of reasons like nostalgia or sentimental attachment?
Sure, you can tell all these people to also stuff it, but don’t pretend they don’t exist.
Oh, that one is easy: it’s the law.
Being paranoid isn’t a protected class; it might be a mental health issue, but my website has nothing to do with its treatment.
For regular privacy, there are extensions and cookie management you can use.
You have some good points. One thing I didn’t see addressed is the number of people on dial-up, DSL, satellite, cheap mobile, or other bad connections. The HTML/CSS-type web pages usually load really fast on them. The JavaScript-type sites often don’t. They can act pretty broken, too. Here are some examples someone posted to HN showing the impact of JavaScript loads.
“If there is interactivity, such as an app, obviously it is going to have javascript. “
I’ll add that this isn’t obvious. One of the old models was the client sending something, the server processing it, and the server returning modified HTML. With HTML/CSS and a fast language on the server, the loop can happen so fast that the user can barely perceive a difference vs a slow, bloated JS setup. It would also work for the vast majority of websites I use and see.
The JS becomes necessary as the UI complexity, interactivity (esp latency requirements), and/or local computations increase past a certain point. Google Maps is an obvious example.
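That old model needs nothing from the client at all; a sketch (the URL and field names are invented):

    <!-- The browser serializes the fields, the server does the
         processing, and the server replies with a freshly
         rendered HTML page. No client-side code anywhere. -->
    <form method="post" action="/comments">
      <textarea name="body" required></textarea>
      <button>Post comment</button>
    </form>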
It is interesting to see people still using dialup. Professionally, I use typescript and angular. The bundle sizes on that are rather insane without much code. Probably unusable on dialup.
However, for my personal sites I am interested in looking at things like svelte mixed with dynamic loading. It might help to mitigate some of the issues that Angular itself has. But fundamentally, it is certainly hard to serve clients when you have apps like you mention - Google Maps. Perhaps a compromise is to try to be as thrifty as can be justified by the effort, and load most of the stuff up front, cache it as much as possible, and use smaller api requests so most of the usage of the app stays within the fast local interaction.
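The dynamic-loading part can be plain import(), which every modern browser supports; a sketch (the module path and element ids are invented):

    <button id="open-map">Show map</button>
    <div id="map"></div>

    <script type="module">
    // Nothing heavy is fetched up front; the widget's code is
    // downloaded (and then cached) on first use.
    document.getElementById("open-map").addEventListener("click", async () => {
      const { initMap } = await import("./map-widget.js");
      initMap(document.getElementById("map"));
    });
    </script>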
<rant>
Google Maps used to have an accessibility mode which was just static pages with arrow buttons – the way most sites like MapQuest worked 15 years ago. I can only guess why they took it away, but now you just get a rather snarky message.
Not only that, but to add insult to injury, the message is cached, and doesn’t go away even when you reload with JS enabled again. Only when you Shift+reload do you get the actual maps page.
This kind of experience is what no-JS browsers have to put up with every fucking day, and it’s rather frustrating and demoralizing. Not only am I blocked from accessing the service, but I’m told that my way of accessing it is itself invalid.
Sometimes I’m redirected to rather condescending “community” sites that tell me step by step how to re-enable JavaScript in my browser, which by some random, unfortunate circumstance beyond my control must have become disabled.
All I want to say to those web devs at times like that is: Go fuck yourself, you are all lazy fucking hacks, and you should be ashamed that you participated in allowing, through action or inaction, this kind of half-baked tripe to see the light of day.
My way of accessing the Web is just as valid as someone’s with JS enabled, and if you disagree, then I’m going to do everything in my power to never visit your shoddy establishment again.
</rant>
Edit: I just want to clarify, that this rant was precipitated by other discussions I’ve been involved in, my overall Web experience, and finally, parent comment’s mention of Google Maps. This is not aimed specifically at you, @zzing.
It shouldn’t be extra effort, is the point. If you’re just writing some paragraphs of text, or maybe a contact form, or some page navigation, etc etc you should just create those directly instead of going through all the extra effort of reinventing your own broken versions.
Often the stuff I am making has a lot more than that. I use front end web frameworks to help with it.
Very few websites today have just text or a basic form.
Ok, well, that wasn’t at all clear, since you were replying to this:
“It boggles my mind that there are more and more websites that just contain text and images, but are completely broken, blank, or even outright block you if you disable JavaScript.”
Many websites I see fit this description. They’re not apps, they don’t have any “behaviour” (at least none that a user can notice), but they still have so much JS that it takes over 256MB of RAM to load them up and with JS turned off they show a blank white page. That’s the topic of this thread, at least by the OP.
Uhh… Personal websites? Blogs? Many of the users here on Lobsters maintain sites like these. No need to state falsehoods to try and prove your point; there are plenty of better arguments you could be making.
As an aside, have you seen Sourcehut? That’s an entire freakin’ suite of web apps which don’t just function without JavaScript but work beautifully. Hell, Lobsters almost makes it into this category as well.
Some types of buttons, menus, and other controls aren’t implemented in plain HTML; those still have to be built with JS. For instance, 3-state buttons: there are CSS hacks to make a button appear 3-state, but no way to define behavior for them without JS. People can hack together radio inputs to look like a single multi-state button, but that’s a wild hack that most developers aren’t going to want to tackle.
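For what it’s worth, ARIA already has vocabulary for this: aria-pressed accepts “mixed” as a third state, but the cycling behavior still has to be scripted. A sketch:

    <button id="tri" aria-pressed="false">Select all</button>

    <script>
    // Cycle false -> mixed -> true on each click; CSS can style
    // the three states via [aria-pressed="..."] selectors.
    const states = ["false", "mixed", "true"];
    const btn = document.getElementById("tri");
    btn.addEventListener("click", () => {
      const i = states.indexOf(btn.getAttribute("aria-pressed"));
      btn.setAttribute("aria-pressed", states[(i + 1) % states.length]);
    });
    </script>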
I’m trying to learn more about accessibility, and recently came across a Twitter thread with this to say: “Until the platform improves, you need JS to properly implement keyboard navigation”, with a couple video examples.
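The usual example is a “roving tabindex”: one item in a composite widget is tabbable at a time, and the arrow keys move focus between items, which HTML alone can’t express. A sketch:

    <ul role="menu" id="menu">
      <li role="menuitem" tabindex="0">Open</li>
      <li role="menuitem" tabindex="-1">Save</li>
      <li role="menuitem" tabindex="-1">Close</li>
    </ul>

    <script>
    const items = [...document.querySelectorAll("#menu [role=menuitem]")];
    document.getElementById("menu").addEventListener("keydown", e => {
      if (e.key !== "ArrowDown" && e.key !== "ArrowUp") return;
      e.preventDefault();
      const delta = e.key === "ArrowDown" ? 1 : -1;
      const i = items.indexOf(document.activeElement);
      const next = items[(i + delta + items.length) % items.length];
      items.forEach(el => (el.tabIndex = -1)); // keep a single tab stop
      next.tabIndex = 0;
      next.focus();
    });
    </script>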
I think that people that want keyboard navigation will use a browser that supports that out of the box, they won’t rely on each site to implement it.
The world needs more browsers like Qutebrowser.
Interesting write-up.
I’m a big fan of Dune (hell, I have a Dune tattoo), but I always struggled playing the game. The UX and controls just feel too clunky for me, as someone who played the game for the first time a few years ago. It was clear to me that the game was pretty neat underneath all of that, but in the end I just wasn’t enjoying it because of that, and gave up quite soon and never really tried again.
I actually played a lot of Herzog back in the day on our MSX (the first one, not Herzog Zwei), which is also kind-of an RTS.
I suspect there will be more Dune games in the next few years, with the upcoming movie and all.
I was glad to see it’s going to be in two parts (though I would’ve preferred three, to match up with the book divisions). Trying to cram Dune into two hours just isn’t feasible. (The Weirding Way, you see, is based on sound…)
I was disappointed with the SyFy miniseries too. I had to turn it off when I realized they had Baron Harkonnen speaking in rhyming couplets and what the hell was going on with Irulan….
So, I gotta ask: what’s the tattoo?
I couldn’t bear to finish watching the Sci-Fi channel miniseries. I actually saw it before I saw the Lynch film or read the books and felt it was just boring. I also thought Paul Atreides was a spoiled annoying brat.
A picture especially for you 😚: https://imgur.com/a/aSOzVIv
That’s awesome.
I never got the whole sound thing until I became a bigger fan of Lynch and realized his whole deal is sound design. Of course he’d add in a sound that kills!
This is somewhat the issue with old games: their controls don’t keep up with semi-modern conventions. I would have loved to play Fallout 1 since I think the setting is absolutely fascinating but the second I have to fight with the damn door controls I feel like throwing the computer across the room.
A year ago I was playing Pillars of Eternity and was impressed by how it managed to capture the essence of Baldur’s Gate 2 without reminding me how the original might not have aged well. It felt like playing BG2 did back in the day, but I’m sure there have been plenty of adjustments to make it palatable for modern audiences.
Yeah, Pillars of Eternity did a really good job of polishing the UX of the 2000-era cRPG games, although BG2 is still pretty playable today, and there’s the enhanced edition which makes things a bit easier too. It’s also worth checking out Tyranny by the way, I thought it was even better than PoE.
Fallout 1 and 2 (and Arcanum, by the Fallout 1 team) are … special. They’re all absolutely fantastic games, but you almost need some xanax to be able to play them. There’s no real technical reason for it as such, and I’m surprised at how much patience gamers (including myself) had for clearly bad UX even just 20 years ago.