I 100% disagree with this, but it’s an interesting read nonetheless. What does lobsters think?
Counter offer: In 2016, it should not take multiple seconds for a page of text (be that a blog post or a page of tweets or a news article) to render in a browser.
This comment reminded me of this and this.
It’s not even about loading times. I’ve seen significant issues with a number of sites that rely on JS for their core functionality: navigation not working because the JS bundle was somehow broken, or the backspace key not working properly due to a bug in the JS code, things like that.
Experienced web devs like Jen Simmons are highly aware of new web technologies and of the fact that, yes, you can build an accessible website out of JS-wrapped HTML and HTML injected through JS.
Two things about advocating for non-JS pages:
Yes, it is universally better to build the same page with less code. If you can attain the same goal with only HTML and text, you have reached the pinnacle!
Plain HTML/CSS is more accessible by default. Advocating for the simpler solution will work best for the most people.
I also love it when people list the specs of mid- or low-end Android phones as if a $200 Android phone with 1GB of RAM has anywhere near the same web performance as a 1GB iPhone. Every piece of evidence points to cheap Android phones, and even the higher-end ones, delivering a significantly worse experience with web apps.
Even the ones that don’t technically require JS can be nightmarishly hard to exploit successfully without running a script in the browser. The difference is staggering between sending hundreds of megabytes (even gigabytes) of payload, or asking the user to reload some resource a few hundred thousand times, versus spraying the heap with the help of a client-side script and then automatically feeding the sometimes-triggering data in a loop to the vulnerable component.
I 100% disagree with this
What exactly is it that you disagree with?
I’m curious: what’s your reasoning? I think Nolan laid out a pretty good argument for why “progressive enhancement”, as it was understood five or more years ago, should be revisited.
As someone whose day job is supporting browsers as old as IE9, I find that I actually agree with him.
My company is in a rare niche where a single conversion can mean thousands in revenue, and even though we’ve had only single-digit <IE10 conversions in the last two months, the engineering time spent on supporting IE9+ has paid for itself many times over because of that niche.
As a concrete example, IE9 falls back to a file-input uploader, whereas IE10+ uses the HTML5 Drag & Drop API for file uploads. But this was not “free” to implement, and unless your company’s business model supports doubling the engineering effort (there are essentially two independent workflows, one for IE9 users and one for everyone else) to deliver the same feature set, you’re likely better off spending that time making sure the majority of your users are well taken care of rather than worrying about the stragglers.
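The two-workflow split described above can be sketched as a feature detection followed by a branch. This is a minimal illustration, not the commenter’s actual code; the function names are made up, and the `doc` parameter stands in for the browser’s `document` so the check can be exercised outside a browser. The property checks themselves (`draggable`, `ondragstart`, `ondrop`) are standard DOM attributes that IE10+ exposes and IE9 does not.

```javascript
// Hedged sketch: decide at startup whether to wire up an HTML5
// drag-and-drop drop zone (IE10+) or fall back to a classic
// <input type="file"> form (IE9 and older).

function supportsDragAndDrop(doc) {
  var div = doc.createElement('div');
  // Browsers with HTML5 drag-and-drop expose these properties on elements.
  return 'draggable' in div || ('ondragstart' in div && 'ondrop' in div);
}

function initUploader(doc) {
  if (supportsDragAndDrop(doc)) {
    // In a real page: attach dragover/drop handlers to a drop zone here.
    return 'drag-and-drop';
  }
  // In a real page: render a plain <input type="file"> upload form here.
  return 'file-input';
}
```

The point of the comment stands: the detection is cheap, but everything behind each branch (upload UI, progress reporting, error handling) has to be built and tested twice.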
I can’t really think of a good reason in 2016 for interpreting “progressive enhancement” as it’s strictly formulated (functional HTML, followed by CSS spice, augmented by JS flavouring) outside of very niche market environments.
I totally disagree with his post as well; I’m almost getting as angry as the people in the tweets he quotes.
My opinion is that we should make a clear distinction between “the web” and “web apps”.
“The web”, as it has existed since its inception, should work without any JS, and I think our browsers should not have any JS support at all. JS is not needed for publishing or contributing. Sites like lobste.rs, YouTube, and Twitter can easily work without any JS, and they do. All the JS does is spy on you and refresh your feed automatically, which is actually very annoying: content keeps jumping around, and the browser’s back button breaks after you come back from visiting a link someone posted. Oh, and JS makes your computer unusable while it executes its spying logic and mines some bitcoins in the background.
“Web apps” can be seen as something separate that uses JS because it is a cross-platform language, much like Java, but for “web” hipsters. Those apps have a JS runtime and (not so) accidentally use HTML and CSS for layout. This is perfectly fine as an alternative to developing in Java, C#, or Objective-C, and it should be treated as such. It is exactly how I use the Signal application: I keep an installation of Chromium with the Signal “web application” installed and use Chromium+Signal exclusively for that purpose. The app’s dependencies are a bit excessive this way, but that could easily be improved upon, and it is the way forward. Applications distributed like this can also be vetted and installed “offline first”, so depending on the use case they work perfectly fine without a network connection.
I also found this further discussion of the root issue by Laurie Voss compelling: https://lobste.rs/s/crhr7d/web_development_has_two_flavors_graceful
The generational disconnect I mentioned earlier seems to come from web developers falling into two groups with different “default” ways of thinking about web development. The first group, who turned up in the past 5 or maybe even 10 years, think of it as application development with the web as a medium. The second group, which includes me and which started 20 years ago, thinks of it as building a set of discrete pages. Obviously, both groups can and do build both kinds.