Worth it just for the anglerfish image that really says it all.
That image really captures it: “good UX” is a lever on user behavior, and it aligns with users’ incentives only when the business model wants it to.
For new companies without much existing trust or incumbency advantage, an extremely low-friction, well-designed experience supports the business model hugely. A user has a choice: they see the much easier thing, and they choose it over the clunky thing they’ve used forever. Win-win!
But when I read this article, I get an implicit definition of UX as “reducing friction in the experience toward the user’s own goals,” as if that’s universally the business goal being pursued. Perhaps I’m even more jaded than the author, but I’m not sure that’s what Really Existing UX is these days either (or ever was).
If we suspend disbelief and define “UX” very neutrally as the applied study of the relationship between service design and user behavior, a lot of the things the author points out make sense as part of using design to achieve business goals.
When the business model is to limit churn, a design adding friction to cancellation makes sense.
When the organizational goal is to be super duper compliant with rules (as in a lot of government work), a design with a lot of legalese (“disclosure,” in more positive terms) also makes sense.
I really like this piece, but it left me thinking: okay, find thee a place where the organizational model aligns with users’ interests. It’s pushing water uphill most anywhere else.
This goes against your first stated goal (host-agnostic), but I am a huge fan of Heroku’s deploy button: https://devcenter.heroku.com/articles/heroku-button
Heroku as a platform tends to already do a lot of the work of abstracting things away. I think the main tradeoff for you is that you would have to constrain some options and fit some of their patterns (such as preferring configuration via environment variables).
I recently deployed the Klaxon project using it, and while the instructions were slightly outdated, overall it was an incredibly easy experience and one I wouldn’t hesitate to send to a non-dev: https://github.com/themarshallproject/klaxon
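To make the “configuration via environment variables” pattern concrete, here’s roughly what it looks like in code; a minimal sketch in Python just for illustration (the pattern itself is language-agnostic, DATABASE_URL and PORT are the usual Heroku conventions, and the defaults are made up):

```python
import os

# Twelve-factor style config: read settings from the environment with
# sensible local defaults, so the same code runs on a laptop and on Heroku.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///local.db")
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
PORT = int(os.environ.get("PORT", "5000"))  # Heroku injects PORT at runtime

print(f"Listening on port {PORT}, debug={DEBUG}")
```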
A line I say a lot comes to mind:
“Features are assets; code is a liability.”
I tend to modify this to be more precise:
“Capabilities are assets; custom code is a liability.”
A lot of software is written with a large degree of uncertainty about what end-users actually need. In the early days, custom code is a very expensive way to learn relative to using off-the-shelf tools.
So you want to build a form — let’s say an eligibility screener.
Building it with custom code usually takes a lot longer than using an off-the-shelf tool, and the main things you’re seeking to learn (really core stuff for the form: what are the right questions? how do people respond to this or that phrasing?) aren’t helped by writing custom code.
So in those cases I almost always recommend an off-the-shelf form SaaS tool for alpha versions. You move fast, and the cost of change is incredibly low.
Once you’ve reduced uncertainty, you probably start to hit the limitations of that tool, so maybe you consider a framework, and mayyyybe you consider custom code. But, honestly, before that point custom code is just slowing you down and not helping you learn much.
I often say: “I’m very confident you can build a custom form later. Let’s figure out everything else by putting actual features in front of actual users and iterating fast, shall we?”
Built a tiny thing to get some muscle memory and scratch my own itch.
It finds a random point in a city of your choice (for exploring, which is a lot of what I’m doing right now).
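The core of something like this can be tiny; a rough sketch of the idea in Python, assuming a simple bounding-box approach (the coordinates and city name here are made up for illustration, and the real thing may do something smarter, like staying inside the actual city polygon):

```python
import random

# Very rough bounding box for one city; a real version would look these up
# from a dataset, and might reject points outside the true city boundary.
CITY_BOUNDS = {
    "oakland": (37.70, 37.89, -122.35, -122.11),  # min_lat, max_lat, min_lon, max_lon
}

def random_point(city):
    min_lat, max_lat, min_lon, max_lon = CITY_BOUNDS[city]
    return random.uniform(min_lat, max_lat), random.uniform(min_lon, max_lon)

print(random_point("oakland"))  # e.g. (37.81..., -122.27...)
```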
I also found this further discussion of the root issue by Laurie Voss compelling: https://lobste.rs/s/crhr7d/web_development_has_two_flavors_graceful
The generational disconnect I mentioned earlier seems to be coming from the fact that web developers are in two groups, with different “default” ways of thinking about web development. The first group, who turned up in the past 5 or maybe even 10 years, think of it as application development with the web as a medium. The second group, which includes myself, who started 20 years ago think of it as building a set of discrete pages. Obviously, both groups can and do build both types.

Am I the only one who found this somewhat weird to read, because it frames what many (myself definitely included) consider the most sensible default way to build web applications as a novel experiment?
Relatively few user needs actually require the interactions that single page apps offer, and SPAs are, in my experience, far more costly to build and, especially, to maintain.
Why is no JavaScript a sensible default? 99% of users have JavaScript enabled, including on mobile.
Sure, if you don’t need JS, don’t use it. But the purist stance of “I will NOT use JS; I will find difficult workarounds for things that would be easy in JS” is ridiculous. It’s like designing a car with tank treads because you don’t like tires.
I avoided it where possible for security, efficiency, and portability. Sandboxing a renderer that handles HTML and CSS is so much easier than sandboxing a Turing-complete language running malicious or inefficient code. All this extra complexity in web browsers has also reduced the market to mostly two or three of them. The simpler stuff can be handled by the budget browsers, which increases the diversity of codebases and the number of odd platforms that can use a site or service. Finally, making the stuff simpler increases efficiency, since you’re handling both fewer capabilities and fewer combinations of them. Easier to optimize.
So, the above are all the reasons I opposed the rise of JavaScript in favor of old-school DHTML where possible. Also, there were alternatives like Juice (Oberon) that were better. Worse is Better won out again, though. Now I limit the stuff mainly for security, predictability, and performance on cheap hardware.
Huh. Please correct me if I’m wrong, but my impression was that “DHTML” was a term created by Microsoft to describe websites that used HTML markup with a scripting language (like JavaScript or VBScript) to manipulate the DOM. They used it to market the capabilities of Internet Explorer.
I ran into it on sites that either used JavaScript mainly to enhance, but not replace, the presentation layer, or used CSS tricks. I know nothing of the term itself past that.
Ah, okay. I think I misunderstood what you meant by “the rise of JavaScript”: not its mere usage, but its increasing responsibilities in contemporary web development.
Isn’t CSS Turing-complete by now? :)
It is disappointing that if you make a browser from scratch as a hobby, you have to add a JavaScript engine to be able to use the big three or four most popular social media sites.
Soon adding an SSL library will be a requirement for most sites. For serious browsers none of this is a real issue; it’s just kind of sad that a useful browser has a much larger minimum complexity nowadays.
That’s a big part of the reason I pushed for the simpler standards. Too much money and complexity going into something always ends up as an oligopoly, and oligopolies usually get corrupted by money at some point. The simpler browser engines, by contrast, would be easier to code up. Secure, extensible, cool browsers on the language and platform combo of one’s choosing would be possible. Much diversity and competition would show up, like in the old days. This didn’t happen.
An example was the Lobo browser, which was written in Java. Browsers were getting hit by memory-safety bugs all the time; one in Java dodges that while benefiting from Java’s growing ecosystem. It supported a lot, too, but ended up missing some key features as the complexity went up over time. (sighs) Heck, even a browser company with a new, safe language is currently building a prototype for learning instead of production. Even the domain experts can’t do a real one with small teams in reasonable time at this complexity level. That’s saying something.
It is profoundly easier to write useful acceptance tests for a static HTML page than for a page with any amount of JavaScript. The former is simple text over a simple protocol; the latter is a fractal API with a complex runtime and local state. That’s why no JS is a sensible default.
That doesn’t mean finding difficult workarounds to avoid JS at all costs. It means being clear about the downsides and coming up with the simplest strategy to mitigate them for your application.
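To make that concrete, here’s the kind of acceptance test I mean; a minimal sketch assuming Python with requests and BeautifulSoup, against a hypothetical server-rendered page (the URL, heading text, and selectors are placeholders):

```python
import requests
from bs4 import BeautifulSoup

def test_screener_page_renders_the_form():
    # A plain HTTP GET against a server-rendered page: no browser,
    # no driver, nothing to wait for.
    response = requests.get("http://localhost:8000/screener")
    assert response.status_code == 200

    soup = BeautifulSoup(response.text, "html.parser")
    # Everything the user would see is right there in the markup.
    assert soup.find("h1").get_text(strip=True) == "Check your eligibility"
    assert len(soup.select("form input")) > 0
```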
I’ve taken little bits of this course over a few of its iterations (one when I was at Berkeley myself), and have to say I find it quite good.
Prof. Fox is a compelling lecturer who, like me, speaks like he’s mainlining coffee, and the material strikes a great balance between practical Rails knowledge and (more importantly) the agile practices (TDD, BDD, legacy refactoring) that Rails simply makes easy to do, and which are incredibly valuable regardless of the particular web framework one is using.
Trying to wrangle Selenium as my integration point for a service serving low-income clients. Boy, can web driving be flaky.
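For whatever it’s worth, the usual first line of defense against that flakiness is explicit waits rather than fixed sleeps; a minimal sketch with Selenium’s Python bindings (the URL and element identifiers are placeholders, not anything from the real service):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
try:
    driver.get("http://localhost:8000/apply")
    # Wait up to 15s for a specific condition instead of sleeping a fixed
    # amount and hoping the page has settled.
    submit = WebDriverWait(driver, 15).until(
        EC.element_to_be_clickable((By.ID, "submit-application"))
    )
    submit.click()
    WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, ".confirmation"))
    )
finally:
    driver.quit()
```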