my sort of default argument for this nostalgia is that you can just do what you were doing 20 years ago. You can spin up php and run it. You can ship hand-crafted HTML. You can do all that stuff and get the same experience.
Except, yeah, people expect higher quality stuff than what you might be able to offer with those tools and your skillset. Or maybe they don’t and you are fine!
You can keep on doing whatever you want. You can, of course, complain about tools etc., but I feel like there’s an implicit “none of this is needed”. But I mean… the author is the one trying to use webfonts! You can just use sans-serif! You can still use PNGs! You can just target desktop! Especially given that this person is running their own studio, I feel like they could just do whatever they want.
I dislike a lot of this stuff but I’m at least willing to admit that most of it serves a purpose, just not in a great way.
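To make the “you can just use sans-serif” point concrete, here is a minimal sketch (the font name, file path, and class names are invented, not from the article): the first rule is the whole of the no-webfont option, and the rest is roughly what opting into a webfont adds.

```html
<style>
  /* Option 1: “just use sans-serif” — one declaration, no network request. */
  .plain { font-family: sans-serif; }

  /* Option 2: a self-hosted webfont (hypothetical name and path), plus the
     decisions that come along with it: file format, loading behaviour, fallbacks. */
  @font-face {
    font-family: "StudioSans";
    src: url("/fonts/studio-sans.woff2") format("woff2");
    font-display: swap; /* show fallback text while the font file loads */
  }
  .branded { font-family: "StudioSans", sans-serif; }
</style>
```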
“Quality” is not really how I would describe my expectations of the contemporary web.
I love this article, everything in it rings very true.
I want to point out a thing about serving images:

Even images are now complicated. … And with raster, the need to send along the best-sized image for the right device is complicated enough that paid services have come along to manage this for you.
Not only is this a huge complication, it’s also pretty bad UX in most cases. You see, when I read things on my phone and encounter an image, I have this special technique where I use two fingers and pinch on the screen to zoom in. This allows me to see details which are otherwise too small. But these days, I usually find that the image is served at just the right size to not look too blocky on a phone screen at 1x zoom, meaning the fine detail is unintelligible even when zooming in.
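For anyone who hasn’t had to wire this up, here is a rough sketch of the mechanism being described (filenames and breakpoints are made up): with srcset width descriptors plus sizes, the browser picks a candidate based on the rendered slot size times the device pixel ratio, not based on how much detail you might want when pinch-zooming.

```html
<!-- Hypothetical filenames. The browser is free to fetch the smallest candidate
     that covers the layout width at the current pixel density, so on a phone it
     will usually grab the 480w or 800w file even if a 1600w original exists. -->
<img
  src="photo-800.jpg"
  srcset="photo-480.jpg 480w,
          photo-800.jpg 800w,
          photo-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 800px"
  alt="A photo served responsively">
```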
This kind of illustrates a lot of the “progress” in web over the years. A new, hugely more complex approach arrives, it has some advantages over the previous way, it has some disadvantages, and it’s not clear if the new approach is actually, on the whole, better than the old.
I guess the main advantage of this image approach is less bandwidth on mobile devices. But let’s be honest: if the image were relevant, it wouldn’t be a problem to serve it at the resolution and level of detail it was taken at. Otherwise it’s probably just some “eye candy” (or eye cancer, for ads) that didn’t really need to be there in the first place. So you could say we throttled advertising bandwidth.
This has absolutely been my experience doing web stuff… but also doing Python. Do you want to use virtualenv, pipenv, poetry, or…?
Yeah, it’s like the article says: instead of trying to tackle complexities, we just hide them in more and more layers of tooling every year. It feels like there’s little innovation in problem solving, it’s all problem management.
Hah, that complaint is the source of a big controversy from back in 2008, when Jonathan Blow said what you just said, criticizing Linux for doing exactly that.
Specifically, his complaint was that X handled mouse input poorly (if you moved the mouse all the way outside of the window within a single frame, the mouse delta would be capped to the window width/height instead of representing the true distance). People responded “why are you using X anyway, just use SDL”, and his response was “because SDL has the exact same problem, since SDL is just a wrapper around the X functions anyway”.
Perhaps the answer is complexity layering limitations?
complexity limitations? how does that work?
<rant>
Next to me on my desk I have a very good book, “Making and Breaking the Grid”, which is about graphic design.
I am not very good at graphic design, but I want to be better, so over the years I’ve collected a dozen or two books on it.
These books are classics. I can pick them up 50 years from now and get information on things like the grid, typography, A/B testing, and so forth. They contain no tech.
By the same token, I’ve got easily 100x that many books on the technology to actually implement the concepts in those classics. The tech books are good for about a year or three. Every so often you have to either toss out what you have or put it in cold storage. (Anybody looking for a good AJAX book?)
The problems don’t change on these crazy fast cycles. The technology does, however. So as a professional you have to ask yourself: is your job to master solving problems and then choose and apply tech to do so, or is it to master tech and then throw that new tech at whatever problem you’re given?
This is not an anti-tech/Luddite argument. If the cool thing you’re using can solve the problem better, use it. This is an argument about what things drive what other things. I’m willing to bet that there are a lot of folks using things like CSS Grid who’ve never read about or used grids “in the wild”. Some of these folks might even be UI/UX folks. For those folks, how would they ever be able to tell whether a new tech was actually useful for something they needed, or just something new and flashy? We’ve gotten something seriously backwards in our industry, and it’s not just limited to CSS and graphics.

</rant>
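To connect the design concept to the tech for a moment: here is a minimal sketch (class names and column spans are just illustrative) of the classic 12-column layout grid from books like the one above, expressed with CSS Grid.

```html
<style>
  /* A 12-column grid with gutters — the print-design concept, in CSS Grid terms. */
  .page {
    display: grid;
    grid-template-columns: repeat(12, 1fr);
    gap: 1rem;        /* the gutter */
    max-width: 60rem;
    margin: 0 auto;
  }
  .main  { grid-column: 1 / span 8; } /* body text spans 8 columns */
  .aside { grid-column: 9 / span 4; } /* sidebar takes the remaining 4 */
</style>
<div class="page">
  <article class="main">…</article>
  <aside class="aside">…</aside>
</div>
```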
Indeed! I wish I had a better sense of what factor(s) drive this too. Is it people trying to get promoted and optimizing for “new shiny tools” to say they fixed a problem (without actually fixing the root of the issue)? Are we afraid of going “deep” into the problems, so we just build superficial wrappers/abstractions that make sense to us but to no one else?
(unsubstantiated opinion follows)
I love photography. I’m not that great at it, but I’ve been shooting pictures for decades and I feel like I’m getting better.
I don’t read photography things online. Why? Because, in my mind, the business of making and buying photography gear is not related at all to the business of actually shooting good pictures. Don’t get me wrong: I’m a huge photography gearhead and have serious nerd envy about some of the hi-res/low-light/HDR gear I’m seeing. It’s just that the gear doesn’t make the picture. Sure, it can help take a good idea and turn it into an excellent picture, but it’s the idea and the expertise to translate it that drives whatever gear you might or might not need, not the other way around. Mozart would kick ass on a ukulele. No matter how hard you bang on that $100K piano, you’re not going to sound like Mozart.
People buy things based on the benefits they’re promised. When these things get complex, the person buying is never sure if their average/poor results are a result of poor gear or something else. This leaves them open to buying even more cool gear next year. When you don’t have a feedback cycle, the feedback cycle becomes something akin to “what the other cool kids are doing”. This is why fashion designers pay models to be seen in their outfits. We are herd creatures. If you can’t do a good job, you can at least hang with some really awesome people playing with some seriously cool stuff while you blame your poor results on something else.
That’s true, there is certainly a ‘mimesis’ component to all of this (both individually and organizationally).
I have seen what you’re describing re: photography/music in a lot of hobbies. Mastering the fundamentals is difficult and takes time, so people probably figure: why not spend more money on more powerful gear to, on one level, emulate the masters, and on another, feel like they’re getting more immediate rewards?
I suppose the same general train of thought applies here in how we manage software abstractions, complexity, etc.
This is related to something I realized recently while looking for a CAD package. I am very familiar with FreeCAD, and it’s great. But the interface is clunky, and I have a large project I wanted to undertake so I thought I’d look at the alternatives: AutoCAD, Archicad, Rhino, and a few others. I was shocked, shocked I tell you, to learn that FreeCAD is actually the best of the bunch. Easiest to use, complete feature set, easily extended.
It reminded me of when I met some architects a couple of years back. I noticed pretty quickly that these architects weren’t architects at all: they were AutoCAD users. They didn’t learn architecture, they learned AutoCAD. Some learned Archicad.
The same is true in programming: we have Java Programmers where we used to have Software Developers (many are still the latter, but it’s not typical).
I suspect we’re seeing the same thing in “web design” and so on.
I used to do Agile Tech coaching, i.e., I knew both how to code and how to run good software projects.
Time and time again, I’d see somebody ask another coach a question, like “How do we split User Stories?”
The answer would be “Well, you right-click on the link in the kanban column, then choose ….”
They were explaining how to split User Stories only in the sense of the physical process of using the tool; they weren’t actually explaining how to do it regardless of the tool. The building was full of folks like that. We were using traditional classroom methods to teach people how to operate tools, not how to actually do the work.
I stopped doing that job. It was too depressing. A lot of activity and work, but progress was actually going backwards.
I just got back into doing coaching, and I’m finding the same things now that I did almost a decade ago. It’s like we’ve learned nothing.
I feel like some really great coders and mentors came up with some things that worked, managed to generalize them so they continued to work (which usually doesn’t happen), but had no idea why they worked.
Nothing wrong with that, at least until you get thrown a lot of edge cases and the field fills up with people who want to look smart and publish. Then, this simple idea grows into an entire industry. It becomes a monster, a monster that’s not actually about making stuff people want. (It’s about doing coaching that people like. Not the same thing!)
I also did my share of publishing, figuring out (I think) why things work. And you know what? It doesn’t make a difference. It’s still about “meeting people where they are” and playing “tell me what you want and I’ll tell you that’s what you need”. Note: it’s not that it’s crooked, it’s that trying to help people without a good theoretical foundation can never scale past a few clients and a couple of years. It grows into a marketing monster and stagnates. Nothing is ever going to change there, and still those initial ideas have a lot of merit. You just can’t build on them without destroying the thing you’re trying to promote.
I hope never to go back. I think I’d rather sweep streets. My life’s mission is to make developers happier and more productive. I’m sticking to that :)
I feel a lot of this.
My personal approach to coaching is basically this. I’m fortunate to be in a situation where I’m able to do it this way, though. And I don’t know why I can, so I can’t replicate it. :D
reminds me of Software disenchantment
I wonder if there is a dev/engineering philosophy that goes against the need to be in “red queen”* mode the whole time? (*constantly moving but never moving forward)
Eventually, the process of breaking code into smaller pieces will create more complexity than it eliminates, because of intermodule dependencies. Software becomes incomprehensible after a point. The human limits in processing information can get overloaded at only two or three levels of nesting.

The whole article is good, but this was my favorite part.

i came here to quote that line too :) i laughed out loud when i hit it.