In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs. Then Node started warning me about security problems in some of those libraries. I ended up taking some time finding alternative packages with fewer dependencies.
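(As an aside: on reasonably recent Node versions, both of those particular tasks are covered by the built-in crypto module, with no npm packages at all. A minimal sketch, assuming Node 16 or later:)

    // Built-in Node APIs only, no third-party dependencies.
    // crypto.randomUUID() assumes Node 14.17+ / 16+.
    const crypto = require('crypto');

    // SHA-256 digest of a string, as hex
    const digest = crypto.createHash('sha256').update('hello world').digest('hex');

    // Version 4 UUID
    const id = crypto.randomUUID();

    console.log(digest); // 64 hex characters
    console.log(id);     // e.g. "3b241101-e2bb-4255-8caf-4136c566a962"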
On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t. It’s cool to look at how tiny and efficient code can be — a Scheme interpreter in 4KB! The original Mac OS was 64KB! — but yowza, is it ever difficult to code that way.
There was an early Mac word processor — can’t remember the name — that got a lot of love because it was super fast. That’s because they wrote it in 68000 assembly. It was successful for some years, but failed by the early 90s because it couldn’t keep up with the feature set of Word or WordPerfect. (I know Word has long been a symbol of bloat, but trust me, Word 4 and 5 on Mac were awesome.) Adding features like style sheets or wrapping text around images took too long to implement in assembly compared to C.
The speed and efficiency of how we’re creating stuff now is crazy. People are creating fancy OSs with GUIs in their bedrooms with a couple of collaborators, presumably in their spare time. If you’re up to speed with current Web tech you can bring up a pretty complex web app in a matter of days.
I don’t know, I think there’s more to it than just “these darn new languages with their package managers made dependencies too easy, in my day we had to manually download Boost uphill both ways” or whatever. The dependencies in the occasional Swift or Rust app aren’t even a tenth of the bloat on my disk.
It’s the whole engineering culture of “why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application, and then implement your glorified scp GUI application inside that, so that you never have to learn anything other than the one and only tool you know”. Everything’s turned into 500 megs’ worth of nail because we’ve got an entire generation of Hammer Engineers who won’t even consider that it might be more efficient to pick up a screwdriver sometimes.
We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t
That’s the argument, but it’s not clear to me that we haven’t severely over-corrected at this point. I’ve watched teams spend weeks poking at the mile-high tower of leaky abstractions any react-native mobile app teeters atop, just to try to get the UI to do what they could have done in ten minutes if they’d bothered to learn the underlying platform API. At some point “make all the world a browser tab” became the goal in-and-of-itself, whether or not that was inefficient in every possible dimension (memory, CPU, power consumption, or developer time). It’s heretical to even question whether or not this is truly more developer-time-efficient anymore, in the majority of cases – the goal isn’t so much to be efficient with our time as it is to just avoid having to learn any new skills.
The industry didn’t feel this sclerotic and incurious twenty years ago.
It’s heretical to even question whether or not this is truly more developer-time-efficient anymore
And even if we set that question aside and assume that it is, it’s still just shoving the costs onto others. Automakers could probably crank out new cars faster by giving up on fuel-efficiency and emissions optimizations, but should they? (Okay, left to their own devices they probably would, but thankfully we have regulations they have to meet.)
left to their own devices they probably would, but thankfully we have regulations they have to meet.
Regulations. This is it.
I’ve long believed that this is very important in our industry. As earlier comments say, you can make a complex web app after work in a weekend. But then there are people, in the above-mentioned auto industry, who take three sprints to set up a single screen with a table, a popup, and two forms. That’s after they pulled in the entire internet’s worth of dependencies.
On the one hand, we don’t want to be gatekeeping. We want everyone to contribute. When DHH said we should stop celebrating incompetence, the majority of people around him called it gatekeeping. Yet when we see or say something like this - don’t build bloat, or something along those lines - everyone agrees.
I think the middle line should be in between. Let individuals do whatever the hell they want. But regulate “selling” stuff for money or advertisement eyeballs or anything similar. If an app is more than X MB (some reasonable target), it has to get certified before you can publish it. Or maybe only if it’s a popular app. Or, if a library is included in more than X apps, then that lib either gets “certified”, or further apps using it are banned.
I am sure that is a huge, immensely big can of worms. There will be many problems there. But if we don’t start cleaning up this shit, it’s going to pile up.
A simple example - if a controversial one - is Google. When they start punishing a webapp for not rendering within 1 second, everybody on the internet (that wants to be at the top of Google) starts optimizing for performance. So, it can be done. We just have to set up - and maintain - a system that deals with the problem… well, systematically.
why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application
Yeah. One of the things that confuses me is why apps bundle a browser when platforms already come with browsers that can easily be embedded in apps. You can use Apple’s WKWebView class to embed a Safari-equivalent browser in an app that weighs in at under a megabyte. I know Windows has similar APIs, and I imagine Linux does too (modulo the combinatorial expansion of number-of-browsers times number-of-GUI-frameworks.)
I can only imagine that whoever built Electron felt that devs didn’t want to deal with having to make their code compatible with more than one browser engine, and that it was worth it to shove an entire copy of Chromium into the app to provide that convenience.
On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.
The problem is that your dependencies can behave strangely, and you need to debug them.
Code bloat makes programs hard to debug. It costs programmer time.
The problem is that your dependencies can behave strangely, and you need to debug them.
To make matters worse, developers don’t think carefully about which dependencies they’re bothering to include. For instance, if image loading is needed, many applications could get by with image read support for one format (e.g. with libpng). Too often I’ll see an application depend on something like ImageMagick which is complete overkill for that situation, and includes a ton of additional complex functionality that bloats the binary, introduces subtle bugs, and wasn’t even needed to begin with.
On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.
The problem is that computational resources vs. programmer time is just one axis along which this tradeoff is made: some others include security vs. programmer time, correctness vs. programmer time, and others I’m sure I’m just not thinking of right now. It sounds like a really pragmatic argument when you’re considering only your own costs, because we have been so thoroughly conditioned into ignoring our externalities. I don’t believe the state of contemporary software would look like it does if the industry were really in the habit of pricing in the costs incurred by others in addition to their own, although of course it would take a radically different incentive landscape to make that happen. It wouldn’t look like a code golfer’s paradise, either, because optimizing for code size and efficiency at all costs is also not a holistic accounting! It would just look like a place with fewer data breaches, fewer corrupted saves, fewer watt-hours turned into waste heat, and, yes, fewer features in the cases where their value didn’t exceed their cost.
We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t
But we aren’t. Because modern resource-wasteful software isn’t really released any quicker. Quite the contrary: there is so much development overhead that we don’t see those exciting big releases anymore, with a dozen features everyone loves at first sight. New features are released in microscopic increments, so slowly that hardly any project survives 3-5 years without becoming obsolete or going out of fashion.
What we are trading is quality for quantity. We lower the skill and knowledge barrier so much to accommodate the millions of developers who “learned how to program in one week”, and the results are predictably what this post talks about.
I’m as much against bloat as everyone else (except those who make bloated software, of course—those clearly aren’t against it). However, it’s easy to forget that small software from past eras often couldn’t do much. The original Mac OS could be 64KB, but no one would want to use such a limited OS today!
I have yet to see modern software that is saving the programmer’s time.
I’m here for it, I’ll be cheering when it happens.
This whole thread reminds me of a little .txt file that came packaged into DawnOS.
It read:
Imagine that software development becomes so complex and expensive that no software is being written anymore, only apps designed in devtools. Imagine a computer, which requires 1 billion transistors to flicker the cursor on the screen. Imagine a world, where computers are driven by software written from 400 million lines of source code. Imagine a world, where the biggest 20 technology corporation totaling 2 million employees and 100 billion USD revenue groups up to introduce a new standard. And they are unable to write even a compiler within 15 years.
I have yet to see modern software that is saving the programmer’s time.
People love to hate Docker, but having had the “pleasure” of doing everything from full-blown install-the-whole-world-on-your-laptop dev environments to various VM applications that were supposed to “just work”… holy crap does Docker save time not only for me but for people I’m going to collaborate with.
Meanwhile, programmers of 20+ years prior to your time are equally as horrified by how wasteful and disgusting all your favorite things are. This is a never-ending cycle where a lot of programmers conclude that the way things were around the time they first started (either programming, or tinkering with computers in general) was a golden age of wise programmers who respected the resources of their computers and used them efficiently, while the kids these days have no respect and will do things like use languages with garbage collectors (!) because they can’t be bothered to learn proper memory-management discipline like their elders.
I’m of the generation that started programming at the tail end of Ruby and Objective-C, and I would definitely not call this the golden age; if anything, looking back at this period now, it looks like a mid-slump.
Node is an especially good villain here because JavaScript has long specifically encouraged lots of small dependencies and has little to no stdlib, so you need a package for nearly everything.
It’s kind of a turf war as well. A handful of early adopters created tiny libraries that should be single functions or part of a standard library. Since their notoriety depends on these libraries, they fight to keep them around. Some are even on the boards of the downstream projects and fight to keep their own library in the list of dependencies.
We’re trading CPU time and memory, which are ridiculously abundant
CPU time is essentially equivalent to energy, which I’d argue is not abundant, whether at the large scale of the global problem of sustainable energy production, or at the small scale of mobile device battery life.
for programmer time, which isn’t.
In terms of programmer-hours available per year (which of course unit-reduces to active programmers), I’m pretty sure that resource is more abundant than it’s ever been at any point in history, and only getting more so.
When you divide it by the CPU’s efficiency, yes. But CPU efficiency has gone through the roof over time. You can get embedded devices with the performance of some fire-breathing tower PC of the 90s, that now run on watch batteries. And the focus of Apple’s whole line of CPUs over the past decade has been power efficiency.
There are a lot of programmers, yes, but most of them aren’t the very high-skilled ones required for building highly optimal code. The skills for doing web dev are not the same as for C++ or Rust, especially if you also constrain yourself to not reaching for big pre-existing libraries like Boost, or whatever towering pile of crates a Rust dev might use.
(I’m an architect for a mobile database engine, and my team has always found it very difficult to find good developers to hire. It’s nothing like web dev, and even mobile app developers are mostly skilled more at putting together GUIs and calling REST APIs than they are at building lower-level model-layer abstractions.)
Hey, I don’t mean to be a smart ass here, but I find it ironic that you start your comment blaming “high-level languages with package systems” and immediately admit that you blindly picked a library for the job, and that you could solve the problem just by “taking some time finding alternative packages with fewer dependencies”. That doesn’t sound like a problem with either the language or the package manager, honestly.
What would you expect the package manager to do here?
I think the problem actually lies with the language in this case. Javascript has such a piss-poor standard library and dangerous semantics (that the standard library doesn’t try to remedy, either) that sooner rather than later you will have a transitive dependency on isOdd, isEven and isNull, because even those simple operations aren’t exactly simple in JS.
Despite being made to live in a web browser, the JS standard library has very few affordances for working with things like URLs, and despite being targeted toward user interfaces, it has very few affordances for working with dates, numbers, lists, or localisations. This makes dependency graphs both deep and filled with duplicated effort, since two dependencies in your program may depend on different third-party implementations of what should already be in the standard library, themselves duplicating what you already have in your operating system.
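(For the curious, those micro-packages really are roughly one-liners. A sketch of what they boil down to, not the published packages themselves:)

    // Roughly what is-odd / is-even / is-null wrap, minus the argument
    // validation and the extra edges in your dependency graph.
    const isNull = (value) => value === null;
    const isOdd = (n) => Math.abs(n % 2) === 1; // abs() so negatives work too
    const isEven = (n) => !isOdd(n);

    console.log(isOdd(3), isEven(4), isNull(null)); // true true true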
It’s really difficult for me to counter an argument that is basically “I don’t like JS”. The question was never about that language; it was about “high-level languages with package systems”, but your answer hyper-focuses on JS and does not address languages like Python, for example, which is a “high-level language with a package system” that also has an “is-odd” package (though honestly I don’t get what that has to do with anything).
The response you were replying to was very much about JS:
In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs.
For what it’s worth, whilst Python may have an isOdd package, how often do you end up inadvertently importing it in Python as opposed to “batteries-definitely-not-included” Javascript? Fewer batteries included means more imports by default, which themselves depend on other imports, and a few steps down, you will find leftPad.
As for isOdd, npmjs.com lists 25 versions thereof, and probably as many isEven.
What? What kind of data do you have to back up a statement like this?
You don’t like JS, I get it, I don’t like it either. But the unfair criticism is what really rubs me the wrong way. We are technical people; we are supposed to make decisions based on data. But this kind of comment, which just generates division without the slightest resemblance of a solid argument, does no good to a healthy discussion.
Again, none of the arguments are true for JS exclusively. Python is batteries-included, sure, but it’s one of the few. And you conveniently leave out of your quote the part where OP admits that with a little effort the “problem” became a non-issue. And that little effort is what we get paid for; that’s our job.
I’m not blaming package managers. Code reuse is a good idea, and it’s nice to have such a wealth of libraries available.
But it’s a double-edged sword. Especially when you use a highly dynamic language like JS that doesn’t support dead-code stripping or build-time inlining, so you end up having to copy an entire library instead of just the bits you’re using.
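(To make the dead-code-stripping point concrete: whether a bundler can drop the unused parts of a library often hinges on how it’s imported. A rough sketch, using lodash vs. lodash-es purely as an example:)

    // CommonJS style: the whole library object is pulled in at runtime, so a
    // bundler generally can't prove what's unused and ships all of it.
    //   const _ = require('lodash');
    //   _.chunk([1, 2, 3, 4], 2);

    // ES-module named import: static imports give bundlers like Rollup,
    // esbuild, or webpack enough information to tree-shake everything
    // except chunk().
    import { chunk } from 'lodash-es';

    console.log(chunk([1, 2, 3, 4], 2)); // [ [ 1, 2 ], [ 3, 4 ] ]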
On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.
We’re trading CPU and memory for the time of some programmers, but we’re also adding the time of other programmers onto the other side of the balance.
I definitely agree with your bolded point - I think that’s the main driver for this kind of thing.
Things change if there’s a reason for them to be changed. The incentives don’t really line up currently to the point where it’s worth it for programmers/companies to devote the time to optimize things that far.
That is changing a bit already, though. For example, performance and bundle size are getting seriously considered for web dev these days. Part of the reason for that is that Google penalizes slow sites in their rankings - a very direct incentive to make things faster and more optimized!
I think the reason it’s primarily “grumpy old developers” (and I count myself amongst that crowd) complaining about software bloat is that we were there 20 years ago, so we have the benefit of perspective. We know what was possible with the limited hardware available at the time, and it doesn’t put today’s software in a very flattering light.
The other day I was editing a document in Pages and it made my MacBook Pro slow down to a crawl. To be fair my machine isn’t exactly new, but as far as I can tell Pages isn’t doing anything that MS Word 2000 wasn’t doing 20 years ago without straining my 200 MHz Pentium. Sure, Pages renders documents in HD, but does that really require 30 times the processing power?
This might be selective memory of the good old days. I was in high school when Office 97 came out, and I vaguely remember one of my classmates complaining about it being sluggish.
I think there’s A LOT of this going around. I used Office 97 in high school and it was dog shit slow (tick tick tick goes the hard disk)! Yes, the school could have sprung for $2,500 desktops instead of $1,500 desktops (or whatever things cost back then) but, adjusted for inflation, a high-end laptop today costs what a low-end laptop cost in 1995. So we’re also comparing prevailing hardware.
Word processing programs were among the pioneers of the “screenshot your state and paint it on re-opening” trick to hide how slow they actually were at reaching the point where the user could interact with the app. I can’t remember a time when they were treated as examples of good computing-resource citizens, and my memory stretches back a good way — I was using various office-y tools on school computers in the late 90s, for example.
Modern apps also really are generally doing more; it’s not like they stood still, feature-wise, for two decades. Lots of things have “AI” running in the background to offer suggestions and autocompletions and offer to convert to specific document templates based on what they detect you writing; they have cloud backup and live collaborative editing; they have all sorts of features that, yes, consume resources. And that some significant number of people rely on, so cutting them out and going back to something which only has the feature set of Word 97 isn’t really an option.
When a friend of mine showed me Youtube, before the Google acquisition, on the high-school library computers, I told him “Nobody will ever use this, it uses Macromedia Flash in the browser and Flash in the browser is incredibly slow and nobody will be able to run it. Why don’t we just let users download the videos from an FTP server?” I ate those words hard. “Grumpy old developers” complain about software bloat because they’re always looking at the inside, never the outside. When thinking about Youtube, I too was looking at the inside. But fundamentally people use software not for the sake of software but for the sake of deriving value from software.
In other words, “domain expert is horrified at the state of their own domain. News at 11.”
When you tell them the original game Elite had a sprawling galaxy, space combat in 3D, a career progression system, trading and thousands of planets to explore, and it was 64k, I guess they HEAR you, but they don’t REALLY understand the gap between that, and what we have now.
Hi! I’m a young programmer. When someone says “this game had x, y, and z in (N < lots) bytes”, what I hear is that it was built by dedicated people working on limited hardware who left out features and polish that are often included in software today, didn’t integrate it with other software in the ecosystem that uses freeform, self-describing formats requiring expensive parsers, and most importantly took a long time to build and port the software.
Today, we use higher-level languages which give us useful properties like:
portability
various levels of static analysis
various levels of memory safety
scalability
automatic optimization
code reuse via package managers
and the tradeoff there is that less time is spent in manual optimization. It’s a tradeoff, like anything in engineering.
While I’m curious about the free-form, self-describing formats you’re talking about (and why their parsers should be so expensive), cherry-picking from your arguments, there are a lot of interesting mismatches between expectations and reality:
which I hear is that it was built by dedicated people (…) and most importantly took a long time to build and port the software.
Elite was written by two(!) undergraduate students, and ran on more, and more different, CPU architectures than any software developed today. It’s true that the ports were complete rewrites, but if Wikipedia is correct, these were single-person efforts.
various levels of static analysis
various levels of memory safety
code reuse via package managers
Which are productivity boosters; modern developers should be faster, not slower than those from the assembly era.
scalability
Completely irrelevant for desktop software, as described in the article.
automatic optimization
If optimization is so easy, why is software so slow and big?
My personal theory is that software development in the home computer era was sufficiently difficult that it demotivated all but the most dedicated people. That, and survivor bias, makes their efforts seem particularly heroic and effective compared to modern-day, industrialized software development.
My personal theory is that software development in the home computer era was sufficiently difficult that it demotivated all but the most dedicated people. That, and survivor bias, makes their efforts seem particularly heroic and effective compared to modern-day, industrialized software development.
I tend to agree. How many games like Elite were produced, for example? Also, how many epic failures were there? I’m not saying I know the answers, I just don’t think the debate is productive without them. Pointing to Elite and saying “software was better back then” is just nostalgia.
Edit: Another thought, how much crap software was created with BASIC for specific purposes and we’ve long since forgotten about it?
I’m curious about the free-form, self-describing formats you’re talking about (and why their parsers should be so expensive)
I’m mostly talking about JSON. JSON is, inherently, a complex format! It requires that you have associative maps, for one thing, and arbitrarily large ones at that. Interoperating with most web APIs requires unbounded memory.
You respond to my assertion that building software in assembly on small computers requires dedication by saying:
Elite was written by two(!) undergraduate students,
But then say:
My personal theory is that software development in the home computer era was sufficiently difficult that it demotivated all but the most dedicated people.
It seems like you agree with me here. Two undergraduates can be dedicated and spend a lot of time on something.
[scalability is] Completely irrelevant for desktop software, as described in the article.
No, it’s not. Scalability in users is irrelevant, but not in available resources. Software written in Python on a 32-bit system can easily be run on a 64-bit one with all the scaling features that implies. There are varying shades of this; C and Rust, for instance, make porting from 32 to 64 bit easy, but not trivial, and assembly makes it a gigantic pain in the ass.
Which are productivity boosters; modern developers should be faster, not slower than those from the assembly era.
I don’t agree. These are not productivity boosters; they can be applied that way, but they are often applied to security, correctness, documentation, and other factors.
I’m mostly talking about JSON. JSON is, inherently, a complex format! It requires that you have associative maps, for one thing, and arbitrarily large ones at that. Interoperating with most web APIs requires unbounded memory.
JSON is mostly complex because it inherits all the string escaping rules from JavaScript; other than that, SAX-style parsers for JSON exist, they’re just not commonly used. And yes, theoretically, I could make a JSON document that just contains a 32GB long string, blowing the memory limit on most laptops, but I’m willing to bet that most JSON payloads are smaller than a kilobyte. If your application needs ‘unbounded memory’ in theory, that’s a security vulnerability, not a measure of complexity.
(And JSON allows the same key to exist twice in a document, so associative maps are not a good fit)
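(A quick check of that last point, for anyone curious: JSON text with a repeated key is legal, and in the engines I’ve tried, JSON.parse quietly keeps only the last value:)

    // Duplicate keys are allowed in JSON text; parsing into a plain object
    // silently drops all but the last occurrence.
    const parsed = JSON.parse('{"id": 1, "id": 2}');

    console.log(parsed);              // { id: 2 }
    console.log(Object.keys(parsed)); // [ 'id' ]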
It seems like you agree with me here. Two undergraduates can be dedicated and spend a lot of time on something.
But it also puts a bound on the ‘enormous effort’ involved here. Just two people with other obligations, just two years of development.
No, it’s not. Scalability in users is irrelevant, but not in available resources. Software written in Python on a 32-bit system can easily be run on a 64-bit one with all the scaling features that implies. There are varying shades of this; C and Rust, for instance, make porting from 32 to 64 bit easy, but not trivial, and assembly makes it a gigantic pain in the ass.
As someone who has spent time both porting C code from 32 bit to 64 bit, and porting Python2 string handling code to Python3 string handling code, I’d say the former is much easier.
And that’s part of my pet theory for why modern software development is so incredibly slow: a lot of effort goes into absorbing breaking changes from libraries and language runtimes.
You’re moving the goalposts. My initial point was that, given some source code and all necessary development tools, it’s far easier to expand a Python or Lua or Java program to use additional resources - such as but not limited to bus width and additional memory - than an equivalent assembly program. You’re now talking about something else entirely: problems with dependencies and the constant drive to stay up to date.
my pet theory for why modern software development is so incredibly slow: a lot of effort goes into absorbing breaking changes from libraries and language runtimes.
I agree with you here, but it’s a complete non-sequitur from what we were talking about before. It’s at least as hard, if not harder, to port an assembly program to a new operating system, ABI, or processor as it is to port a Python 2 program to Python 3.
You’re moving the goalposts. My initial point was that, given some source code and all necessary development tools, it’s far easier to expand a Python or Lua or Java program to use additional resources - such as but not limited to bus width and additional memory - than an equivalent assembly program.
That is most definitely true. I actually think the use of extremes doesn’t make this discussion any easier. I don’t think anyone wants to go back to assembly programming. But at the same time there’s obviously something wrong if it takes 200 megabytes of executables to copy some files.
There are externalities to code bloat, in the form of e-waste (due to code bloat obsoleting less powerful computers), and energy use. It’s not very relevant in the case of one 200MB file transfer program, but over an industry, it adds up horribly.
Agreed. These externalities are not taken into account by producers or most consumers. That said, I think there are more important things to focus on before one gets to software bloat: increased regulation regarding privacy and accessibility among them.
It’s easy to “blame” (or “credit”, depending on your disposition) modern tooling and development ecosystems for making it easy to pull in dependencies (and their dependencies) for every problem that we might run across, but I think that’s only part of the cause. Another factor that I think isn’t discussed much is how modern development processes lead us toward accumulating a lot of dependencies.
Code review in particular seems to bias people toward bringing in dependencies for everything, largely because adding dependencies is rarely subject to scrutiny during the review process. I’ve never in my life had a reviewer comment on the code inside of a dependency I added, and I could count on one hand the number of times I’ve even been asked about the choice of a particular dependency that I added to a project. On the other hand, if you implement some data structures or any sort of marginally complex algorithm yourself, it’s almost guaranteed that you will be under fire from reviewers. In most cases you need to defend the choice to implement the code in the first place, rather than adding a dependency. Even if the reviewer lets that slide, you still have much more code that must be shepherded through the review process. Even if the cumulative weight of many dependencies is a burden, even if avoiding those dependencies is easy enough, the process by which we ship software seems to strongly encourage external dependencies to remove friction.
It’s funny how that might differ from project to project and between ecosystems.
On my last few jobs, the amount of dependencies has been no greater than five or six (I think it’s four on my current one) — half of which are mandated by business (basically analytics and integration with CRM). Any new dependency would be preceded by a serious discussion if it can’t be implemented trivially in a day’s work, and we make sure to wrap said dependency in an interface so it can be mocked or swapped out with minimal code changes.
This view comes from judging file sizes and requirements from a modern perspective (we have GB, they had KB!) but judging features in their past context (it was great for ’80s/’90s).
Do you realize the discrepancy? There’s no way to win this. These days we use lots of code, lots of dependencies, lots of hardware resources, because basic expectations for everything are so much higher. If you don’t scale your expectations to past hardware limitations, from modern perspective all of these amazing classics are objectively trash. If not for nostalgia, these days Elite 1 wouldn’t sell even as a cheap indie game.
I’ve heard the same bloat stories about 1MB Amiga vs microcomputers. Amiga made programmers write code in poorly optimized C that wasted a kilobyte on a Hello World. It had filesystems and libraries, and system calls, and multitasking and windowing with abstractions, and Workbench in the background doing nothing eating more memory than the entire C64 had!
When Elite 1 came out there probably was somebody complaining how bloated it was to use a general-purpose CPU, big expensive RAM chips, and pixel-based graphics chips with pointless digital-to-analog circuitry when an oscilloscope-based display would have drawn way nicer lines with a fraction of this hardware.
These sorts of articles always rub me the wrong way. Assuming that so many people are just blinkered idiots that don’t know what they’re doing is pretty uncharitable. A couple other hypotheses that might explain so-called “bloat” that don’t require all modern developers to be incompetent babies:
the professionalization and commodification of what was once fun hobby works means that devs need to make a hard-to-justify financial case for spending time on optimizing, instead of releasing as soon as it’s just barely viable
the expectations for features are much higher and world-wide, cross-platform releases instantly get all sorts of people demanding all sorts of features off the bat, instead of a gradual, word-of-mouth, floppy-disc-sneakernet distribution that gives you time to iterate
there’s much more competition in the space and consumers care much more about features than file size or number of dependencies (I’ve yet to meet a non-developer who even knows what Electron is, much less is angry that some app they use is written using it)
(I’ve yet to meet a non-developer who even knows what Electron is, much less is angry that some app they use is written using it)
I suspect many people who are stuck with old computers would be angry about Electron if we explained to them why so much software runs slowly on their computers. Of course, that would require us to get out of our bubbles and talk to such people in the first place. Myself included.
Perhaps! But also, how many people would be glad to learn Electron is why they have an app at all, instead of the company only having the resources to release, say, a native Windows app?
My guess is that, among non-developers, there are many more people stuck with crappy Windows machines than people with Macs, let alone Linux machines. So the native Windows app would have been the correct tradeoff for the majority of users that are poorly served by Electron apps.
Perhaps! Even then though, that might not be the correct trade-off for the company, if now there’s no app for the “long-tail” of Mac & Linux users, not to mention having to hire for a whole new skill-set of native Windows developers.
I have two alternative (not mutually exclusive) hypotheses:
When hardware was very constrained, we couldn’t just work around the poor choices of those implementing our platforms and tools. Today we can, so the poor choices compound, and once you’re dealing with a certain number, trying to go replace any one of them becomes basically impossible as they tend to lock each other in place.
People default to wanting to release across multiple platforms today. The tooling and developer experience for cross platform work is kind of miserable compared to the experience developed for specific platforms.
Compare developing a Windows only app on, say, Dolphin Smalltalk with trying to write an app that works across Windows, Linux, macOS, Android, and iOS.
I agree, and I’m not even in my 30s yet… I guess most people are taught to just “download this and that library to do one thing” and it just snowballs from there.
Computers don’t have many limitations either; they were quite powerful when I was young too, but I had to scrape by with whatever decommissioned old hardware I had. This gave me insight into how much you can do with way less.
I guess that’s something that should be properly taught. Give people something very limited to learn on.
Indeed, and it’s very useful. Although this isn’t the agent, but rather their tracer. So it gets pulled into your binary, but same idea. It’s just jarring to see your go.mod blow up as a result of pulling parts of this in for the first time.
For perspective: A Spellchecker Used to Be a Major Feat of Software Engineering
I can only imagine that whoever built Electron felt that devs didn’t want to deal with having to make their code compatible with more than one browser engine, and that it was worth it to shove an entire copy of Chromium into the app to provide that convenience.
Here’s an explanation from the Slack developer who moved Slack for Mac from WebKit to Electron. And on Windows, the only OS-provided browser engine until quite recently was either the IE engine or the abandoned EdgeHTML.
I’m as much against bloat as everyone else (except those who make bloated software, of course—those clearly aren’t against it). However, it’s easy to forget that small software from past eras often couldn’t do much. The original Mac OS could be 64KB, but no one would want to use such a limited OS today!
Seems some people (@neauoire) do want exactly that: https://merveilles.town/@neauoire/108419973390059006
I have yet to see modern software that is saving the programmer’s time.
What’s “modern”? Because I would pick a different profession if I had to write code the way people did prior to maybe the late 90s (at minimum).
Edit: You can pry my modern IDEs and toolchains from my cold, dead hands :-)
I think there’s A LOT of this going around. I used Office 97 in high school and it was dog shit slow (tick tick tick goes the hard disk)! Yes, the school could have sprung for $2,500 desktops instead of $1,500 desktops (or whatever things cost back then) but, adjusted for inflation, a high-end laptop today costs what a low-end laptop cost in 1995. So we’re also comparing prevailing hardware.
Should’ve gone for the Pentium II with MMX
Word processing programs were among the pioneers of the “screenshot your state and paint it on re-opening” trick to hide how slow they actually were at reaching the point where the user could interact with the app. I can’t remember a time when they were treated as examples of good computing-resource citizens, and my memory stretches back a good way — I was using various office-y tools on school computers in the late 90s, for example.
Modern apps also really are generally doing more; it’s not like they stood still, feature-wise, for two decades. Lots of things have “AI” running in the background to offer suggestions and autocompletions and offer to convert to specific document templates based on what they detect you writing; they have cloud backup and live collaborative editing; they have all sorts of features that, yes, consume resources. And that some significant number of people rely on, so cutting them out and going back to something which only has the feature set of Word 97 isn’t really an option.
When a friend of mine showed me Youtube, before the Google acquisition, on the high-school library computers, I told him “Nobody will ever use this, it uses Macromedia Flash in the browser and Flash in browser is incredibly slow and nobody will be able to run it. Why don’t we just let users download the videos from an FTP server?” I ate those words hard. “grumy old developers” complain about software bloat because they’re always looking on the inside, never the out. When thinking about Youtube, I too was looking on the inside. But fundamentally people use software not for the sake of software but for the sake of deriving value from software.
In other words, “domain expert is horrified at the state of their own domain. News at 11.”
Hi! I’m a young programmer. When someone says “this game had x, y, and z in (N < lots) bytes”, which I hear is that it was built by dedicated people working on limited hardware who left out features and polish that is often included in software today, didn’t integrate it with other software in the ecosystem that uses freeform, self-describing formats that require expensive parsers, and most importantly took a long time to build and port the software.
Today, we use higher-level languages which give us useful properties like:
and the tradeoff there is that less time is spent in manual optimization. It’s a tradeoff, like anything in engineering.
While I’m curious about the free-form, self-describing formats you’re talking about (and why their parsers should be so expensive), cherry-picking from your arguments, there are a lot of interesting mismatches between expectations and reality:
Elite was written by two(!) undergraduate students, and ran on more, and more varied, CPU architectures than any software developed today. It’s true that the ports were complete rewrites, but if Wikipedia is correct, these were single-person efforts.
Which are productivity boosters; modern developers should be faster, not slower than those from the assembly era.
Completely irrelevant for desktop software, as described in the article.
If optimization is so easy, why is software so slow and big?
My personal theory is that software development in the home computer era was sufficiently difficult that it demotivated all but the most dedicated people. That, and survivor bias, makes their efforts seem particularly heroic and effective compared to modern-day, industrialized software development.
I tend to agree. How many games like Elite were produced, for example? Also, how many epic failures were there? I’m not saying I know the answers, I just don’t think the debate is productive without them. Pointing to Elite and saying “software was better back then” is just nostalgia.
Edit: Another thought, how much crap software was created with BASIC for specific purposes and we’ve long since forgotten about it?
I’m mostly talking about JSON. JSON is, inherently, a complex format! It requires that you have associative maps, for one thing, and arbitrarily large ones at that. Interoperating with most web APIs requires unbounded memory.
You respond to my assertion that building software in assembly on small computers requires dedication by saying:
But then say:
It seems like you agree with me here. Two undergraduates can be dedicated and spend a lot of time on something.
No, it’s not. Scalability in users is irrelevant, but not in available resources. Software written in Python on a 32-bit system can easily be run on a 64-bit one with all the scaling features that implies. There are varying shades of this; C and Rust, for instance, make porting from 32 to 64 bit easy, but not trivial, and assembly makes it a gigantic pain in the ass.
I don’t agree. These are not productivity boosters; they can be applied that way, but they are often applied to security, correctness, documentation, and other factors.
JSON is mostly complex because it inherits all the string escaping rules from JavaScript; other than that, SAX-style parsers for JSON do exist, they’re just not commonly used. And yes, theoretically, I could make a JSON document that just contains a 32GB-long string, blowing the memory limit on most laptops, but I’m willing to bet that most JSON payloads are smaller than a kilobyte. If your application needs ‘unbounded memory’ in theory, that’s a security vulnerability, not a measure of complexity.
(And JSON allows the same key to exist twice in a document, so associative maps are not a good fit)
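For what it’s worth, a bounded, streaming parse is easy to sketch. Here’s a minimal Go version (the 1 KiB cap and the sample payload are invented for illustration) that walks a document token by token, SAX-style, without ever materializing the whole map:

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "strings"
    )

    func main() {
        // Hypothetical payload; in practice this would be an HTTP response body.
        payload := strings.NewReader(`{"user":"alice","tags":["a","b"],"id":42}`)

        // Hard cap on how much we will ever read: a hostile 32GB string
        // becomes a parse error instead of an out-of-memory condition.
        limited := io.LimitReader(payload, 1024)

        // Token() walks the document one token at a time, SAX-style,
        // so we never have to build the full associative map.
        dec := json.NewDecoder(limited)
        for {
            tok, err := dec.Token()
            if err == io.EOF {
                break
            }
            if err != nil {
                fmt.Println("parse error:", err)
                return
            }
            fmt.Printf("%T: %v\n", tok, tok)
        }
    }

A payload larger than the cap fails with a parse error rather than growing memory, which is the bounded-memory property in practice.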
But it also puts a bound on the ‘enormous effort’ involved here. Just two people with other obligations, just two years of development.
As someone who has spent time both porting C code from 32 bit to 64 bit, and porting Python2 string handling code to Python3 string handling code, I’d say the former is much easier.
And that’s part of my pet theory for why modern software development is so incredibly slow: a lot of effort goes into absorbing breaking changes from libraries and language runtimes.
You’re moving the goalposts. My initial point was that, given some source code and all necessary development tools, it’s far easier to expand a Python or Lua or Java program to use additional resources (such as, but not limited to, bus width and additional memory) than an equivalent assembly program. You’re now talking about something else entirely: problems with dependencies and the constant drive to stay up to date.
I agree with you here, but it’s a complete non-sequitur from what we were talking about before. It’s at least as hard, if not harder, to port an assembly program to a new operating system, ABI, or processor as it is to port a Python 2 program to Python 3.
That is most definitely true. I actually think the use of extremes doesn’t make this discussion any easier. I don’t think anyone wants to go back to assembly programming. But at the same time there’s obviously something wrong if it takes 200 megabytes of executables to copy some files.
What’s wrong, exactly? The company providing the service was in business at the time of the rant, and there’s no mention of files being lost.
The only complaint is an aesthetic one. Having 200MB of executables to move files feels “icky”.
There are externalities to code bloat, in the form of e-waste (due to code bloat obsoleting less powerful computers), and energy use. It’s not very relevant in the case of one 200MB file transfer program, but over an industry, it adds up horribly.
Agreed. These externalities are not taken into account by producers or most consumers. That said, I think there are more important things to focus on before one gets to software bloat: increased regulation regarding privacy and accessibility among them.
It’s easy to “blame” (or “credit”, depending on your disposition) modern tooling and development ecosystems for making it easy to pull in dependencies (and their dependencies) for every problem that we might run across, but I think that’s only part of the cause. Another factor that I think isn’t discussed much is how modern development processes lead us toward accumulating a lot of dependencies.
Code review in particular seems to bias people toward bringing in dependencies for everything, largely because adding dependencies is rarely subject to scrutiny during the review process. I’ve never in my life had a reviewer comment on the code inside of a dependency I added, and I could count on one hand the number of times I’ve even been asked about the choice of a particular dependency that I added to a project. On the other hand, if you implement some data structures or any sort of marginally complex algorithm yourself, it’s almost guaranteed that you will be under fire from reviewers. In most cases you need to defend the choice to implement the code in the first place, rather than adding a dependency. Even if the reviewer lets that slide, you still have much more code that must be shepherded through the review process. Even if the cumulative weight of many dependencies is a burden, even if avoiding those dependencies is easy enough, the process by which we ship software seems to strongly encourage external dependencies to remove friction.
It’s funny how that might differ from project to project and between ecosystems.
On my last few jobs, the number of dependencies has been no greater than five or six (I think it’s four on my current one), half of which are mandated by the business (basically analytics and integration with CRM). Any new dependency is preceded by a serious discussion if the functionality can’t be implemented trivially in a day’s work, and we make sure to wrap said dependency in an interface so it can be mocked or swapped out with minimal code changes.
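The wrapper doesn’t need to be elaborate, either. A rough Go sketch of the pattern (the Analytics interface and vendorSDK are made-up names, standing in for whatever third-party client is actually in use):

    package main

    import (
        "context"
        "fmt"
    )

    // Analytics is the only surface the rest of the codebase sees.
    type Analytics interface {
        Track(ctx context.Context, event string, props map[string]string) error
    }

    // vendorSDK stands in for the real third-party client.
    type vendorSDK struct{}

    func (s *vendorSDK) Send(ctx context.Context, event string, props map[string]string) error {
        fmt.Println("sent:", event, props)
        return nil
    }

    // vendorAnalytics adapts the SDK to our interface. The SDK type is
    // referenced nowhere else, so swapping vendors or mocking in tests
    // means writing one small replacement implementation.
    type vendorAnalytics struct{ sdk *vendorSDK }

    func (a *vendorAnalytics) Track(ctx context.Context, event string, props map[string]string) error {
        return a.sdk.Send(ctx, event, props)
    }

    func main() {
        var a Analytics = &vendorAnalytics{sdk: &vendorSDK{}}
        _ = a.Track(context.Background(), "signup", map[string]string{"plan": "free"})
    }

Everything outside this file imports only the interface, so a test double or a different vendor is a one-file change.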
And you’re just talking about the decision to add the dependency.
What if we included reviewing the library code as well? “Does this library we’re adding clear our own, in-house, bars for quality?”
My favourite post about this sort of thing is still: Things That Turbo Pascal is Smaller Than
This view comes from judging file sizes and requirements from a modern perspective (we have GB, they had KB!) but judging features in their past context (it was great for ’80s/’90s).
Do you realize the discrepancy? There’s no way to win this. These days we use lots of code, lots of dependencies, lots of hardware resources, because basic expectations for everything are so much higher. If you don’t scale your expectations to past hardware limitations, then from a modern perspective all of these amazing classics are objectively trash. If not for nostalgia, these days Elite 1 wouldn’t sell even as a cheap indie game.
I’ve heard the same bloat stories about the 1MB Amiga vs microcomputers. The Amiga made programmers write code in poorly optimized C that wasted a kilobyte on a Hello World. It had filesystems and libraries, and system calls, and multitasking and windowing with abstractions, and Workbench sitting in the background doing nothing, eating more memory than the entire C64 had!
When Elite 1 came out, there was probably somebody complaining about how bloated it was to use a general-purpose CPU, big expensive RAM chips, and pixel-based graphics chips with pointless digital-to-analog circuitry, when an oscilloscope-based display would have drawn way nicer lines with a fraction of the hardware.
These sorts of articles always rub me the wrong way. Assuming that so many people are just blinkered idiots who don’t know what they’re doing is pretty uncharitable. A couple of other hypotheses that might explain so-called “bloat” that don’t require all modern developers to be incompetent babies:
I suspect many people who are stuck with old computers would be angry about Electron if we explained to them why so much software runs slowly on their computers. Of course, that would require us to get out of our bubbles and talk to such people in the first place. Myself included.
Perhaps! But also, how many people would be glad to learn Electron is why they have an app at all, instead of the company only having the resources to release, say, a native Windows app?
My guess is that, among non-developers, there are many more people stuck with crappy Windows machines than people with Macs, let alone Linux machines. So the native Windows app would have been the correct tradeoff for the majority of users that are poorly served by Electron apps.
Perhaps! Even then though, that might not be the correct trade-off for the company, if now there’s no app for the “long-tail” of Mac & Linux users, not to mention having to hire for a whole new skill-set of native Windows developers.
I have two alternative (not mutually exclusive) hypotheses:
When hardware was very constrained, we couldn’t just work around the poor choices of those implementing our platforms and tools. Today we can, so the poor choices compound, and once you’re dealing with a certain number, trying to go replace any one of them becomes basically impossible as they tend to lock each other in place.
People default to wanting to release across multiple platforms today. The tooling and developer experience for cross platform work is kind of miserable compared to the experience developed for specific platforms.
Compare developing a Windows only app on, say, Dolphin Smalltalk with trying to write an app that works across Windows, Linux, macOS, Android, and iOS.
I agree, and I’m not even in my 30s yet. I guess most people are taught to just “download this and that library to do one thing” and it just snowballs from there.
Computers don’t have many limitations either; they were quite powerful when I was young too, but I had to scrape by with whatever decommissioned old hardware I had. This gave me insight into how much you can do with way less.
I guess that’s something that should be properly taught. Give people something very limited to learn on.
While I agree with the sentiment, and the example is quite bad, their argument is undermined by blaming Twitter for a third-party app’s issues.
Twitter bought TweetDeck 11 years ago.
Thanks for the correction!
…as Niklaus Wirth said back in 1995: https://cr.yp.to/bib/1995/wirth.pdf and as Alan Kay reflects on software pollution in https://youtu.be/watch?v=kgmAwnNxdgw
IT is neither cathedral nor bazaar but a slum.
This is always a fun one for this topic: https://github.com/DataDog/dd-trace-go/blob/main/go.sum
Don’t take this as a knock on datadog. They make a great product. Much respect.
The value is that they bundle all the integrations into a single binary, and it’s easy for them to do because they can pull in the client lib.
Indeed, and it’s very useful. Although this isn’t the agent, but rather their tracer. So it gets pulled into your binary, but same idea. It’s just jarring to see your go.mod blow up as a result of pulling parts of this in for the first time.