All of these are actually things that you can do on new hardware as well.
I emulate a number of different user experiences from my workstations (I have a laptop and a desktop, both of which are top end). I have a few different VMs that I use to emulate:
old phones
old operating systems
computers with very limited resources
whatever else we decide to test
I can fully empathize with users that have limited hardware, without being limited by the hardware that I use for myself. I’m able to not just empathize - I can actually spin up a machine with the exact same specifications and try to experience things exactly like the end user who is having an issue.
It’s not bad to have older hardware, and I think it’s incredibly important for servers to be appropriately specced, but I don’t think that there is actually much benefit to having an old workstation other than not spending money, which is a sufficiently significant reason on its own.
The problem with this approach is that poor hardware is a costume you put on for an evening. For many people, poor hardware is their life. Unless you’re constantly enduring the limitations of poor hardware yourself, you’re going to be constantly making life worse for those who do.
There’s a huge difference between immediately feeling the pain of your poor code before you even commit it, and hearing about it from a ticket from your user.
To be frank, I could not disagree more.
If I hire someone to do a kitchen renovation, and they show up with a hammer, a screwdriver and a handsaw and that’s it, then I’m going to start thinking about if that person is the right choice for the job. It’s not that you need more than a hammer, screwdriver, and handsaw to do the renovation; it’s that there are a bunch of newer tools that make doing that work much easier.
Not at all; in fact, I’d argue that I’m probably more rigorous about testing poor hardware than you are. You are limited to testing your particular configuration of poor hardware, and you probably write things to address the issues that you have, but those are not universal experiences. How do you test things when you have, say, a hard requirement for IE6 that is run with no network connectivity? I’d imagine that you probably just don’t, because you can’t actually test that.
Again, I’m not saying that it’s bad to have old hardware, or that new hardware is a necessity for being a good developer, and I certainly agree that appropriately speccing your servers is of utmost importance, but I don’t think that poor hardware is actually a bonus. It’s a way you have elected to live, and that’s fine, but it doesn’t mean that you’re automatically giving more consideration to people who themselves have bad hardware.
That analogy just doesn’t make sense. It ignores exactly the point you’re responding to: the use of old hardware so you can see how well your code runs on old hardware.
How is that a hardware requirement? Testing lack of network connectivity doesn’t require special hardware, and IE6 isn’t hardware.
It certainly does. It means you’re automatically giving more consideration than the average developer to people who themselves have bad hardware. It doesn’t mean it’s impossible for other people to also test on bad hardware, but it means they’re doing just that: testing on bad hardware. They’re not living bad hardware. And the vast, vast majority of developers don’t test on bad hardware at all, so you’re automatically a few steps ahead of them.
I’m not sure it’s particularly helpful, but a more accurate kitchen-fitter analogy would be something like if you were looking to get your tiny cramped kitchen refitted, and find yourself looking round a kitchen showroom or brochure full of enormous, spacious show-kitchens. You wouldn’t question their ability to fit kitchen units, but you might wonder whether they will be able to come up with a plan that makes the best possible use of the limited space available.
A good analogy, dancing around the concept of dogfooding.
You have a great point. I’ll also add that your method of using better boxes allows you, if you choose, to run more analyses and verification of the code. That’s one of the reasons I have an i7 now. That’s why some shops have entire clusters of machines. I think I read Intel uses around a million cores to verify their CPU’s. Mind-boggling.
If multicore, you can also run IDE’s, VM’s, code generators, and V&V in parallel. That lets you reduce time to feedback on each work item. Then, you can iterate faster. Maybe even stay in a mental state of flow for a long time.
Thanks! I was honestly taken aback by the fact that almost all the comments I received were supportive of the idea that the quality of the tools defines the quality of the artisan, which I think is entirely wrong.
Having access to local VMs, IDEs, generators, etc. running parallel to each other is incredibly convenient, and I really just don’t buy into the notion that having a good computer makes you unaware or uncaring that bad computers exist.
It does cause that problem for a lot of people. It doesn’t for everyone. It became more evident to me when I read the Hacker News thread on a person’s experience with dial-up on modern sites. The comments made it clear a lot of people didn’t realize (a) how many folks had bad connections (dialup or mobile) and (b) how much worse their practices made their experience. Like you, there were some who were aware, using techniques like rate limiting to simulate those connections. Most weren’t.
So, it is a risk worth bringing up. That doesn’t mean it will apply to everyone though. Any artisan who cares about achieving a result in a certain context will do whatever it takes to do that.
It follows from the fact that if you don’t even notice that the code you’re writing runs poorly because your computer is quick, you’re unlikely to improve it until someone with slow hardware tells you it’s slow code.
Good for you, but this is not the usual behavior of at least 90% of developers around.
That’s an interesting number because it dovetails nicely with Sturgeon’s Law (tl;dr: 90% of everything is crap).
I don’t think that having a bad machine makes you think more about how people interact with your software. It makes you think more about how a particular subset of users interact with your software. Specifically, it limits you to thinking about how people like you interact with your software.
This is a particular problem for a lot of people. I don’t want to harp on the author because I believe that they’re a great developer that delivers great software,
but a specific example of the limitations that they introduce seems to be that they haven’t thought about how https://sr.ht looks for people who have large monitors; the site is pushed off to the left. I think this is partially because he does not or cannot use a large monitor with his setup, so it’s not something he tests for. I think this limitation is the result of the technology he is using and the philosophy of why he chooses to use technology in this way; he favours a particular set of users because those users have a similar situation to how he has chosen to use technology. This is not the only issue with sr.ht that I (or others) have identified, and I think that sr.ht is developed more for Drew and people like Drew than it is for everyone. Edit: Drew corrected me on this. I’m out to lunch here.
It’s important to note that I think that it is okay that sr.ht is aimed at people like Drew; I think sr.ht is fantastic, I think that Drew is a great developer, and I am very happy that sr.ht exists. I’ve told dozens of people about it, and hopefully some will convert to paid users. It’s just not a service that works with my workflow, because I have chosen a different approach to things. The concern for me is that Drew seems to be convinced that developers like me are worse developers because we don’t do things the way he does. I think that’s clear from what he has stated even within this thread, and I think it’s an unfortunate position to hold.
Aside: for the record I have thought about those people https://todo.sr.ht/~sircmpwn/sr.ht/112
I think you should stop reading so much into my comments. If I didn’t say it, it’s not what I meant, and I never said that you were a worse developer for disagreeing with me.
What you wrote is pretty straightforward, but I added the inference that bad developers make life worse for other people and good developers don’t. If that’s not what you mean, then my apologies. To me, it looks like it’s what you wrote.
Thank you for considering those with wider screens - the last time I saw it brought up, it was dismissed. My sincere apologies for spreading that misinformation, and I’ve edited my parent comment accordingly.
I’m not trying to dump on you or put words in your mouth. I’ve made some inferences from what you’ve said, and it seems like they’re unfair, so I’d love to understand your position more, but I’ll understand if you don’t reply.
My stance is that a good developer is not defined by his tools. We can create software whether we’re using an 11-year-old laptop, a souped-up current-generation MBP, or a whiteboard and notebook, because these tools don’t define our ability to create things. We all have tools at our disposal to try to generalize from our experience to the experiences of all the people that might use our software, but the important thing is that we are making that generalization and trying to accommodate the people who have different experiences from our own.
Also, even if you simulate poor hardware and test your code on it, there is probably a temptation to say, “It’s a bit slow, but that’s to be expected on low end hardware”.
As someone who uses ‘poor’ hardware every day, you know how well other applications that perform tasks of similar complexity work: it’s less easy to justify poor performance if you’re used to, e.g., a web browser performing much better on the same hardware.
That’s not necessarily an unacceptable solution. It depends on the parameters of the project.
I recently completed a project that had super fast sprints and spent no time on optimization. The project was 100% internally facing for the client company; every person had the same hardware, and it ran fine on the hardware. I don’t think I’m a worse coder because we took advantage of that, or because we didn’t spend the time to optimize for other setups. Based on the requirements, we delivered exactly what was needed, for exactly the use case that they had. On the opposite end of the spectrum, we are also currently working on a project that is confined to using Windows XP, IE6, and has very limited network connectivity. If I had an 11-year-old laptop, I probably couldn’t adequately test for that scenario; I would have to procure other hardware to actually work on that, and working on it would be more painful than it already is.
My whole point simply is that if you’re good at what you’re doing, the quality of the tool that you’re using is less important than the fact that you are good at what you’re doing.
Nobody is suggesting that you are. Most people develop software for a very wide range of devices, especially for mobile software where the range of speeds in common use is vastly wider than the range of speeds of PCs in common use. This topic is clearly focusing on that, not on people developing software intended to be run on one particular piece of hardware that is known ahead of time.
The author of the article did not merely suggest it, he stated it in no uncertain terms in this very comment chain:
Unless you’re constantly enduring the limitations of poor hardware yourself, you’re going to be constantly making life worse for those who do.
I think this unambiguously says that if you use good hardware, you make bad software.
Also you said this:
Most people develop software for a very wide range of devices, especially for mobile software… This topic is clearly focusing on that,
The author also specifically says this:
I’m talking about making end-user applications which run on a PC.
So I think it’s possible that we’re all talking at cross purposes.
My entire point is that the tool does not define the skill of the developer. One can build great software whether using a $5K MacBook Pro or a $150 Chromebook, and the challenges that are faced are different for each.
If your code takes X times longer to compile just because you want to know how it feels and learn from it, then go ahead, but it’s definitely useless. A VM gives you exactly the same experience, plus the choice to take a break from it.
Incremental compilation is a thing. I spend way more time thinking and editing than I spend compiling.
And if your low-end setup is something you want to “take a break from”, you’ve missed the point.
How is using the target system as your main one better than choosing the best one available (or the one that best fits your development needs)?
Are we all going to start making Android apps on Android or make a game directly on a PS4?
You work on whichever system makes the development most efficient (for you) and then test the resulting application on the target platform(s).
Only my opinion, of course.
Making Android apps is a miserable fate I wish on no one. And yeah it’d be neato to make games on a PS4. But these are both beside the point and I think you know it - I’m talking about making end-user applications which run on a PC.
Hehe, it would be fun for sure :P
Anyway, even making games or using certain desktop frameworks can take some time to properly build.
The idea that you are using a programming language that takes noticeably long to compile while lecturing someone else on their choice of tools is pretty laughable.
I personally would add environmental reasons as well (taking into account the impact of the newer hardware and what would happen to the old hardware).
I kept using older hardware for three reasons:
Assess the efficiency of my software.
Make sure it will run fast on anyone’s machine like Drew.
Help small-time sellers and reduce waste by buying good, used products.
For model-checking and stuff, I did recently need a beefy machine vs what I had before. Plus, I had to consider all the slow-downs that will add up as new vulnerabilities are found in CPU’s. I also wanted support for many FOSS OS’s. So, I bought a Thinkpad 420 refurbished with Core i7. Been feeling really middle class with these boot and run times.
You can do 1 & 2 on new machines. I would argue you can do it better than you can on old hardware. For most projects I do, I test on a variety of virtual machines that emulate old hardware, so I know how well it runs. Let’s say you have a use case where you’re doing a project for a company and half the company runs three different operating systems across four different hardware configurations; I can test all of those pretty reasonably without getting 12 different workstations.
The third option is a great one, but I would argue that it’s a subset of the “saving money” option.
Reducing waste is not about saving money
I didn’t think of that, but to be fair, I have a whole storage area in my house for old hardware, going all the way back to the Aptiva we got in 1995. Ecological concerns are certainly important!
You can’t buy a new thinkpad with a 4:3 or 16:10 aspect ratio as far as I can tell. It’s also getting difficult to find models where the battery is easily swappable. Plus optimizing for thinness means nearly all modern laptop keyboards have very shallow, uncomfortable key actuation.
I keep saying “I’ll upgrade when they make a new laptop that’s actually better than the one I have in ways I actually care about” and they keep not making one.
I think that you’re absolutely right with your underlying point - if you have a specific hardware configuration and not having that is a dealbreaker for you, then that’s also a good reason for old hardware. That said, I think it’s a tradeoff - I’ve happily traded some of my keyboard preferences for a more powerful machine. I disliked the new MacBook Pro keyboard quite a bit when I got it, but I have found that the tradeoffs are worthwhile enough that overall I enjoy the machine.
And admittedly, this is less of an issue if you have a desktop workstation - you can choose the monitor configuration or keyboard that you want.
I will add Drew’s strategy has one benefit over yours in that he’s forced to use low-end hardware non-stop. That can reduce folks’ tendency to cheat. I like your approach, too, though. :)
Cheating this system definitely happens. Not to get super sidetracked, but this is actually part of the project management axes of restraint: cost, quality, time, scope. This level of testing is covered by the “quality” aspect; if a client values high quality, then we do more testing of these sorts of things. If quality is lower on the scale, we may omit rigorous testing.
I don’t think the author ever implied that these were things you can only do on older hardware.
For some cases, yes, but VMs are not 100% accurate in terms of the underlying hardware they emulate/virtualize, so for some work (e.g. reproducing customer issues in a wayland compositor) they are not really useful.
Here’s another quote from the author:
Unless you’re constantly enduring the limitations of poor hardware yourself, you’re going to be constantly making life worse for those who do.
I think they’re clearly saying that older hardware makes one a better coder. I think that is not a good way to think. To be clear, I’m certainly not saying that new hardware makes one a better coder. I think that the two things just aren’t related.
Absolutely correct - VMs aren’t a magical answer to every single problem. There are definitely cases where they’re not useful, and I’d go further and say there are even edge cases where they’re not only not-useful but actively misleading! They’re still very useful, and especially useful in testing for widely used devices in the sort of situations that are outlined in this article (low power laptops, old phones, multiple operating systems).
That brings me to another risk: older hardware can force one to use development practices that are sub-optimal. As in, you can do things during development that don’t have to slow down the release version. Using older hardware could, in theory, limit what a person does in development or have a negative effect on how the released app is implemented.
This is my theory about C where BCPL and C authors made a lot of decisions based solely on the limitations of their hardware that are biting people to this day. People with much better hardware were designing languages with safety, modules, metaprogramming, better concurrency, and so on. So, the hardware limited their productivity, correctness/safety/security, and maintainability.
And C, until very very recently, was the only language you knew could run on everything.
You mean was capable of in theory or had compiler support already?
Had compiler support already
Yeah, that is kind of funny. I think C’s support on many of those machines happened due to its strong ecosystem of talent and tools. It was already the preferred thing for bare-metal efficiency. Why not adapt it to the new cruddy devices? Talk about going back to its roots, though, for what appear on the surface to be the same reasons.
On the technical side, it’s amusing to note it was the only thing that could run on EDSAC, but better designs could run on today’s embedded systems (i.e. 32-bit ones). Missed opportunity. Well, a bunch of people and companies ported other languages. They’re just super-super-niche. Astrobe Oberon comes to mind.
AFAICT he’s just saying that low end hardware forces your sympathy with the rest of the crowd that did not buy a high-end laptop this year.
VMs are fine too, but it’s like using earplugs to pretend you’re deaf.
To be fair, muting your software to test how easy it is to use without sound is probably a better option than deafening yourself. Sometimes temporary software solutions are better than permanent ‘hardware’ choices.
Which virtualization software do you use, and how does it emulate the speed of old computers? Every virt platform can limit the amount of RAM, but what about the speed of the disk, CPU, and so forth? The VM doesn’t really know what host it’s running on (i.e. whether there is an HDD or SSD in the host), and I think the common design is to run as fast as possible.
Speed is multi-dimensional, because your disk, memory, CPU, and network (especially network) may have different slowdowns, and different workloads will obviously be slowed down by different amounts due to this.
VirtualBox allows you to change the clock speed of the CPU (execution cap) and VMware allows you to set resources as well. Those are the two that I use with any real frequency.
With respect to write speeds on the disk, I rarely consider that (I almost exclusively write web software), though we do consider write speeds for one project - specifically, we want to check something writing to an HDD and not an SSD. To do that, we actually use an external HDD.
Network is what we actually consider the most, and we write everything with that in mind. Luckily network is kind of a firehose situation - if you reduce how much you send, you improve the perceived speed at which you receive it, so we work at always reducing the footprint of all traffic; reduce the number of requests, reduce the amount in each request, reduce, reduce, reduce.
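For anyone who wants to script that kind of throttled VM rather than clicking through the GUI, here is a minimal sketch in Python that shells out to VirtualBox’s VBoxManage CLI. The VM name and the specific cap values are made-up placeholders; it assumes the VM already exists and is powered off, and the flags are worth checking against your VirtualBox version:

    import subprocess

    VM_NAME = "low-end-test"  # hypothetical VM, created beforehand in VirtualBox

    def throttle_vm(vm: str) -> None:
        """Reconfigure a powered-off VM to roughly approximate a low-end machine."""
        # One virtual CPU, capped at 40% of a host core, with 1 GB of RAM.
        subprocess.run(
            ["VBoxManage", "modifyvm", vm,
             "--cpus", "1",
             "--cpuexecutioncap", "40",
             "--memory", "1024"],
            check=True,
        )

    if __name__ == "__main__":
        throttle_vm(VM_NAME)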
In terms of network, Chrome dev tools lets you simulate slow network speeds. I find that incredibly useful when writing publicly facing web apps; it’s very easy to forget how much network latency affects user experience when you’re always hitting a local server
Does Chrome support adding latency now? Last time I used those developer tools (more than 1 year ago now) I think it only supported reducing bandwidth.
Luckily I live in Canada and rarely have to simulate a poor connection.
Jokes aside, the Chrome dev tools are great for a lot of things; I’ll point out that Firefox dev tools also have a way of doing the same!
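If you would rather script the throttling than set it in dev tools by hand each time, here is a rough sketch using the Chrome DevTools Protocol through Selenium in Python. The latency and throughput numbers are arbitrary examples, and the exact API surface may vary between Selenium and Chrome versions:

    from selenium import webdriver

    # Assumes chromedriver is available on PATH; adjust for your setup.
    driver = webdriver.Chrome()

    # Roughly emulate a slow connection: 400 ms of added latency,
    # ~500 kbit/s down and ~250 kbit/s up (throughput is in bytes per second).
    driver.execute_cdp_cmd("Network.emulateNetworkConditions", {
        "offline": False,
        "latency": 400,
        "downloadThroughput": 500 * 1024 // 8,
        "uploadThroughput": 250 * 1024 // 8,
    })

    driver.get("https://example.com")  # loads under the throttled profile
    driver.quit()

This also speaks to the latency question above: the protocol takes an explicit latency value alongside the bandwidth limits.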
Good to know. I use FF as my main browser, but still use Chrome for webdev. Partly because we’re targeting Chrome (it’s an internal app) and partly because FF still seems to lock up the entire browser on occasion.
Thanks for the details. That makes sense and I think it’s worlds better than what most people do.
I was thinking along the lines of estimating the speed of a Raspberry Pi on my Dell Intel Core i7 desktop. I found that although the clock speeds differ by a factor of 5 or so (3+ GHz vs 700 MHz), the Pi is more like 30-50x slower! (e.g. with a workload of compiling Python - a rough way to measure that kind of gap is sketched below)
Most software companies aren’t targeting the Pi, but they are targeting old ARM phones. I’d be interested to hear solutions for that. I guess VirtualBox only does x86 so it doesn’t apply to that problem.
I think the Android emulator is based on QEMU? I wonder if it tries to simulate speed too?
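A cheap way to get a number like that 30-50x figure is to time one and the same workload on both machines instead of extrapolating from clock speeds. A trivial sketch; the loop is just a stand-in for whatever workload you actually care about:

    import time

    def workload() -> int:
        # Stand-in CPU-bound task; substitute something representative,
        # e.g. compiling a project or parsing a large file.
        total = 0
        for i in range(5_000_000):
            total += i * i
        return total

    start = time.perf_counter()
    workload()
    print(f"workload took {time.perf_counter() - start:.2f}s")

Run it on the fast box and on the slow box and divide the two times; memory- and I/O-bound workloads will give very different ratios, which is the earlier point about speed being multi-dimensional.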
I think it’s worth emphasizing that the Thinkpad X200 isn’t just any old hardware. As the post says, it’s a well built machine, highly compatible with Linux (and the BSDs too!). When new, it wasn’t cheap. Mine (obtained used, 5 years ago) has lasted just as well, but in the past I have had cheaper laptops that didn’t last as long and weren’t as nice to use. I still fire up my IBM-branded X40 on occasion, and despite some minor issues, it’s still usable enough for some purposes. I’ve kept it not because it’s “still good”, but because I’m sentimental.
By contrast, my 2009 Mac Mini (Core 2 Duo, 4GB RAM) was also well built and still ran fine, but gradually became unpleasant to use under supported versions of MacOS, a certain Electron app, and of course the “modern web”. I considered trading it in last year, but Apple said it has zero value – which I take as an insult on top of an injury, as market value on eBay looked to be about $80. I didn’t feel sentimental enough to keep that machine.
I still have my old PowerPC and 68040 “Classic” Macs, though, but they live next to my Commodores. That was a different Apple.
Just like all durable goods, all else being equal, you’re better off buying a used high-quality thing than a cheaply made new thing. Software bloat is so pernicious because it steals value from otherwise good hardware. Free and open source software at least gives you the option of avoiding bloat, and that’s one important reason why I use it.
The problem with old laptops is probably mostly the RAM limit and poor battery life. For discontinued laptop lines it’s nearly impossible to find a good battery, at least that’s my experience.
This!
My laptop needs to be the lowest impedance interface between me and the software I work on. A solid Thinkpad with its unequalled keyboard and Trackpoint allows me to interact with my window manager and text editor without moving my hands or looking away from the screen. Most of the time I’m connected to a server with dozens of CPUs and a metric truckload of RAM to take care of all the processing. I don’t have any particular graphics needs and I find there are far more interesting things to do with a computer than play games on them. That’s definitely not for everyone but I empathise strongly with this blog post.
I can clearly remember a time when you had to upgrade your workstation/laptop computer every 2-3 years in order to run the latest software with any kind of reasonable speed. Those days are over and have been for a while. Obviously computers will continue to get faster, but we’ve reached a point where all the extra cycles and memory are spent not on application features but on eye candy and abstraction layers (e.g. web browsers as application runtimes), and we’re starting to plateau on those too. Meaning, essentially, that “old hardware” is capable of doing most anything that new hardware is, except in certain niches like research and gaming.
As an example, my daily driver is a Dell Latitude that is around 6 years old. It is certainly not my ideal machine but I really can’t justify spending $1500 or more on a new laptop when this does absolutely everything I ask of it.
My daily driver for the last year and a half has been an X200 and it’s my favorite laptop, and my ideal form factor.
I have three old Eee PCs running services in my basement. Turns out k8s is too resource-hungry for those.
My personal main driver is a late 2011 MBP.
All of the above were destined for a dumpster. All of the above were gotten for free.
I use excessively new hardware for a similar reason. There’s a lot of work to support new hardware well and I need hardware access to do it.
Me too! Lenovo X230. Okay, not that old, but still.
However, I do notice that some things are not really possible with old(er) hardware: running Android Studio. This piece of bloatware requires the latest possible hardware and super fast Internet (Gradle WTF!) to be bearable. So that just means no Android app development for me :-) On macOS it is the same with my old MBPs and Xcode… That’s soooooo slow…
Sounds like a feature, not a bug :) Android development is miserable.
It is indeed. And there is no way that I’m aware of to develop software for Android on Android. This is relevant here because the ‘poor hardware’ that most poor people (so maybe not your target market or intended user base, but anyway most of humanity) have access to is a low-end Android phone.
So let me ask, who exactly are you trying to empathize with?
I think it’s reasonable to not spend more than you use… However the argument that 640K ought to be enough for anybody is simply untrue. Some problems grow to fit whatever machines you throw at them. Sometimes you genuinely do need newer and faster hardware.
And then someone says “hold my beer”
EDIT: the 640k is a bit of a stretch, but “software filling up the available hardware resources” looks more true to me than the inverse. (Note: I have an 8-year-old laptop. I’m a student, so I can’t afford to immediately buy all the new stuff.)
Sure, but are dragging windows, playing music and movies, and rendering documents really those problems?
Your name is friendlysock not friendlystrawman :P
I use old hardware because it’s “good enough” for everything I work on.
I have a ~10 year old workstation that I recently upgraded with an SSD and a couple GPUs from the e-recycling bin at my last job, but upgrading much more would just be a waste of time and money.
And, as the article points out, as a developer, I like using old hardware because it sets realistic performance expectations. If it’s fast enough on this machine, it should be fast enough for anybody.
I ended up using old hardware without really realizing it because it was “good enough” indeed. Until very recently I still had a mid-2008 Macbook (the first unibody ones). I was mostly working on small CLI/web projects, but it did kinda force me to optimize my code a bit more than if I had a more powerful machine – getting that script to run under 5 seconds takes a little extra work on a 10 year old machine.
I just recently upgraded to a used 2015 Macbook (the last good Macbook model before the new keyboards and touchbars) mostly because the old one’s hinges and motherboard were failing.
I used a Thinkpad T430s until recently at home for some light development - Advent of Code and other fun stuff, mainly writing Elixir in VS Code or just Vim. My eyes needed a brighter screen with better viewing angles, however, so I now have a T450s.
I’ve tried Windows (10) but Linux runs perfectly (KDE Neon), is snappier, and isn’t so much of a pain with the way it handles the continuous stream of updates, so I’ve been sticking with it.
This is truly admirable. It’s one thing to emulate a low-end workstation, but it is quite another thing to use one every day. It’s easier to ignore problems when you don’t have to deal with them.
Don’t you miss the performance, for example to do property testing and mutation testing? These are resource-hungry almost by definition…
These are almost always trivial to farm out to more capable hardware. IME there’s no real need to run them locally.
So I will end up with slow HW and fast HW and need to integrate them, instead of fast HW only? ;)
Same here when it comes to client-facing hardware: I’m using a number of older (~15yo) laptops (mostly Thinkpad T42p and HP’s of similar vintage) for development purposes. On the server side I’m still running a pair of Intel SS4200’s (upgraded to dual-core Pentium E2220 and ‘maxed’ to 2GB), but those are now on the way out to be replaced by a single HP DL380G7 (2 * Xeon X5675, 128GB) which will serve as build server and VM/container host for a variety of purposes.
Using older ‘client’ hardware ensures that whatever I create runs fast on everything. Using older ‘server’ hardware during development makes less sense, especially given the fact that many projects end up being deployed on totally different hardware - embedded devices, mobile gizmos, single-board computers etc.
I love how cheap you can get refurbished or second-hand ThinkPads; a new laptop here is ~1200 NZD, while an old ThinkPad that gets the job done is around 300-400 NZD.
u/SirCmpwn, do you see Moore’s law catching up with this at any point? If I have my math right, in a few years that machine will have two orders of magnitude fewer transistors than a new machine.
(I looked into it because I currently have an X201 serving as a ham radio logging machine with no problems)
Moore’s law has been mostly dead in the consumer space for nearly a decade now. I’m not too concerned. It’s the extra features on new silicon that make newer hardware attractive.