Perhaps this opting for “boring technology” is just an inevitable result of professionalization in the field. I’ve seen plenty of examples where people painted themselves into a corner using fancy new technology that didn’t work out so well, and now they’re stuck with legacy code that has to be rewritten. Or with some also-ran framework that was abandoned by its maintainers, so now they’re stuck on some old tech (which doesn’t have to be that bad per se) which cannot run on modern versions of the language (a ticking time bomb as the base language becomes EOLed).
On the other hand, instead of experimenting with languages, people have now started experimenting with approaches like NoSQL and microservices, which for the vast majority of projects and teams are just bad ideas. And of course JS frameworks are still coming out at light speed, instead of at least having a boring option.
I suspect it’s also a case of diminishing returns. The first time I played with Linux, for example, the dominant home OS was Windows 95. The benefits of Linux on the same hardware were huge. Windows licenses cost around 10% of the total cost of a new PC, and a free offering meant you could spend that 10% more on the hardware, which typically came with a 50%+ performance improvement (doubling the amount of RAM or increasing the CPU clock speed by 30-50% cost about as much as the Windows license for a typical desktop back then).
Every Windows, Linux, or *BSD (and, later, OS X) release came with exciting new features. These are now all mature things that I use daily, but the last set of core OS features that I really cared about is now about a decade old. There have been incremental improvements, and a few things I find mildly annoying to be without when I use an older kernel, but very few.
The same is true with programming languages. First-class objects, higher-order functions, and so on (in languages that I could run on cheap consumer hardware) were a big win. Ownership types are useful, but until a language comes with a first-class compartmentalisation model that it uses to interface with foreign code, so that the term ‘safe language’ actually means something, most of the mainstream languages just give me a few nice bits of syntax. With a few exceptions (Erlang, Pony), they’re all giving me some thin abstractions over a PDP-11 abstract machine.
20-30 years ago, the difference between mainstream tools and the new shiny (Delphi, Linux, PHP, whatever) was a huge increase in productivity, often a factor of two or more in terms of time to market or overall performance. Intel shipped the first Pentium at 60 or 66 MHz in 1993. In 1994, they shipped the 75-100 MHz ones. In 1995, they shipped the Pentium Pro at 150-200 MHz, then in 1997 the Pentium II at 300-450 MHz. Using last year’s technology put you at a huge disadvantage.
A lot of this has slowed down. My current personal laptop is now 10 years old. It’s now at the point where I should probably replace it and I’ll probably get a slightly faster CPU, maybe twice as much RAM, and probably the same amount of storage, in a similar form factor, probably with better battery life (a noticeable amount more memory bandwidth and GPU compute). The machine I owned 10 years previously was a desktop with a single core that was less than a quarter of the speed of the four in my laptop, with 1/32 as much RAM, a slow disk that was 4/1000th of the size of the fast SSD in the laptop and a GPU that was barely programmable.
And of course JS frameworks are still coming out at light speed, instead of at least having a boring option

This isn’t really accurate. React has been around for 10 years and there’s absolutely no sign of it going away in any serious capacity within the corporate space any time soon.
The article goes into that:

And it’s not like React has been “boring” for 10 years already. Linux and Python have both existed since 1991, but Python had a (much!) longer ramp-up time to become mainstream than Linux did. Simply stating the age of a technology isn’t enough to determine if it should be considered boring. That’s actually a bit disingenuous IMO.

Another thing: these techs don’t actually standardize. I’ve worked at 4 companies using Flask; none had comparable app structure or used the same plugins (yay microframeworks!). “Just use React,” but for a while, it was “to state manager or not state manager (Redux),” then it was “to hooks or not to hooks,” then it was “to CSS-in-JS or Something Else (and/or BEM, or now, Tailwind).”
A thing I recently learned which shocked me is that Perl came out in 1987, which means it’s only about 4 years older than Python. Yet Perl’s reputation includes the word “old” and as far as I’m aware Python’s does not.
It’s a function of usage and popularity. C and Pascal are roughly contemporaneous, but most people would consider Pascal obsolete while C is still current.
If simply stating the age of a technology isn’t a good metric, you probably shouldn’t have framed it that way in your initial reply.
Fair enough - with old/new I was trying to get at “widely used for a longer time” rather than merely “existing for a longer time”. When there’s a lot of buy-in from multiple teams, a project is more likely to be maintained over the long term, even if the original maintainers go away.
I’d add one more category. If “soil” is functionality, “bedrock” is safety. It’s about what can’t happen when you run a program:
A program can’t get a syntax error after it has been running for hours (see: Forth, and I think PHP?)
It can’t attempt to invoke a method on a string (see: all dynamically typed languages)
It can’t attempt to invoke a method on null (see: Java, Go, etc. etc.)
It can’t attempt to read and write the same memory at the same time from different threads (for some rare languages that disallow this, see: Rust, Haskell, and also Python and JS simply for being single threaded)
It can’t run out of memory as it runs (is there a language that pre-allocates all memory for this reason?)
It can’t send invalid SQL to the database, so long as the schema for that database didn’t change after the program was compiled
It can’t access the network if NetworkAccess wasn’t declared in the main function (see: capability safety)
It can’t crash due to a missing dependency (remember, this is at run time)
It can’t infinite loop (see: some non-Turing-complete config languages, and the C preprocessor language).
It can’t fail due to passing the wrong number of args to a function (JS)
It can’t fail due to passing the wrong type of argument (most dynamically-typed languages)
It can’t accidentally modify some state that wasn’t supposed to be modifiable (see: languages without a notion of const, like Go, or without good privacy modifiers, like C or again Go.)

Re. syntax errors: I actually can’t think of a language in use today that can literally hit a syntax error at runtime (without using eval). Every language pre-parses the source into at least an AST. Old-school BASIC could, though, because many implementations only tokenized code and did the rest of the parsing at runtime.

Bash and sh in general. This is a substantial part of the whole rationale for Oil Shell?

Python, IMO, though it’s a little debatable if you think NameError doesn’t count. I feel that it does.
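A minimal sketch of that NameError case (a made-up function, purely for illustration): the module parses cleanly, so there is no syntax error, but the misspelled name only blows up when that branch actually runs.

def greet(name):
    if name == "admin":
        return saluation + name  # typo for 'salutation': only fails when this line executes
    return "hello " + name

greet("world")  # fine
greet("admin")  # NameError at runtime, long after parsing succeeded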
All three of your bullet points are possible in C/C++, sadly, though the first two are largely fine if you avoid variadic functions and don’t ignore compiler warnings. Avoiding these three is probably a large part of the attraction of Rust (the last is possible in Rust if you incorrectly implement a Sync type, but at least that’s signposted by an unsafe block).
I ❤️ these bullet points.
Most of these are well handled at the language level in Ecstasy.
Esterel (https://en.wikipedia.org/wiki/Esterel) has no dynamic memory allocation.
I’m a former “hacker” turned corporate coder. Perhaps it is a natural transition from youthful idealism to aging conservatism, but I don’t see young hackers very much, not in this city at least. Everyone has an internship now, no one goes to the hackerspace after school. It would be easy to say that it is the money. Yes, to a large extent that is true. I’ve been remodeling my house for the last few years, and that is awfully expensive… But if I didn’t have the possibility to make a lot of money coding, I’m not sure I’d still be a hacker. Maybe I’d be a teacher instead, or a plumber, or I’d sell my house and go do permaculture somewhere. I don’t know. Maybe I’d even be happier without the option to have the money… But would I still be reading about Agda and pushing 100% free software? Not likely.
The main reason for my transition is that I am socially driven, and the sad fact is: projects with corporate sponsorship get more stars on GitHub, they get more shares, more people talk about them, they get more pull requests, they get more atmosphere, more social air to breathe. If you want to build free software, you’re very unlikely to be successful in promoting your project without corporate backing. Yes, there are a few non-corporate projects that are still being created (and it’s not like the old ones have gone away). Nix, to some degree Rust, Mastodon? Maybe you could claim Vue… But the fact is that ever since corporate open source took the world by storm, the power of a corporate logo dwarfs the technical aspects (the soil) and even the surface aspects. There were a lot of interesting cross-platform distribution and isolation tools being built before and around the time of Flatpak and Snap. But Flatpak and Snap won out, even when neither was out of the alpha stage of software development. The Red Hat and Canonical logos could build a community before the code had even been written. Even those projects that were successful without corporate branding were quickly bought up by the likes of Red Hat. They hired most of the people who had been working on GNOME in their free time…
The hackers got hired, and then they stopped hacking, and the ground became infertile for anyone without a logo or a series A.
My hope is that now that everyone is being laid off, we can go back to hacking… And now that we see the risk of an OpenAI LLM monopoly, we’ll learn that we should support those projects that DO NOT have corporate backing.
Maybe you’re looking in the wrong places?
Gemini and the Fediverse are taking off; folks are still hacking on Plan 9(!), and the PinePhone is almost (or entirely, depending on your requirements) fine as a daily driver these days.
Yes they’re dwarfed by corporate open source. But that’s part of their charm.
There’s been a huge explosion in new kernels in the last 10 years. I’ve written some internal memos about this trend because I strongly suspect that one of them will have a huge impact, most will die completely, and a few will eventually find some interesting niches. None of them are being written with Hyper-V / Azure support because it’s much easier to run Xen or KVM locally, test, and then deploy to AWS. By the time that we know which one will be the next disruptive technology, it will be well integrated with our competitors’ stacks.
The most interesting thing to look at with a programming language is its culture and that culture’s associated path dependence.
What is the reason to not def send_emails(list_of_recipients, bccs=[]):?

From my Clojure background I wonder what is sending the emails (and how you could possibly test that unless you are passing in something that will send the email), but perhaps there is something I’m missing from the Python perspective?
Hi! Thanks for reading! 🙂
The answer is mutable default arguments; if you’re coming from Clojure, I can see why this wouldn’t be an issue 😛
When Python defines this function, it makes the default argument a single instance of an empty list, not “a fresh empty list created when invoking the function.” So if the function mutates that parameter in some way, the mutation will persist across calls, which leads to buggy behavior.
An example might be (with a whole module):

from sendgrid import actually_send_email
from database_functions import get_alternates_for

def send_emails(list_of_recipients, bccs=[]):
    # Suppose this app has some settings where users can specify alternate
    # addresses where they always want a copy sent. It's contrived, but it
    # demonstrates the bug:
    alternate_recipients = get_alternates_for(list_of_recipients)
    bccs.extend(alternate_recipients)  # this mutates bccs

    actually_send_email(
        to=list_of_recipients,
        subject="Welcome to My app!",
        bccs=bccs,
        message="Thanks for joining!")

What happens is: every time you call it without specifying a second argument, the emails added from previous invocations will still be in the default bccs list.
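For instance, assuming (purely for illustration) that get_alternates_for returns one alternate address per recipient, calls that rely on the default keep growing the very same list:

send_emails(['a@example.com'])
# sent with bccs=['a-alt@example.com']
send_emails(['b@example.com'])
# sent with bccs=['a-alt@example.com', 'b-alt@example.com']  <- leftover from the first call
print(send_emails.__defaults__)
# (['a-alt@example.com', 'b-alt@example.com'],)  the default list lives on the function object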
The way to actually do a “default to empty list” in Python is:

def send_emails(list_of_recipients, bccs=None):
    if bccs is None:
        bccs = []
    # rest of the function
This is horrifying. TIL, but I almost wish I hadn’t.
It makes sense outside of the pass-by-reference/pass-by-value dichotomy but it’s still a massive footgun.
As for how you test it, it’s probably with mocking. Python has introspection/monkey-patching abilities, so if you relied on another library to handle actually sending the email (in the example above I pretended sendgrid had an SDK with a function called actually_send_email), in Python you would usually do something like:

# a test module
from unittest.mock import patch

from my_module import send_emails

def test_send_emails():
    # Patch the names where my_module looks them up (it imported them with
    # `from ... import ...`), so neither the database nor sendgrid is touched.
    with patch('my_module.get_alternates_for', return_value=[]), \
         patch('my_module.actually_send_email') as mocked_email_send:
        to = ['pablo@example.com']
        bccs = ['pablo_alternate@example.com']
        send_emails(to, bccs)
        mocked_email_send.assert_called_once_with(
            to=to,
            subject="Welcome to My app!",
            bccs=bccs,
            message="Thanks for joining!")

This mucks with the module at runtime so that the actually_send_email that my_module imported from sendgrid records its calls instead of doing what the function did originally.

Docs here (it’s not about sending emails, but the mutable default argument): https://docs.python-guide.org/writing/gotchas/#mutable-default-arguments
Python evaluates default argument values once, when the def statement runs (at module import time for a top-level function), rather than every time the function is called.
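A tiny standalone sketch of that definition-time evaluation (unrelated to the email example):

import time

def stamp(when=time.time()):  # the default is evaluated once, when `def` runs
    return when

first = stamp()
time.sleep(1)
second = stamp()
assert first == second  # same value: the default was not re-evaluated per call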
Major essay.
Describes a three-dimensional framework for harnessing programming language theory and practice, one that makes it possible, or at least conceivable, to build a deep opinion, have deeper conversations that may improve the state of the art, ship better products, and make more (and hopefully better) money, et al…
The OP uses that framework to reveal the whys and the consequences of “boring technology” culture.
NB: I dare to promote stellar writing, from diversity fame, that is insightful.
i liked this in general - but i think that “boring” is a moving target. when i opt for boring tech, i mostly mean well-understood tech. the author clearly likes golang - i do too - but i consider it a Boring choice these days.
also, staffing is a real concern - it’s not practical for a medium/large company to hire a ton of “hackers”. there simply aren’t enough out there - but there is a surplus of react devs who are totally capable of working on your stack.
Agree with boring being a moving target. I’ll always remember being told to choose “boring” Visual Basic over flash-in-the-pan Python. After all, even if we can find a Python programmer today, would anyone even still know what it is in 2010?
To be fair, Ruby was more of a flash in the pan than Python. Nobody would’ve known in advance that Python would continue to rise and rise in popularity like it has, mostly due to lots of scientific/HPC stuff coming out and now the ML stuff being available. And probably nobody would’ve expected Perl’s demise, either.
Ruby is still rapidly gaining popularity and seeing a lot of investment.
And I’m not sure Perl can ever die. But it’s less popular with new people than it used to be, I agree.
I find it entertaining that C# isn’t on the list of boring languages listed here.
It’s only boring in a proprietary/Windows context. It hasn’t really taken off in FOSS/Linux environments AFAIK. But I could be wrong due to living in this tech site’s bubble. We don’t get a lot of .NET submissions here either.
I mean, C#/.NET has been my breadwinning language for around a decade at this point, and “only boring in a Windows context” ignores Unity, and the growing ASP.NET Core deployments in various flavors of Docker and/or Kubernetes. On the business side of things, I’ve worked at companies up and down the size spectrum that have a lot of code based in C#. It is, after all, used to run one of the most popular programming websites (Stack Overflow).
It’s not used much inside the Valley, but at this point, that’s mostly bias and inertia.
I had no idea Unity was a C# thing! I’m not in the Valley, but still very much in the FOSS/UNIX world and I’ve been avoiding the “traditional” tech companies (who would typically all use Java, C# and Windows) like the plague.
Godot also supports C# as a scripting language, fwiw.
Use of C# and use of Java don’t overlap much, if at all. Use of C# does overlap more than you’d expect with usage of RabbitMQ, though.
lmao this is a great callout; I hear amazing things about it (esp. LINQ) and I think for years it was well ahead of Java (pre-Java 8); I think the omission is because I’ve spent too much time in high-growth VC companies that avoided .NET. 😛
I think it’s absolutely in the category though, and like Java, probably one of the ones I’d prefer.
A few years ago I tried using F# for an Advent of Code problem; I’m still curious to try it and may get back to it.
I mean, the larger programming world owes C# the async/await syntax becoming popular, good, bad, or ugly.