The examples from my own past that come to mind all did something else on their way to the trash bin: they taught me how to do better. If I was doing my job as well as I could, I ended up learning how to do someone else’s job better too (learning from their failures and decisions as well as my own).
Maybe your code won’t stick around (see: https://www.groundedsage.dev/posts/the-clojure-mindshare/ for some amazing visuals) but so what? If the true work product (yourself, your team, your skills, your portfolio, your war stories) improved, you still came out on top. Next time you’ll be better poised to read the market, to build the right thing, to pass internal reviews, to create things that stand the test of time.
I’m confused by the idea of writing code that is at risk of being subject to technical obsolescence before it reaches production. What kind of shipping schedule are you on? Are you writing a SaaS application for a Google product?
It can be pretty common in the DevOps space. A cloud vendor may not offer feature X, so you have to implement an entire pipeline of services to get that functionality. You build test environments, test extensively, and start projects to move production workloads over. Then the vendor announces X at a conference, and it also does Y, which you have no hope of matching anytime soon. At that point it’s less work to integrate with the vendor’s offering than to roll your current project out to production.
Exactly. Back in the 90s, we used to complain about Microsoft doing exactly this: breaking API compatibility, then quickly re-implementing competitors’ features on top of the new APIs, which were known only internally. By the time a new version of Windows was released, everyone else was at least two years behind Microsoft’s offerings.
Even if today’s SV giants seem less inclined to move into competitors’ niches, they break their APIs frequently and freely, imposing enormous costs on everyone who integrates with them.
And at the other extreme, you can be convicted of murder, triggering a series of events that sends your work to waste. See Hans Reiser and Reiser4.
This is a weird, unnecessary comment. “Can be” like it’s something that just happened to him, instead of something he did and faced consequences for. And why bring it up at all, when it’s totally not relevant?
I feel fortunate that relatively little of the code I’ve written over the last 30+ years has been “wasted” by the author’s definition, but the observation about power-law distribution definitely rings true. The code I wrote in the ~6 years I was at a FAANG had more effect on the world than the entire rest of my career, and it’s not even close: my FAANG code has been used by literally billions of people and the combined audiences of all my other projects would be a couple million tops.
What if your metric isn’t how many people used your code but what they did with it? One person charting his family tree so his children will know their own history… that one user could easily outweigh a thousand users who have to click away from an ad.
One person who saved days of his life so he could spend that time with aging grandparents versus a thousand who kept scrolling videos.
You could compose examples all day of ways code touches people’s lives: profitable and not, meaningful and not, positive and not. Who cares if your work product is wasted, if it means more positivity in the world!
Both things matter. It’s kind of multiplicative. One person charting his family tree is good! But ten thousand people charting their family trees is even better.