I have heard this succinctly phrased as “use new tech for old products and old tech for new products.” You want to at least know either what you’re doing, or how you’re doing it. Figuring out both simultaneously is bad news.
I agree. Unless you work in a playground/sandbox or new tech is your product.
Anecdotally, as an indie dev, I am looking at things like Rust to build tools. But at work, I’m using PHP and MySQL (OK, I admit: MariaDB) for web dev. Python for embedded hardware. My recovery strategy is focused on being able to install an OS and copy some files back. It’s hard to KISS with a moving target.
I’m somewhere in the middle on this article but I hear you.
But I think you’re onto something with Rust at home, PHP at work. For me it’s more C# at work, Go at home. I want to stay ahead - I just don’t want my client to have to pay for missteps.
C++03 at work (but used more like C with classes) and C++11/14/17 at home.
C++ for fun, that’s a bold move.
C++11 and C++14 are fun again
(Aaand actually a tiny bit of Common Lisp at work I managed to sneak in.)
When I was a student, I made a comment about how they taught what I felt was old-hat technology instead of the cutting edge new toys.
The professor replied “no manager ever got fired for picking Java and MySQL for a project”.
That’s a pretty prevalent bon mot, but it has issues. It assumes people get fired over technology choices.
No one ever got fired for choosing Elasticsearch or MongoDB. That might be a bitter pill to swallow for most technologists. They get fired because the project failed to meet expectations. Careless choice of technology can be a factor here, and is rather often also at the core of the issue. Obviously, if a project fails horribly, people will search for the easiest angle of attack to relieve someone of their post, and careless tech choices can be one of them. And pretty often, a risky choice and the inability to master it is also one of the actual issues.
If the project succeeds, no one will question all that.
Yes, of course.
But when a project fails, some people will point to the unproven technology stack that was selected, whether or not it was the cause of the failure.
At this point, you are on the defensive, trying to validate your choices and shift blame. If you had picked safe, conservative bets over new frontiers, this choice would have been easier to justify.
Python for Embedded? That qualifies as hipster!
Haha! I know! I use a Windows Phone and App.net is my only social network! :-]
If we’re talking specifically about startups, and specifically about some kind of a generic web product (vs say you building and selling a proprietary database), then the technology of choice has rather humble impact on the outcome of the business.
Maybe the pros and cons are very subtle and hard to measure, but you hardly ever hear about startups that failed because their web framework sucked, or their language couldn’t deliver what their customers wanted, or they couldn’t scale. Gosh, if only those engineers had stuck to Python instead of going with Haskell, said no one ever. Mostly, they die because the business was bad, the customer wasn’t there, the product wasn’t really making a difference or wasn’t marketed right etc. It wasn’t because they used OCaml instead of Java. Yeah, maybe they lost a couple of iterations that would have made all the difference, but who the heck really knows, that’s pure conjecture.
At that scale, given a decent team, a lot of poor original choices can be fixed up in a reasonable amount of time. We’ve changed tooling a couple of times over the years based on the newly discovered needs of the product, it was fine. We moved much faster with every evolutionary step, and all learned something good in the process.
Gosh, if only those engineers had stuck to Python instead of going with Haskell, said no one ever
“Gosh, if only those engineers had stuck with Lisp instead of going to Python” was a common refrain in lisp communities, about reddit, for quite a lot longer than was justified by the continued existence of the company.
I agree with your overall point, just pointing out that this sort of judgment is made quite frequently. It’s just made by technologists, not hire/fire managers. Unsurprising, really, as “successful” is defined differently depending on your background.
Wasn’t Twitter kind of the poster child for “Rails doesn’t scale”? I guess they resolved that, but it also seemed to involve a Scala injection. Not sure what the big lesson is though.
If Twitter had not used Rails early on and instead spent the time inventing whatever they are running on now, they would have never shipped and never grown to the scale that caused them to have to ditch Rails.
Indeed. When I read a “it was bad; then it got better” story, I never know if the lesson is “don’t make this mistake” or “don’t worry, it’ll work out”.
I get where you’re going with this, but this is not a counterfactual; it is post hoc, ergo propter hoc.
Many, many, many companies develop complex products quickly on top of non-exotic technologies and do just fine.
I suspect that every discipline in an organization vastly overestimates their overall importance and contribution to the success of the business. It’s simple human egocentricity, that’s how we’re wired to work.
And yes, I certainly hope that any developer working for a startup is measuring their success by the success of the business. If that’s not your primary goal at that role, then I’m not sure what exactly you’re up to there.
Well, you could have other interests. :) I sell myself as a technologist, so I promise only to make decisions that, in my view, are technologically sound. What will or won’t make the company money is not something I claim to be my area of expertise.
Besides not being my area, fundamentally I also don’t really care about business either. I see business as purely a means to an end: we have businesses because they arguably organize economic activity better than central planning does, or at least have done so in the past. But the goal is still to improve technology and/or its application in society. I don’t have any real attachment to the businesses per se; most businesses die, and that’s fine, because they’re expendable and easily replaceable by other businesses. What matters are their durable achievements. (That’s all especially applicable to startups: the only reason startups are interesting is that it is widely believed that startups are a way of catalyzing technological innovation.)
These examples seem to argue for limiting the number of technologies, rather than choosing the boring ones. They apply just as well to adding Java to your Lisp codebase as the other way around.
Some of the arguments only suggest limiting the number of technologies. But the author also argues that “for shiny new technology, the magnitude of unknown unknowns is significantly larger” than for boring old technology.
Yes, he asserts that, but his examples don’t really speak to that thesis.
Also, at the risk of being overly literal, in my Lisp vs Java example, Java is the “shiny new technology,” while Lisp is arguably better-understood, with fewer unknown unknowns. The article is written as though “boring” and “old” were interchangeable, but they’re not.
In my opinion, using new pieces of technology in your stack can be a good idea, in small doses.
Trying out a new library or framework in a project is great, and if tested and trialled enough, it can greatly help both you (the user) and the creator / developer.
It gets silly, however, when someone decides to experiment with a new piece of technology in a major part of their stack (e.g. databases or alpha-grade programming languages). Of course, every piece of software needs testers, but it’s definitely not a good idea to test these kinds of things in products that need to be reliable for users.