Refreshing to read a post from a normal guy with a life. Kudos for putting it out there.
I appreciate this article. Sure, I disagree with a lot of it — it’s got 20 rules of thumb in it, of course I’m not going to agree with them all. But the longer I’m in this line of work, the more I really appreciate the value of having heuristics and good default choices.
For example: unless there’s a strong reason to do otherwise, I’m generally going to use Postgres where I need a networked database and SQLite if I need a “file format”. I know when I’m going to write Python, Go, or Ansible, and when I actually have to go hunting for a different language. I have my tools of choice for CI, code hosting, etc. And so on.
“Right tool for the job” is all well and good, but only when the costs of making an active choice are lower than the marginal benefits of the tool you choose.
One observation after switching from 15 years of project development, usually in the lead development role, to working at a company with 300 developers on a 20-year-old product: while I used to feel more like a 10x developer, I now feel much more like a 1x developer.
I have come to see the 10x versus 1x developer debate less as a matter of talent and dedication and more as a matter of who happens to have started the code base or a new approach versus who is enlisted to maintain it.
Working at Amazon in particular seems like the kind of job where 90% of your time is spent deciphering and dealing with the idiosyncrasies the previous developers left for you, meaning your visible output will be only 10% of that of the guy who started the code from scratch.
while I used to feel more like a 10x developer, I now feel much more like a 1x developer.
I really hate this characterisation, for several reasons. First, it conflates multiplicative effect (what’s your multiplier on the team) with additive effect (how much you contribute individually, in some arbitrary unit). Second, the scale is entirely wrong. Third, it assumes developers don’t change and are entirely fungible.
Let’s assume that x is some arbitrary unit of developer productivity, such that a given project needs px to succeed. In theory, you can achieve that with either p 1x developers or p/10 10x developers. That doesn’t tell the whole story though.
I’ve worked with a few (thankfully a very few) developers who are -1x developers, and more that are -0.1x developers in terms of additive effect. The project would have gone faster if they’d just stepped away from the keyboard and never come back. In comparison to them, a 1x developer is great! They may make progress slowly, but they do make forward progress. The -0.1x developers are the ones where it takes more code-review time to get their work into a reasonable state than it would take someone vaguely competent to just do it.
These people aren’t always a lost cause, they may just be inexperienced. I’ve worked a lot with inexperienced contributors to open source projects who started out needing 5-10 times as much of my time in code review and feedback as it would have taken me to just write the code myself, but who ended up learning, improving, and then contributing a huge amount more overall than I could have written in the time I spent helping them. This is also often true for an experienced developer joining a new large project: it takes a while to understand a new codebase.
Even that, however, ignores the biggest impact for most developers: how much they alter the productivity of the rest of the team. The productivity of the team is the sum of the additive impacts of each developer, multiplied by the multiplicative impact of each developer. On moderately large teams, the multiplicative factor is far more important than the additive one. A developer who makes everyone on their team 10-20% more productive is far more valuable than a prima donna who writes ten times as much working code as everyone else but demotivates everyone so much that they each contribute only 80% of what they otherwise would.
There are a lot of ways that developers can have a high multiplicative effect. Some are obvious, such as mentoring, doing good code reviews, and so on. Some relate to maintaining infrastructure (a developer who is willing and able to replace a crufty old build system is worth their weight in gold), properly prioritising work, and so on.
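To put rough numbers on that model, here is a minimal sketch in Go (every figure in it is invented purely for illustration) of team output as the sum of additive contributions, scaled by the product of multiplicative factors:

```go
package main

import "fmt"

// A developer has an additive output (in arbitrary "x" units) and a
// multiplicative effect on the whole team (1.0 = neutral).
type dev struct {
	additive   float64
	multiplier float64
}

// teamOutput implements the model described above: the sum of every
// developer's additive impact, scaled by the product of every
// developer's multiplicative impact.
func teamOutput(team []dev) float64 {
	sum, mult := 0.0, 1.0
	for _, d := range team {
		sum += d.additive
		mult *= d.multiplier
	}
	return sum * mult
}

func main() {
	// Invented numbers: a prima donna with 10x output who demotivates
	// everyone by 20%, versus a 1x mentor who lifts everyone by 15%.
	primaDonna := dev{additive: 10, multiplier: 0.8}
	mentor := dev{additive: 1, multiplier: 1.15}

	for _, n := range []int{5, 20, 50} {
		base := make([]dev, n)
		for i := range base {
			base[i] = dev{additive: 1, multiplier: 1}
		}
		fmt.Printf("team of %2d: with prima donna %6.2fx, with mentor %6.2fx\n",
			n, teamOutput(append(base, primaDonna)), teamOutput(append(base, mentor)))
	}
}
```

With these invented numbers the prima donna still comes out ahead on a five-person team, but the mentor overtakes them at around twenty people, which is exactly the “moderately large teams” point.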
I also think it depends on how much green-field development you do and the total size of the codebase. I can be a 10x developer on a completely new feature that has no dependencies or interaction with any other part of our product, but touching any of our core features sometimes makes me feel like a 0.5x developer, as I need to be careful about what I change, check that all interactions with other parts of the code work as intended, etc.
I imagine a product with 1M+ LOC will have mostly, if not solely, 1x developers, with maybe the 10+ year lead architect (or similar role) slightly above that.
Hot take: when you’re an early employee at a startup you might easily be a 10x developer, even if you weren’t one before and won’t be one anymore after you leave.
Huge problems can be solved easily with a little thinking and duct tape. That doesn’t mean you’re producing shit code, but ‘satisfactory’ code. Your horizon ends in a month, not in a roadmap 3 years down the road.
Source: been there, done that. I don’t think I’m a better developer than most people, but when I was working with a small team in a small company, quickly switching between problems and languages to get stuff done and move fast made me feel super productive, and I’d get easily annoyed if I didn’t have an OK-to-good solution after a week.
Whereas if you work anywhere where a tiny feature in a code base that’s even just 2-3 years old can take 2 weeks, you suddenly realize it was all a lie :)
“Go/Rust” is a great shortcut for “I don’t know anything about programming languages”. ;)
I think your average “dark matter” type of programmer isn’t going to have much exposure to these. Of course, I think Go is likely to become one of those dark matter developer languages like Java or VB became.
I thought that a) Go is already a dark matter language, b) people who think Rust is like Go already know Go and fell victim to the Blub phenomenon. I may be wrong on both counts, of course.
Depends on the timeframe. For your typical Bay Area shop, your typical “SRE DevOps Peon” will likely use Go. Outside of the HN bubble, most companies doing line-of-business type stuff that aren’t hopelessly outdated are usually doing it still in Java/C#/PHP, maybe JS.
It could be. I tried to do a project in Go. I couldn’t easily express ideas due to how basic its primitives are. That Rob Pike quote became real for me. I just tossed that entire prototype, kept using Python, and started looking at more expressive languages.
On the plus side, I can see how its simplicity makes onboarding easier, probably makes random codebases easier to maintain than C++ or Java/.Net ones, and dramatically improves the ability to build tooling. It just didn’t fit me. I could see myself using a more expressive language that outputs something like Go in human-readable form to keep those benefits, though. I just gotta keep my personal productivity up, since it’s scarce to begin with (outside work).
I don’t think this is fair.
I have to use both Rust and Go at work - compared to other mainstream languages, these happen to be two of the three that give you automatic memory management and memory safety without outsized and unpredictable latency penalties for typical software.
I understand where you’re coming from, because the mechanisms they use for this are quite different - Rust accomplishes it through stack allocation by default, the borrow checker, and RAII, while Go accomplishes it through a combination of automatic stack allocation via escape analysis and the core team’s focus on the impact of garbage collection latency - but that doesn’t mean they aren’t reasonable to group together among other mainstream programming languages in 2020.
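As a concrete illustration of the Go half of that, here is a minimal sketch (the function names are just for illustration): escape analysis decides whether a value can live on the stack or has to be handed to the garbage collector.

```go
package main

import "fmt"

type point struct{ x, y int }

// onStack returns a copy of p, so p cannot outlive the call and the
// compiler can keep it on the stack: the garbage collector never sees it.
func onStack() point {
	p := point{1, 2}
	return p
}

// onHeap returns a pointer to p, so p must outlive the call; escape
// analysis moves it to the heap, where the GC manages it.
func onHeap() *point {
	p := point{3, 4}
	return &p
}

func main() {
	fmt.Println(onStack(), onHeap())
}
```

Building this with go build -gcflags=-m makes the compiler print its escape decisions; for the pointer-returning function it reports something like “moved to heap: p”.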
I feel like this is the point of the article, though! OP hasn’t had time to actually put in research about it and won’t have time, unless it becomes a direct part of their job.
I wouldn’t go that far; while this person is self-admittedly a very average programmer, and while there are some reasons to view Rust and Go as languages that are best suited for two different categories of work, they do share some important commonalities that the author is rightly picking up on. Both of them had the design goals of being immediately, practically useful to mainstream software engineers (rather than being research languages exploring esoteric PL theory features). They were consolidations of a few decades of lessons learned from the flaws of other mainstream programming languages, rather than trying to do anything radically new. They both spent a lot of effort on building practical tooling around the compiler. And they were both developed at roughly the same time. There are quite a few areas of programming where either Go or Rust would be a reasonable choice.
Disagree; I see quite a few problems and requirements where I’d use one of the two over, say, C++, and for the same reasons I wouldn’t use Python or PHP.
You’re just as much overgeneralizing as the author did.
In one of the rules on C/C++, Rust could be an alternative there. I think comparing Rust to C/C++ would be better than comparing it to Go.
Ok wait but the rest of the sentence qualifies:
Maybe a more precise spelling would have been “Go or Rust” instead of “Go/Rust”, but I think the meaning was clear given the whole sentence.
Fantastic piece of writing.
If you got caught up in or triggered by the technical details (“go/rust”), you are probably missing the point.
I really liked the quick and simple definitions for various testing approaches and kinds of deadlines.
AbstractSingletonProxyFactoryBean
Is that actually used outside of Spring internals? I’m not a Java developer but to my understanding Spring is a very old and very complex framework, so you’d expect to see this kind of thing. If that’s so, it’s no worse than a language having Zygohistomorphic prepromorphisms.
Is that actually used outside of Spring internals?
To be honest, I have no idea. I’m also not a Java developer. Though I have worked with some enterprise C++ libraries that have a similar pattern: classes with long names that try to describe some abstract design pattern.
I’m not necessarily saying that an AbstractSingletonProxyFactoryBean is a bad design. If you’re familiar with the design patterns and framework, the name “AbstractSingletonProxyFactoryBean” alone probably provides a lot of context clues to a developer, in the same way that zygohistomorphic prepromorphisms might.
The name itself is pretty funny though – it sounds like a bunch of meaningless buzzwords tacked together. I also got a good laugh out of zygohistomorphic prepromorphisms :).
No, this class isn’t used outside of Spring’s internals. But, yes, this pattern is sadly common in Java codebases.
IMHO AOP is just as bad in Guice.
Haskell or Erlang maybe I would use if I were doing something that required a very elegant or mathematical functional approach without a lot of business logic.
…Is business logic not usually comprised of maths stuff?
I’d hazard a guess that tax rules are different from most scientific computing, at least in how they’re approached.