I’ve always hated the term thought leader. It feels unironically Orwellian and the fact that people can use it in a positive manner with a straight face bothers me.
I’ve claimed for myself the title of “Thought Lord” at work, which conjures images of a medieval scholar-warrior, wielding a sword of insight and lopping off the heads of ignorance.
You mean you don’t travel in a police box saving people from dumb ideas?
Except the police box is shaped like a horse.
(This is tongue-in-cheek, btw, in case that isn’t clear.)
As the leading architect of a project, I asked some developers what they thought of my “leading” there. One suggestion was that I could have been more confident.
I believe it is a human thing to long for confident leaders. Developers are no exception. The “strong opinions, weakly held” meme is a symptom. It isn’t generally good or bad.
With the developers we concluded that I was roughly as confident as the circumstances permitted.
Oh yeah, definitely. I’ll add to that that people also want leaders with prestige (or high status if you will).
There is one negative interpretation and a positive one that I oscillate between:
People are bad with uncertainty, so it’s not received well if leadership says “we will do X, and it has a 75% chance of success”. Or worse: “we want X, but it’s not a strongly held opinion, feel free to disagree”.
Part of leadership’s job is to create clarity and it’s necessary to just say “we’re sure about this decision, let’s go”. That doesn’t necessarily imply skewing the facts. But it helps tremendously to not have decision-makers that seem confused and fluffy and all over the place. Having insecure managers is terrible and not helpful at all.
If you don’t feed me I’LL DIE
But seriously, good article. My usual criterion is “if you can’t rip a position limb-from-limb, you have no business advocating it.” A little tangential to foxes vs hedgehogs, but also a useful heuristic.
An interview question that I’ve used a couple times is “what do you dislike about your favorite language/framework/library?”.
I ask this too, along with “what’s something we’re probably doing wrong, what would it be?”
I get to find out what they’re going to be advocating for and how they think about it. “Hedgehogs” and “Foxes” is really good vocabulary for a thing I’ve found with these - the best answers are questions about our stack, choices we’ve made, etc. - contingent advice I guess.
Oh, that could be a good one too.
Something someone pointed out last time I brought this up, that I think also applies to yours, is that someone might be reluctant to criticize a thing their interviewee likes. For example, if someone’s interviewing at a Rust shop, they might be unwilling to say “your company might not actually need Rust’s memory-safety guarantees if a GCed language is good enough” or “I don’t like how hard Rust makes it to do self-referential/recursive data structures”. I don’t know how to deal with that.
This is a little bit too harsh on hedgehogs. The thing is, when “big ideas” are right, the impact is enormous – and the people arguing for those big ideas often end up forgotten because the underlying idea becomes part of ‘common sense’. When big ideas are even mostly right, it’s enough to shift the landscape so that the instances where they don’t apply become dedicated niches, identified and explored much more quickly because all you need to do to find the pathological cases is to show when the norm falls apart (which requires establishing the norm in the first place). So, lots of people want to be the ones who are publicly associated with big ideas in case the ones they support are right or mostly right.
In software, experienced devs tend to become foxes – because part of experience is working on a variety of very different codebases and toolchains. Experienced devs who become hedgehogs drift away from tech and toward PR work. But ‘big ideas’ are also situated & contextual, and because software is socially constructed, the popularity of a big idea can by itself change the environment in such a way that the idea becomes ‘true’ (for instance, TDD is a great fit with an agile model, and an agile model fits the economic model of a small company with more engineers than experienced managers working on new products in a field that doesn’t have established norms, and for a while the most profitable companies in software were like that so agile & TDD gained a following – and now it’s baked into policy a lot of places & you can’t circumvent agile & TDD without a bureaucratic nightmare, even when working with big globs of inherited or purchased legacy code).
The other thing is that thought leaders aren’t, typically speaking, hedgehogs in Tetlock’s model. A hedgehog is somebody like Karl Marx, who had a deep and broad knowledge of economics, philosophy, and history and brought it all to bear on a specific model of economics wherein capitalism is born out of a particular set of tensions that feudalism inevitably produced, expecting to project out possible future economic systems from capitalism’s tensions, or Charles Darwin, whose knowledge of natural history (both first-hand and theoretical) was beyond his peers due to unique experiences, and whose experiences led him to create a grand unifying theory of evolution based on the twin forces of mutation and selection. In other words, hedgehogs are experts in at least one domain. Most thought leaders in software are not really experts in software, and the kind of guys the OP describes encountering are a different type entirely – folks who have attached themselves to a grand narrative because they heard some emotionally compelling defense of it, and then have never let go. Even hedgehogs are willing to recognize the borders around where their Big Idea applies. If your answer to everything is TDD, you’re not just single-minded but actually lack experience!
I’d love to work with this guy
I’ve been lucky enough to, and I highly recommend it if you get the chance!
My simple approach is that I don’t trust anyone who claims to have a direct connection to the one true whatever. Life is more complicated than that.
In case anyone isn’t familiar: https://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox
Also, it sure seems like one of the thought leader’s comments was translated from “MongoDB is web scale.”
There are two possible reasons people may insist on non-contingent advice: they may be parroting platitudes; or, more interestingly, they may be assuming it as a precondition for providing contingent advice.
The latter can also be interpreted as (mostly healthy) insecurity and frustration. An expert in, say, unit testing, may not know how to do their best job in environments where it is not widely adopted.
It is also a sign that their expertise might not have been properly considered at the moment they joined the project, which is absolutely not their fault.* In that scenario, incentivizing them to provide contingent advice under different circumstances will be pretty hard.
So I would take this advice with a grain of salt. Although it might look great in the short term, and eventually lead to a working product, it will only unnecessarily increase stress on (well-intentioned) hedgehogs.
* Also, it is usually recruitment stakeholders who look for professionals in a very trendy field of expertise and have them join projects that cannot incorporate their knowledge anytime soon.
Any broad idea that can be explained in just one or two paragraphs is almost certainly wrong at least some of the time, and usually a lot of the time.
The best “hedgehog” example I know of is the proposal to solve child abuse by … creating a free market for children, where people are free to buy children and parents are free to sell them off, because “free market good”. And since “government bad” it also means parents should have the right to let their children die from e.g. starvation if they choose to. This was actually proposed by an alleged “thought leader”, although it seems to me not all that much thought went into this.
Not that much “thought” went into it, but maybe a huge chunk of “leader/boss” did.
This is brilliant. A source would make this story an absolute delicacy, worthy of repeated sharing.
Well, I didn’t want to make an already kind-of off-topic and political post even more off-topic and political, so I intentionally left that out 😅 But search for Murray Rothbard if you want to know more.
Interesting how we associate leader with somehow increasing value. Thought leaders might confound motion with progress…
This is a great article about how to identify false prophets.
There is, however, a rather pervasive religion that has been unreasonably effective at predicting the future: mathematics, and assuming balance / eventual equilibrium / equality when equivalent.
That last bit about the reproducible builds is exactly the kind of admission that leads down the path to finding the “one thing” that actually works, <I said something wrong; personally I believe “patience” may fit>.
Whatever actually works is locally consistent and globally not so.
I’m not sure if I agree.
The scientific method, not mathematics per se, is designed to (iteratively) find causal mechanisms using a particular kind of experimental design (the controlled experiment).
That said, there are “non-scientific” mathematical methods that don’t require controlled experiments and still have various degrees of usefulness (e.g. statistical correlation, and sometimes predictive power).
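To make the distinction concrete, here is a minimal sketch (entirely my own example, not from the thread): an ordinary least-squares fit learned from purely observational data has real predictive power inside the observed regime, while establishing nothing about the causal mechanism behind it.

```python
import random

# Purely observational data: no controlled experiment, just paired
# measurements (x, y) where y happens to track x with some noise.
random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]

# Ordinary least squares by hand: fit y ≈ a*x + b from correlation alone.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
var = sum((x - mx) ** 2 for x in xs) / n
a = cov / var
b = my - a * mx

# The fit predicts well within the observed regime, even though it says
# nothing about *why* y tracks x: correlation, not a causal mechanism.
pred = a * 5.0 + b  # predicted y at x = 5.0
```

Of course, the same fit would extrapolate badly the moment the unobserved mechanism changes, which is exactly what a controlled experiment is designed to probe.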
What do you mean here?
My experience: I was trained as an electrical engineer, which I summarize this way. The field of engineering, built on a deep understanding of physics, allows ‘working’ engineers to do their jobs without understanding the full complexity of the universe. Engineering is founded on identifying operating conditions that allow for good approximations. This allows for things like approximating systems as linear and/or assuming equilibrium.
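A classic instance of this "good approximations under stated operating conditions" idea is the small-angle approximation sin(θ) ≈ θ, which turns the nonlinear pendulum equation into a linear one with a closed-form solution. A quick sketch (my example, not the commenter's):

```python
import math

def relative_error(theta_rad: float) -> float:
    """Relative error of the linearization sin(theta) ≈ theta."""
    return abs(math.sin(theta_rad) - theta_rad) / math.sin(theta_rad)

# Inside the stated operating regime (small angles), the linear model
# is an excellent approximation...
small = relative_error(math.radians(5))   # ~0.1% error at 5 degrees

# ...outside it, the approximation breaks down and the engineer must
# reach for the full nonlinear model.
large = relative_error(math.radians(60))  # ~20% error at 60 degrees
```

The engineering discipline is largely in knowing, and documenting, where the boundary between those two regimes lies.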
As a corollary, traditional engineers are skeptical of the term “software engineering”.
I found this series of posts by @hwayne a pretty convincing argument in the other direction. It changed my view, in fact; I used to be a bit uncomfortable with the term “software engineering” for the same reasons.
An example of what I mean: in the context of Newtonian physics, we assume two equivalent things (same shape and weight) to be literally equal in our model.
Another example would be in economics where we assume eventual equilibrium (justified by arbitrage?).
The scientific method is a way of testing how well mathematical models hold up against reality; often the assumptions made rely on some “naturality” that is justified by these eq* concepts (equilibrium, equality, equivalence).
Excellent article. I have always considered “right tool for the job” one of the most important principles. To my mind, building software is a far more nuanced conversation, with shades of gray, than a black-and-white one.
In my experience, “right tool for the job” is a double-edged sword. If pushed too far, you end up with dozens of unique solutions to similar problems in the same company, and you may lose the benefits of applying similar solutions/approaches to similar problems. I prefer to see things as “global optimization” vs. “local optimization”. As you say, this is a continuum anyway. Curious if you’ve been bitten by “right tool for the job” too?
I think we also need to understand “right tool for the job” in terms of what the people who designed the tool have used and know, because their insight into how the tool actually works can differ from how we may think a technology/framework/tool works.
I guess the answer is always “well, it depends” and being pragmatic and open minded about things.
Quite relatable