Any technical interview is going to be gameable and exclusionary to some underrepresented group/background. It’s very possible that our current dysfunctional interviewing practices are still, somehow, close to optimal.
Something can’t both be an “uncomfortable truth” and the status quo belief. I have a hard time reading this as anything other than an attempt to make a deferral of any responsibility to improve the situation sound like some sort of hard-won nugget of wisdom.
This isn’t a status quo belief; this is an area of active political debate. The correct way for institutions to react to various demographic groups being underrepresented (in all aspects of public life, not just programming job interviews) is one of the most contentious political issues in the anglophone world at the moment. People disagree vehemently about what changes would constitute “improving the situation”, in a way that affects (and should affect) which politicians they vote for or donate money to.
I think this is selection bias, because it seems to me like the debate is occurring between a minority of people who actually care, and the remainder, if pressed, would basically shrug and reiterate exactly the “uncomfortable truth” stated here.
Is the status-quo belief that it’s impossible to create a technical interview that is not “going to be gameable and exclusionary to some underrepresented group/background”?
I suspect quite a few people running technical interviews aren’t thinking about this at all, or are making some effort to reduce the most obvious (at minimum, lawsuit-worthy?) biases, and then thinking that they’ve eliminated bias.
I’m not sure I agree with the second part. I don’t see how you can say that current interview processes are both “dysfunctional” and “very possibly … still, somehow, close to optimal”.
We’re never going to get the broader SE culture to care about things like performance, compatibility, accessibility, security, or privacy, at least not without legal regulations that are enforced.
100% IMO. I work at a place with an SLA where we refund customers if we violate our SLA. It makes us care a lot about the things we promise in our SLA.
I think I agreed with every single one of these except the pro-mobbing take.
Sophisticated DSLs with special syntax are probably a dead-end. Ruby and Scala both leaned hard into this and neither got it to catch on.
As someone who’s worked with ruby more than 10 years, couldn’t agree more here. rspec is the conspicuous remnant of this delirium and is an unequivocal mistake.
Anecdata: Every time I see something that completely abuses Javascript in a way that breaks catastrophically when you drift outside of “blog engine demo” territory, it’s always somehow descended from rspec and/or cucumber.
As someone who’s worked with ruby more than 10 years, couldn’t agree more here. rspec is the conspicuous remnant of this delirium and is an unequivocal mistake.
Could you elaborate on this?
I haven’t used rspec, but I’ve used mocha in JS, which I think was inspired by rspec. In Mocha we write a lot of statements like expect(someValue).to.have.keys(['a', 'b']). I don’t love the Mocha syntax, but it does produce quite nice, human-readable error output. I guess it could be easily reduced to expect(someValue).hasKeys(['a', 'b']).
Happy RSpec user here and definitely going to continue using it in future. Not sure why some people keep repeating that DSLs haven’t caught on, especially in Ruby. It’s the least convincing argument as to why it’s worse than something else. ActiveRecord, the ORM in Rails, is nothing but a DSL to model relationships, and 1000s of companies use it to build successful businesses. The proof is in the pudding. Is it perfect? Certainly not. Does it require reading (and potentially re-reading) the docs? Sure does. Is there a learning curve to become proficient, and does it require experience to know when to use what or stay away from it? Most definitely, like with all things high level.
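For what it’s worth, the relationship-modeling DSL I mean looks something like the following sketch (a generic authors/posts example, names invented, not from any real app):

    require "active_record"

    # Each declaration below is the DSL at work: a one-line statement of how
    # two tables relate, from which ActiveRecord generates the query methods.
    # (Connecting to a database and running migrations are omitted here.)
    class Author < ActiveRecord::Base
      has_many :posts
    end

    class Post < ActiveRecord::Base
      belongs_to :author
    end

    # Given the above, author.posts and post.author are generated for free.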
The problem I’ve encountered in most of these DSLs (I played a bit with Mocha many years ago, but have the most experience with SBT/Scala, Chef, and bizarro DSLs atop Tcl invented in hardware land) is a combination of:
Poor documentation off the happy path: The happy path is easy, but the moment you need to do something off the beaten path, you’ll find sharp edges in the DSL and lacking documentation. This also makes teaching other engineers about a DSL difficult. In my experience we taught SBT mostly by having experienced engineers pair with newer engineers to teach them about the DSL. This adds a learning overhead to DSLs that just isn’t there for general-purpose programming languages.
Bad error messages: Again, most DSLs are optimized for the happy path. Many of these DSLs don’t really chain errors together very well. When you give these DSLs something they don’t expect, they rarely produce any sensible error output to work with.
Few escape hatches: DSL authors (looking at you SBT) really like to, understandably, constrain what you can do in the DSL. That’s great until it’s not. Most DSLs don’t offer you a good way to break out of their assumptions and don’t give you a good way to interact with their cloistered world.
I could write about this at length but I’ll try to be brief.
Rspec re-invents fundamental ruby language concepts for sharing and organizing code, with no benefit other than making your tests “read more like English,” which can seem cool to beginners (and did to me at one point) but is purely cosmetic and superfluous. Examples of this language re-invention include shared_examples/it_behaves_like, let statements, and proliferating “helper” files.
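To make that concrete, here’s roughly what those constructs look like in use (an invented example, names and all):

    require "rspec"

    # RSpec's parallel universe for sharing code: shared_examples defines a
    # reusable group, it_behaves_like mixes it in, and let defines a
    # lazily-evaluated helper. Plain Ruby modules and methods already do
    # all of this.
    RSpec.shared_examples "a stack" do
      it "pops the last value pushed" do
        stack.push(1)
        stack.push(2)
        expect(stack.pop).to eq(2)
      end
    end

    RSpec.describe Array do
      let(:stack) { [] }   # RSpec-specific construct, not an ordinary method you wrote
      it_behaves_like "a stack"
    end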
To use rspec well, you need to learn a whole new language and set of best practices. And every new member of your team does too. I mean, there are whole books on it. Testing frameworks should be simple, not book-worthy. And there is nothing special about testing that warrants this. If you invest time becoming a ruby expert, you should be able to use your ruby expertise to write good tests. You should be able to use normal language constructs to share code in tests.
With that said, I don’t mind the expect DSL for assertions, and I like “it” blocks as a way of defining tests. But both of these, while technically DSLs, are small, focused, simple constructs that can be learned in five minutes and probably grokked without even reading the docs. Minitest is essentially just these parts of rspec, with the expect assertions optional, and that’s what I’d recommend for testing in ruby.
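For comparison, a minimal Minitest version of the same test, sharing code with ordinary Ruby constructs (again, invented names):

    require "minitest/autorun"

    # Sharing test code the plain-Ruby way: a module mixin instead of
    # shared_examples, an ordinary method instead of let.
    module StackTests
      def test_pops_the_last_value_pushed
        stack = build_stack
        stack.push(1)
        stack.push(2)
        assert_equal 2, stack.pop
      end
    end

    class ArrayStackTest < Minitest::Test
      include StackTests   # "it_behaves_like", but it's just Ruby

      def build_stack      # "let", but it's just a method
        []
      end
    end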
Since “read like English” is the thing RSpec optimized for, it is incoherent to dismiss it. Programming languages should be easy to read and easy to write, but readability and writability are in tension. RSpec is the result of optimizing readability to the extreme, such that you don’t need to know RSpec to read it. Compare: you do need to know Ruby to read Ruby.
When is “read like English” useful and not cosmetic? When it needs to be read by someone who doesn’t know how to program. If everyone reading your RSpec tests knows how to program, sure, you probably shouldn’t use RSpec: as you said, it duplicates Ruby for not much benefit. The key idea is that learning RSpec when you already know Ruby is many times easier than learning Ruby when you don’t know how to program.
Since “read like English” is the thing RSpec optimized for, it is incoherent to dismiss it.
It’s not incoherent in the least. It is, and always has been, a silly and misguided goal. I say this as someone who has read the rspec book, who knows it and the philosophy behind it well, and at one time believed the hype.
Programming languages should be easy to read and easy to write, but readability and writability are in tension. RSpec is the result of optimizing readability to the extreme, such that you don’t need to know RSpec to read it. Compare: you do need to know Ruby to read Ruby.
I’m sorry, but this is pure nonsense. Rspec is not more readable than ruby, and the idea that non-programming “stakeholders” will be more likely to read and participate in the design process if you use rspec or cucumber is a pipe dream. I’ve never seen it happen, not once. And in the rare instance that one of these stakeholders was going to do this, they would find it no more difficult to understand minitest or test/unit. The difference in friction would be negligible, and the skill and clarity of the programmer writing the tests would be overwhelmingly more important than the test framework.
The idea that non-programming “stakeholders” will be more likely to read and participate in the design process if you use rspec or cucumber is a pipe dream. I’ve never seen it happen, not once.
I have seen it happen, but if you haven’t, I 100% agree RSpec has been entirely useless for you.
We’re never going to get the broader SE culture to care about things like performance, compatibility, accessibility, security, or privacy, at least not without legal regulations that are enforced.
And regulations are written in blood, so we’ll only get enforced legal regulations after a lack of accessibility kills people.
The presence of regulations also kills people, and this is also a principle that applies to realms of human experience beyond software (e.g., the FDA causing deaths by making it too difficult to legally develop or sell drugs). The right amount of regulation, and the way that regulation ought to be administered, is in general a difficult political problem, and there’s no reason to think the dynamics would work any differently in the world of software compared to anything else.
We don’t have the Alan Kay / Free Software / Hypercard dream of everybody having control over their own computer because integrating with disparate APIs is really fucking hard and takes a lot of work to do.
I know that the author works in formal methods, not cloud engineering or package management, so it’s understandable that they might not have the experience of integrating lots of APIs. However, it is relatively easy, as long as the API is well-documented and easy to examine. The reason that I don’t have control over the computers which I own is because of firmware; chips from Intel, nVidia, Broadcom, and other corporations are not fully under my control, undermining my ownership.
Consider GPU drivers. Some GPUs have public documentation for their low-level API; most do not. When that documentation exists, people can write Free Software to drive their GPUs. I have written a GPU driver based on datasheets provided by the GPU vendor. This is relatively easy, compared to reverse-engineering the GPU.
The reason that I don’t have control over the computers which I own is because of firmware; chips from Intel, nVidia, Broadcom, and other corporations are not fully under my control, undermining my ownership.
I don’t think firmware is what stands in the way primarily, even for most programmers, never mind for the general population. The biggest obstacle is a priest class of programmers who think that users simply don’t care to have their computers act as they wish, or that the only way a computer can be instructed is by learning languages in detail, etc.
Yes, it’s relatively easy for a professional software engineer to do as the productive part of their work. For someone trying to accomplish something else entirely, it’s probably super confusing.
To take you and @carlmjohnson seriously, I’m going to give an analogy.
Suppose a community of welders is dependent on some gas. This gas is only produced by a proprietary mining company. The company exploits the welding community, only sharing gas preferentially with welders who choose to donate labor to the company. This leads to welders seeking out alternative sources of gas, trying to set up their own mines and refineries, but the company’s network effects and legal backing are insurmountable.
Now, one day an apologist for the company publishes the claim, “we don’t have the Free Welding dream of everybody having control over their own welding workspace because welding two pieces of metal together with a join is really fucking hard and takes a lot of work to do.” Several other welders agree, pointing out that it takes a lot of labor to make a good join weld.
But isn’t it the case that welding requires practice and skill? Yes. It is also the case, though, that any welder can join-weld (indeed, many define the word “weld” in terms of joining). By taking the apologist seriously, we might increase the number of welders, democratizing the welding practice and inviting many people to come learn the craft. However, if we are still limited by the company which produces the gas, then this only creates more competition for an artificially-limited resource.
Indeed, it is the case that the company controlling the gas is always the bottleneck. Here’s where the analogy breaks down: while we cannot simply make copies of full tanks of argon, we can make copies of datasheets and source code. The marginal cost to Intel, nVidia, Broadcom, or other chip manufacturers is negligible, especially since they have to sink the bulk of the cost up-front in order to bring up software on the chips in the first place!
I don’t see what proprietariness has to do with it. Integrating a bunch of disparate open source APIs is almost as hard.
By contrast, having everything tie in to a single language, paradigm, and environment makes things easier. This is how Smalltalk was designed; I believe it’s part of what seduces Common Lisp weenies; and frankly I believe that vibrant ecosystems with a consistent point of view throughout explain a large part of the success of Python and Java.
It is not “almost as hard”. They are such distinct tasks that they are often listed as separate bullet points on CVs; I list Kubernetes expertise but not VMWare, and it’s not because Kubernetes is easy, but because VMWare is hard even compared to Kubernetes.
I disagree on formal methods never being mainstream. They started to go mainstream once they transitioned from building tools that were trying to prove correctness (whatever that means) to providing tools that reduced the number of bugs. The only sense that I agree with this is that as soon as some aspect of formal methods goes mainstream people stop calling it formal methods. Modern languages with flow-sensitive algebraic and structural type systems are doing things that you could do only in a theorem prover on small examples 20 years ago. Tandem verification (fuzz a formal model and an implementation and ensure that they provide the same traces) is incredibly useful in a load of places such as hardware design, network protocol implementation, and so on. I doubt that we’ll get to a world where you write a spec and then synthesise a naive implementation and prove that your optimised implementation is a refinement of the spec any time soon but that doesn’t mean that formal methods aren’t becoming mainstream.
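To sketch the tandem-verification idea in a few lines of Ruby (a toy example with invented names, nothing like a real harness): fuzz a naive model and an “optimized” implementation with the same inputs and fail on the first divergence.

    # Toy tandem verification: the model is the obviously-correct spec; the
    # "optimized" function stands in for the implementation under test. Both
    # see identical random inputs and must produce identical results.
    def model_sum(items)
      items.reduce(0) { |acc, x| acc + x }
    end

    def optimized_sum(items)
      items.sum
    end

    1_000.times do
      input = Array.new(rand(0..20)) { rand(-1_000..1_000) }
      expected = model_sum(input)
      actual = optimized_sum(input)
      raise "divergence on #{input.inspect}: #{expected} != #{actual}" unless expected == actual
    end
    puts "model and implementation agreed on 1,000 random inputs"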
Point 1. There are many fields of software where the penalty for bugs is death. I’m thinking not only of the programming of medical instruments like X-ray scanners, but also of things like airplane automation (think of the 737 MAX bug), or even scarier stuff like nuclear reactor control routines or military applications. The culture of software developers in these fields is much stricter than that of your typical JS web developer. Saying that it’s somehow impossible to write better software is a surprisingly defeatist attitude, really.
Point 2. There are interesting methods of conducting interviews which at least try to erase biases, like scrubbing names and identifiers off of resumes, or blind auditions for orchestras. Importantly, these don’t try to measure biases and adjust for them; they attempt to give candidates a level playing field instead. Orchestras in particular have shown that this approach increases inclusion, though not to the point where an orchestra’s ethnic and gender composition matches that of the surrounding community. The tech community needs to look outside of itself and learn from other fields.
Also also also, most layfolk don’t want to program their own computers, much in the same way that most layfolk don’t want to do their own taxes or wire their own houses.
This is a very popular take and I think it says more about what programmers think programming is than it does about what people need. Lots of people love Zapier. I’m here to tell you that Zapier is programming. Lots of people hate doing repetitive tasks with their computer, or at least hate how much time they spend on it. These people don’t want “programming” any more than Reformation-era peasants wanted to “learn Latin and Greek,” but they do want to be able to convince the computer to do what they need it to do.
To me, the trouble with the capitalism takes is that they often basically amount to un-actionable complaining; if your goal is to figure out how to support open source maintainers, this isn’t really that constructive, at least in the form I usually see it. It comes off as vaguely defeatist a lot of the time.
…which is not to say these takes are necessarily even wrong per se; at least some of the patchwork of solutions to this that people are trying are distinctly non-capitalist in nature, notably grants from places like the OTF, NLNet, etc. Julia Reda gave a talk at libreplanet last year where she argued for more of these. One possible solution is that we as a society indeed skip past needing to find a sustainable business model and just say “this stuff matters to us; let’s fund it.” This is not the only approach of course, and there are other, more capitalism-oriented approaches that folks are trying (see e.g. tidelift).
I do think it’s worth seriously asking whether there are better approaches to the problem than trying to shoehorn it into things you can get profit-motivated businesses to do, though. But I’d rather spend time doing that than reading back and forth about whether “capitalism is the problem.”
I should note also that I think “doesn’t capitalism suck?” is sometimes a valid thing to voice; I do not want to give the impression that I think all bug reports must come with a proposed fix.
I understand your position, but I think that usually they are actionable; the problem is that bystanders consider those actions unthinkable or impossibly expensive.
The Unix philosophy of “do one thing well” doesn’t actually work that well. Dan Luu explains this better than I could.
My take on Dan Luu’s post is that it is very hard to have a unifying vision when disparate people are allowed to extend a system with no central authority (or when the central authority – in this case GNU – doesn’t buy into the vision). Not that the unifying vision itself doesn’t work well.
Something can’t both be an “uncomfortable truth” and the status quo belief. I have a hard time reading this as anything other than an attempt to make a deferral of any responsibility to improve the situation sound like some sort of hard-won nugget of wisdom.
As I understood it, the uncomfortable truth Hwayne is claiming is that the status quo (which nobody really likes) may be close to optimal.
Not sure why some people keep repeating that DSLs haven’t caught on, especially in Ruby.

RSpec is most likely the most successful DSL, at least judging by the download/deployment numbers; see https://rubygems.org/gems/rspec (vs. https://rubygems.org/gems/minitest or https://rubygems.org/gems/activerecord, for instance).
This is an old debate, and DHH was complaining about it years ago.
https://www.stevenrbaker.com/tech/history-of-rspec.html gives a better description of the original goal.
However, it is relatively easy, as long as the API is well-documented and easy to examine.

“Other than that, Ms. Lincoln, how was the play?”
This is relatively easy, compared to reverse-engineering the GPU.

It may or may not be “hard,” but it is certainly time-consuming and requires skilled training.