I have begun to feel that any conversation about “ethics of tech” is analogous in some ways to “the ethics of the conquests of the Khans”. Tech is an expression of power, ultimately unchanged by the opinions of a few employees. Businesses that choose less exploitative behavior without a compelling anti-competitive bulwark to stand behind will be out-competed by those who have fewer scruples. Investment flows to firms that maximize the return on that investment.
As humans making day-to-day decisions, we come across many information hazards: facts that are true, but that would nevertheless be harmful for us to consciously or publicly acknowledge. When we walk past people lying on the sidewalk, we tend to assume they are simply homeless, rather than having a medical problem that requires immediate assistance. Learning that they require help means that we have to spend effort or risk taking a social hit - a lose-lose situation for ourselves. Our minds seem to avoid learning such inconvenient facts - and when confronted with them, we usually rationalize them away.
I feel that “ethics in tech” as commonly expressed is a way of avoiding the inconveniences associated with being a dominant expression of power relations that have been pretty unfavorable for most people since 1971. Unlike that link, though, the question of “WTF Happened In 1971?” is really not mysterious in any way. Bretton Woods disintegrated in 1971, along with the capital controls that prevented investment from flowing freely to the country with the crappiest social support systems and thus the lowest taxes. It’s extremely ironic to me that the banner on top of that page advocates Bitcoin, which is ultimately a mechanism for avoiding capital controls - similar in function to the historical events that led to the issues showcased there. Since Bretton Woods ended, inequality has exploded, real wages have declined while profits have continued to rise, and larger and larger groups are finding themselves without dignity and in search of alternative viewpoints that paint some scapegoat group as the enemy who has wronged them. We have been down this path many times before, and it clearly leads to genocide and war.
Before Bretton Woods collapsed, unions and other representatives of the majority were able to fight for higher wages. Investors could not easily set up shop somewhere else rather than negotiate. Because capital controls kept investors in geographical boxes, they could be negotiated with, so that the ever-more-productive population was able to join in on the ever-increasing profits. When capital controls were reduced, investment began to flow to whichever destination would give the workers the smallest cut of the profits. This is called capital flight, and in my opinion it is the primary issue that ultimately prevents the work we do as machine makers from being as “ethical” as it could be.
What constitutes what is commonly viewed as “ethical” is ultimately defined by power relations in a society. The fact that we are seeing more “ethics in tech” advocates is a nice symptom that people are starting to lose the ability to hide inconvenient truths from themselves. But if we want to change the way that investment is expressed, having opinions about other people’s decisions is not enough. Telling people that they are evil, or that they should act differently, is of limited effect.
Decisions can only be changed by changing incentives. A professional genocidal executioner can be an advocate for “ethical genocidal execution” but it does not change the expression of power that they represent while putting food on their table. Bretton Woods was created in the ashes of WWII when the leading nations of the world decided that they wanted to prevent one of the primary causes for WWI and WWII from happening again: nationalism-breeding economic instability. They had significant, recent, traumatic motivations for wanting to avoid this. And the system was successful in some key areas that ultimately relate to human dignity in a society.
I don’t know what will motivate the next Bretton Woods, but I hope we can motivate it in a less traumatic way than what led to the last one. I believe this is the more complete expression of what is implied by the “be ethical in tech” crowd. I don’t believe that technology is neutral at all, and I actively make all kinds of technical decisions that I believe have a political implication, but I think we should appreciate reality and remember that tech inflicts many traumas on the world as an expression of power relations that are always in flux.
Your entire argument is premised upon the implicit claim in your first paragraph that business concerns are the primary reason why we write programs, and that corporatism is the only framework through which we can take a Darwinian lens.
However, as you may know, massive amounts of Free Software not only exist, but are integral to those same corporate ends. While you and the author may be thinking of corporate employees, they are not the only producers of code. This leads to a big ethical question which you have completely missed: Is it ethical to accept payment for writing code?
Even if it were ethical, we also have to contend with the fact that corporations are able to use intellectual property law to disenfranchise us of our rights to our freshly-written code. By inventing the abusive concept of work-for-hire, corporations take our code from us and alienate us from the proceeds. At the same time, Kerckhoffs’ principle exhorts us to release code to a public commons, in order to take advantage of Linus’ Law and have less insecure systems overall. This motivates a second big ethical question which you missed: Is it ethical to refuse to publish code?
What constitutes what is commonly viewed as “ethical” is ultimately defined by power relations in a society.
I can agree with only the veneer of this statement. What is ethical is up for debate, and an entire subfield of philosophy is dedicated to exploring it. However, rarely do people disagree with pragmatic ethics; it turns out that pragmatism is pragmatic. As a result, I can say that what constitutes ethical actions in a society is ultimately defined by the effects of those actions upon that society.
If you want to understand power relations more deeply, first start by recognizing that power is equivalent to responsibility; the only reason that people accept the power of social institutions over them is because they believe that those institutions have accepted a delegated social responsibility. When powerful groups are irresponsible, then their power wanes. (Formally, imagine a category whose objects are people and whose arrows X → Y encode “X cedes certain power to Y”. Then the opposite category encodes “Y holds certain responsibility for X”.)
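As an aside, that duality has a concrete flavor that is easy to play with. Here is a minimal, purely illustrative Haskell sketch - the `Person` values and the `cedesPowerTo` relation are hypothetical inventions for the example, not anything from the comment above - showing that reversing the arrows of a “cedes power to” relation mechanically yields a “holds responsibility for” relation, which is all that taking the opposite category does:

```haskell
-- Purely illustrative sketch; Person and cedesPowerTo are hypothetical.
-- Arrows of the "power" category: an arrow X -> Y means "X cedes power to Y".

data Person = Worker | Union | Employer | State
  deriving (Eq, Show)

-- Hypothetical relation standing in for the category's arrows.
cedesPowerTo :: Person -> Person -> Bool
Worker `cedesPowerTo` Employer = True   -- employment contract
Worker `cedesPowerTo` Union    = True   -- union membership
Union  `cedesPowerTo` State    = True   -- legal recognition
_      `cedesPowerTo` _        = False

-- The opposite category reverses every arrow: an arrow Y -> X now
-- means "Y holds responsibility for X". Mechanically, that is `flip`.
holdsResponsibilityFor :: Person -> Person -> Bool
holdsResponsibilityFor = flip cedesPowerTo

main :: IO ()
main = do
  print (Employer `holdsResponsibilityFor` Worker)  -- True
  print (Worker `holdsResponsibilityFor` Employer)  -- False
```

(Treating a bare relation as a category glosses over identities and composition, of course; the point is only the arrow reversal.)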
Related: Simon Ser wrote an article about his experience actually saying “no”, and the (lack of) negative consequences for him which resulted from this decision: Saying “no” to unethical tasks.
Disclaimer: I have a business relationship with Simon.
So what are we to make of this? Saying “no” may not have a direct impact on your own career, and it may allow you to avoid doing something you find personally distasteful, but is it enough?
That’s obviously not what I’m saying. This article demonstrates that you can say “no” and it will probably be just fine for your career.
The next trick is getting a majority of engineers to realize this, and start exercising their influence on the ethical direction of the companies they work for.
The next trick is getting a majority of engineers to realize this, and start exercising their influence on the ethical direction of the companies they work for.
Engineers[1] realized back in the 1800s that they could exercise their influence on the direction of the railroads that hired them. This quickly led to the birth of the Pinkerton agency and its often violent deployment against workers all over the country for over a century afterwards.
Saying “no” in isolation will probably not impact your career, especially if you are a valuable and skilled software engineer. But saying no and trying to educate/empower/motivate others? The evidence of history suggests your career will be in jeopardy.
This is why I find the author’s idea of a professional code of ethics to be interesting, though I doubt it will be something that can ever be formalized in our industry.
[1] For the pedants reading this: I’m sure it was not just engineers looking to unionize the railroads – but the parallel was too fun to ignore :)
something that can ever be formalized in our industry.
It can with the stroke of a pen.
But it shouldn’t happen. We as a profession would be closing the doors to the next generation of programmers who would see adhering to a professional organization as a needless barrier.
If you say ‘no’ and someone else is willing to say ‘yes’, the outcome will be that the other person does the job. If you’re in an industry with an overall skills shortage (kernel programming definitely counts!) then you will simply end up working on something other than the unethical thing. The company has a vested interest in not losing talented employees and in shipping their product. In this case, the two are not in conflict: they keep you happy, and someone else who doesn’t consider the work unethical does it.
If, however, everyone says ‘no’, the balance shifts. Now the company has a big incentive to ensure that someone is willing to do the unethical thing. This may mean higher salaries for people working on unethical projects, or workplace bullying / intimidation. The latter can be solved with a strong union (you bully your workers? Good luck ever making them show up to work); the former can be addressed by personal liability.
DRM is an interesting example because it’s not universally agreed that things like HDCP are unethical. They prop up broken business models, but all of the folks employed in those industries are likely to be pretty happy with them. Personally, I’d love to see DRM aggressively regulated and, at a minimum, require that any consumer product distributed with DRM must file a DRM-free copy with a copyright library, to be publicly distributed as soon as the DRM’d system stops working. If the owner of the copyrighted material does not comply with this, then they should immediately and permanently forfeit copyright protection on the work, such that anyone who bypasses the DRM is legally able to freely distribute it. I’d also like to see large statutory fines when DRM prevents activities guaranteed by Fair Use / Fair Dealing or equivalent parts of copyright law. I see DRM for preventing copyright violation in much the same light as putting bear traps around your property to prevent trespassers: you may be allowed to do it, but you are opening yourself up to a load of liability when a child steps in one.
That really comes back to the ‘personal liability’ issue. There is no personal or corporate liability for ‘unethical’ behaviour, only liability for illegal behaviour. If there is a social consensus that something is unethical, then it eventually becomes illegal. Many of the problems with big tech today are caused by that lag. Pervasive surveillance, DRM, and so on may be broadly perceived within the tech industry as unethical, but they’re legal. Most people outside of the industry don’t think about them at all (ask your Facebook-using friends some time how many of them have actually read the Facebook T&Cs, and how many are happy with the company knowing their address, voting preference, and top 5 political issues, and being willing to sell that information to political parties). Until they’re a significant issue for a large percentage of the population, they’re a low priority for lawmakers in democratic countries. You don’t get re-elected by championing issues that no one cares about. I used to have an MEP who was an active campaigner against software patents. I would be willing to bet that no more than 5% of the electorate in her constituency even knew what a software patent was, so I doubt it did very much for her re-election chances.
If, however, everyone says ‘no’, the balance shifts. Now the company has a big incentive to ensure that someone is willing to do the unethical thing.
Two points: First, humans are social creatures, and companies are run by humans. If a huge number of workers start complaining about ethics, it will give some of the leadership pause.
Second, companies have the option of finding a more ethical way of doing something. If doing ethical things turns out to be lower friction, lower cost, and leads to better publicity and easier hiring than doing unethical things, then it becomes the pragmatic business decision.
It’s absolutely not enough; for it to have an impact you need to organize with other workers and present a united front of refusal. Otherwise it’s a token effort that just helps you sleep better at night.
power is equivalent to responsibility
No it isn’t.
Saying “no” to unethical tasks.
And yet, i915 has DRM upstreamed.