1. 41
    1. 31

      Maybe this explains why I get annoyed at people who think “automation” of some routine task is “free” after the initial time investment. When an exception to the task appears, you need a human to step in and override the machine’s ruthlessness.

      That necessarily sets a cap on the number of automated systems one person can ethically manage. Too many, and the human can’t override the computer quickly enough or often enough. And that’s how you get stories of people’s lives being ruined by automated account bans.

      1. 24

        And that’s how you get stories of people’s lives being ruined by automated account bans

        Facebook has also banned people for having the name Isis, or for having some other name that triggers their “not a real name” detection algorithm. Banning the name Isis is what happens when we, as a society, spurn a liberal arts education in favor of Great God STEM.

        Is the problem the machines, though, or is it the huge organizations that control them? Corporations in particular are an accountability dodge:

        “Corporations have neither bodies to be punished, nor souls to be condemned; they therefore do as they like.” – Edward Thurlow

        When a company doesn’t have a human to answer the phone and read a script, it appears even less accountable than the traditional organization of yore. At least with the traditional company, you could perhaps ask to talk to a supervisor or similar. We should all be very, very worried when Silicon Valley “thought leaders” talk of reinventing government.

        All of that said, I remain a techno-optimist, but you need a human else-branch.
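
        A minimal sketch of what that else-branch might mean in practice (the function name, labels, and thresholds here are all hypothetical):

        ```python
        # Hypothetical automated moderation with a human else-branch: the
        # machine decides only the clear-cut cases; everything ambiguous is
        # escalated to a person instead of defaulting to the ruthless outcome.
        def review_account(abuse_score: float) -> str:
            if abuse_score > 0.95:
                return "suspend"          # overwhelming evidence: automate
            elif abuse_score < 0.05:
                return "allow"            # clearly fine: automate
            else:
                return "queue_for_human"  # the else-branch: a person decides
        ```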

        1. 10

          Is the problem the machines, though, or is it the huge organizations that control them? Corporations in particular are an accountability dodge:

          I think this is the crux of it. A computer system, or more generally, a technical system, is only one part of a social system. In general we design technical systems to serve the needs of our social systems, but also, increasingly, to implement and naturalize parts of our social system that we don’t want to openly admit to. But the ultimate problem is the ruthlessness and inhumanity of our social system. In a better social system, we’d build machines of loving grace, and even though their behavior would have the kind of limits and hard edges this article calls “ruthlessness”, it wouldn’t matter, because they’d be embedded in social systems that didn’t treat those limits as inevitable and final social outcomes.

          This is why I thought the bit at the end about Bitcoin was quite ironic — Bitcoin is clearly designed to implement a social system that is much more ruthless than the existing one (the deflationary design, the winner-takes-all aspects of mining, the irreversibility of transactions). Why try to launder it as a tool of freedom?

      2. 3

        override the machine’s ruthlessness

        Which reminds me of all the latest movies and games where you have to fight some scary robots that operate in “ruthless” mode.

        1. 1

          Do you have any specific examples in mind?

          1. 7

            What’s it called… The Mitchells vs the Machines or something. Enjoyable and recommended!

            Then there was an episode of Love, Death & Robots featuring a derped robo-vacuum, but I found it inane or at least unentertaining.

            I’m sure Black Mirror had one.


            What’s “latest”? I’m old enough to pretty much include Terminator Salvation in there ;)

            1. 2

              I especially liked the Black Mirror with the robot dogs (Metalhead, S04E05).

            2. 2

              For games there are also SOMA and the recently announced Routine (scary trailer)

    2. 29

      The machine does nothing other than what a human tells it to do. So the ruthlessness of a machine is actually the ruthlessness of humans.

      1. 21

        Oddly, the author acknowledges this, but then doesn’t really explore the implications:

        Yet it is not a force of nature; in reality it is a system deliberately designed by humans to advance a ruthless end, without admitting to it. Behind the door lies the human controlling it. In this way the machine becomes an abstraction and a disguise for human ruthlessness.

        …which is disappointing, since that opens up an opportunity to talk about how computing systems could be designed to be more humane, which is immediately squandered in favor of advocating for crypto-“anarchism” whose success is “beyond dispute” and whose ruthlessness is “clearly” good.

        Which is to say, I agree with you. Furthermore, it’s not just computing itself which obscures the underlying human ruthlessness or integrity (although in a mundane, banality-of-evil sense, it definitely does). But, at a deeper level, what obscures the human element is the belief that computing is inherently anything.

        1. 14

          Absolutely. There are plenty of examples of humane systems:

          • Automated doors which will detect an obstruction and either stop what they’re doing or default open.
          • Highly visible, trivially operated emergency off switches.
          • Free and open source software which allows end users to inspect in massive detail what their system is doing, and to correct anything they consider a bug.
          • Telephone services which immediately go straight to a human. They still exist, and more companies should brag about them.
          • Rounded corners, on both physical and digital products.

          These all exist because someone looked at the existing MVP, decided it wasn’t good enough, and designed (or enforced) something more humane. All of us, as users and creators, “simply” need to encourage the proliferation of such systems.

        2. 3

          I’d agree with you that exploring the human angle is key to fixing this problem. This post is tagged “philosophy”, but I’d as soon tag it “religion” (if such a tag existed and if Lobsters were the place for such debates), because it and these comments are really speaking about our view of humans and morality. Where the author sees “ruthlessness” from a machine following its programming in the subway, others may see life-giving opportunity in the biomedical field, for example.

          What computers, and machines in general, therefore possess, is this: the power of unlimited, total ruthlessness, ruthlessness greater than even the most warped and terrible human is capable of, or can even comprehend.

          I don’t believe this can be true because a human, potentially a warped and terrible one, had to intentionally create the machine. In other words, if someone made a machine to destroy humanity, they were doing so intentionally, and they are worse than the machine itself because they understand what humanity is; the machine does not (nod to the sentience debate).

          In this regard one wonders if there has ever been a human who truly desired infinite integrity in full and prescient knowledge of its consequences.

          This hypothetical question is the perfect example that this is really a religious debate, not just an ethics-in-engineering one. 1–2 billion people in the world would answer that question with “Yes, His name is Jesus.” We can’t consider integrity in machines without dealing with integrity in the humans that create and use them, and we can’t deal with that without first knowing where we each come from in our beliefs. I sincerely am not posting this to start a religious flame war, merely to point out that machines aren’t what is in question in this post.

          1. 7

            This post is tagged “philosophy”, but I’d as soon tag it “religion” (if such a tag existed and if Lobsters were the place for such debates), because it and these comments are really speaking about our view of humans and morality. Where the author sees “ruthlessness” from a machine following its programming in the subway, others may see life-giving opportunity in the biomedical field, for example.

            This hypothetical question is the perfect example that this is really a religious debate

            I just want to say that these types of questions and discussions are definitely philosophical, even if people turn to religion for the answer.

            1. 4

              Ethics is absolutely a branch of philosophy.

      2. 2

        Blame not the tools but the systems which created them and in which they are used.

    3. 32

      And that’s why the abuses of techno-optimism from the ruling classes are creating a new wave of luddism inside and outside the tech industry.

      The argument from OP is not new: the conflict of humans vs. machines has been a major trope of 20th-century philosophy, literature, and art, especially after the brutality of Nazi-fascism in Europe. Actually, it’s the whole premise of entire fields of study, political institutions, and organizations.

      Obviously, this stuff is not taught to engineers, who are trained to uncritically implement anything that is requested of them. Just sprinkle some ethics-washing on top and they will believe they are the good guys.

      It’s always fun (not really) when techbros discover they are perceived as the “bad guys” outside their bubble. They get mad at people writing “no programmers” or “no cryptobros” on dating apps or “if you work in tech, everybody hates you. Just leave” on the walls of a gentrified neighborhood.

      1. 4

        Obviously, this stuff is not taught to engineers

        Depends on the school. In Québec (maybe in the rest of Canada, I don’t remember) we are required to take a course on ethics in engineering. I also had a course on sociology (also geared towards technology and engineering), but I don’t know if it’s required outside of Polytechnique Montréal.

        1. 4

          These kinds of courses are taught throughout the world, as far as I know, but they are very, very shallow compared to the responsibility and power that a software engineer has. They also tend to reinforce an idea of ethics that supports the status quo and usually draws the line at “95% of what is being done with technology is totally OK, the remaining 5% must be eradicated, and please don’t put AI in your weapons”. I don’t know the one you took, but all the syllabi of the courses I’ve seen are wildly insufficient.

        2. 4

          Canada uses the word “engineer” very differently from the USA. Here it is a regulated term with requirements to be one (including ethics training). In the USA it can describe almost any practical job, but in this context it often means “someone paid to write code”.

        3. 3

          Hi, I hail from Quebec too, and I’ve been practising software development for the past decade and a half. I can’t legally call myself an engineer; I only went through college. Most of the people I have worked with over that decade and a half are not legally allowed to call themselves engineers either. So “this stuff is not taught to engineers” is not true from a strictly technical standpoint, but the reality on the ground is that, indeed, the practitioners are not taught that stuff.

          1. 1

            That’s a pretty good point. Almost all my colleagues went to the same engineering school, so I tend to forget that not all software developers went to engineering school.

      2. 2

        They get mad at people writing “no programmers” or “no cryptobros” on dating apps or “if you work in tech, everybody hates you. Just leave” on the walls of a gentrified neighborhood.

        They sure do love the engineers’ salaries when it comes to supporting a family or paying taxes for their community programs, though. Damnedest thing.

        1. 2

          Flaunting money is possibly even more repellent than being into crypto.

    4. 12

      The beginning of the article sounded like a guy convincing himself that AI Safety is important, only to veer off into cryptocurrency at the last second. Disappointing.

      1. 4

        only to veer off into cryptocurrency at the last second.

        I’m fervently anti-cryptocurrency, but my read was somewhat more generous. From the article:

        The infinite ruthlessness of machines, even when employed to noble ends, is alien and unsettling to our human sentiments.

        We can argue about the noble ends part. I have at length, both here and on HN as blindgeek. And it’s not salient. The quoted bits don’t read like cheerleading to me. The enclosing paragraph is a pretty good description of some of the inhuman (anti-human?) implications of cryptocurrency.

        The piece does stand on its own without the Bitcoin digression.

      2. 2

        Harsh? Dismissing what they’ve had to say because they mention “X”?…

        The author’s mention of cryptocurrency is not irrational…

        1. 6

          I’m not dismissing it; I actually agree with a good chunk of it. I was just left feeling like it’s a bit of a wasted opportunity. I expect a lot of other people will dismiss it as no more than a crypto grift, though, which is unfortunate given that AI Safety is important.

          1. 1

            I expect a lot of other people will dismiss it as no more than a crypto grift though

              Considering how they described it as some immutable good to existence, it definitely turned me away strongly. I wish I hadn’t come back to it after leaving it in the middle for other commitments; I would have felt better.

    5. 8

      The problem with the “but people will give way” argument is the variability. You get an extension or more stuff not because your argument for doing so is necessarily worthy, but because you were lucky: you happened to get a person willing to be nice, or you happened to argue well enough for that person to give in. Computers are fair, which is to say, they follow the rules we plug in, and that’s it. If we want to give more time, money, or other computer-mediated resources to some people, then we should decide that for everyone in a defined group, not just the ones who happen to be lucky.

      Of course, there’s then the problem that most of the rules are hidden. They’re defined by code and sometimes deliberately not written out, because then “computer says no” can just be used as an excuse. We should be making greater use of tools like LIME to explicitly demonstrate that our systems are actually fair, in the sense of “the set of rules used here doesn’t result in racism/sexism/xenophobia”. Algorithms that make decisions about people should be regulated via test sets of example people.
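
      If “tools like LIME” sounds abstract, here is a minimal sketch of the idea using the Python lime package on a toy model; the features, data, and “sensitive attribute” are all hypothetical, and a real fairness audit would need far more care than this:

      ```python
      # Minimal sketch: explain one automated decision with LIME, then run a
      # tiny "test set of example people" check. Model, features, and data
      # are hypothetical stand-ins.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from lime.lime_tabular import LimeTabularExplainer

      rng = np.random.default_rng(0)
      feature_names = ["income", "years_employed", "postcode_group"]
      X_train = rng.normal(size=(500, 3))
      y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # toy rule

      model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

      # Which features drove the decision for this one applicant?
      explainer = LimeTabularExplainer(
          X_train,
          feature_names=feature_names,
          class_names=["denied", "approved"],
          mode="classification",
      )
      applicant = X_train[0]
      explanation = explainer.explain_instance(
          applicant, model.predict_proba, num_features=3
      )
      for feature, weight in explanation.as_list():
          print(f"{feature}: {weight:+.3f}")

      # "Test set of example people": flip only the sensitive attribute and
      # check whether an otherwise-identical applicant gets the same answer.
      counterfactual = applicant.copy()
      counterfactual[2] = -counterfactual[2]  # change postcode_group only
      same = model.predict([applicant])[0] == model.predict([counterfactual])[0]
      print("decision invariant to postcode_group:", same)
      ```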

    6. 8

      I think the shorter version of this article that could’ve made the same point is:

      “Technology and computers aren’t oppressive by themselves, but rather they enable a rigidity and consistency of policymaking and policy enforcement that heretofore hasn’t been considered–and that that rigidity and consistency is actually not desirable.”

      A child who is mute but can type is clearly enabled, not oppressed, by their computer. A prisoner who only talks to humans shuffling papers and who is denied their freedom due to the rules is clearly oppressed.

      Let’s not give ourselves over to a facile understanding of the world.

    7. 7

      technology itself is, not just computers.

      Ivan Illich on convivial tools: https://archive.org/details/illich-conviviality

      In short: be very careful if tools put themselves in the loop.

      1. 4

        I’d never heard of this book before and have been plowing through it ever since I read your comment last night. His observations about the way society has conformed to our tools really resonate with me. And as someone who has one foot in education and one in computer science, I find his arguments about technology and education especially resonant. Thanks for pointing out this resource!

        1. 2

          Nice to hear! I discovered Illich last summer and since then he’s been popping up in unexpected places. As usual with patterns. E.g. around minute 41 of the 1996 congressional hearing with Seymour Papert and Alan Kay: https://youtu.be/watch?v=0CKGsJRoKKs

      2. 2

        I read this recently after discovering Illich through L. M. Sacasas’s newsletter The Convivial Society. It’s kind of a long rant instead of a formal argument, but I do think it’s a really good lens to use when looking at technology.

        I keep thinking that the fediverse (which has had limited, but nonzero success) embodies the principles of maximizing individual freedom when compared to traditional centralized services like Facebook and Apple. Makes me want to go all-in on self hosted open source tools and abandon the corporate overlords. Although freedom is a double-edged sword when it comes to social networks online. I think small communities with strong moderation are the way to go.

        BTW I recommend The Convivial Society. It’s thought-provoking and well written. It’s available in audio form too.

        1. 1

          Yes, the fediverse seems the agreeable way out. What shocks me about it, however, are the bloated ‘standards’ [2, 3] that consider themselves mandatory while being half-baked and unreliable (‘living document’, ‘every implementer can choose’). The degree of neglect showing in link rot, expired certs, deserted referential tests, etc. is breathtaking. So they are utterly useless and just a wordy burden. No wonder the implementations are so unwieldy that laypersons can’t operate them. So we’re no better off than we were with Atom & WebSub almost 20 years on.

          I find it telling that GAFAM started it (look at the authors of [1]) and destroyed it the same instant (dear ‘social’ websites [4], Google Reader, Atom vs. RSS, (iPhoto) photo feeds, etc.). And they have since grown into a ubiquitous plague.

          [1] https://activitystrea.ms/specs/atom/1.0/
          [2] https://www.w3.org/TR/activitypub/
          [3] https://www.w3.org/TR/activitystreams-core/
          [4] https://sebsauvage.net/rhaa/index.php?2013/08/08/08/51/28-dear-so-called-social-websites

    8. 5

      I think the title might not accurately reflect the true content of the article. When I first encountered the title, I was ready to fire off a reply about how personal computers are, in fact, liberating for some people, e.g. blind people who previously couldn’t independently read anything that hadn’t been translated into braille. I think the article has a valid point about computers making decisions affecting people who have no control over those computers. But a personal computer under the control of its user is simply a tool in that person’s hands.

      1. 5

        I don’t agree with the content of the article either. A computer that isn’t under your control can still be influenced. If the subway doors close on you in New York, you can force them open with your hands and step inside. With enough effort, Juicero machines can be modded to accept general-purpose juice bags. And Bitcoin, above everything else, is a political movement. If Bitcoin didn’t have users, it would not have any power.

        I think OP’s frustration lies with the ruthlessness of the society we find ourselves in, where we have poorly designed subways that are not only slow and dirty but can’t understand that someone is running towards the doors. We live in a society where the incentives in place made Juicero a seemingly legitimate business opportunity. We live in a society with a monetary system so incredibly decrepit and corrupt that it necessitated the creation of a trustless currency like Bitcoin by an anonymous author, since if Satoshi weren’t anonymous he surely would be in prison.

        1. 2

          If the subway doors close on you in New York, you can open them by force with your hands and step inside

          I once rode the Washington DC metro with my bookbag on the outside of the train. Not all metros are as forgiving as New York’s.

          Funnily enough that helps make your point. Just because it’s a machine doesn’t mean it has to be ruthless.

        2. 1

          Author here. Since I have a hobby of watching train doors, I tend to end up profiling their behaviour. There’s quite a bit of variation.

          Subway trains usually have very “dumb” doors which will only reopen if the driver manually reopens them (which they’re only likely to do if the interlock doesn’t close promptly and they get tired of waiting). If obstructed, they will generally just gnaw on whatever’s in their way until the obstruction clears, in some cases with quite significant force. This may be changing on newer trains.

          Non-subway trains are usually smarter and will cycle if blocked. They’re still more menacing than elevator doors: there are no sensors, so you do have to actually resist them. However, I’ve noticed on newer models (in the UK) some kind of “partial reopen” feature: if you obstruct the door, it will reopen halfway, wide enough for you to get your arm or a bag strap out, but not to get through. As far as I can tell, this appears to be a deliberately designed feature on newer trains to make them more ruthless in this regard.

          1. 3

            Every design must be deliberate, but how is this more ruthless? You get your arm or bag out.

            It’s been a while since I took the Helsinki metro, but you could grab the edges of closing doors and yank to make them open. I’ve done it :/

            The central railway station was particularly bad at times, when half the city seemed to do that, delaying departures, overcrowding the interior, and prompting announcements not to do that.

            There aren’t/weren’t turnstiles or the like, so many people were risking a fine, without even having paid for the grief they caused.

            Opening the door completely is oppressive to anyone wanting to get moving; opening it halfway is less so.

    9. 5

      I think, as the intro implies, this can be extended to machines and tools, and maybe even further.

      I think in the context of computers in particular there’s a bit of a political problem where we force people to use them, sometimes by law, sometimes through society. They have to use computers, smartphones, and even certain apps.

      At the same time we see a rise in scams, and we are surprised at how they hit people who might not even need or want these devices and only have them because they are forced to fill out some form online.

      Some decades ago it was relatively easy to get by without almost any particular tool one can think of. You might be thought odd for it, but you could still make use of your rights, etc.

      Today you need apps to log in to your bank, websites to do your taxes, sometimes even the web to apply for elderly homes. And smartphones are pretty complex, forcing you, for example, to have or create an email address, requiring passwords, etc. You need to know how to use software, understand what the internet is, have some concept of pop-ups, online ads, spam, and updates, understand that there is no other person sitting on the other end right now, and so on.

      I think a lot of the ruthlessness comes from this. And even if you know about all of the above, you end up as in Kafka’s The Trial: even if you know what things mean, the processes behind the scenes will, for the vast majority of use cases, remain completely opaque to you.

      In a non-automated/digitalized world it is easy to ask quick questions, and people can ask other people to handle exceptions. In the digital world you have to hope the developer thought of your case and handled it accordingly. If you are lucky there’s a support hotline, but these seem to be going away, especially at the bigger and therefore often more important companies.

      I see tools as being more on the morally neutral side, but I don’t think that’s really the issue. I don’t think computers are oppressive, but there’s an unintentional direction we are moving in where things are forced upon people, often in the belief that it’s a good thing when that is at least debatable.

      As a side note, there are certainly cases where things were done in the name of digitalization, progress, and efficiency, and things just became harder, slower, less cost-effective, less secure, and required more real people to be involved.

      Of course these are the bad examples, but the adjective here is “oppressive”. Usually, even in (working/stable) oppressive societies, things work for most people most of the time. Things start to shift when they don’t work for too many, or when there’s war. Only the ones who don’t fit in tend to have problems, and while I would have titled it differently, I think that is true for how computers are used today, for all sorts of computers.

      1. 13

        In a non-automated/digitalized world it is easy to ask quick questions, and people can ask other people to handle exceptions.

        In the land of unicorns and rainbows? ;)

        From my experience, people in “HTML form action” positions absolutely aren’t inclined to answer any questions or handle exceptions, unless they have some real retribution to fear. Worse yet, it’s rational behavior for them: they will almost certainly be reprimanded if they break the intended logic, so it’s much safer for them to follow its letter.

        Just this past month I had to file a certain application for a somewhat uncommon case. The humans responsible for handling them rejected it as invalid, because my scenario wasn’t in their “cache” of common cases; they used the default “contact our parent organization” response instead of trying to handle it, and not even in a polite manner. I contacted the parent organization and, luckily, the people there were willing to handle it and told me that my application was valid all along and should have been accepted, and that I should file it again.

        I suppose the application form handlers received quite a “motivational speech” from the higher-ups, because they were much more polite and accepted it without questions, but it still wasted a lot of my time traveling to a different city to file it and standing in lines.

        It may be one of the more egregious examples from my experience, but it’s far from unique. I very much prefer interacting with machines, because at least I can communicate with them remotely. ;)

        1. 5

          Your anecdote just demonstrates the author’s point; you had to escalate to a more responsible human, but you successfully did so, and they were able to accommodate the uncommon circumstances, even though those circumstances were not anticipated by the people who designed the process. When was the last time you pulled that off with an HTML form?

          1. 6

            They were anticipated by the people who designed the process. It’s just that their subordinates did a sloppy job executing the logic written for them by the higher-ups. If the higher-ups had programmed a machine to do that, it wouldn’t have failed.

            And I got very lucky with the sensible higher-ups. It could have been much worse: in that particular case it was obvious who the higher-ups were and they had publicly-accessible contact information. In many other cases you may never even find out who they are and how to reach them.

          2. 1

            Every time the form allows freedom (which forms are admittedly rarely used for, but could be), e.g. https://mro.name/2021/ocaml-stickers

            1. 2

              I love that, and I wish more of the web worked that way, but it’s worth pointing out that the only reason it can work is that, ultimately, the input I put into that form gets interpreted by a human at the post office. It would not be possible to create a form for inputting an email address that would be as resilient to errors or omissions.

              1. 1

                Yes, and a lot of the information filled into the form doesn’t make sense to me; I just copy it onto the envelope. It makes sense in layers as it is routed along: first country, then ZIP, then street, then name. That’s flexibility! Subsidiarity at work.

      2. 2

        Some decades ago, here in the US, we were deep in the midst of making a large proportion of physical social institutions at best undignified, and at worst somewhere between unsafe and impossible, to access independently without owning and operating a dangerous, expensive motor vehicle: something unavailable to a significant proportion of the population, and something that ruthlessly grinds tens of thousands of people a year into meat just here in the US.

    10. 4

      In David Graeber’s Utopia of Rules, he says software is just mechanized bureaucracy; or, put another way, a bureaucracy is a human machine with paper forms as inputs and outputs. Both hard and soft bureaucracies make you feel stupid, as the rules are precise and pedantic. You are limited to the imagination of the bureaucrat and limited by their inflexibility of formats/timings/methods.

      Appeals to humans are necessary release valves for the fixedness of software. I rarely have an issue with commercial websites. I can always find someone to talk to about a package not arriving or being defective. It’s a lot harder to get a human for hyperscale websites like Google or Facebook. In most cases you can’t even pay for support.

      What does this mean for us as engineers? We should try to expand our imaginations to make software that works for as many people as possible. Ultimately, it’s economics that drives where humans are available for appeals. But we can advocate for better human support, too, I suppose.

      1. 4

        I was reminded of UoR by this as well. I wrote a mini-review back when it came out: https://blog.carlmjohnson.net/post/2016-03-31-paperwork-explosion/

        Ideally, the amount of bureaucracy in the world pre- and post- computer should have been the same but completed more quickly in the computerized world. In reality, however, computers made it practical to centralize the management of things that had been handled informally before. Theoretically, this is good because one innovation in the center can be effortlessly distributed to the periphery, but this benefit comes with the Hayekian cost that the periphery is closer to the ground truth than the center, and there may not be sufficient institutional incentives to transmit the information to the center effectively. The result is a blockage that the center tries to solve by mandating that an ever increasing number of reports be sent to it: a paperwork explosion.

    11. 1

      What the author thinks of as having “integrity” I think of as simply being dead. I don’t think we need to accept oppression so that some aspect of reality can fit more concisely on somebody’s whiteboard.

    12. -10

      Who’s the commie?

      1. 1

        A humanist