1. 9

  2. 12

    Most of this “AI will do X” trend is the standard hype cycle.

    We’d do well to laugh at it in private and then ignore it.

    1. 12

      This post betrays a fairly simplistic understanding of how automation actually happens.

      In general it’s not the case that a device or program can simply be slotted in to replace a worker, taking over all of that worker’s tasks. The goal of automation (for the employing organization) is to reduce costs, and to do that, there’s no need for the device or program to be able to take over 100% of someone’s work; in general, it’s accomplished in one of two ways:

      • by requiring less time/fewer workers (what used to be called “speedup”)
      • by requiring less skill (the deskilling that Harry Braverman described in Labor and Monopoly Capital)

      An AI will likely not be able to replace us one-for-one, but it doesn’t need to. All our employers want is to be able to hire fewer programmers, cheaper (and more interchangeable) programmers, or both. Arguably, we’ve already made some strides in making it easier to do more with fewer programmers as our tools get better and as we make it possible for our users to do things for themselves that would have once required professional programming (see, for example, Excel). I suspect the main reason this hasn’t yet resulted in a wage crash is that the market for software is still growing faster than our ability to automate our work, but there’s no reason to think that will continue forever. We are generally pretty expensive employees and our employers have every incentive to make us more disposable. To believe that we are immune from this process seems like nothing more than “programmer exceptionalism.”

      1. 8

        Wow, I think this might be the first time I’ve seen Harry Braverman referenced on a mainly tech-oriented forum! I agree this is the right way to look at automation, as a process driven by social and economic pressures that accommodates both technological and non-technological changes (businesses are simultaneously looking for tech to automate tasks, and looking at whether they can change the tasks).

        It’s frustrating, as an AI researcher, that a lot of the debate internal to the AI community (and tech community in general) is so unrooted in any of the existing research on automation. I mean, you don’t have to like Braverman specifically, but most people seem to not have read anything on the subject, from any school of thought or researcher. A lot is just totally off-the-cuff speculation by people who know something about AI and have some kind of impression of how society works (from living in it) and then speculate on how those relate, which is not so satisfying. The fact that even famous people do this (ahem, Elon Musk) probably helps make it socially acceptable. I attended a panel at AAAI-16 that was like that too, billed as a panel on how AI will impact jobs in the next few decades, and the star panelist was… Nick Bostrom. Who is fine if what you want is a philosopher to speculate about the singularity, but not if you want a rigorous discussion about how AI impacts the job market.

        1. 1

          “is so unrooted in any of the existing research on automation.”

          I only see pieces of it here and there. I’ve purposely ignored a lot of such research in economics because it seems they spend more time speculating and modeling from scratch than studying real-world interactions. The field just has too much bullshit like the methodology- and process-oriented side of software research. The kind done by people who don’t write software. ;)

          Do you have links to any online summaries of such research that you believe accurately portrays effects of automation?

          1. 3

            It’s not really economics per se that I have in mind, more history of technology, history of labor, sociology of work, political economy, STS, etc., fields that study concrete things that happened in the real world and attempt to figure out what happened and why. Braverman’s book mentioned above is one classic example; he comes from a Marxist perspective, of the kind that focuses on analyzing concrete material factors, i.e. how physical machines interact with specific types of workplaces and corporate forms to change production processes and the social/economic relationships in them. Imo, even if you aren’t interested in Marxism as a political project, this approach often ends up less abstract than the kind of mathematical modeling you get in neoclassical economics, but there are plenty of other approaches as well (mostly various kinds of historical or sociological methodologies). I’m not sure there’s a great online summary, which maybe is something to fix. Wikipedia has an article on the technological unemployment debate, but it’s skewed towards books by economists rather than historians/sociologists/STS people.

            One interesting historical episode is that there was a huge debate on automation’s impact on employment, both quantity and quality of employment, in the late ‘50s and early '60s (e.g., 1, 2, 3, 4, 5). One can argue either way about its relevance, “this is mostly just a rehash of that debate”, “this time is different”, etc., but I’d personally like to read an informed take on why in either case, which I don’t often find in the AI-and-jobs discussions. Not that I’m an expert either, that’s why I’d like to read from someone who is!

            1. 1

              Appreciate the reply. It will give me something to think about. :)

        2. 4

          I definitely think the idea is being driven, in certain groups (VCs, Silicon Valley executives), by “let’s cut labor costs, these programmers are too expensive.” And automation in general is a political economy problem of distribution, sure, and current generation of management has embraced a delusional economic theory that means they don’t understand that lower pay means lack of demand.

          But I also think a lot of “AI will replace jobs” is fantasy. E.g., there was a cycle of going back from robots to humans in the 1990s (can’t find references, alas), and it’s happening again now: https://www.theguardian.com/technology/2016/feb/26/mercedes-benz-robots-people-assembly-lines

          1. 3

            “current generation of management has embraced a delusional economic theory that means they don’t understand that lower pay means lack of demand”

            To a large degree, you can blame the consumer credit bubble (and the ongoing mortgage bubble) for this. The Fordist coupling between wages and demand for products broke apart because people can now spend money they don’t have.

            We now have a society where people can buy things with money they don’t have, and while prices for typical consumer goods are fairly stable, prices for housing, healthcare and education have gone out of control while wages have been stagnant.

            The other change is that no employer has the effect on the market that Ford had in 1914. That’s arguably both good and bad. On one hand, decentralization is generally a good thing. On the other, the Fordist argument simply doesn’t apply to a small company where (a) the buyers are usually not the same people as the workers, and (b) a broad-based effect on wage levels (note that Ford’s wage increases lifted the whole market, not just one company) will not occur.

          2. 3

            “All our employers want is to be able to hire fewer programmers, cheaper (and more interchangeable) programmers, or both. […] I suspect the main reason this hasn’t yet resulted in a wage crash is that the market for software is still growing faster than our ability to automate our work, but there’s no reason to think that will continue forever.”

            We have a wage crash already. Wages for high-skill programmers, inflation-adjusted, are nowhere near where they were in the 1990s. Sure, there are a lot of commodity programmers making $120k, and that certainly didn’t exist (even adjusting for inflation) back then, but the top has been absolutely hammered over the past 20 years.

            Ageism is another form of wage crash. It’s easier to squeeze young people for long hours, and they’re less likely to notice that management is investing nothing in their career growth. Moreover, culling all the old people except for the most successful ones creates an impression, for the young, that the career is more lucrative and rewarding than it actually is.

            Note of course that most economic “crashes” are actually slow and take place over decades. The slow crash isn’t usually newsworthy but it’s a lot more common. We haven’t had the fast, theatrical type of crash since 2002, but the decline of working conditions (open-plan offices) and infantilization of the craft (Agile Scrum) is a slow wage crash because it floods the market with incompetents, enables age discrimination, and eradicates technical excellence in favor of fungible, marginally qualified workers.

            I can’t predict whether there will be a fast, theatrical wage crash like what happened in 2002 or in finance in 2009, but I think it’s more likely than not that there has been and will continue to be long-term decline in pay, reputability, and working conditions in corporate programming. I’m also starting to realize that there’s very little that can be done about it. If companies can operate just well enough while running on sub-mediocre, fungible, cheap talent… who would expect anything else?

            1. 1

              Your comment leaves off H-1B visas, offshoring, and the wage-fixing scandal. These have a significant effect in pushing down IT wages.

              1. 1

                Those are all factors, but I think the dumbing down of programming is a lot more dangerous than the abuse of the H-1B program or the wage-fixing scandal. It’s not that those aren’t bad, but the commoditization of programming work and the flood of low-talent Scrum programmers are a permanent and ubiquitous threat that we’d still have to deal with even if the H-1B program were fixed.

                1. 2

                  True on that. It’s been sold as a mechanical process anyone can do rather than a mix of creativity, engineering, and mechanical stuff.

                  1. 2

                    “BASIC is easy to learn, and the language of the future! Millions already use it!”

                    I call this the “idol of accessibility” (I get the feeling that label is taboo): ease of use by newcomers is valued above all else, especially above engineering concerns. I despise how it justifies worse-is-better through network effects. It is consumeristic in nature and encourages a herd mentality. It leads to a string of tedious posts of the “X is bad because I didn’t learn it in an evening” variety (ahem).

                    I think it is a consideration, but I don’t agree it should be the defining factor in perceived goodness of a tool.

                    However, I risk being seen as a bad person for arguing this, because so many people are trying to make a better life for themselves by switching to tech, and I shouldn’t make it harder.

            2. 2

              Especially in software development, much of that automation is happening constantly, and one could even argue that this has resulted in a larger market for developers, precisely because less skill is necessary these days to achieve things quickly. This all comes at a significant cost in hardware requirements, but it has not, in the past half century, led to the demise of programming as a job. It does depress wages, however, as @michaelochurch correctly finds.

              1. 9

                This is not a near-term threat. We’ve had programs writing code for 65 years: that’s called a compiler.

                If society hasn’t managed technological unemployment by that point, there will already have been so much social disruption that “AI replaces programmers” will barely be a headline.

                However, there is a near-term threat to genuine programmers from a horde of millions of down-market, barely qualified replacements. They don’t know genuine CS (“math is hard, let’s do some Jira tickets”) and their code is awful, but they can fulfill user stories just well enough that products don’t fall apart immediately. (They fall apart, but the managers have been promoted away from any messes by then.) That is much more of a concern, because it’s actually happening and it’s depressing salaries and, more dangerously, giving a lot of leverage to employers, ruining the culture and any hope of progress or innovation. We now have methodologies designed for the management of barely employable idiots (“Agile Scrum” and open-plan offices, because we’re seen as untrustworthy children) that have become the norm.

                What does it fundamentally mean? I think the truth is that many of us got sold by “startup” mythology and piled into what is really just business programming. Business programming didn’t require technical excellence in the days of Office Space and TPS Reports, nor does it require it now. Which makes it a liability, because technical excellence requires a certain personality type that mainstream business types find irritating.

                Unlike many workers, we don’t need to worry about machines. We need to worry about the horde of unqualified people (open-plan commodity Scrum drones) who can do shitty work fast, and the circumstances that made them employable in almost all of software. I think there are two things that are behind it. First, acquisitions are now priced according to head count, which creates an incentive to load up on barely qualified people. If you’re going to be acquired at $4 million per engineer, then you can make $12 million by hiring three bums off the street and giving them enough “Agile Scrum” training to do user stories. Second, in the new corporate climate of “running lean”, managers can now get away with blaming their subordinates, because people with contrary opinions can just be fired. In the old world, if you hired an incapable team, it was your fault and your loss. These days, you can hire commodity Agile programmers and blame them when things fall apart (as they will).

                1. 5

                  While I readily roll my eyes at Agile hype (and XP before it) just like a good half of Lobsters, I really dunno, man. Yesterday was my 20th anniversary in the industry. I’ve yet to see any evidence that the average programmer has gotten any worse.

                  1. 2

                    Anyone who romanticises the good old days when “most” software used to do what the manual said it did… has a sorry memory and a severe case of nostalgia. :)

                    1. 2

                      “Yesterday was my 20th anniversary in the industry. I’ve yet to see any evidence that the average programmer has gotten any worse.”

                      To be honest, I think that the nastiness started more than 20 years ago. I know that there was an era when programming was an R&D job as opposed to the deadline-culture, open-plan embarrassment that it is now, but I’d guess that the transition was before 1997. Office Space was produced in 1999, after all.

                      I think the major change is in why the average programmer is unskilled. In the 1970s, there were just very few people who knew how to do it, and the tools weren’t anywhere close to where they are now. So, you had a lot of inexperienced (and, by the standards of today, inept) programmers bumbling around figuring things out. But they were smart people and they were treated as such: it was an R&D environment where programmers decided what to work on (within reason, of course) and defined their own projects and work conditions. Of course, most of them weren’t “just” programmers; they were scientists or mathematicians or engineers who also programmed. They wrote a lot of bad code (as we all did, when starting out) before they got it right, but they could at least improve.

                      These days, the best of us have learned a lot about how to do software right, and the tools are a lot better. The issue is that software companies are most often run by inept and often not-smart-enough people. There’s also a whole cottage industry selling “Agile” tricks that purport to turn sub-mediocre management and sub-mediocre talent into… at least something as opposed to nothing. Since genuinely capable programmers are perceived to be expensive and hard to find, there’s also a huge short-term profit incentive in flooding the market with incapable replacements.

                      So, we’ve gone from being a field of highly intelligent and capable people who just didn’t know what they were doing because so much of it had never been done before… to a market flooded with less-intelligent wage-depressor programmers and an employment environment (“Agile Scrum”) tailored to their lack of curiosity and ability.

                      I find the current constellation to be a lot more depressing. In the ‘70s, software was bad because no one knew what they were doing, but there was also an understanding that this picture would improve over time. These days, software is bad because most software employers are more interested in cutting wages than the quality of what’s produced. I also don’t see corporate capitalism fixing that, because it’s inherently short-term focused. Software quality is a long-term issue (in the short term, the sales team matters a lot more than engineering) and most management types plan on getting promoted away from their messes before anything long-term actually matters.

                  2. 3

                    Doesn’t seem so much worse than the current state of contracting. :-)

                    1. 3

                      “This is not a near-term threat. We’ve had programs writing code for 65 years: that’s called a compiler” (michaelochurch)

                      Darnit, he beat me to my counterargument again! :) Mine is that we can barely compile a functional language into something that performs as well as hand-written C; that is, in any situation where a human engineer could write either a good functional program or a high-performance C program. Same for straightforward program synthesis from logical specifications. Same for digital hardware, where the binary nature gives synthesis a huge advantage: we should’ve totally automated it by now, in a way where hardware engineers don’t start rolling their eyes when you say “High-Level Synthesis.” Same with synthesizing designs from formal specifications. Same with even formally specifying every act of the system in a way a machine can understand, as opposed to just some of them.

                      Over and over, the best tech humans can create for these simple tools falls short of its goals. Then people worry that automatic programming will (a) happen, (b) happen sooner than good compilers, and (c) be good enough to threaten humans’ jobs. Quite a leap! I suggest they figure out how to get compiler optimizations working better before worrying about a system that replaces both compilers and people. ;)

                      1. 2

                        Great point on the synthesis front. We get close, but despite having full control over semantics, runtime, and everything else, we can’t always guarantee a suitably high-level, cost-free abstraction (Rust doesn’t count).

                        Related: overheard the clerks at the grocery store lamenting how their automatic doors sometimes open randomly of their own accord. They were getting extremely cold from the air blowing in. I chuckled, but then realized: if we can’t get automatic doors to function correctly 100% of the time, then we have a long way to go.

                        1. 2

                          Remember, with the automatic doors, that most projects screw up for economic reasons. Embedded and industrial engineers are always telling me about managers not caring about quality or pushing to cut costs however they can. It’s one of the reasons 8-bitters with no security are so popular: they save a few bucks to tens of dollars on every unit, and that becomes profit.

                          The problem with the automatic door is probably economic like that: a cheap sensor, a hack job by the developer, or not enough maintenance. There’s also a chance the algorithms are just that hard to get right, but I doubt it.

                        2. 2

                          I agree with this point, and can add the example of SQL. We still can’t turn ORM queries into performant SQL. Or optimization: I have been working on integer optimization for our banking software, and it falls apart after just a few thousand requirements. There are very real, very hard edges to what we can currently automate in software, and most are closer than we think. Optimization is a perfect example. If it takes my integer optimization project 45 seconds to work out which 6 cards with 2 attributes and 3 upgrades of 9 types to buy (9 constraints over 45k options each), then it is light years away from intelligently handling high-level synthesis of hardware or ORM queries.
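                          To make concrete what that kind of selection problem looks like, here is a minimal sketch of a 0/1 integer program, assuming PuLP as the solver front-end and entirely made-up card values, costs, and budget (not the banking data described above):

                            # Hypothetical sketch: pick exactly 6 "cards" to maximize total value
                            # under a budget, as a 0/1 integer program solved with PuLP's default CBC.
                            from pulp import LpProblem, LpMaximize, LpVariable, lpSum

                            values = [7, 3, 9, 4, 6, 8, 2, 5]   # made-up per-card values
                            costs  = [4, 2, 6, 3, 5, 6, 1, 4]   # made-up per-card costs
                            budget = 20

                            prob = LpProblem("card_selection", LpMaximize)
                            x = [LpVariable(f"pick_{i}", cat="Binary") for i in range(len(values))]

                            prob += lpSum(v * xi for v, xi in zip(values, x))           # objective: total value
                            prob += lpSum(c * xi for c, xi in zip(costs, x)) <= budget  # stay within budget
                            prob += lpSum(x) == 6                                       # buy exactly 6 cards

                            prob.solve()
                            print("chosen:", [i for i, xi in enumerate(x) if xi.value() == 1])

                          At this toy size the solver answers instantly; the point above is that adding attributes, upgrades, and interacting constraints blows the search space up combinatorially long before anything approaching ORM-level generality is reached.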

                        3. 2

                          It just needs one breakthrough and all this arrogance about the “unreachable” human intelligence will stop. In reality, once a machine is built that can match humans, it will not stop there; it will go on to become a super-intelligence. It probably will not think like a human, but that isn’t disqualifying. The specifications will need to be clearer, which is no problem when I look at how much spec-aligned contract work has expanded in the last few decades.

                          1. 2

                            Machines already outperform us at many abilities humans were the best at before they came along, like calculus or memorization.

                            For me, it is all about how they are going to outperform us rather than how much.

                            We sure need to be more accurate about that.

                            1. 2

                              “It just needs one breakthrough and all this arrogance about the ‘unreachable’ human intelligence will stop.”

                              It’s not arrogance. It’s a status quo that has existed as long as humanity has. There have been waves of people and tech aiming to change it. All failed. As Minsky noted, almost none of the AI work is on producing the common sense that humans critically rely on. The tech is nowhere near as open-ended and resilient to mistakes as humans are. These are fundamental to their goal of AGI. That they are ignoring or failing on these consistently while thinking human-like intelligence is just around the corner is what I’d call arrogant.

                              Even if it’s achievable, it’s going to be a really hard problem to solve, one that needs breakthroughs in a number of components/capabilities plus their successful integration. That’s not even counting resistance to errant or malicious information or action by third parties at various stages of development. At this rate, they’re more likely to create a bunch of synthesis or analytical programs that take over jobs that don’t take much brains to begin with. That’s the same market that DSLs, 4GLs, code generators, expert systems, decision-support tools, and so on have been hitting for decades. Like them, people will either ignore the new tools AI brings or switch to them from things like those.