Matt Welsh (mdw@mdw.la) is the CEO and co-founder of Fixie.ai, a recently founded startup developing AI capabilities to support software development teams.
My prediction is that we’re going to see a dot-com-bubble level of startups pitching “AI” to everyone, and that they’ll be acquired by larger companies just in time for the larger companies to be the ones blamed when people realize that “training a model” gives you confident answers, but possibly confident snake oil if the model wasn’t trained correctly.
Less sarcastically, I think there will be a lot of growth in the coming years in consulting on how to train and commercialize someone else’s AI model (e.g., helping a marketing team learn how to use GPT-3 to get a first crack at their next ad campaign). However, at least for the foreseeable future, the confident failures (ask GPT-3 to give you an anagram phrase, for example) will require humans to go over its output. AI is a helper and a starting point; it will not replace humans in many fields. It WILL replace them in some more mundane jobs, but you’ll still have humans involved in the arbitration chain, as with content moderation.
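To make the anagram point concrete: verifying the model’s output is trivial to automate even when generating it reliably isn’t. A minimal Python sketch (the example phrases are invented; this is just the letter-count check a human reviewer would otherwise do by hand):

    from collections import Counter

    def is_anagram(a: str, b: str) -> bool:
        """True if a and b use exactly the same letters, ignoring case and non-letters."""
        def letters(s: str) -> Counter:
            return Counter(c for c in s.lower() if c.isalpha())
        return letters(a) == letters(b)

    # A model may confidently claim all of these; only the first two hold up.
    print(is_anagram("listen", "silent"))                    # True
    print(is_anagram("the classroom", "school master"))      # True
    print(is_anagram("confident answer", "correct answer"))  # False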
Yeah, I think what the article is missing is that computing and programming are still growing. Personally I expect that to continue for decades, for inevitable economic reasons.
So the new AI stuff will be in addition to growth in currently mainstream techniques.
Did Rails and JS make C and C++ obsolete in the 2010s and eliminate the need for Rust? Quite the contrary: there was enormous growth in systems programming over the last decade, driven by growth in the whole industry.
In fact I’d guess that most of the profits from machine learning have gone to cloud companies providing GPUs! OpenAI would not exist without Microsoft’s cloud, which they built essentially to compete with Google in search.
So new apps drive growth in all parts of the stack.
I defended Welsh in a recent thread, but I think this article is a bit “interested” and not a high-quality analysis.
Shoot. I did it again. I posted a comment without reading the comments 🤣
I’d rather skip writing a lengthy diatribe on Christmas – there is, unfortunately, a lot of material here, so it would be lengthy. So I’m going to settle for a short one: I’m really disappointed that ACM ended up publishing something like this. Not because the point of the article is disputable, who knows, but because it fails to meet even basic standards of scientific skepticism. You can replace all variations of “programs will be written by AIs” with variations of “salvation will be delivered unto us by holy computers graced by the word of God” and the article makes exactly as much sense.
If I read this article optimistically, I foresee a great future for anyone who does reverse engineering and security work. I really hope the author is right, because when they write this:

“This shift is underscored by the fact that nobody actually understands how large AI models work. People are publishing research papers [3,4,5] actually discovering new behaviors of existing large models, even though these systems have been ‘engineered’ by humans. Large AI models are capable of doing things that they have not been explicitly trained to do.”
all I read is “statistical models can generate surprising new bugs” and that makes my eyes pop out and turn into dollar signs. I hope ChatGPT is a resounding success and sees speedy adoption, by all means. That is the gift I wish for this Christmas!
“I believe the conventional idea of ‘writing a program’ is headed for extinction.”

Well, remember how they advertised and praised self-driving cars five years ago? Likely, future developers will get by fine without reading Knuth’s books, just as we stopped using logarithm tables once pocket calculators arrived. But programming will not end; it will merely change (once again).
We’ve heard this story over and over: with no-code, with workflow engines, and so much more.
What actually happens is that a specialized practitioner role arises, or, more rarely, a large number of professionals adopt the tool, as with spreadsheets. The kinds of computer programs these tools can handle become more widely deployed, AND programs made with general-purpose tools also become more widely deployed.
FWIW: The author “Matt Welsh is the CEO and co-founder of Fixie.ai, a recently founded startup developing AI capabilities to support software development teams.”
Correct me if I’m wrong, but I believe all AI models are trained on existing code bases, so the models are not particularly good at solving unseen problems. Who will write those programs?
Humans are trained on existing code too ;)
You’re correct they’re trained on existing code bases. You’re incorrect about their not being able to solve unseen problems.
I’m no expert, but as far as I understand, the underlying code base can be considered as building-block elements that can be connected together to create a larger, novel whole. If you have building blocks, rules on how to attach them and, maybe, rules on whether the resulting answer is admissible or not, then you can produce “novel” solutions to unseen problems.
Whether the current round of neural networks does this well, or does this in a sophisticated enough way, is, I guess, open for debate but the idea is easy enough to grasp.
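For what it’s worth, here’s a toy version of that generate-and-test idea in Python. Everything in it (the blocks, the attachment rule, the admissibility check) is invented for illustration; real models compose far fuzzier pieces:

    from itertools import permutations

    # Toy "building blocks": small string-to-string transformations.
    blocks = {
        "strip": str.strip,
        "lower": str.lower,
        "reverse": lambda s: s[::-1],
    }

    def admissible(result: str) -> bool:
        # Invented admissibility rule: output must be non-empty lowercase text.
        return bool(result.strip()) and result == result.lower()

    def candidate_solutions(text: str, length: int = 2):
        """Enumerate compositions of blocks, keeping those whose output is admissible."""
        for combo in permutations(blocks, length):
            result = text
            for name in combo:
                result = blocks[name](result)
            if admissible(result):
                yield combo, result

    for combo, result in candidate_solutions("  Hello World  "):
        print(" -> ".join(combo), "gives", repr(result))

Whether the outputs count as “novel” is exactly the open question; the machinery only recombines what it was given.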
The “problem” domain itself is a completely solved problem - didn’t you get the memo?
I’ve had my share of wow moments with ChatGPT. But trying to get it to produce anything of real value always ended in a lot of meaningless sentences and wrong output. At one point I asked it to just write me an app I’ve had on GitHub since before 2021, and instead of telling me “oh, so you want 10 KLOC of XML + Kotlin?”, or “well, I need to know XYZ for that in detail”, it just spewed out 60 lines of a completely broken Java class and called its job done. Let’s see how far a CodeGPT will actually get.
But even apart from that, it’s pretty broken: if I tell it to generate some random nicknames, it’ll repeat itself multiple times and get the count wrong.
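Which is why, in practice, you end up wrapping the model in a validator anyway. A minimal sketch (the model_output list is a hypothetical stand-in for whatever the API returned; generate-and-check, not generate-and-trust):

    def validate_nicknames(candidates: list[str], expected_count: int) -> list[str]:
        """Reject model output that repeats itself or returns the wrong amount."""
        # Deduplicate while preserving order, and drop empty entries.
        unique = list(dict.fromkeys(n.strip() for n in candidates if n.strip()))
        if len(unique) != expected_count:
            raise ValueError(
                f"asked for {expected_count} nicknames, got {len(unique)} usable ones")
        return unique

    # Hypothetical model output: repeats itself and misses the requested count.
    model_output = ["ShadowFox", "ShadowFox", "PixelPirate", ""]
    try:
        validate_nicknames(model_output, expected_count=5)
    except ValueError as err:
        print("rejected:", err)  # e.g., re-prompt the model here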
This seems like it’s ignoring the ethical issues associated with Copilot and AI art generation? Which isn’t surprising, but it is notable.