This is an important consideration for any and all centralized technology. If there is a way to make it vital to one’s existence in society, someone will find a way to do that. See, for instance, the smartphone; my mother has always disliked them and has always gotten along perfectly fine without them, but recent changes in her city transit system, job, and condo building have made it very difficult for her to get along without one.
If a city provides services that cater to actual needs, like transit, only through a smartphone app… That’s just dumb.
We have a smartphone app for that here too, but the city transit system mostly relies on a bus card, so everyone can use it.
Honestly, I believe that anything providing common services (like a city does) which are only reachable through a smartphone app should also provide a simple device to use those services, or at least make the services reachable in a multitude of ways.
Of course. No one here would disagree with that… however, the reality on the ground is quickly becoming coercive w.r.t. smartphone ownership.
For a long time I refused to use an Apple or Android phone and tried to make things work with alternative smartphones. A few months ago I gave up, because I switched to being an independent contractor and I figured it would be nice to have a phone that actually rings when someone wants to call me. (Yes, the alternatives are still that buggy.) So I bought a Pixel 6.
The change is massive. I can do the same things I could do before, but life is so much… easier now. Everything works so smoothly, because you fit in with what the world expects. And to my own surprise, I started to kind of forget the whole privacy thing. It is really deceptive. Before, I would regularly shake my fist and yell at clouds, wondering how others could not see what a massive problem we have. And now I am starting to become one of them, because it all works for me now.
AI will not take that role for now, because it is nowhere near intelligent enough yet. But yeah, I can see the danger coming.
Yeah, I just had to vent a bit.
When has that stopped people?
The unfortunate truth…
If a city provides services that cater to actual needs, like transit, only through a smartphone app… That’s just dumb.
Not necessarily. There’s a fairly significant cost associated with supporting folks who aren’t using smartphones. Kiosks need building, maintaining, cleaning, and so on. When that cost benefits 90% of the people, it’s worth doing. When it benefits 20%, it’s probably worth it. When it benefits <1%, it will soon hit the point where it’s more expensive than just buying those folks a cheap smartphone and contract.
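To make that break-even point concrete, here is a back-of-the-envelope sketch; every number in it (population, kiosk count, costs, replacement cycles) is invented purely for illustration:

```python
# Toy break-even calculation: amortized kiosk network vs. simply buying
# the remaining non-smartphone users a cheap phone and contract.
# All figures are invented for illustration.

def kiosk_cost_per_year(num_kiosks, build_cost, upkeep_per_year):
    # Build cost amortized over an assumed 10-year service life.
    return num_kiosks * (build_cost / 10 + upkeep_per_year)

def phone_cost_per_year(num_users, phone_price, contract_per_year):
    # Assume each phone is replaced every 2 years.
    return num_users * (phone_price / 2 + contract_per_year)

population = 500_000
kiosks = kiosk_cost_per_year(num_kiosks=200, build_cost=25_000,
                             upkeep_per_year=5_000)

for share in (0.90, 0.20, 0.01):  # fraction of riders relying on kiosks
    phones = phone_cost_per_year(int(population * share),
                                 phone_price=100, contract_per_year=120)
    cheaper = "kiosks" if kiosks < phones else "phones"
    print(f"{share:>4.0%} of riders: kiosks ${kiosks:,.0f}/yr, "
          f"phones ${phones:,.0f}/yr -> {cheaper} win")
```

With these made-up numbers the crossover lands somewhere in the low single-digit percentages, which is exactly the dynamic described above.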
The big problem from my perspective is that a lot of these services provide apps, not service endpoints. This massively distorts the smartphone market: you can only access municipal services if you opt into the Apple or Google ecosystem; there’s no way for another platform to provide a client.
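For contrast, a service endpoint is just a documented protocol that any client can speak. A minimal sketch, with an entirely hypothetical URL and response shape:

```python
# Hypothetical open transit endpoint: because it is just HTTP + JSON,
# a kiosk, a CLI, or a phone running any OS could implement a client.
# The URL and JSON fields are invented for illustration.
import json
import urllib.request

def next_departures(stop_id):
    url = f"https://transit.example.gov/api/v1/stops/{stop_id}/departures"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["departures"]

if __name__ == "__main__":
    for dep in next_departures("central-station"):
        print(dep["route"], dep["time"])
```

An app-only service offers no such surface, so only the platforms blessed by the vendor ever get a client.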
The big problem from my perspective is that a lot of these services provide apps, not service endpoints.
I agree with you there. We have service endpoints here, including physical ones: I think it’s possible to visit an office and add credit to your bus card with cash, for example. We have a lot of elderly people here who do not even have computers, let alone smartphones.
These days you can’t really launch a major language without an open-source implementation. Nobody is going to trust that it will survive the eventual demise of the commercial entity selling a closed-source language. This is a dramatic change from the ’60s through the ’90s, when gcc was the only exception for a long time.
I agree with the author that tools like Copilot should only be trusted if open-sourced. However, in this case most of the “code” responsible for Copilot’s behavior is encoded in the NN weights and in the data/training pipeline used to produce them. Ideally those would also be released as open source. I’m not hopeful that will ever happen, unless expectations for AI services change the same way they did for compilers.
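To underline why the weights matter more than the code: the published architecture can be a few lines, while the behavior lives entirely in parameters that are never released. A toy sketch:

```python
# Same "open" code, two different sets of weights, two completely
# different behaviors. For something like Copilot, the weights and the
# training pipeline that produced them are the part that stays closed.
import numpy as np

def tiny_model(x, w1, w2):
    # A two-layer network: the architecture is trivially open-sourceable.
    return np.tanh(x @ w1) @ w2

rng = np.random.default_rng(0)
x = np.array([1.0, -0.5])

weights_a = (rng.normal(size=(2, 4)), rng.normal(size=(4, 1)))
weights_b = (rng.normal(size=(2, 4)), rng.normal(size=(4, 1)))
print(tiny_model(x, *weights_a))  # one behavior
print(tiny_model(x, *weights_b))  # a completely different one
```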
I don’t think that the equivalence holds. An open-source language implementation is important to avoid lock-in. If I use FooLang from FooCorp and write millions of lines of code, then I need to keep paying FooCorp for new versions of the compiler to support new platforms (or new versions of the same one). Once I’ve invested a lot in FooLang, it’s easy for them to put up the price, because the cost of rewriting is immense. I want a F/OSS compiler so that I can always find a second source.
AI assistants are more similar to editors or IDEs than compilers. If I’m using Copilot, for example, then it might make my life easier in the same way that an IDE with good syntax highlighting or autocomplete does (‘AI’ assistants are really just very advanced autocomplete). If I stop using it, I lose none of the investment in my code, including code that it helped me write. If the vendor puts up the price, I can just stop using it. I might be less efficient as a result when writing new code, but I don’t lose anything from my existing code, so I can make a judgement, month to month, about whether I gain more value from the assistant than it costs me.
Google has perfected the way to subvert the trust problem with the Chrome/Chromium model: open-source most, but not all, of a project to calm down the critics, and get everyone else to use the closed binary with proprietary add-ons that eventually become essential (only Chrome has DRM for video sites, and Chromium builds are harassed by Google’s login and captcha; Android AOSP is hindered by the commercial certification scheme, proprietary camera processing, Play Services, etc.).
If any AI is ever required for society to operate, I expect them to give you a free DIY option to run it on a hand-cranked TPU with your data fed on punchcards, or just click “I Accept” to have the Cloud run it for you.
AI can become something everyone is soon expected to have to help them, and anyone who doesn’t use it will be considered slow.
At the risk of coming off like a Luddite: there’s enough precedent in the vast array of commonplace dev tooling that is really nice but still insignificant compared to the spread of human ability. I don’t see AI code assistance as being meaningfully different from the suite of things IDEs offer.
At least anecdotally, I usually see an inverse correlation between fancy tooling and “developer productivity”. I think a lot of that is that older, more experienced developers are slower to adopt new tools, but it still suggests that tooling is not going to become a great economic divide between developers.
This strikes me as the opposite of Luddism. The Luddites smashed machinery in mills in order to fight back against capitalist interests exploiting their labor.
I rely critically on Google Translate, an AI tool, for various things in life.
I also rely a lot on GKeyboard and Gdoc autocorrect when texting or drafting documentation. My family members rely on the voice-recognition models from LG/Samsung/Google to pick their karaoke songs.
So I don’t really have a problem with AI tooling in general. I am, however, concerned about the corporate entities behind these services turning evil one day 🙈
Some may say that it’s already happened.
Yup. What I was trying to get at is that I don’t think the AI/ML tech itself is bad or worth worrying about. So the problem to be solved here is not “how do we make AI/ML tech better?” but “how do we improve consumers’ confidence in the corporate entities behind these AI/ML techs?”.
I feel like the current public sentiment is conflating the two issues. Identifying the problem correctly is the first step toward solving it.
Me neither! It’s just how the tech is used.
In principle, any technology that could plausibly become essentially required to earn a livelihood or live a normal life should be subject to intense scrutiny. And it should be free/open source. I agree emphatically with both points.
But for the kind of statistical modeling we call “AI” to attain such a status it has to be…well, good. I feel like the rhetorical question at the end drives home this point, perhaps in a different way than OP intended:
Well, would you trust a random person walking by you suddenly telling you how to write a program? They may be correct, but would you trust them?
Even if you had a reason to trust their intentions, you still have to exercise discretion to decide if the solution they come up with is correct, or takes a good approach, or is even relevant to what you’re trying to do. Okay, maybe the code it regurgitates for you is passingly close to a good solution, but now what you’ve done is shifted the focus of your time away from writing good code to fixing dubious code (which no one actually wrote, so you can’t ask them about it). It’s the mythical person-month, robot edition.
Maybe I’m just being a curmudgeon, but until we get a handle on what intelligence actually is and make meaningful progress on the artificial general intelligence front, I just don’t see any way tools like this will ever be good enough to give anyone an edge, let alone become a de facto standard.
I just don’t see any way tools like this will ever be good enough to give anyone an edge, let alone become a de facto standard.
On the contrary, I would argue that specialized AI (rather than general) is more likely to give an edge to some people (who are more proficient in using it) over others (who are not). Andrew Cantino actually had an interesting article about why properly engineered prompts are essential to maximizing GPT-3 performance. Since these systems (especially practical applications like code-completion systems) are just a kind of powerful tool (and not really intelligent), they tend to confer power upon those who are most adroit at using them. I don’t think that’s here yet, but I suspect it will be soon.
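As a concrete illustration of what “engineering” a prompt means in practice (both prompts below are invented for illustration, not drawn from Cantino’s article):

```python
# Two ways to ask a large language model for the same completion.
# The engineered prompt pins down language, signature, and style, so
# the model has far less room to wander.

naive_prompt = "write a function that checks if a string is a palindrome"

engineered_prompt = """You are an expert Python programmer.
Complete the function below, following the style of the example.

# Example
def is_even(n: int) -> bool:
    # Return True if n is even.
    return n % 2 == 0

# Your turn
def is_palindrome(s: str) -> bool:
"""
```

Knowing which constraints to pin down is the skill, and people who have it get consistently better completions out of exactly the same model.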
A truly general artificial intelligence would just benefit whoever can access it the most; these narrow specialist applications will likely benefit those who can use them the best (given sufficient access).
I just don’t see any way tools like this will ever be good enough to give anyone an edge, let alone become a de facto standard.
I do hope so. I don’t think this is the true “AI” people imagine when it’s mentioned, either; it’s more of a glorified Markov chain with a huge curated dataset. Well, that’s my assumption at least, not knowing what’s going on behind the scenes.
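For reference, a literal “glorified Markov chain” fits in a dozen lines; whatever Copilot actually does behind the scenes is far more elaborate, but this is the caricature being invoked:

```python
# Word-level Markov chain: record which words follow which, then sample.
import random
from collections import defaultdict

def train(corpus):
    words = corpus.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10):
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the cat"
print(generate(train(corpus), "the"))
```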
My point is basically that “a glorified X with a huge curated dataset” is the state of the art in any sub-field of what we call AI. :P The fact that you need these huge datasets for your system to “learn” anything says a lot about how intelligent it is. Minds don’t need thousands of examples; they need a quick explanation and a handful of examples. Real intelligence involves abstract thinking, which machines simply are not capable of at this point.
I wouldn’t hire a junior engineer I thought incapable of abstract thinking, and I wouldn’t waste my time with bot-generated, partially correct code… at least not from a bot that couldn’t learn from an explanation of what it did wrong. That’s the bar for me. So I share your hope that others will set a similarly high bar; otherwise we are susceptible to any number of AI marketing fads!