The amount of typing required for each of these is comparable. The time taken to properly phrase the English prompt is about the same as the time taken to just write the code directly. The phrasing will need altering a few times to get workable output, and the amount of refactoring needed to make the generated code equivalent to the handwritten version is always non-zero. So, have we gained anything?
I recognize this, and I have stopped asking the AI for things I can easily do myself. However, this process of asking, reformulating, etc. before you get to a working piece of code can be useful in a situation where you need a rubber duck. This reformulating is where the learning happens, and you discover what you actually want.
Most of the time, AI is just a fancy rubber duck that talks back. Some versions can do a search on Stackoverflow before they answer. This can save time in some situations, but not in the situations where you did not need help in the first place, or the situations where you need to read a book or watch a lecture before you understand the subject.
A similar example would be if I said, “Hey computer, this project uses webpack as its bundler. I want you to convert it to use esbuild instead.” Even if the AI appears to do the thing, I’m still going to have to wade through some esbuild documentation to make sure it did it right, right?
I’m very much a “not installing a genAI plugin in my editor” guy, but I’ve found a lot of utility in asking ChatGPT to spit out an example plugin for some library/tool, and then reading the output line by line. It’s basically been a way to search for the 20 or so functions I probably need to care about to get my job done. So I get the example, read the docs for the functions it mentions, sometimes ask if there’s something more specific, and move forward.
It’s a bit of a streamlined approach to the “copy example project and then rip out what you think you don’t need” approach, and it’s worked well. Also aligns with my general feeling of these tools being “query the API docs 100 times in one go and mash it all together”.
Yeah. I’ve found this example quite useful. It is basically a customized example that perfectly matches your use case. Then you can use it as a template and as information for what APIs you should read the docs for.
Personally, I find it a lot easier to modify and improve existing code than to start a new project. That makes AI perfect for me. I can just have it output a 300-line skeleton for what I want, then go over it with a fine brush.
Also, a few days ago I needed to port a rather large Bash script to Python. I told Opus to do it and to just skip any function that was too difficult. Then I did some testing, requested some changes (optparse instead of argparse, because even in 2024, etc.), filled in the problematic functions afterwards - again by telling Opus :) - and done. This would have been a very annoying coding task by hand, but 99% of the busywork of googling APIs was handled by the system trained on every API in existence.
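For a rough sense of what that mechanical translation work looks like, here is a minimal, hypothetical sketch of one such ported function. The Bash original and the function name are invented for illustration (the commenter’s actual script isn’t shown), and option parsing (the optparse/argparse question above) is deliberately left out:

```python
#!/usr/bin/env python3
"""Hypothetical sketch of one small piece of a Bash-to-Python port.

The (invented) Bash original was roughly:
    count_errors() { grep -c "ERROR" "$1"; }
"""
import sys
from pathlib import Path


def count_errors(logfile: str) -> int:
    """Replace a `grep -c ERROR file` call with plain Python."""
    text = Path(logfile).read_text(encoding="utf-8", errors="replace")
    return sum(1 for line in text.splitlines() if "ERROR" in line)


if __name__ == "__main__":
    # Usage: count_errors.py /path/to/logfile
    print(count_errors(sys.argv[1]))
```

Each function is trivial on its own; the tedium is doing dozens of them while looking up the Python equivalent of every shell idiom, which is exactly the lookup busywork being delegated here.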
I can sympathise with that, but it’s still an essential engineering skill imho, and you need to be reviewing human contributions anyway.
(I 👀 the author has published a bunch of SF novels, one of which I really enjoyed last year!)