I’ve started using ChatGPT more aggressively while coding (mostly to get information on specific libraries, like “how do you import graphviz in NetworkX”) and it’s pretty helpful. But it’s frustrating not to know what it can do. I’m used to “programming thinking”, where you know your building blocks and the challenge is in combining them to accomplish a task; here you have no idea what the building blocks are, what they do, or how they can be combined, and none of that is documented anywhere.
> I’ve started using ChatGPT more aggressively while coding (mostly to get information on specific libraries, like “how do you import graphviz in NetworkX”) and it’s pretty helpful.
For me it is the opposite: the less specific, the better. When things get specific, the details become important, and details are the weak part of ChatGPT. For instance, a few days ago I asked it to write a simple Go client for the OpenAI API, and it did it fine — except that it made up a Go library that doesn’t exist. It merged the name and the methods of the Python lib with Go syntax, and the answer was useless.
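For what it’s worth, that kind of client doesn’t need a third-party library at all. A minimal sketch using only the Go standard library against the documented chat completions endpoint would have been a reasonable answer; the model name and prompt below are placeholders, and error handling is kept minimal:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
    )

    // Minimal request/response shapes for POST /v1/chat/completions.
    type chatMessage struct {
        Role    string `json:"role"`
        Content string `json:"content"`
    }

    type chatRequest struct {
        Model    string        `json:"model"`
        Messages []chatMessage `json:"messages"`
    }

    type chatResponse struct {
        Choices []struct {
            Message chatMessage `json:"message"`
        } `json:"choices"`
    }

    func main() {
        // Build the JSON request body.
        body, _ := json.Marshal(chatRequest{
            Model:    "gpt-4", // placeholder model name
            Messages: []chatMessage{{Role: "user", Content: "Say hello"}},
        })

        req, err := http.NewRequest("POST",
            "https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
        req.Header.Set("Content-Type", "application/json")

        // Send the request and decode the first returned choice.
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var out chatResponse
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            panic(err)
        }
        if len(out.Choices) > 0 {
            fmt.Println(out.Choices[0].Message.Content)
        }
    }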
On the other hand, there were a few occasions where I asked more high-level, open-ended questions. The answers are also not completely correct, but it often suggests directions that I had not thought of yet. Then my response is “Oh, yeah, that could work”, and I go get the right docs/libs and fill in the details myself.
To me, a tool like ChatGPT is a different type of tool than Copilot. It works at another level of the coding process.
I’ve been using it for a while now, just going back and forth on foundational values / strategy / implementation tradeoffs etc. regarding a side project I’ve had in mind for a bit, maintaining context and history in a super-long conversation saved over many weeks. I don’t expect it to add insights of its own; I usually give it the content and have it mostly rearrange it for me (which overlaps nicely with how writing thoughts down helps). That alone leads to insights and progress, because of the “talking about your thing with people makes you have realizations about it” effect. It also generally expresses that it finds the project interesting or says encouraging things that motivate me to keep going, haha. FWIW, the GPT-3.5 version felt like it started well and then increasingly missed the mark over time, but GPT-4 has been quite good for the couple of days I’ve used it (I guess we’ll see how that continues).
At the code level, I do show it fragments of my own code whenever it’s relevant, to serve as an example of the implementation model or of the way I think about things. Sometimes I then ask for a “code review”, which actually works decently, because its lack of context on the rest of the code and any difference in comprehension contribute to the review perspective.
I personally don’t rely on it to provide factual info on APIs and “how to X”, because I’ve noticed it be incorrect about that stuff, and that affected my trust, I guess. And at a UX level I tend to prefer a more “browse”-y flow for discovering those things over a “chat”-y one: getting a broad view of the model, developing some proficiency with it, and drawing my own conclusions, versus only seeing specific local points. In conversations with people I tend to like to just “vibe” more, then go away and let it bounce around in my mind as I do other things — which is how it’s gone with ChatGPT.
The link is now broken, but it can be found on archive.org.