I really want to see how it does on problems that aren’t beginner programming five-finger exercises. Right now it can get away with something akin to a hash lookup in the latent space, which is impressive in and of itself, of course, but not necessarily actually useful.
Others have already pointed out that the chatbot is very good at coming up with plausible, confident-sounding garbage - I suspect the same will be true of code generation as soon as we get outside its input space.
I am the biggest AI detractor on earth, and this is starting to get scary good. I had my boss describe what I was working on this week as a prompt to this, and the output was surprisingly accurate. And I’m talking domain-laden business logic, not just a random code challenge.
I’m pleased, really. Some stuff has been solved so many times before, it’s good to be able to hand it off to a computerized intern.
I think self-driving cars as we envision them today will never happen. The first step in creating an automated process is creating an automatable process. For business-logic coding and some, if not all, coding challenges, I can see how we’ve already made it automatable.
I also wonder if the generator would do so well with the entire day 1 problem, calories and all.
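For reference, the full day 1 problem (“Calorie Counting”: groups of numbers separated by blank lines; part 1 wants the largest group sum, part 2 the sum of the top three) stays tiny even by hand. A minimal sketch, using the example input from the puzzle statement:

```python
def top_calories(text: str, n: int = 1) -> int:
    # Each elf's inventory is a blank-line-separated block of numbers;
    # sum each block, then add up the n largest totals.
    totals = [sum(map(int, block.split())) for block in text.split("\n\n")]
    return sum(sorted(totals, reverse=True)[:n])

sample = "1000\n2000\n3000\n\n4000\n\n5000\n6000\n\n7000\n8000\n9000\n\n10000"
print(top_calories(sample))     # part 1 -> 24000
print(top_calories(sample, 3))  # part 2 -> 45000
```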
This is starting to get really interesting. I think this deserves a boost every few days. Day 3 was hard, but on day 4 the author used a meta-approach - pretty much the same thing as https://github.com/max-sixty/aoc-gpt, which currently sits 35th on the global leaderboard.
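The meta-approach amounts to handing the puzzle statement to the model and running whatever program it returns. A rough sketch with the model call stubbed out (the prompt wording and function names here are my own guesses, not taken from aoc-gpt, which wires this up to the OpenAI API):

```python
def build_prompt(puzzle_text: str) -> str:
    # Ask for a complete program that reads the puzzle input from stdin.
    return (
        "Solve this programming puzzle in Python. "
        "Read the input from stdin and print the answer.\n\n"
        + puzzle_text
    )

def generate_solution(puzzle_text: str, complete) -> str:
    # `complete` is the model call, injected as a parameter so the
    # sketch stays self-contained; in practice it would be an API
    # request, and the returned string would be executed against the
    # real puzzle input.
    return complete(build_prompt(puzzle_text))

# Example with a fake model that returns a canned program:
fake_model = lambda prompt: "print(42)"
code = generate_solution("Some puzzle text", fake_model)
```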
I’m really curious to see how this scales to the more difficult problems. So far, comprehensible manual solutions have been under 10 lines of code for me (after golfing them a bit).