Hah. I regret everything bad I said about statistical modeling: it has, in fact, transcended software and has achieved a working – though, being hidden behind a statistical model, undecipherable – understanding of what it is to be human.
For behold, this AI, just like the human counterparts on whose code it was trained, also cracks under the pressure of incomplete early requirements, and eventually just goes fuck it and writes code which merely passes the unit tests so the manager would get off their back.
The final inferred program is completely wrong. It gets the numbers 0 1 5 14 30 55 91. I’m shocked Meyer didn’t notice this.
0 1 5 14 30 55 91
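For what it's worth, the sequence is recognizable: these are the square pyramidal numbers, i.e. the partial sums of squares. (This only identifies what the inferred program actually outputs; nothing here says what it was supposed to compute.) A quick check:

```python
# The sequence 0 1 5 14 30 55 91 matches the partial sums of squares
# (square pyramidal numbers): S(n) = 0^2 + 1^2 + ... + n^2.
from itertools import accumulate

partial_sums = list(accumulate(k * k for k in range(7)))
print(partial_sums)  # [0, 1, 5, 14, 30, 55, 91]
```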
I’m writing about this in a newsletter and just discovered that all the screenshots are inline data literals. Why
Am… Am I the only one who doesn’t find ChatGPT or Copilot useful? To me, their output feels like code a beginner would write. I find myself spending more time debugging than I save by using foundation models’ code, for that reason.
I found a few situations where it saves time, like writing tests for utility functions. But generally I’m of the opinion that if I find myself writing code that is too obvious or has a lot of repeating patterns, it means that my abstractions aren’t as good as they could be.
It’s pretty handy for naming things. A problem I’ve used it for a couple times is “I have Foo and Bar which are both a category of thing. What are some words for that category of thing?”
Smart. I will give that a try next time I cannot come up with a name.
I ran out of ideas for naming things (especially ‘System’-, ‘Entity’-, ‘Object’-, and ‘Service’-related classes and functions) a long time ago. So I started using Lojban :-).
It’s useful, but nowhere near as useful as people make it out to be. It’s pretty good at giving simple English summaries of topics. It’s not filled with ads or blogspam like search results. But it’s pretty terrible at producing anything beyond trivial code.
It’s like an advanced rubber duck for me. It often helps me get unstuck on a problem but rarely gives me the final answer.
I’ve gotten ChatGPT to spit out fully functional (rather complex) Python scripts which worked exactly as intended (and when they didn’t work, I could tell it what it got wrong, and it would fix it).
It’s not a magic coder wand, but it sure is a good boilerplate generator.
I assume that over time the quality will improve, and that some hybrid approach where the program is actually run will help ensure it’s correct. Most likely programmers will need to get better at writing English, so that they can write effective prompts. Personally I find ChatGPT super useful for getting started with new tech, or creating examples for stuff I haven’t used in a long time, but I often need to tweak or rewrite it. It’s collaborative :)