Writing tests is one of my favourite things to do with Copilot, because they are often a bit repetitive and it figures out the patterns I like to use really fast.
I wrote up a TIL around this here: https://til.simonwillison.net/gpt3/writing-test-with-copilot
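To illustrate the workflow (this is a made-up sketch, not code from the TIL): you write the function and the first test by hand, and Copilot's suggestions for the remaining tests tend to follow the same pattern. The `slugify` function and test names here are invented for illustration.

```python
def slugify(title):
    """Toy function under test: lower-case a title and hyphenate spaces."""
    return title.lower().replace(" ", "-")


def test_slugify_simple():
    assert slugify("Hello World") == "hello-world"


# Once it has seen the pattern above, Copilot typically suggests
# further tests in the same shape, e.g.:
def test_slugify_single_word():
    assert slugify("Hello") == "hello"


def test_slugify_longer_title():
    assert slugify("A Tale of Two Cities") == "a-tale-of-two-cities"
```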
Also a nice illustration of the correctness problems: the implementation is incorrect for books whose titles end in, e.g., a question mark or an exclamation mark.
On the other hand, it’s a mistake I would totally expect many naive implementations by humans to also make on the first go.
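A hedged sketch of that failure mode, since the generated code itself isn't shown here: a naive implementation that only checks for a trailing full stop doubles up punctuation on titles ending in ? or !. The `end_sentence` name and logic are an assumption for illustration, not the actual generated code.

```python
def end_sentence(title):
    # Naive version: only considers "." as sentence-ending punctuation.
    if not title.endswith("."):
        return title + "."
    return title


print(end_sentence("Dracula"))
# Dracula.
print(end_sentence("Do Androids Dream of Electric Sheep?"))
# Do Androids Dream of Electric Sheep?.   <- wrong: doubled punctuation


def end_sentence_fixed(title):
    # Treat ".", "?" and "!" as all valid sentence endings.
    if not title.endswith((".", "?", "!")):
        return title + "."
    return title
```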
Makes me wonder what Copilot would generate if you said it should work for the rest of the world too. RTL languages, the lot.
Human-written software is full of such problems, so I suppose Copilot will propagate them.