An update was posted around three hours ago: after sorting, instead of continuing with the next neighbor, which is likely very similar, jump ahead to the next neighbor that shares no letters. It runs in under three seconds now!
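For context, that search is easy to sketch once each word is encoded as a 26-bit letter-set mask; the names below are mine, not the author's, and Python stands in for whatever the original is written in:

```python
def to_mask(word):
    # One bit per letter a-z; anagrams collapse to the same mask.
    m = 0
    for c in word:
        m |= 1 << (ord(c) - ord('a'))
    return m

def find_cliques(words):
    # Keep 5-letter words with 5 distinct letters, deduplicated, sorted.
    masks = sorted({to_mask(w) for w in words
                    if len(w) == 5 and len(set(w)) == 5})
    results = []

    def rec(start, used, chosen):
        if len(chosen) == 5:
            results.append(chosen[:])
            return
        for i in range(start, len(masks)):
            if masks[i] & used:
                continue  # shares a letter with the set so far: prune
            chosen.append(masks[i])
            rec(i + 1, used | masks[i], chosen)
            chosen.pop()

    rec(0, 0, [])
    return results
```

The actual speedup described above comes from how far ahead you jump in the sorted list, but the single `&` per candidate is the core of the pruning.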

It’s surprising how fast you can make a worst-case O(n^5) algorithm just by improving the representation. I wonder if you could improve this even more by having a 2^26-bit array (8 MB, so it should fit into modern L3 caches) representing whether a word with that set of letters exists, and looking up all of the possible words?
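A sketch of that table, assuming words are already encoded as 26-bit letter-set masks (function names are made up):

```python
SIZE = 1 << 26          # one bit per subset of the alphabet: 8 MiB total

def build_table(masks):
    # Bit m is set iff some word has exactly the letter set m.
    table = bytearray(SIZE >> 3)
    for m in masks:
        table[m >> 3] |= 1 << (m & 7)
    return table

def present(table, m):
    # O(1) membership test for a candidate letter set.
    return (table[m >> 3] >> (m & 7)) & 1
```

To use it you'd enumerate candidate letter sets directly (e.g. 5-letter subsets of the letters still unused) and test each against the table, instead of scanning the remaining word list.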

Right? I feel like I would have done the “encode in an int” bit, but I think I would have spent far too long trying to devise some way to avoid O(n^5), which is clearly impractical :D
