As with everything else, a code-writing AI doesn’t need to be good by some objective measure - it just needs to be better than the typical human programmer. Just how lucid is the typical industrial codebase?
I’m not sure such a “straightforwardness objective” would be so hard to build. If the intention is to make code clear and easy to reason about for us dumb humans, then we could make an objective function which behaves like a dumb human trying to reason about the code.
For example, if we wanted a machine to reason about the *correct* behaviour of code, we could use a symbolic logic system like Prolog. But that’s not the goal here: we want a system which reasons about how people will *assume* the code behaves; to capture that, we could do a few things:
We can use this knowledge to build a machine learning system which makes dumb, human-like predictions about what code will do. This can form part of the objective function of whatever super-smart, logically-perfect system we use to write the code: i.e. “How closely does the dumb, limited, biased AI’s guess about this code’s behaviour correspond to the actual behaviour?”. The smart AI only has control over the code, so to succeed on this objective it must “dumb down” the code into something more easily comprehensible.
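As a toy illustration of that objective, here is a minimal sketch. The “dumb reader” below is a hand-written stand-in for a learned human-like predictor (it only trusts surface cues like the function’s name), and the score is simply the fraction of test inputs where its guess matches the code’s actual behaviour. All names and heuristics here are invented for the example, not part of any real system:

```python
import ast

def naive_prediction(source, x):
    """A deliberately 'dumb' reader: it only looks at the function's
    name, standing in for a learned model of human assumptions."""
    name = ast.parse(source).body[0].name
    if "double" in name:
        return 2 * x
    if "square" in name:
        return x * x
    return x  # no surface cue: predict identity

def straightforwardness(source, inputs):
    """Fraction of inputs where the dumb reader's guess matches the
    code's actual behaviour - higher means more 'obvious' code."""
    env = {}
    exec(source, env)  # run the code to get its real behaviour
    fn = env[ast.parse(source).body[0].name]
    hits = sum(naive_prediction(source, x) == fn(x) for x in inputs)
    return hits / len(inputs)

clear = "def double(x):\n    return x + x\n"
sneaky = "def double(x):\n    return x * 3\n"

print(straightforwardness(clear, range(5)))   # 1.0 - name matches behaviour
print(straightforwardness(sneaky, range(5)))  # 0.2 - only agrees at x = 0
```

A smart code-writing AI optimised against this score can only raise it by making the code’s actual behaviour line up with what a naive reading suggests, which is exactly the “dumbing down” pressure described above.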