Abstract:
The ability to generate natural language sequences from source code snippets can be used for code summarization, documentation, and retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine translation (NMT), have achieved state-of-the-art performance on these tasks by treating source code as a sequence of tokens.
We present CODE2SEQ: an alternative approach that leverages the syntactic structure of programming languages to better encode source code. Our model represents a code snippet as the set of paths in its abstract syntax tree (AST) and uses attention to select the relevant paths during decoding, much like contemporary NMT models.
We demonstrate the effectiveness of our approach for two tasks, two programming languages, and four datasets of up to 16M examples. Our model significantly outperforms previous models that were specifically designed for programming languages, as well as general state-of-the-art NMT models.
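To make the abstract concrete: the core representation is the set of paths between pairs of leaves in the syntax tree, where each path is the sequence of AST node types from one leaf up to their lowest common ancestor and back down to the other. A rough sketch of that idea, using Python's built-in `ast` module (the real model additionally samples a bounded number of paths, splits terminals into subtokens, and encodes paths with LSTMs — none of that is shown here):

```python
import ast
import itertools

def leaf_paths(source):
    """Enumerate leaf-to-leaf AST paths for a code snippet.

    Returns (token_a, [node types from a up to the LCA and down to b], token_b)
    for every pair of leaf tokens (names and constants). A simplified
    illustration of the code2seq-style representation, not the paper's
    actual extraction pipeline.
    """
    tree = ast.parse(source)
    leaves = []  # (token string, list of AST nodes from the root to the leaf)

    def walk(node, trail):
        trail = trail + [node]
        if isinstance(node, ast.Name):
            leaves.append((node.id, trail))
        elif isinstance(node, ast.Constant):
            leaves.append((repr(node.value), trail))
        for child in ast.iter_child_nodes(node):
            walk(child, trail)

    walk(tree, [])

    paths = []
    for (tok_a, trail_a), (tok_b, trail_b) in itertools.combinations(leaves, 2):
        # Longest common prefix (by node identity) ends at the lowest
        # common ancestor of the two leaves.
        i = 0
        while i < min(len(trail_a), len(trail_b)) and trail_a[i] is trail_b[i]:
            i += 1
        up = [type(n).__name__ for n in reversed(trail_a[i - 1:])]  # leaf -> LCA
        down = [type(n).__name__ for n in trail_b[i:]]              # LCA -> leaf
        paths.append((tok_a, up + down, tok_b))
    return paths

# For "x + y" the single path reads: Name -> BinOp -> Name.
print(leaf_paths("x + y"))
```

The decoder then attends over these path encodings instead of over a flat token sequence, which is the structural difference from standard NMT-style seq2seq baselines.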
I’m not sure what to think of this.
On one hand, it’s an interesting problem and a neat idea, and I can think of some cool ways to enhance the output for specific languages (for example, using plists in Common Lisp to store hints for the natural language generation, or annotations in Python for the same purpose).
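To spell out what I mean by the Python variant of that idea: function annotations are ordinary runtime metadata, so a developer could stash human-written phrases there for a summarization model to read alongside the syntax tree. A hypothetical sketch (the hint strings and the function itself are invented for illustration):

```python
def moving_average(
    xs: "the raw sensor readings",        # hint strings, not types:
    window: "the smoothing window size",  # a summarizer could read these
) -> "the smoothed series":
    """Compute a simple moving average over xs."""
    return [
        sum(xs[i:i + window]) / window
        for i in range(len(xs) - window + 1)
    ]

# The hints are available at runtime via the standard __annotations__ dict,
# no custom parser needed:
hints = moving_average.__annotations__
print(hints["window"])  # "the smoothing window size"
```

Annotations are normally used for type hints, so mixing in prose like this would fight the typing ecosystem; decorators or structured docstrings would probably be the less disruptive home for such hints.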
But on the other hand, I’m a little concerned about the lack of examples, and I don’t understand the criteria they use to evaluate the quality of the generated text. The two examples in the paper are nearly useless, IMO; they’re only one step removed from comments like “Add 5 to x”.
To be useful, the natural language explanations need to be at a higher level of abstraction, and without “general AI” there’s no way they can do that using only the syntax tree. So the problem circles back to the developer needing to add annotations and/or comments, which is right back where we are now.