Lead author is the node.js guy, right?
Yes! I wondered what became of him. He came back on my radar last year with his automatic colorization experiments, but there was no clear “where is he now?” resolution. It now appears he’s at Google! And working on this stuff full time. Very cool.
It’s a cool experiment, but I’m trying to think what this could be used for in the real world. Surely the details in the enhanced image are ultimately made up, so for example you can’t rely on them to identify people or objects in the image. Actually, it seems even worse: it produces an image that could resemble some other person not present in the original image. That’s a rather bad outcome in the stereotypical situation of enhancing security camera footage.
I guess this could be used to save bandwidth where you transmit an image but don’t require precise reproduction on the other end? Sort of like lossy compression, but instead of e.g. artefacts you get an image that looks high quality yet isn’t quite the original.
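Back-of-envelope on the bandwidth idea: if the sender downscales by 4x in each dimension and the receiver hallucinates the detail back with a super-resolution model, the raw pixel payload shrinks by 16x. A tiny sketch (the resolution and bytes-per-pixel numbers are just illustrative assumptions, not from the thread):

```python
def downscale_savings(width, height, factor, bytes_per_pixel=3):
    """Raw payload sizes for a full image vs. a factor-x downscaled one.

    Ignores entropy coding (JPEG etc.) -- this is only the pixel-count
    argument for why transmit-small-then-super-resolve saves bandwidth.
    """
    full = width * height * bytes_per_pixel
    small = (width // factor) * (height // factor) * bytes_per_pixel
    return full, small, full / small

full, small, ratio = downscale_savings(1920, 1080, 4)
print(full, small, ratio)  # 6220800 388800 16.0
```

In practice both sides would still run a real codec, so the actual savings depend on how well the codec already exploited the redundancy the model is now filling in.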
The Google+ developer blog posted recently about using exactly this method to reduce bandwidth use; they’re already doing it in prod :)