As a layman caught up in the recent hype, I found this Twitter thread easier to follow for an overview, but I’m really glad the details are still so accessible for those who can handle the math c:
Seems like diffusion models could be used to hack “blurring”.
When people take screenshots of text, it’s common to obfuscate sensitive text like passwords by painting over it or blurring it. Blurring is discouraged because the original text can be brute-forced: blur each candidate string the same way and compare it against the leaked image. In other words, blurring leaves a lot of information intact, and that residual information can be used to reconstruct the original.
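A minimal sketch of that brute-force idea, using toy glyph bitmaps in place of actually rendered text (the glyph arrays, box blur, and MSE comparison are all illustrative assumptions, not any particular real attack tool):

```python
import numpy as np

def box_blur(img, k=3):
    """Naive box blur: each pixel becomes the mean of its k*k neighborhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# Toy "glyphs": tiny bitmaps standing in for rendered characters.
glyphs = {
    "A": np.array([[0, 1, 0], [1, 0, 1], [1, 1, 1], [1, 0, 1]], dtype=float),
    "B": np.array([[1, 1, 0], [1, 1, 0], [1, 0, 1], [1, 1, 0]], dtype=float),
    "C": np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0], [0, 1, 1]], dtype=float),
}

# The "leaked screenshot": someone blurred the secret character before posting.
leaked = box_blur(glyphs["B"])

def recover(blurred):
    """Brute force: blur every candidate the same way, pick the closest match."""
    return min(glyphs, key=lambda ch: np.mean((box_blur(glyphs[ch]) - blurred) ** 2))

print(recover(leaked))  # → "B"
```

Because the attacker can reproduce the blur exactly, the correct candidate matches with zero error; real attacks do the same thing with rendered fonts and estimated blur parameters, which is why painting over with a solid color is the safe option.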
It seems like you could train a diffusion model to “un-blur” text or even go a step further and un-blur faces too.
You can do that without these models. Reversing such effects applied to photos has been used to restore the faces of child abusers who subsequently went to jail.