These images very much remind me of the “plasma” effects from demoscene intros. To me, getting this together with animations and music into a 1kb or 4kb executable is much more impressive than getting it from a generative AI.
Example: https://archive.assembly.org/2023/4k-intro/ihan-perus-by-yzin-humppatehdas
The first half of the post uses CPPNs, which are usually pretty tiny as well. They’re generative neural networks in a sense, but more artisanal than huge-dataset-driven, and usually well under 1k parameters.
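To give a sense of how small that is: a CPPN in its typical form is just a tiny MLP mapping each pixel's coordinates to a color value. This is a minimal sketch of that idea (my own illustration, not the post's code; the layer sizes and the (x, y, r) input convention are assumptions):

```python
import numpy as np

# Hypothetical tiny CPPN: a randomly-weighted MLP mapping (x, y, r) -> value.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 16))   # 3 coordinate inputs -> 16 hidden units
W2 = rng.normal(size=(16, 16))  # hidden -> hidden
W3 = rng.normal(size=(16, 1))   # hidden -> 1 output channel (grayscale)
# Total weights: 3*16 + 16*16 + 16*1 = 320, comfortably under 1k.

size = 64
xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
r = np.sqrt(xs**2 + ys**2)                       # radial coordinate
coords = np.stack([xs, ys, r], axis=-1).reshape(-1, 3)

h = np.tanh(coords @ W1)                         # smooth activations give
h = np.tanh(h @ W2)                              # the plasma-like look
img = np.tanh(h @ W3).reshape(size, size)        # values in [-1, 1]
```

Because the output depends only on the coordinates, you can render at any resolution, and animating a weight or an extra input over time gives exactly the kind of flowing pattern the plasma intros produce.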
The second part does go into giant multi-gigabyte generative networks like Stable Diffusion, but I wanted to highlight the CPPN part as quite different despite the overlap in terminology.