Neat! Dithering is one of those nearly forgotten great image processing techniques. I’ve often wondered why digital video playback doesn’t dither; it’d really help with the posterization in dark areas.
I’d never thought about the problem of presenting dithered images in browsers that are resizing images and not respecting individual pixels. Is there any practical reason to do this on modern hardware or is it just an interesting graphical effect?
There’s something slightly related: film grain synthesis. Basically, the encoder notes the noise in the input (which would be expensive to encode and would cause artifacts at lower bitrates), filters it out, records how strong it was, and encodes the remainder. The decoder then simply adds a matching amount of random noise on playback.
If you were to take a pristine CG video, add a bit of noise to it, and then encode it with AV1, the result on playback would theoretically be a lot like the dithering you’re asking for.
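A toy sketch of that pipeline, with two loud simplifications: plain Gaussian noise stands in for AV1’s autoregressive grain model, and a perfect denoiser is assumed (the encoder is pretended to recover the clean frame exactly):

```python
import numpy as np

rng = np.random.default_rng(0)

# The frame the encoder sees: pristine content plus sensor noise.
clean = np.full((64, 64), 0.5)
noisy = clean + rng.normal(0.0, 0.05, size=clean.shape)

# Encoder side: filter the noise out and transmit only a compact
# description of it. Here the "description" is just the residual's
# standard deviation; AV1 fits a full grain model per plane.
denoised = clean  # assume the denoiser recovered the clean frame
grain_std = float((noisy - denoised).std())

# Decoder side: decode the clean frame, then synthesize fresh noise
# with matching statistics instead of ever transmitting the noise.
playback = denoised + rng.normal(0.0, grain_std, size=denoised.shape)
```

The point is that the synthesized grain is statistically similar but not sample-for-sample identical to the original, which is exactly why it compresses to almost nothing.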
Dithering like this tends to look bad for video: the dot pattern moves around from frame to frame, leading to distracting visible artifacts.
As for your second point, resizing a dithered image is always going to lead to artifacts. Resizing down has to throw information away, either by discarding pixels outright (destroying the dot pattern) or by blending neighboring pixels together (blurring the image, which is exactly what dithering was trying to avoid in the first place). Resizing up adds pixels, but there is still no good way to preserve the dot pattern.
You can try it if you want. Screenshot one of the images on this page and try to resize it to see what I mean.
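You can also see both failure modes numerically. This sketch ordered-dithers a horizontal gradient with the classic 2×2 Bayer matrix (an illustrative stand-in for whatever algorithm the page uses), then downscales 2× two ways: decimation keeps the image binary but wrecks the pattern, while box averaging keeps the tones but reintroduces the grays that dithering removed:

```python
import numpy as np

# Horizontal gradient, ordered-dithered with the 2x2 Bayer matrix.
bayer2 = np.array([[0.0, 2.0], [3.0, 1.0]]) / 4.0 + 1.0 / 8.0  # thresholds
h, w = 64, 64
gradient = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
thresholds = np.tile(bayer2, (h // 2, w // 2))
dithered = (gradient > thresholds).astype(float)  # strictly 0s and 1s

# Downscale 2x by decimation: still binary, but we only ever sample
# one of the four Bayer thresholds, so the pattern is gone.
decimated = dithered[::2, ::2]

# Downscale 2x by box averaging: tones survive, binariness does not.
averaged = dithered.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

The averaged result is essentially the original grayscale gradient again: the resize undid the dithering.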
Posterization in video is from bad encoding settings (DC quantization set too aggressively). The decoder has nothing to dither — the video data has lost the gradient and falsely says the edges are meant to be there.
That is a good look — for the right kinds of content, at least. Apparently a big part of why, besides the dither pattern itself, is that it has “incomplete” error diffusion: the sum of the error terms is only 0.75 instead of 1, which effectively stretches the contrast. An area with a value of < 1/8 or > 7/8 will turn into solid black or white, without any scattered pixels to hint at detail. But the reduced propagation also means that edges don’t “bleed” as far, so everything looks a bit sharper.
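A minimal Atkinson implementation makes the “incomplete diffusion” point concrete. Each of six neighbors gets 1/8 of the error, so only 3/4 of it survives; on a flat region of value v the accumulated value converges to v/(1 − 3/4) = 4v, which for v < 1/8 never reaches the 1/2 threshold, hence solid black:

```python
import numpy as np

def atkinson(img):
    """Atkinson error diffusion: 6 neighbors each get 1/8 of the
    error, so 2/8 of it is deliberately dropped."""
    buf = img.astype(float).copy()
    out = np.zeros_like(buf)
    h, w = buf.shape
    # (dy, dx) offsets from the current pixel, each weighted 1/8.
    taps = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if buf[y, x] >= 0.5 else 0.0
            err = (buf[y, x] - out[y, x]) / 8.0
            for dy, dx in taps:
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    buf[y + dy, x + dx] += err
    return out
```

Feed it a flat 10% gray patch and every pixel comes out black, where a kernel that conserves all of the error (Floyd–Steinberg, Sierra) would still scatter some white pixels.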
Played around with dithering as a weekend project as well.
There are a substantial number of dithering algorithms, which differ mostly in the diffusion matrix they use. It can be fun to compare how they look stylistically. Atkinson dithering is interesting because it tends to increase the contrast of the resulting image. I’m also partial to the look of Sierra dithering, which I haven’t seen people talk about much.
Interestingly, while most of these algorithms originate in academia (Atkinson being an exception!), Sierra dithering seems to originate from a post on the CIS Graphics Support Forum from a user named Frankie Sierra. About a year ago, out of curiosity, I tried to track down Frankie Sierra and came across a LinkedIn profile with that name for a person who had worked at 3DO (the video games publisher) in the 90s. Since then, I’ve wondered whether this was the same Frankie Sierra and whether Sierra dithering was used in 3DO’s games like Heroes of Might and Magic.
Anyway, if you’re curious about some of these algorithms, I implemented several of them in Rust here. That repo also has some sample images demonstrating all the different algorithms.
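The whole family really is one scan loop with a swappable kernel. This is not the repo’s code, just an illustrative Python sketch; the taps are transcribed from the usual references (Sierra’s three-row kernel sums to 32/32, Atkinson’s to 6/8):

```python
import numpy as np

# (dy, dx, weight) taps per algorithm; weights are divided by `div`.
KERNELS = {
    "floyd-steinberg": (16, [(0, 1, 7), (1, -1, 3), (1, 0, 5), (1, 1, 1)]),
    "atkinson": (8, [(0, 1, 1), (0, 2, 1), (1, -1, 1),
                     (1, 0, 1), (1, 1, 1), (2, 0, 1)]),
    "sierra": (32, [(0, 1, 5), (0, 2, 3),
                    (1, -2, 2), (1, -1, 4), (1, 0, 5), (1, 1, 4), (1, 2, 2),
                    (2, -1, 2), (2, 0, 3), (2, 1, 2)]),
}

def error_diffuse(img, kernel):
    """Generic error diffusion over a grayscale image in [0, 1]."""
    div, taps = KERNELS[kernel]
    buf = img.astype(float).copy()
    h, w = buf.shape
    for y in range(h):
        for x in range(w):
            new = 1.0 if buf[y, x] >= 0.5 else 0.0
            err = buf[y, x] - new
            buf[y, x] = new
            # Push the quantization error to not-yet-visited pixels.
            for dy, dx, wt in taps:
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    buf[y + dy, x + dx] += err * wt / div
    return buf
```

Because Floyd–Steinberg and Sierra conserve the full error, they roughly preserve mean brightness; Atkinson trades that away for its contrasty look.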
Very interesting! I was not familiar with Sierra dithering, but judging by your sample images it gives very nice results: fewer artifacts on gradients and better-preserved contrast, at the cost of a bigger matrix.
Thanks! Here’s a little bibliography of sources that I found useful when I was digging into this.
Digital Half Toning or Dithering: an old and very detailed readme that I dug out of the web archive. Interesting for its history and breadth of content.
Ditherpunk: excellent for how it digs into some concepts like blue noise, Bayer, and Riemersma dithering and ties that back to the technical approach used by Obra Dinn.
Reducing Colors in an Image: a nice visual explainer of how these algorithms can work with non-monochromatic palettes.
Flooring the devicePixelRatio means it doesn’t look right with fractional scaling. To make it actually sharp on my laptop (1080p, 1.5 scaling) I have to zoom out to 66% or in to 133%; otherwise it’s blurry. Also, it’s probably a good idea to add image-rendering: pixelated to the canvas so it never gets scaled in a way that blurs it, though that won’t fix your issues on fractionally scaled displays (some canvas pixels will be two screen pixels wide).
I only discovered the existence of fractional device scaling at the last minute; flooring it is a cheap hack to get the algorithm to work but, as you say, it gives bad results. Fractional device scaling really mucks with my goal of pixel-perfect results, and I am not sure there is a great solution.
Your suggestion of image-rendering: pixelated is a good one, I’ll add that to my list of TODOs.
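For reference, the suggested rule is a one-liner; the bare canvas selector here is an assumption, and scoping it to the component’s own canvas is up to the implementation:

```css
/* Ask the browser for nearest-neighbour scaling, so the dither
   dots stay crisp whenever the canvas is scaled at all. */
canvas {
  image-rendering: pixelated;
}
```

As noted above, this only prevents blurring; on fractionally scaled displays some dots will still render one physical pixel wide and others two.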
This is my weekend project.
The original version had an embarrassing bug which gave terrible results in some browsers if the zoom level was not set to actual size.
This is now fixed, sorry for the inconvenience.
The game Return of the Obra Dinn has dithered graphics and goes out of its way to solve the unstable dot pattern problem. As a fan of dithering, I thought you might enjoy the article about how it does it, so I looked it up.
It’s such a good write-up, and the result is so good! Sadly not applicable to dithering video, unless I’m mistaken?
You could probably hook up the Rust code to your HTML thing with WASM, if you wanted to try out different algorithms in the browser…
This is super cool! Thanks so much for sharing it. This art style will always be gorgeous to me; it’s a bit rare these days…
Pet peeve: you should not (though absolutely everyone does) name your custom element <logical-name>; you should name it <username-logical-name>. There is a single global namespace for custom elements: if your web component is named <dither-image>, then I cannot name my custom element <dither-image>. Really, the moral is that custom elements are a failure of an API and the browser vendors should refuse to extend it any more, but if people insist on using it, they should use it correctly and only publicly share custom elements that are properly namespaced, in this case as <sheephorse-dither-image>.
You are correct, which is why the tag name is <as-dither-image>; the as stands for Andrew Stephens.
Ha! 😅 Too clever for me. I figured it was “as” as in “as in”.
Wow, great work. I think that dithering style is why I have such fond memories of the 9″ monochrome Macs.
Your results look fantastic and I enjoyed the write-up. Thanks for sharing it.