1. 18
  1. 37

    Uhm, dithering isn’t a particularly good way to reduce image size, if you define “good” in terms of visual perception at equal file size. Modern software based on perceptual analysis does better than that 80s dithering. Instead of dithering the image with ancient methods first and then using WebP, just use WebP alone; the encoder’s -size/-psnr options let you reach your preferred size with the best possible visual quality.
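
    For what it’s worth, something along these lines is enough (a rough sketch with placeholder filenames; cwebp -longhelp lists the full set of options):

    # Aim for an output of roughly 30 kB; -pass lets the encoder take extra passes to hit the target:
    cwebp -size 30000 -pass 6 photo.jpg -o photo.webp

    # Or aim for a minimum visual quality (PSNR in dB) rather than a size:
    cwebp -psnr 42 -pass 6 photo.jpg -o photo.webp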

    Of course one might also discuss whether each image pays its way. An image doesn’t necessarily justify its load time.

    1. 11

      To add to that:

      Most photo CODECs are designed for things that look like photos. They assume that colour changes are typically gradual, because most photos have large regions of almost the same colour. JPEG really baked in that assumption, because you need an infinite number of cosines to represent a square wave. Newer CODECs are a lot better at sharp discontinuities in terms of quality, but they’re still the worst case for compression ratio.
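
      To put a number on the “infinite number of cosines” point, the Fourier series of an ideal square wave (written in sines here, but the DCT story is the same) is

      \operatorname{sq}(x) = \frac{4}{\pi} \sum_{k=0}^{\infty} \frac{\sin\big((2k+1)x\big)}{2k+1}

      so the coefficients only fall off as 1/n. Truncating that series, which is roughly what lossy coding does, is what produces the ringing you see around sharp edges in heavily compressed JPEGs.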

      When you dither, you are making every pair of adjacent pixels a sharp colour discontinuity. That’s going to hit the worst case for any CODEC optimised for photos, so you’ll get much worse quality for the same file size than if you don’t dither first.

      1. 2

        I remember reading a discussion on the Low-tech Website (a fully solar-powered website) about the same topic. Dithering simply is not the appropriate tool for this purpose; you get better results with more conventional tools.

        The comments on that page are very informative.

        1. 3

          In a way I think Low-Tech Magazine makes dithering its images an aesthetic choice: it has a bit of retro charm and ties into the wider resurgence of dithering (see Return of the Obra Dinn, a game characterized by this style).

          That said I tried dithering some graphics with GMIC but I have to say that the LTM results are somewhat more pleasing. If someone knows how to reproduce that, I’d be interested.

      2. 17

        It’s a neat aesthetic choice, and go for it if you like it, but it doesn’t make sense from a pure compression/performance perspective. Dithering is not very compressible (there’s randomness, and byte-oriented compressors struggle below 8bpp). OTOH even good ol’ JPEG looks fine at a rate of 1 bit per pixel (roughly 260 kB for a 1920×1080 image).

        1. 22

          Very odd suggestion. If you want to make files smaller, use appropriate compression.

          1. 7

            Throwing the original and dithered examples into ImageOptim for two seconds gives massive file size reductions with no noticeable visual changes.

            After optimisation, it seems like the only examples that were significantly smaller than the original image were the (more unpleasant to my eye) 2x2 ordered dithering and GameBoy versions.

            1. 7

              I like the retro style of dithered images. It’s a stylistic choice with the side benefit of performance. Which is a nice change, given that most web design trends of the last decade or two have only increased file size. But it’s a stylistic choice first and a performance one second, since there are much less noticeable ways (WebP) to achieve similar compression.

              1. 6

                > The Apollo 11 guidance computers had just 72kB of memory.

                The Apollo 11 guidance computers also didn’t contain any images. The type of calculations the Apollo 11 computer was required to do was, comparatively speaking, quite limited in scope and complexity.

                Here’s the New York Times the day after the moon landing. Know what’s on the front page? An image. Three of them! In the best quality that the technology and economics of the time allowed. The NYT of 1969 probably wouldn’t have fitted in 72kB of memory, and that’s perfectly fine.

                In short, it’s a silly comparison.

                > Oversized images have a negative impact on your site’s speed, accessibility, SEO, and on the climate.

                I did some calculations on this before, and the effect on the climate is extremely minimal. Bandwidth doesn’t cost that much power in the first place, and saving, say, 0.5MB on every site is blown away by just one Netflix movie a month, both of which pale in comparison to, say, eating a steak. The savings are so marginal that they can mostly be safely ignored.

                I don’t know what SEO problems there are (I don’t think there are any), and if used well it shouldn’t matter for speed or accessibility whether your site uses no images or 10MB of images; accessibility especially is about how you use images, not how large they are. Actually, by reducing the quality you’re only going to make things harder to see for people with low vision, or with suboptimal hardware on which images don’t look too great in the first place.

                1. 6

                  I run a normal-looking news website. Our homepage looks like any other homepage with a bunch of images in a river. WebpageTest says that it’s only 287KB of images. I think part of that is because the images are marked as lazy-loading, so it’s probably not loading the bottom of the river, but still. You can run a normal-looking website with decent performance.

                  The problem is that Google’s DoubleClick code is really bad and it lets advertisers run arbitrary JavaScript, which is even worse. Once you have those on your page, performance is dead, and so no one even tries anymore. Just don’t use DoubleClick and everything else is easy.

                  1. 3

                    My measurements (Chrome devtools) show 407kB (transferred, >500kB size) of images on page load in a normal desktop window size. By the time you scroll to the footer it’s 927kB. That’s still great by modern web standards. For the front page of a news site like that I think you’ve found a nice balance and I applaud your work!

                    But on pages that only really have text I think <100kB total data is a reasonable and realistic target. Even that takes time to load on a slow connection. You’re right about the problem: there’s always some reason the game is already lost. And I’ve myself used the arguments “but X is already so big that a couple hundred kB means nothing” and “this popular site loads 10MB of assets, at least we’re better than that”. But we as an industry could certainly do much, much better if we just really tried.

                  2. 5

                    Results of some basic experiments with using cwebp to create webp images from the examples here:

                    my-dog.jpg    120 kb
                    my-dog.png    424 kb (created from the jpeg for comparison, max compression)
                    my-dog.webp   33 kb  (created from the jpeg with default settings)
                    
                    
                    # Dithered version:
                    my-dog-dithered.png             48 kb
                    my-dog-dithered.webp            138 kb (the lossy compression doesn't like dithering)
                    my-dog-dithered-lossless.webp   25 kb  (using the -lossless flag does a better job compressing)
                    

                    So from this sample size of one, dithering can usefully reduce file size but using a better codec is a better place to start. Of course you can put as much work as you want into tuning image compression parameters, but I sure don’t have time for that. I’m sure there’s an image codec out there somewhere that is designed to squeeze the ever-living heck out of dithered images; personally, I like broad-and-easy solutions.
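
                    For anyone who wants to poke at this themselves, commands roughly like these should reproduce the comparison (a sketch rather than the exact invocations; filenames match the table above):

                    # Lossy WebP from the JPEG, default settings:
                    cwebp my-dog.jpg -o my-dog.webp

                    # Lossy WebP of the dithered PNG (struggles, as noted above):
                    cwebp my-dog-dithered.png -o my-dog-dithered.webp

                    # Lossless WebP of the dithered PNG:
                    cwebp -lossless my-dog-dithered.png -o my-dog-dithered-lossless.webp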

                    1. 2

                      It’s interesting to note that both JPEG and WebP (and most other photo standards) have a progressive mode. The basic idea of most of these CODECs is to turn the image into a wave and then build an approximation of that wave that minimises loss according to some psychovisual model. This lends itself quite naturally to progressive level of detail support because your final image is the sum of a set of waves and so you can sort them by magnitude (rather than sorting them by image location) and download the ones that have the most visual impact first.

                      I remember 20 years ago, there was a lot of discussion about allowing browsers for low-bandwidth devices or with low screen resolutions to give up downloading the files and drop the connection after they’d got enough for a particular resolution / quality. I don’t think it ever went anywhere, but it’s worth noting that you can do almost the same thing in HTML now and provide different quality images depending on the screen resolution using the <picture> tag and the [<source> tag with the media attribute](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/source#attr-media). This slightly increases the storage needed on the server (you need the different-resolution images) but it lets you provide small-screen devices a lower-quality version of the image.

                      It’s a shame that media queries don’t let you detect metered connections. If you want to do that then you have to use JavaScript to rewrite the image source based on information from the network connection object.

                      It would be really nice if progressive image formats could be fetched with HTTP range requests, with the UA deciding at which point it has sufficient quality based on its own heuristics.
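
                      You can fake that by hand today, at least as a demo; something like this (a sketch with a placeholder URL, and how gracefully the truncated file decodes depends on the decoder):

                      # Fetch only the first ~20 kB of a progressive JPEG:
                      curl -r 0-19999 -o partial.jpg https://example.com/photo.jpg

                      # Decode what arrived; a progressive scan gives a coarse but complete
                      # picture, usually with a "premature end of file" style warning:
                      djpeg -outfile partial.ppm partial.jpg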

                    2. 3

                      I think it could be better sold as an intentional stylistic choice that could be used in some scenarios… and just happens to have performance benefits.

                      1. 3

                        It doesn’t have performance benefits for any of the examples in the article. It’s purely stylistic.

                        Take the dog photo the author uses as an example. The smallest version shown is this 8K green PNG. But wouldn’t you rather have this 8.1K JPEG instead? Or this 7.4K WebP? Squoosh makes this easy.

                        Dithering can be useful in some specific circumstances, for example when reducing the palette of an image lets you step down to a smaller PNG palette size. It’s a specialized technique, not some panacea. The author’s attempts to sell it as such are ridiculous, and the notion that it’s somehow relevant to climate change is absurd. We have much better tools to reduce image size, and we can make the case for using them without muddying the waters by suggesting a stylistic change is required.
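
                        If that’s the effect you’re after, an ImageMagick one-liner along these lines is the usual route (a sketch; the colour count and filenames are arbitrary, and on ImageMagick 7 the command is magick rather than convert):

                        # Reduce to a 16-colour palette with Floyd-Steinberg dithering,
                        # forcing an 8-bit palette PNG:
                        convert photo.png -dither FloydSteinberg -colors 16 PNG8:photo-16.png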

                      2. 3

                        Looks like someone discovered markdown.

                        1. 3

                          I agree with the arguments that more aggressive lossy compression might be better than color-reduction+dithering+lossless compression. But I do think it’s an interesting aesthetic choice. I just found a Gemini capsule that reduces its images’ colors and halftones them rather than conventionally dithering. Haven’t checked the actual file sizes.
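
                          If you want to play with that look, ImageMagick’s ordered-dither option can approximate a halftone (a sketch; h8x8a is one of its built-in halftone threshold maps, and the filenames are placeholders):

                          # Greyscale, then an angled-halftone ordered dither:
                          convert photo.png -colorspace Gray -ordered-dither h8x8a photo-halftone.png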

                          1. 1

                            I found this article interesting, as it seems that this way you can strike a good balance between having images on your site and reducing the size of the website.

                            Additionally, for the use case of adding a picture to the about page, it adds a bit of privacy, as the dithering obscures some details in the image. As such I feel less hesitant about putting an image of myself on an about page. The inspiration comes from Matthew Butterick’s Beautiful Racket, although his rendition seems like it was not automatic dithering.

                            1. 1

                              For pure information websites, stick to textual diagrams and art: http://len.falken.ink/philosophy/is-privacy-in-all-our-interests.txt

                              It gets the point across.

                              Otherwise I agree: compress and dither the hell out of images appropriately.

                              1. 13

                                The image is unreadable on my phone

                                1. 6

                                  Ironically, this is what it renders like in my browser (Firefox 92 on macOS): https://x.icyphox.sh/DHcX6.png

                                  1. 1

                                    Yep, browsers suck at plain text. It’s a pretty sad state of affairs.

                                    1. 10

                                      So bleeding-edge Unicode good, 1990s graphics codecs bad?

                                      Also, that non-ASCII art is really going to mess with screen readers. It’s non-semantic as heck.

                                      1. 3

                                        Now I wonder why there’s no semantic element for ASCII art in HTML5. RFCs are full of ASCII diagrams, for example.

                                      2. 9

                                        Or, an alternative reading: that “plain text” is composed of graphic characters that were only added to Unicode last year, and is going to look just as broken in any other application on the same system without a suitable font, so perhaps graphics should be transferred/presented using an actual graphics format.

                                        1. 1

                                          My UTF-8 compatible terminal gives pretty much the same output. Find an example that doesn’t use ancient/outdated character sets.