1. 47
  1.  

    1. 9

      If this was interesting to you, please take a look at (trying to) render raw images from a camera! I looked into it once while doing some astrophotography processing, and it’s a whole other can of worms. You’ve got debayering, HDR tone mapping, white balance, color temperature, and more… Often you need to take the actual lens into account too, so the image isn’t distorted.

      Turns out, image processing is really important to get a good image out of a digital sensor, and phone cameras really do take that to a whole new level.
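
      A minimal sketch of what that pipeline looks like, assuming a hypothetical RGGB Bayer mosaic and made-up black/white levels and white-balance gains (real values come from the camera’s metadata):

      ```python
      import numpy as np

      def develop_rggb(raw, wb=(2.0, 1.0, 1.5), black=64, white=16383):
          """Toy 'development' of a single-channel RGGB Bayer mosaic:
          normalize sensor counts, naive half-resolution debayer, white
          balance, then a display gamma. Real raw processors do all of
          this far more carefully (demosaicing filters, colour matrices,
          lens corrections, tone mapping)."""
          # Normalize sensor counts to [0, 1] using the black/white levels.
          img = np.clip((raw.astype(np.float32) - black) / (white - black), 0.0, 1.0)

          # Naive half-resolution debayer: one sample per 2x2 RGGB tile.
          r = img[0::2, 0::2]
          g = (img[0::2, 1::2] + img[1::2, 0::2]) / 2.0
          b = img[1::2, 1::2]
          rgb = np.stack([r, g, b], axis=-1)

          # White balance: per-channel gains, then re-clip.
          rgb = np.clip(rgb * np.array(wb, dtype=np.float32), 0.0, 1.0)

          # Display transform: gamma 1/2.2 as a stand-in for the sRGB curve.
          return rgb ** (1.0 / 2.2)
      ```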

      1. 4

        I once considered making an image viewer, had a git init and a bucketful of bookmarks and everything. I see now that not going further down that path was probably one of the best choices in my life =P

        Speaking of, what do people here use on Linux? I’ve never found one I’m happy with (loads fast, respects colord colour profiles, has basic editing ops, ok UI)

        1. 1

          what do people here use on Linux? I’ve never found one I’m happy with (loads fast, respects colord colour profiles, has basic editing ops, ok UI)

          I use Google Chrome, or occasionally GIMP, but neither of those meets your criteria.

        2. 3

          The answer to why nobody uses image format X is nearly always “patents”.

          1. 5

            It’s even worse for video codecs. It’s crazy how much technology and free software is held back by these patents. Recently, one of my favorite Age of Empires 2 streamers (Memb) switched to HEVC/H.265 to support higher quality and resolution (4K) streaming and Mozilla just refuses to support this codec on Linux. Interestingly, they recently shipped support for it on Windows, where Firefox can take advantage of infrastructure provided by Windows, circumventing the need for Mozilla to get a license.

            Anyway, this does not explain the slow uptake of WebP, which is licensed very permissively. I think this format just fell victim to the fact that its predecessors are good enough. Yes, I can squeeze out 100KB more with this superior codec. Do I care? Not really. I would care if I served and stored a trillion such images, but I don’t, and most people don’t. This image format caters to Google and other large companies, not the users.

            1. 2

              Do I care? Not really. I would care if I served and stored a trillion such images, but I don’t, and most people don’t.

              There’s a bootstrapping problem as well. Until WebP is as well supported as PNG, you don’t get to switch to WebP; you have to support both PNG and WebP. This means you need to either pay for more storage space (to store both) or dynamically transcode from one to the other on demand, as sketched below (which is fine, they’re lossless, but now you’re burning CPU to save bandwidth). And then maybe build a caching layer on top so that the most popular images are stored in both formats and others are stored in one format or the other. And at the end, you’ve built a load of complexity to save some bandwidth.

              If you could rely on WebP existing everywhere, it’s cheaper to do a one-off transcoding. It looks as if all current browsers support it, but I don’t know what happens with, for example, embedded eBook readers (where I would like to use WebP because some eBook vendors really want your ePub to be under 3 MiB and so every KiB saved helps).
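
              For what it’s worth, a sketch of the transcode-on-demand idea using Pillow, with a hypothetical one-file-per-source cache layout (the Pillow calls are real; the layout and staleness check are just illustrative):

              ```python
              from pathlib import Path

              from PIL import Image  # Pillow

              def webp_for(png_path: Path, cache_dir: Path) -> Path:
                  """Lossless PNG -> WebP transcode on demand, cached on disk so
                  each source image only pays the CPU cost once."""
                  cache_dir.mkdir(parents=True, exist_ok=True)
                  out = cache_dir / (png_path.stem + ".webp")
                  # Re-encode only if the cached copy is missing or older than the source.
                  if not out.exists() or out.stat().st_mtime < png_path.stat().st_mtime:
                      with Image.open(png_path) as im:
                          im.save(out, format="WEBP", lossless=True)
                  return out
              ```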

              1. 3

                The bootstrapping problem is real. The lack of adoption is a hindrance to further adoption. I download a WebP image and want to make a quick edit, for example to add an arrow or rectangle. I try to open it with my favorite lightweight editor (like Pinta) and the editor suffers from a brain aneurysm because it doesn’t support the format. Am I going to inflict this on other people? No, I am going to avoid that format.

                It also doesn’t help adoption that WebP is often used by companies that crank compression levels up the wazoo, making everything look like a mess of compression artifacts. This gives the false impression that WebP itself is terrible. I often reach for PNG because I know that its encoding is guaranteed to be lossless and will never ruin the image. PNG is basically a seal of quality: “nobody did funky stuff to this image to squeeze out a few kilobytes”. Of course that’s not entirely true, because the picture could have been lossily pre-processed before the PNG encoding. But it’s still a good heuristic compared to the Pandora’s box that is WebP.
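
                Incidentally, you can at least tell which flavour of WebP you have been handed by peeking at the RIFF header; a rough sketch (the VP8X extended container needs a proper chunk walk to classify, so this is only a heuristic):

                ```python
                def webp_flavor(path):
                    """Classify a .webp by its first chunk fourcc:
                    'VP8 ' = lossy, 'VP8L' = lossless, 'VP8X' = extended
                    container (may hold either, plus alpha/animation)."""
                    with open(path, "rb") as f:
                        header = f.read(16)
                    if header[:4] != b"RIFF" or header[8:12] != b"WEBP":
                        return "not webp"
                    kinds = {b"VP8 ": "lossy", b"VP8L": "lossless"}
                    return kinds.get(header[12:16], "extended (VP8X) / unknown")
                ```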

          2. 2

            No, seriously, the idea that every terminal should implement support for every image format on its own, and do it in the right way (which we have yet to cover), is just something that will never happen.

            It’s already a miracle if one protocol is implemented correctly, beyond being able to “dump” an image at the prompt.

            I wrote a library for ratatui that generalizes the three protocols (Sixel, Kitty, and iTerm2). The hardest part is testing an array of terminals! Does anybody know how I could screenshot-test a variety of terminals on CI?
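
            For the plain “dump an image at the prompt” case, the iTerm2 flavour is about as small as these protocols get; a sketch of the raw escape sequence, not of any particular library’s API (only terminals implementing OSC 1337 will render it):

            ```python
            import base64
            import sys

            def dump_inline(path):
                """Emit an image via the iTerm2 inline-image escape (OSC 1337).
                Sixel and the Kitty graphics protocol need considerably more
                work (palette quantization, chunked transfers), which is where
                a generalizing library earns its keep."""
                data = open(path, "rb").read()
                b64 = base64.b64encode(data).decode("ascii")
                sys.stdout.write(f"\x1b]1337;File=inline=1;size={len(data)}:{b64}\a\n")
                sys.stdout.flush()
            ```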

            1. 2

              For vv, I went with the PBR Neutral operator, following Aras’ recommendation. The main reason for choosing it was that implementing it required only a little bit of math, instead of using a rather large lookup table.

              When it comes to tonemapping, you can also use a polynomial fit of the AgX lookup table.

              I also learned on Mastodon that the Khronos PBR Neutral tonemapping curve is designed to preserve sRGB textures’ color tones as well as possible. So it’s perhaps not as general-purpose as the name suggests.

              Oh, and the gamma 2.2 vs proper sRGB curve comparison was a great demonstration of why knowing the fundamentals matters.
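
              That comparison is easy to reproduce; a small sketch of the two transfer functions (they diverge most in the shadows, where the sRGB piecewise curve has its linear segment):

              ```python
              import numpy as np

              def srgb_eotf(v):
                  """Exact sRGB decode: encoded value -> linear light."""
                  v = np.asarray(v, dtype=np.float64)
                  return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

              def gamma22_eotf(v):
                  """The common 'close enough' pure power-law approximation."""
                  return np.asarray(v, dtype=np.float64) ** 2.2

              x = np.linspace(0.0, 1.0, 1001)
              print("max absolute difference:", float(np.max(np.abs(srgb_eotf(x) - gamma22_eotf(x)))))
              ```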