Not mentioned there, but JPEG XL and its relatives have come up a couple of times here. One neat thing about it is that it can repack JPEG1 (.jpg) images about 20% tighter without any further loss (you can even get the original .jpg bits back out). That’s attractive since for a long time we’ll still have old content and new .jpg’s from old cameras, and we’ll need to be able to serve a .jpg for backwards compatibility, etc. And mozjpeg minus another 20% wouldn’t be bad in that test!
I don’t know if crossposts copying content are preferred to links to comments elsewhere or what, but I posted a somewhat more wandering comment on the orange site about next-gen image codecs. It’d definitely be cool if both JPEG XL and AVIF got wide client support, since both seem to have legit use cases.
I found WebP to be significantly better for my usage, but I had to experiment a bit; initially I found it worse due to the smoothing out of details. What I found to work well were these settings (with GraphicsMagick, and I was testing with 1024x1024 images at the time; I also saw a comment on this article on HN which claimed libvips gets better results for WebP too):
# JPEG for comparison
gm convert <input_file> -quality 80 <output_file>.jpg
# WebP with the settings described above
gm convert <input_file> \
  -define webp:emulate-jpeg-size=true \
  -define webp:filter=sharpness=7 \
  -quality 80 \
  <output_file>.webp
Even with default settings WebP looked cleaner to my eyes (I strongly dislike JPEG compression artifacts; and I personally didn’t like the results from MozJPEG, which also seems to destroy fine detail), but the loss of detail didn’t seem worth it. The webp:filter=sharpness=7 setting made a noticeable difference in retaining detail in the compressed image, and I personally think it looks a lot better, certainly nothing like the examples in this article. Apparently WebP support is finally landing in Safari 14, so the format will finally be supported by all major browsers (that doesn’t mean I won’t be providing JPEG fallbacks, but it’s nice to know).
I also found that WebP’s lossless mode compressed an image of one of my app’s logos to 10KB compared to 38KB with PNG.
In the long term, I do believe JPEG XL will be the prevailing format. It builds on features from experimental formats like FLIF/FUIF and PIK, and is royalty-free. For now, though, I disagree with this article’s conclusion; I think WebP has some clear benefits.
The real news here is that AVIF is amazing!
It’s a good analysis, but it (unintentionally) misrepresents the timeline. Google introduced WebP before the release of MozJPEG, so naturally it used libjpeg as a frame of reference.
A concern the author didn’t touch upon: In 2040, when all code has to be rewritten in the extra safe SausageCrust language, which format will get ported first? Webp, or the ubiquitous JPEG? When you’re in a retirement home, and you want to browse through old photos, do you really want to start an emulator?
JPEG is so simple that we have many implementations, including a SausageCrust one. WebP is much more complex and AFAIK there is only libwebp and nothing else.
However, Apple has relented and added WebP to Safari, so WebP is going to become a thing required for being web-compatible, which makes it immortal now. Your photo viewer written in ye’olde Electron will handle it.
Very interesting. And annoying, as I just started using webp!
Drawing conclusions based on a single image seems a bit premature. Different algorithms will likely have different strengths and weaknesses in terms of the data patterns they compress well, so with only one image, we could just be looking at one particular example which avif happens to compress really well.
I’m not suggesting the conclusions are false, just that the data isn’t statistically significant.
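The sample-size point can be made concrete: with per-image results you can report a mean plus a confidence interval, and a single image gives you no interval at all. A minimal Python sketch, using made-up illustrative compression ratios for a hypothetical 24-image set (not real benchmark numbers):

```python
import statistics

# Hypothetical per-image ratios: compressed size / original size
# for 24 images. Illustrative values only, not measured data.
ratios = [0.52, 0.48, 0.61, 0.55, 0.47, 0.58, 0.50, 0.53,
          0.49, 0.62, 0.57, 0.51, 0.46, 0.59, 0.54, 0.50,
          0.56, 0.48, 0.60, 0.52, 0.55, 0.49, 0.53, 0.57]

mean = statistics.mean(ratios)
# Standard error of the mean; mean +/- 1.96 * sem is a ~95% interval
# under a normal approximation, which is reasonable for n = 24.
sem = statistics.stdev(ratios) / len(ratios) ** 0.5
print(f"mean ratio: {mean:.3f} +/- {1.96 * sem:.3f}")
```

If the error bars of two codecs overlap heavily across the image set, the apparent winner on any single image tells you very little.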
The Kodak image set has issues (it’s lower-res and noisier than contemporary photos), but the conclusions remain the same for high quality photographic images.
WebP uses the old VP8 video codec. In video it’s important to approximate a frame cheaply, and fine details matter little in something that’s on screen for 1/30th of a second. That’s why VP8 was designed with low-resolution color and aggressive blurring.
But in the high-quality range that people want for still images, it’s no longer enough to smudge an approximation of the image. Hiding blocking artifacts with blur no longer gives WebP an advantage once users set the quality high enough that there are no blocking artifacts in the first place.
Ah, my mistake. I originally thought they were just running it on a single image of a house. I now see that they were using a dataset of 24 different images, and that there are error bars on the graphs.