Did a pretty good job of making me care about JPEG XL even though I have little practical use for it. The backwards compat/re-encoding bit is pretty baller.
FWIW, if others like the back-compat part, a Chrome bug for supporting the re-encoding as a gzip-like transparent filter has not been closed. That may effectively be a clerical error, but I also think it’s a legitimately different tradeoff: doesn’t require the same worldwide migration of formats, just CDNs transcoding for faster transmission, and it’s a fraction of the surface area of the full JXL standard.
(It should also be possible to polyfill with ServiceWorkers + the existing Brunsli WASM module, but I don’t have a HOWTO or anything.)
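To make that a bit more concrete, here is roughly the shape I'd expect such a polyfill to take. This is a sketch only, since there's no HOWTO: the Brunsli wrapper's function names (loadBrunsli, decodeBrunsliToJpeg) and the ".brn" file layout are assumptions, not the real WASM module's API.

```ts
// sw.ts - very rough sketch, not a tested HOWTO. Everything Brunsli-specific
// here is an assumption: loadBrunsli / decodeBrunsliToJpeg and the ".brn"
// naming scheme are hypothetical placeholders, not the real wrapper's API.
declare function loadBrunsli(): Promise<{
  decodeBrunsliToJpeg(packed: Uint8Array): Uint8Array;
}>;

// FetchEvent; typed loosely to keep the sketch lib-agnostic.
addEventListener("fetch", (event: any) => {
  const url = new URL(event.request.url);
  if (!url.pathname.endsWith(".jpg")) return; // only intercept JPEG requests

  event.respondWith(
    (async () => {
      // Look for a recompressed copy next to the JPEG (hypothetical layout).
      const packed = await fetch(url.pathname + ".brn");
      if (!packed.ok) return fetch(event.request); // fall back to the plain JPEG

      const brunsli = await loadBrunsli();
      const jpegBytes = brunsli.decodeBrunsliToJpeg(
        new Uint8Array(await packed.arrayBuffer())
      );

      // The page still sees an ordinary JPEG response.
      return new Response(jpegBytes, {
        headers: { "Content-Type": "image/jpeg" },
      });
    })()
  );
});
```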
They don’t like “me too” comments, but stars on the issue, comments from largish potential users, relevant technical insight (like if someone gets a polyfill working), or other new information could help.
I’m sorry, lossless JPEG recompression is not a selling point: it doesn’t impact new images, and people aren’t going to go out and recompress their image library. I really don’t understand why people think this is such an important/useful feature.
Realistically, I think the JPEG recompression is not something you sell to end users; it’s something that transparently benefits them under the covers.
The best comparison is probably Brotli, the modernized DEFLATE alternative most browsers support. Chrome has a (not yet closed) bug to support the JXL recompression as a Content-Encoding analogous to Brotli, where right-clicking and saving would still get you a .jpg.
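To make "gzip-like transparent filter" concrete, here is a sketch of what the server side could look like. The "jxl" Content-Encoding token is purely hypothetical (nothing like it is standardized today), and the file paths are made up; the point is just that the URL and Content-Type stay image/jpeg while smaller bytes travel over the wire.

```ts
// Sketch only: the "jxl" Content-Encoding token is hypothetical, and
// photo.jpg / photo.jpg.jxl are made-up paths.
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";

createServer(async (req, res) => {
  const acceptEnc = req.headers["accept-encoding"] ?? "";
  if (req.url === "/photo.jpg" && acceptEnc.includes("jxl")) {
    // Smaller recompressed bytes travel over the wire...
    res.writeHead(200, { "Content-Type": "image/jpeg", "Content-Encoding": "jxl" });
    res.end(await readFile("photo.jpg.jxl"));
  } else {
    // ...but the resource is still, as far as the page is concerned, a .jpg.
    res.writeHead(200, { "Content-Type": "image/jpeg" });
    res.end(await readFile("photo.jpg"));
  }
}).listen(8080);
```

A browser that understood the encoding would undo it before anything else touched the bytes, so right-clicking and saving would still produce the original .jpg, the same way Brotli-encoded HTML saves as plain HTML.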
Most users didn’t replace gzip with brotli locally. It’s not worth it for many even though it’s theoretically drop-in improved tech. The same is true of JPEG recompression. But large sites use Brotli to serve up your HTML/JS/CSS, and CDNs handle it: Cloudflare does it, Fastly is experimenting, and if you check the Content-Encoding of the JS bundle on various big websites, it’s Brotli. The same could be true of JPEG recompression.
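(To check the Brotli claim yourself, a few lines of Node show which encoding a server actually picked; the URL is a placeholder.)

```ts
// Ask for Brotli explicitly and inspect the raw response headers.
import https from "node:https";

https.get(
  "https://example.com/static/app.js", // placeholder: any big site's JS bundle
  { headers: { "accept-encoding": "br, gzip" } },
  (res) => {
    console.log(res.statusCode, res.headers["content-encoding"]); // often "br"
    res.resume(); // drain the body; we only care about the headers
  }
);
```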
I don’t think you stop thinking about handling existing JPEGs better because you have a new compressor; existing content doesn’t go away, and production doesn’t instantly switch over to a new standard. I think that’s how you get to having this in the JXL standard alongside the new compression.
Separately, if JXL as a whole isn’t adopted by Chrome and AVIF is the next open format, there’s a specific way JPEG recompression could help: AVIF encoding takes way more CPU effort than JPEG (seconds to minutes, depending on effort). GPU assists are coming, e.g. the new gen of discrete GPUs has AV1 video hardware. But there’s a gap where you can’t or don’t want to deal with that. JPEG+recompression would be a more-efficient way to fill that gap.
Happily most modern cameras (i.e. smartphones) have dedicated hardware encoders built in.
None I know of has a hardware AV1 encoder, though. Some SoCs have AV1 decoders. Some have encoders that can do AVIF’s older cousin HEIF but that’s not the Web’s future format because of all the patents.
I’d quite like good AV1 encoders to become widespread, and things should improve with future HW gens, but today’s situation, and everywhere you’d like to make image files without a GPU, is what I’m thinking of.
Companies already do stuff like this internally, and there’s a path here that doesn’t require end users to know the other wire format even exists. It seems like a good thing when we’re never really getting rid of .jpg!
Ah, sorry, my bad: I was thinking of the HEVC encoders, durrrrrr.
It’s just good engineering. We’ve all seen automated compression systems completely destroy reuploaded images over the years. It’s not something that users should care about.
Indeed, but my original (now unfixable) comment was about end users. The original context of the current JPEG XL discussion is the removal of JPEG XL support from Chrome, which meant I approached this article from the perspective of end users rather than giant server farms.
I don’t quite understand how you get here. Browsers are used for getting images from servers to regular users’ faces. If browsers support JXL, servers with lots of JPEG images can get them to those users’ faces faster, via the browser, by taking advantage of the re-encoding. Isn’t that an advantage for regular users?
The conversion can be automatically applied by a service like Cloudinary. Such services currently offer automatic conversion of JPEG to WebP, but that always loses quality.
“people aren’t going to go out and recompress their image library”
I’m not sure why you’d assume that. For many services that store lots of images it’s an attractive option, especially given that it doesn’t just produce an identical image but can enable recreating the original file.
E.g. Dropbox has in the past come up with their own JPEG recompression algorithm, even though that always required recreating the source file for display.
You’re right - I didn’t make this clear.
Regular users are not the ones who care, but they’re the people for whom it would need to be useful in order to justify re-encoding as a feature worth the attack surface of adding JPEG XL to the web. The fact that it kept being pulled up to the top of these lists just doesn’t make sense in the context of browser support.
A much more compelling case can be made for progressive display - not super relevant to many home connections now, but if you’re on a low-performance mobile network in remote and inaccessible locations or similar (say AT&T or T-Mobile in most US cities :D ) that can still matter.
That said, I think Google is right to remove it from Chrome if they weren’t themselves going to be supporting it, and it doesn’t seem that many/any phones are encoding to it (presumably because it’s new, but also modern phones have h/w encoders that might help with HEIF, etc.?)
Most regular users don’t host their own photos on the web; they use some service that they upload images to. If that service can recompress the images and save 20% of both their storage and bandwidth costs, that’s a massive cost and energy saving. Given that the transform appears to be reversible, I wouldn’t be surprised if they’re already doing the transcoding on the server side, keeping small caches of JPEG images for frequently downloaded ones and transcoding everything else on the fly. If browsers support JPEG XL, then their CPU and bandwidth costs go down.
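A minimal sketch of that “keep one copy, transcode on the fly” flow, assuming libjxl’s djxl tool and made-up paths:

```ts
// Sketch: store only the recompressed copy, reconstruct the JPEG on demand.
// Assumes libjxl's djxl binary is on PATH and that photo.jxl was created from
// the original photo.jpg (so the JPEG reconstruction data is present, as I
// understand libjxl's default for JPEG input). Paths and negotiation are made up.
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

createServer(async (req, res) => {
  const acceptsJxl = (req.headers.accept ?? "").includes("image/jxl");
  if (acceptsJxl) {
    // Client decodes JPEG XL itself: just send the small stored copy.
    res.writeHead(200, { "Content-Type": "image/jxl" });
    res.end(await readFile("photo.jxl"));
  } else {
    // Reconstruct the original JPEG bytes on the fly (cache this in practice).
    await run("djxl", ["photo.jxl", "/tmp/photo.jpg"]);
    res.writeHead(200, { "Content-Type": "image/jpeg" });
    res.end(await readFile("/tmp/photo.jpg"));
  }
}).listen(8080);
```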
The surprising thing here to me is that the Google Photos team isn’t screaming at the Chrome team.
The jpeg-xl<->jpeg transcoding is simply an improvement of the entropy coder, but more importantly, as long as there is no change in the image data the “cloud storage” provider is more than welcome to transcode however they feel - I would not be surprised if the big storage services are already doing something as good as or better than jpeg-xl.
The reason it can do lossless transcoding is that it essentially re-packs a JPEG, using a different extension to indicate a different entropy coder. There was nothing at all stopping cloud storage providers from doing this long before JPEG XL existed or was a standard, and they don’t have the compatibility requirements a standards body is worried about, so I would not be surprised if providers were already transcoding, nor would I be surprised if they were transcoding using their own system and not telling anyone for “competitive advantage”.
Why? They’re already free to transcode on the server end, which I assume they do anyway, and I would assume they do a better job of it than jpeg-xl. For actual users, Chrome already supports a variety of other formats superior to jpeg (not xl), and seemingly on par with jpeg-xl (+/- tradeoffs). In my experience online views (vs. the “download image…” button) use resized images that are smaller than the corresponding JS most such sites use (and reducing the hypothetical full-resolution image isn’t relevant, because they essentially treat online viewing as “preview by default”; also, why provide a 3x4k version of a file to be displayed in a non-fullscreen window on a display with a smaller resolution than the image?).
The downside for Chrome of having jpeg-xl is that it’s yet another image format, a field renowned for its secure and robust parsers. I recall Safari having multiple vulnerabilities over the years due to exposing parsers for all the image formats imaginable, so this isn’t an imaginary worry.
Obviously in a year or so, if phones have started using jpeg-xl, the calculus changes. It also gives someone time to implement their own decoder in a secure language, or gives the Chrome security folks time to spend a lot of effort breaking the existing decoder library and getting it fixed.
But for now jpeg-xl support in Chrome (or any browser) is a pile of new file-parsing code, in a field with a suboptimal track record, for a format that doesn’t have any producers.
To me the most repeated feature used to justify jpeg-xl is the lossless transcoding, but there’s nothing stopping the cloud providers transcoding anyway, and moreover those providers aren’t constrained by the requirements specified by a standard.
I would actually go and re-encode my share of images, which for some reason* exist as JPEG, if I knew that this wouldn’t give me even more quality loss.
* Archival systems are a typical reason to have large amounts of JPEG stuff lying around.
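For what it’s worth, here is a sketch of how that re-encode could be done while verifying nothing is lost, assuming libjxl’s cjxl/djxl tools are installed (directory names are made up):

```ts
// Sketch of losslessly repacking an archive and only trusting the result after
// a bit-exact round trip. Assumes libjxl's cjxl/djxl are on PATH; the archive/
// and repacked/ directories are made up. Run as an ES module so top-level
// await works.
import { readdir, readFile } from "node:fs/promises";
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

for (const name of await readdir("archive")) {
  if (!name.toLowerCase().endsWith(".jpg")) continue;
  const src = `archive/${name}`;
  const jxl = `repacked/${name}.jxl`;

  // cjxl transcodes JPEG input losslessly (keeping the reconstruction data)
  // by default, as I understand it.
  await run("cjxl", [src, jxl]);

  // Reconstruct and compare byte-for-byte before deleting anything.
  await run("djxl", [jxl, "/tmp/roundtrip.jpg"]);
  const identical =
    Buffer.compare(await readFile(src), await readFile("/tmp/roundtrip.jpg")) === 0;
  console.log(`${name}: ${identical ? "bit-exact" : "MISMATCH - keep the original!"}`);
}
```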
This repacking trick has been around for a while, e.g. there’s Dropbox Lepton: https://github.com/dropbox/lepton
But the vast majority of users aren’t going to be doing that.
You realized your first-order mistake. The second-order mistake is what really should be corrected.
Tone: “I’m sorry” is passive-aggressive and not productive.
Not realizing why this would be an advantage for a new format: this is your Chesterton’s Fence moment.
This is a followup to a story that was on the frontpage earlier this week. Does it really need its own submission?
Yes, it’s a good submission. There’s a lot of technical detail about why JPEG XL is the best image format for a wide range of use cases.
This post is a reaction to that initial news, and the original post didn’t cover the technical merits much, just that the removal had happened.
As the article stated, many organizations have come out publicly on the issue tracker to show that the perception of little interest in JPEG XL was premature and unfounded. There’s more to see than there was two days ago.
There are some good potentially portable ideas embodied in the reference JXL encoder, like the option to target a perceptual distance from the original rather than a bitrate.
There are also deep differences (spatial prediction!), but I bet the folks who worked on JXL could help advance AVIF encoding in terms of CPU efficiency (which AVIF badly needs) and reliable quality, even with the set of encoding tools already fixed.
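For reference, that “target a perceptual distance” knob is exposed in the reference encoder as a distance setting rather than a bitrate; a minimal invocation sketch (file names are placeholders):

```ts
// cjxl's -d/--distance flag takes a Butteraugli distance target (lower means
// closer to the original; around 1.0 is usually described as visually
// lossless) instead of a bitrate or quality percentage. Run as an ES module.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Ask for "roughly indistinguishable from the source" rather than "roughly N kB".
await run("cjxl", ["input.png", "output.jxl", "-d", "1.0"]);
```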