In the 2000s, one system I worked on provided a simple CRUD HTTP API for manipulating XML objects. It started out as a system for storing metadata about OS images, firmware images, etc. Eventually some crafty users realized it could also function as a bespoke key-value store, and it quickly turned into the primary data store for tons of people, despite initially being intended only for automating equipment deployment.
Eventually very large objects started being stored, and users complained about the time it took to download their 500 MB to multi-GB objects. We decided to upgrade the servers a bit and turn on compression. Since the data was highly compressible it made a massive difference, although due to the sheer amount of data we ended up having to upgrade the servers more than we originally estimated.
I know the OP is just using the stuff provided by the gin community, but I’d suggest that people implementing compression also analyze the Accept-Encoding header sent by the client. Many browsers and client libraries support far more than gzip these days, and you could see a massive improvement on larger payloads by using other algorithms.
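As a rough sketch of what I mean (plain Go, nothing gin-specific; the helper name is mine), parsing the header is just splitting on commas and picking up any q-values:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// acceptedEncodings parses an Accept-Encoding header value into a map of
// content-coding -> q-value, e.g. "gzip, br;q=0.9" => {"gzip": 1, "br": 0.9}.
// A missing q parameter defaults to 1.0, per the HTTP spec.
func acceptedEncodings(header string) map[string]float64 {
	out := map[string]float64{}
	for _, part := range strings.Split(header, ",") {
		part = strings.TrimSpace(part)
		if part == "" {
			continue
		}
		name, q := part, 1.0
		if i := strings.Index(part, ";"); i >= 0 {
			name = strings.TrimSpace(part[:i])
			param := strings.TrimSpace(part[i+1:])
			if strings.HasPrefix(param, "q=") {
				if v, err := strconv.ParseFloat(param[2:], 64); err == nil {
					q = v
				}
			}
		}
		out[strings.ToLower(name)] = q
	}
	return out
}

func main() {
	fmt.Println(acceptedEncodings("gzip, br;q=0.9, *;q=0"))
	// map[*:0 br:0.9 gzip:1]
}
```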
The reason I’d always heard for gzip is that the CPU time spent compressing is less than the transfer time you save by not sending the data uncompressed.
Are you not worried about other compression algorithms being tuned for abuse, possibly leading to a denial of service against your server?
None of the algorithms that have been standardized and registered for use in HTTP are heavy enough to have much abuse potential. If you implement Accept-Encoding: lzma in your own server, that’s your business.
It’s okay not to compress even if the client requests it. identity is implicitly on the list of accepted encodings. If the client specifically goes out of its way to say “you must use this algorithm” by sending Accept-Encoding: x-heavy-algorithm, *;q=0 you can send them back 406 Not Acceptable, either because the server is temporarily overloaded, or just out of spite.
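A minimal sketch of that fallback logic in Go’s net/http, reusing the acceptedEncodings helper from the earlier sketch; this is simplified, not a full content-negotiation implementation:

```go
package main

import "net/http"

// handleDownload decides how to encode a response. acceptedEncodings is the
// parsing helper from the earlier sketch.
func handleDownload(w http.ResponseWriter, r *http.Request) {
	enc := acceptedEncodings(r.Header.Get("Accept-Encoding"))

	// identity is acceptable unless the client explicitly rules it out,
	// either directly ("identity;q=0") or via a wildcard ("*;q=0").
	identityOK := true
	if q, ok := enc["identity"]; ok {
		identityOK = q > 0
	} else if q, ok := enc["*"]; ok {
		identityOK = q > 0
	}

	switch {
	case enc["gzip"] > 0:
		w.Header().Set("Content-Encoding", "gzip")
		// ...wrap w in a gzip.Writer and stream the body...
	case identityOK:
		// ...send the body uncompressed; identity is the implicit default...
	default:
		// The client ruled out everything we can produce, e.g.
		// "Accept-Encoding: x-heavy-algorithm, *;q=0".
		http.Error(w, "no acceptable content-coding", http.StatusNotAcceptable) // 406
	}
}
```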
Possibly. Again, extra optimizations might be worth it if you are getting more than a certain number of requests a day on your service.
If zstd is supported, it’ll almost certainly compress better than gzip while using fewer resources. Brotli likely will as well.
Gzip is greatly hampered by its lack of adaptivity and its small window size.
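A rough way to sanity-check that on your own payloads; I’m assuming the third-party github.com/klauspost/compress/zstd package here, since the Go standard library has no zstd encoder, and using default levels for both:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"os"

	"github.com/klauspost/compress/zstd"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // a representative response body
	if err != nil {
		panic(err)
	}

	// gzip from the standard library, default compression level.
	var gz bytes.Buffer
	zw := gzip.NewWriter(&gz)
	zw.Write(data)
	zw.Close()

	// zstd (third-party), default level.
	enc, err := zstd.NewWriter(nil)
	if err != nil {
		panic(err)
	}
	zst := enc.EncodeAll(data, nil)
	enc.Close()

	fmt.Printf("original: %d bytes\ngzip: %d bytes\nzstd: %d bytes\n",
		len(data), gz.Len(), len(zst))
}
```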
That’s a tall claim but I will try to verify it.
What I meant was that you can get better size reduction with comparable compression speed by supporting more than just gzip. The browser will tell you all the encodings it supports; if it supports Brotli or zstd and your implementations of those are faster than gzip, it makes more sense to send those.
Server-side, I keep a list of compression algorithms in preference order and pick the most preferred one that both the server and the requesting client support. On one current project we send more Brotli-compressed responses than gzip, because we prefer it over gzip.
We’ll still serve up gzip- or compress-encoded responses, but only if the client doesn’t support something better.
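The selection is essentially a walk down the server’s preference list; a minimal sketch, where the ordering and helper name are my own choices:

```go
package main

import "fmt"

// serverPreference is ordered best-first: we'd rather send Brotli than zstd,
// and zstd rather than gzip, with identity as the implicit last resort.
var serverPreference = []string{"br", "zstd", "gzip"}

// pickEncoding returns the most preferred encoding the client also accepts
// (q > 0), or "identity" if there's no overlap.
func pickEncoding(clientAccepts map[string]float64) string {
	for _, enc := range serverPreference {
		if q, ok := clientAccepts[enc]; ok && q > 0 {
			return enc
		}
	}
	return "identity"
}

func main() {
	// A modern browser advertising gzip, deflate, br, zstd.
	fmt.Println(pickEncoding(map[string]float64{"gzip": 1, "deflate": 1, "br": 1, "zstd": 1})) // br
	// An old client that only speaks gzip.
	fmt.Println(pickEncoding(map[string]float64{"gzip": 1})) // gzip
}
```

Note that this deliberately lets the server’s ordering win as long as the client hasn’t set q=0 for it; if you want client q-values to break ties instead, fold them into the loop.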
AWS CloudFront and CloudFlare turn on gzip/brotli compression in responses by default. Just had to double check that at work the other week.
Thanks. Good to know.
Moderately surprised that nginx has to be asked nicely before it will honor a gzip encoding request from a client!
Yeah, me too, but that’s how it is.
By default, nginx is configured to run on a potato