Well, you need a bit more than that for a proper layered image format, because such formats are usually used for editing, not for end use.
For example, things that JPEG-XL doesn’t support but I’d want in a format for editing: vector-graphics layers, text layers, palette (Pantone or otherwise) layers, and efficient partial edits (e.g. updating one layer shouldn’t require rewriting the others).
Using JPEG-XL for pixel storage in such a format would make sense, however.
Instead of building an XML file with assets (optionally inside a ZIP file), just use an embedded database. Updates are a lot cheaper. Interoperability is a lot easier. You get atomicity “for free”.
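To make the idea concrete, here’s a minimal sketch of the embedded-database approach using Python’s `sqlite3`. The schema is invented for illustration (agreeing on columns like these is the real interoperability problem discussed below); the file name and column names are assumptions, not any standard:

```python
import sqlite3

# Hypothetical schema -- column names and encodings are invented here,
# and two programs would have to agree on them to interoperate.
con = sqlite3.connect("image.doc")
con.executescript("""
CREATE TABLE IF NOT EXISTS layers (
    id        INTEGER PRIMARY KEY,
    z_order   INTEGER NOT NULL,
    blend     TEXT NOT NULL DEFAULT 'normal',       -- compositing mode
    transform TEXT NOT NULL DEFAULT '1 0 0 1 0 0',  -- affine matrix
    pixels    BLOB NOT NULL                         -- e.g. JPEG-XL bytes
);
""")
con.execute(
    "INSERT OR REPLACE INTO layers (id, z_order, pixels) VALUES (1, 0, ?)",
    (b"old-pixels",),
)

# Updating one layer rewrites only that row, and the change is atomic.
with con:  # the context manager wraps the statement in BEGIN ... COMMIT
    con.execute("UPDATE layers SET pixels = ? WHERE id = ?",
                (b"new-pixels", 1))
con.close()
```

This is where the “cheap updates” claim comes from: rewriting one row’s BLOB doesn’t touch the other layers’ pages.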
I’m not sure either is really appropriate, because the recommendation to use SQLite is really an implementation detail. I’m not sure it’s a good one, because it requires a schema, and the schema is the hard part: two programs that shove images into SQLite databases can’t interoperate unless they agree on how the layers are identified and how the compositing modes, affine transforms, clipping modes, and so on are specified.
The article suggests SQLite in part because it supports multi-TB files and atomic updates, but that sounds like the opposite of what I want: it plays very poorly with per-file backups (including storing in revision-control systems) and it makes it hard to see which layer has changed. It’s a shame that bundles (folders that represent a document) haven’t caught on much outside macOS. If you represent a layered image as a bundle that contains a plist describing the layers and a single file for each layer, then you get a bunch of nice properties:
The layers can be any image file format.
Updates that change only one layer change just that file.
Updates over multiple layers are possible by adding the new files, doing an atomic (write-then-rename) overwrite of the plist, and then deleting any stale files. This is roughly what SQLite does internally.
Incremental backup works trivially. If you modify one layer, only that layer gets backed up.
Diff in revision control systems shows which layers have been modified.
Things that don’t support the layered format can still open each layer separately.
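The bundle layout above, including the atomic write-then-rename update of the manifest, can be sketched with Python’s standard `plistlib`. The bundle name, file names, and plist keys are all hypothetical:

```python
import os
import plistlib
import tempfile

# Hypothetical bundle: a directory holding Contents.plist plus one
# ordinary image file per layer.
bundle = "picture.layers"
os.makedirs(bundle, exist_ok=True)

# Each layer is a plain file in any image format; placeholder bytes here.
with open(os.path.join(bundle, "background.png"), "wb") as f:
    f.write(b"...image bytes...")

manifest = {"layers": [{"file": "background.png", "blend": "normal"}]}

# Atomic update: write the new manifest to a temp file in the same
# directory, then rename it over the old one. Readers see either the
# old or the new manifest, never a half-written mix.
fd, tmp = tempfile.mkstemp(dir=bundle)
with os.fdopen(fd, "wb") as f:
    plistlib.dump(manifest, f)
os.replace(tmp, os.path.join(bundle, "Contents.plist"))
```

A multi-layer update follows the same pattern: write the new layer files first, swap the plist, then delete the files the new plist no longer references.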
You can also extend this model with auditable non-destructive editing by adding signatures over some layers and modifying them only by adding a new layer on top. This is useful for proving the provenance of a photograph: the file containing the photo is the one whose hash was added to a public audit log, so you don’t have to prove that some data embedded in a SQLite database is equivalent to a file you got from a trusted source.
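The provenance argument hinges on the hash covering the layer file’s bytes directly. A sketch with `hashlib` (the file name is hypothetical, and this is not any standard scheme):

```python
import hashlib

def layer_digest(path):
    """SHA-256 over the raw bytes of one layer file.

    Because the layer is an ordinary file, anyone can recompute this
    and compare it to the audit log without parsing a container format.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder original photo; in practice this file is never modified --
# edits go in new layers stacked on top of it.
with open("photo.jxl", "wb") as f:
    f.write(b"original photo bytes")

print(layer_digest("photo.jxl"))
```

In the SQLite design, by contrast, you’d first have to extract the BLOB and argue that the extraction is faithful before the hash means anything.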
As of this week, JPEG-XL works in Safari (it previously worked in Chrome/Firefox, and I hope they’ll reinstate that support).
It supports layers, HDR, animation, low-overhead progressive encoding, lossy and lossless compression, and it’s royalty-free.
Yeah, SQLite is cool, but we actually do have an image format that meets all the requirements in the post.
TL;DR: Similar to “What If OpenDocument Used SQLite?”.
Yeah I saw that being reposted and remembered having read this, so I posted it as a reply.
I tagged it databases because there is no “SQLite” tag, but it should be under “SQLite.”