1. 83
  1. 28

    While I appreciate the effort, I need to point out that SVG is basically a decompiled, XML-encoded version of the PDF graphics model (via PGML, a joint effort by Adobe, IBM and Netscape).

    For the sake of curbing the proliferation of new formats, I really would like to see a reduced PDF profile for icons (with an official compliance checker).

    1. 31

      Yes. So many people hate on PDF without realizing that the core of it is (without exaggeration) the cleanest incarnation in widespread use of the following three things, layered on top of each other:

      1. an object graph, like JSON (but before JSON!)
      2. … that can be encoded in a compact/compressed binary form that allows seeking within the file, and
      3. a 2D vector graphics model with compositing

      There’s tons and tons of cruft on top of this: JavaScript, forms, 3D objects (!!), but the core of PDF is quite elegant. If you read the reference it was clearly started by people who knew what they were doing.
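
      To make layers (1) and (2) concrete: here is a minimal sketch in Python (my own illustration, not taken from any PDF library) that writes a valid one-page PDF by hand. The numbered `<< … >>` dictionaries are the object graph, the fixed-width xref table of byte offsets is what makes the file seekable, and the content stream is the vector layer (3).

      ```python
      # Build a minimal one-page PDF entirely by hand (no libraries).
      content = b"1 0 0 RG 4 w 10 10 m 90 90 l S"   # vector layer: stroke a red line

      objects = [                                    # the object graph, by number
          b"<< /Type /Catalog /Pages 2 0 R >>",
          b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
          b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 100 100] /Contents 4 0 R >>",
          b"<< /Length %d >>\nstream\n%s\nendstream" % (len(content), content),
      ]

      out = bytearray(b"%PDF-1.2\n")
      offsets = []
      for num, body in enumerate(objects, start=1):
          offsets.append(len(out))                   # byte offset where "num 0 obj" starts
          out += b"%d 0 obj\n%s\nendobj\n" % (num, body)

      # The xref table is the seekability layer: fixed-width byte offsets let a
      # reader jump straight to any object without parsing the whole file.
      xref_pos = len(out)
      out += b"xref\n0 %d\n0000000000 65535 f \n" % (len(objects) + 1)
      for off in offsets:
          out += b"%010d 00000 n \n" % off
      out += b"trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n" % (
          len(objects) + 1, xref_pos)

      with open("minimal.pdf", "wb") as f:
          f.write(out)
      ```

      Most viewers will open the result; everything else (fonts, forms, JavaScript) is cruft layered on top of these same three ideas.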

      SVG is an attempt to translate (3) into XML while throwing away (1) and (2).

      There is PDF/A, which is supposed to be restricted PDF for archival use, but I actually don’t know much about it. Adobe has tools which check for PDF/A compliance. A community tool to check for compliance to an even smaller PDF subset would be pretty cool and I think very feasible.

      1. 5

        Seems like there’s a lot of good reasons to “hate on” PDF, despite realizing that it’s quite clean at its core.

        1. 9

          It’s cat -v all over again. :-)

        2. 4

          an object graph, like JSON (but before JSON!)

          I don’t understand that part. JSON, the serialization format? It doesn’t even seem very fit for graphs to be honest.

          1. 3

            I’d have compared it to ASN.1 over JSON; I suspect @jyc was trying to go with something people were more familiar with.

        3. 5

          PDF is not gonna be the new popular vector graphics format. Maybe someone could make a separate “PDF for icons” standard (and make it an actual standard, not just proprietary Adobe garbage), but it would need a different name and a different extension. Image viewers must be able to tell the system, “I can open PDF icons but not PDFs”. The name “portable document format” is also just a misnomer for a vector graphics format.

          But honestly, I would prefer just a fresh format that’s not bogged down by all the crap that’s in PDF and that’s not affected by PDF’s or Adobe’s legacy. I’d take format proliferation any day over increased PDF proliferation.

          1. 14

            PDF is already an “actual standard”, ISO 32000. If you are implementing a tool that generates PDFs, you don’t need to use any “proprietary Adobe garbage.” If you are implementing a tool that renders PDFs, for 99% of PDFs you won’t need to use any “secret” proprietary extensions either (could you clarify what “proprietary Adobe garbage” you are referring to?)

            But honestly, I would prefer just a fresh format that’s not bogged down by all the crap that’s in PDF and that’s not affected by PDF’s or Adobe’s legacy.

            Why not just choose not to implement the parts of PDF that you don’t care about? You could even go off an existing PDF version and say “my renderer does not implement anything past PDF 1.2.” This is what all PDF viewers already do. It’s not dissimilar from a compiler saying “I support C++14 but not C++17.”

            Writing a new standard is all well and good until you realize you’ve just re-specced and re-implemented PDF 1.2 with new mistakes.

            1. 13

              could you clarify what “proprietary Adobe garbage” you are referring to?

              Sure:

              PDF 1.7, the sixth edition of the PDF specification that became ISO 32000-1, includes some proprietary technologies defined only by Adobe, such as Adobe XML Forms Architecture (XFA) and JavaScript extension for Acrobat, which are referenced by ISO 32000-1 as normative and indispensable for the full implementation of the ISO 32000-1 specification.[9] These proprietary technologies are not standardized and their specification is published only on Adobe’s website.[10][11][12][13] Many of them are also not supported by popular third-party implementations of PDF.

              (https://en.wikipedia.org/wiki/PDF#History)

              I don’t want anything to do with Adobe formats. Flash is dead, PDF ought to die too.

              Why not just choose not to implement the parts of PDF that you don’t care about? You could even go off an existing PDF version and say “my renderer does not implement anything past PDF 1.2.”

              The point of standards is that everyone implements them the same way. If every “PDF for icons” renderer implements a different subset of PDF, and every “PDF for icons” exporter uses a different subset of PDF, we don’t have a “PDF for icons” standard.

              Let’s just use a new, good vector format rather than try to bastardize an old, bloated, terrible, non-standard Adobe format.

              EDIT (since you edited your post):

              Writing a new standard is all well and good until you realize you’ve just re-specced and re-implemented PDF 1.2 with new mistakes.

              Sounds good to me. A reimplementation of the features of PDF 1.2, except with stuff like the PostScript legacy removed, assumptions about page sizes removed, support for multiple pages in a document removed, raster graphics support removed, with everything else that’s necessary for a document format but not an image format removed, with a radically simplified file format because we don’t have to consider extensibility, without the legacy of the PDF name to cause confusion? I love it.

              1. 18

                PDF 1.7, […] ISO 32000-1, includes some proprietary technologies defined only by Adobe, such as Adobe XML Forms Architecture (XFA) and JavaScript extension for Acrobat, which are referenced by ISO 32000-1 as normative and indispensable for the full implementation of the ISO 32000-1 specification.[9] These proprietary technologies are not standardized […]

                (https://en.wikipedia.org/wiki/PDF#History)

                If you scroll a bit further, you’ll see that PDF 1.7 was DOA.

                PDF 2.0, standardized as ISO 32000-2:2017

                eliminat[es] all proprietary elements, updating, enhancing and clarifying the documentation, and establish[es] tighter rules

                While we are talking about history, let’s talk about the fact that the full PDF 1.4 specification has been made available by Adobe since 2001. https://www.adobe.com/content/dam/acom/en/devnet/pdf/pdfs/pdf_reference_archives/PDFReference.pdf

                And the specifications for PDF/A (A = Archival subset) are available as ISO 19005-1:2005. Newer revisions of ISO 19005 (PDF/A-2, PDF/A-3) do reference PDF 1.7 and PDF 2.0, but specifically say that only their core functionality is allowed in compliant documents. (From https://www.loc.gov/preservation/digital/formats/fdd/fdd000318.shtml: «The constraints for PDF/A-1, PDF/A-2, and PDF/A-3 include: * Audio and video content are forbidden. 3D artwork is also forbidden. * Javascript and executable file launches are prohibited. * All fonts must be embedded and also must be legally embeddable for unlimited, universal rendering. […]»)

                I don’t want anything to do with Adobe formats. Flash is dead, PDF ought to die too.

                PDF 1.4 (as well as the core of PDF 1.7 and PDF 2.0) is going to be supported until the end of human civilization. It is incorporated in PDF/A, and PDF/A is the standard archival format used and recommended by every national archive for long-term storage.

                1. 10

                  From your own quote:

                  Many of them are also not supported by popular third-party implementations of PDF.

                  The point of standards is that everyone implements them the same way. If every “PDF for icons” renderer implements a different subset of PDF, and every “PDF for icons” exporter uses a different subset of PDF, we don’t have a “PDF for icons” standard.

                  Your position is no different from “I don’t want to write C++14, because C++20 has some features I don’t like.” Or “I refuse to use cat, because these days Linux distros are using systemd.” The existence and utility of e.g. PDF 1.2 as a standard is not in any way affected by the existence of PDF 1.7, in the same way that the existence and utility of C++14 / cat is not in any way affected by the fact that newer “standards” built on top of those tools contain things you and I disagree with.

                  I agree it’d be nice if Adobe stopped adding crap to PDF, but I can’t understand the conclusions you’re reaching from that premise.

                  1. 3

                    To my knowledge, there’s no PDF 1.2 standard. If there were, you could maybe have a point. But from what I can tell, 1.7 is the first version to get an associated standard. And even if there were a PDF 1.2 standard, you would still have to get people to adopt “PDF 1.2” as an image format unto itself, and not just treat it as an old version of a document format. Image viewers would have to add PDF 1.2 support, browsers would have to add it to their IMG tags, OSes would need to learn to open PDF 1.2 documents in an image viewer rather than a PDF reader, etc… This is basically all the challenges faced by a new image format, but complicated by the fact that there’s already a widely supported document format called PDF 1.2. How should Windows know whether a PDF 1.2 file is an image to be opened in an image viewer or a document to be opened in a PDF reader?

                    And this is all ignoring all the reasons why PDF 1.2 isn’t actually a good image format. I think I laid out some of those reasons in my comment.

                    Repurposing PDF 1.2 to be the new widespread vector image format is just a terrible idea all around.

                  2. 1

                    You might want multiple pages to store dark mode and highlighted states.

            2. 15

              IconVG is another effort in this area.

              1. 4

                No offense to TinyVG, but IconVG seems more likely to catch on to me, if anything ever does, because it is coming from known developers within a large corporation.

                1. 8

                  Nigel knows a lot about graphics. Google, who merely happen to be his copyright-mongering employer, is big in picture formats (notably WebP and JPEG XL) and graphics in general (Skia). I can only assume there is a lot of knowledge transfer going on.

                  In any case, TinyVG is currently a joke, and IconVG is a personal project with no time investment guarantees.

              2. 13

                Hixie (of HTML5 fame) is now working on a vector graphics format for Flutter: https://flutter.dev/go/vector-graphics

                1. 7

                  It’s a real shame that we don’t have a widely supported vector graphics format. Even Google doesn’t support SVG. I think this is for the most part due to the complexity. The fact that browsers support it helps SVG acceptance, but they probably mostly support it because they have all of the components lying around anyway.

                  It hurts me every time I need to render some pixel art at some crazy high resolution so that some web service doesn’t scale it poorly. I really wish vector graphics were just universally supported.

                  I wonder if one day we can even tie the whole stack together so that a screenshot of a web-browser could output a vector image. That would be glorious.

                  1. 5

                    What do you mean by “Google doesn’t support SVG”?

                    1. 8

                      Can’t speak for the parent comment, but I’ve ripped my hair out on multiple occasions when trying to get clean vector graphics into google slides. It’s extremely counter-intuitive (and not clearly documented), but there is almost no way to transform an SVG into a (vector) file format it accepts.

                      1. 5

                        I meant to say Google Docs, but almost all places where Google accepts an image don’t accept an SVG. Play Store icons and screenshots, news logos, profile pictures… It is really a surprise if an SVG is accepted.

                    2. 7

                      Regarding the fact that you have to convert fonts to outlines - that’s not because of SVG; you would have to do it regardless of the format you use, to ensure that the output will be the same for all viewers, as you cannot rely on them having the specific typefaces installed on their computers. The only way to solve this is to embed the font in your graphic.

                      1. 7

                        Embedding a subsetted font into the graphic sounds like the solution then? I don’t see why a file format couldn’t support text but require that the viewer loads characters only from the embedded font and not from a system font store.

                        Of course, you still end up with the problem that text rendering is a ridiculously complex topic with wildly diverging implementations between systems; DirectWrite and Harfbuzz probably wouldn’t render the same text with the same font in the same way. Maybe embedding text into image formats isn’t such a great idea. Maybe accessibility issues can be solved using metadata associated with rectangles on the screen while the visible text is constructed out of paths. I don’t know.

                        1. 4

                          See my comment below: embedding fonts can work, but it’s too complex, and we already have HTML for that.

                          1. 2

                            This has licensing concerns. Generally “rendered” images have different rules in licenses than embedded fonts. Of course one can argue that there is no difference between vector graphics and a subsetted font, but I don’t want to try to explain that to a judge.

                          2. 2

                            You’re right, but it’s the biggest problem with SVG and a huge accessibility hit. (I love SVG otherwise.)

                            1. 3

                              Why do you think so? Reading the text that is embedded in the graphic won’t be the user’s biggest issue if they cannot see the graphic itself.

                              1. 3

                                Not only accessibility for low-vision but also for things like searching, copying, scraping, remixing, low mobility etc.

                                1. 2

                                  You should use html for that. You can even place the text on your graphic if you want. svg is for graphics, not text.

                                  1. 2

                                    HTML can’t do stuff like custom kerns (among many other issues).

                                    1. 1

                                      I’ve never seen software that does custom kerning.

                                      1. 3

                                        Inkscape does it.

                                        1. 2

                                          Yeah, saw it just now, cool (but better get a better font ;)

                                2. 2

                                  Embedding fonts would be cool, but also very complex to get right e.g. you should not have to embed the same font separately in every image.

                            2. 5

                              The key difference that I see is that TinyVG does not do text, which is one of the more complicated features of SVG to implement (and I assume of PDF (and I assume of PostScript)).

                              1. 4

                                I get it. My resume is an SVG file. I then use Chromium to print it to PDF, since Firefox renders my resume incorrectly.

                                And then sometimes the PDF doesn’t work quite right. I gotta fix that.

                                1. 9
                                  • The file extension is implicit, undefined in the specification.
                                  • No media type is even suggested either.
                                  • Limited to sRGB, though at least it’s explicit about that. I’m not counting the “custom” value.
                                  • RGB 565 has an undefined colour space.
                                  • Gradients are implicitly in sRGB’s pseudo-gamma space, therefore incorrect. See reference rendering.
                                  • Blending is undefined, therefore implicitly in sRGB’s pseudo-gamma space, and incorrect.
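
                                  For anyone who hasn’t hit the gamma point before, here is a sketch (helper names are mine) of why interpolating sRGB-encoded values is wrong: the midpoint of a black-to-white gradient comes out too dark unless you convert to linear light first.

                                  ```python
                                  def srgb_to_linear(c):
                                      # Inverse sRGB transfer function (IEC 61966-2-1)
                                      return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

                                  def linear_to_srgb(c):
                                      return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

                                  # Midpoint of a black (0.0) to white (1.0) gradient:
                                  naive = (0.0 + 1.0) / 2                  # averaged in encoded space: 0.5
                                  correct = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
                                  print(naive, round(correct, 3))          # 0.5 vs ~0.735
                                  ```

                                  An encoded value of 0.5 represents only about 21% of white’s light output, which is why naively blended gradients look muddy in the middle.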

                                  Our dear author, besides being a newbie to technical writing, also can’t into terminology:

                                  A color value of 1.0 is full brightness, while a value of 0.0 is zero brightness.

                                  …and so it sucks, just like SVG. Actually, SVG at least has wide support for gamma-correct filters.

                                  Why can’t humanity learn.

                                  One mildly redeeming feature is that it looks easy to implement in Cairo terms, because it’s also awfully broken. And even then it’s not ideal.

                                  1. 12

                                    Instead of just ranting, feel free to contribute: write a GitHub issue on the specification repo. This is not a 1.0 release but a first draft of the specification, and I’m open to more precision and correctness. Keep that in mind.

                                    • The file extension is implicit, undefined in the specification.
                                    • No media type is even suggested either.

                                    Both have been in the specification for an hour or so; the version on the website will be updated tomorrow at 5:00 UTC

                                    • Gradients are implicitly in sRGB’s pseudo-gamma space, therefore incorrect. See reference rendering.

                                    Gradients are not yet specified properly, but they are defined to be blended in linear color space. See the reference implementation

                                    • Blending is undefined, therefore implicitly in sRGB’s pseudo-gamma space, and incorrect.

                                    Blending is defined as linear color blending, as implemented in the reference implementation

                                    • RGB 565 has an undefined colour space.

                                    That is kinda correct. Feel free to propose a good color space that will fit real-world applications. I should figure out whether display controllers can properly set gamma curves; that would allow just fixing it to sRGB as well
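
                                    For what it’s worth, here is a sketch of what a renderer has to do with RGB 565 today (the decode below is my assumption, not something the spec pins down): the bit expansion is mechanical, but without a defined transfer function the resulting numbers have no fixed colorimetric meaning.

                                    ```python
                                    def unpack_rgb565(px: int):
                                        """Expand a packed 5-6-5 pixel to 8-bit channels with rounding."""
                                        r5, g6, b5 = (px >> 11) & 0x1F, (px >> 5) & 0x3F, px & 0x1F
                                        return ((r5 * 255 + 15) // 31,
                                                (g6 * 255 + 31) // 63,
                                                (b5 * 255 + 15) // 31)

                                    # Are these sRGB values? Linear? Display-native? The spec has to say.
                                    print(unpack_rgb565(0xFFFF))  # (255, 255, 255)
                                    print(unpack_rgb565(0xF800))  # (255, 0, 0)
                                    ```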

                                    Our dear author, besides being a newbie to technical writing, also can’t into terminology:

                                    I am not a native speaker, so excuse my bad English technical writing. My main language is German, and I can do better technical writing there, but I assume you wouldn’t be happy with a fully German spec either.

                                    1. 6

                                      I’m a native grump, and I’ve seen way too much broken stuff, nice to meet you. I considered implementing the format, but the more I read, the faster I backed out.

                                      The specification says 1.0, and there are no obvious traces of it being a WIP, so I criticized it as such. If you make changes to it now without bumping, you’re losing points in the technical documentation department already. If you don’t, your 1.0 is subpar. The document also unnecessarily craps on SVG in the introduction, which kind of set the tone for me.

                                      […] the color is linearly interpolated […]

                                      is ambiguous at best; I assumed the straightforward interpretation. “Is interpolated in a linear colour space” will finally be clear.

                                      My point about RGB 565 is concerned with what I see as omission. Perhaps you’ve quoted inappropriately.

                                      I’m no expert here, and colour is tricky, but scRGB in a floating point format would be future-proof and easy to convert. Find an expert.

                                      A color value of 1.0 is full brightness

                                      If you replace “brightness” with “intensity” (of primaries), that particular sentence will stop sounding funny.

                                      1. 4

                                        Coming from the same-ish quadrant: what would be the less grumpy wishlist (spec and otherwise)? From my booster-jab-muddled mind, what I can think of right now:

                                        1. colour layers to MSDF friendly textures for low-effort accelerated GPU rendering.
                                        2. multiple LoDs and rasteriser/API that reasons in target density and subchannel layout control for biasing.
                                        3. default to perceptual wide-gamut colour-space, SDR tone mapping controls?
                                        4. build-system integration, I guess a coming zig release will solve this with the C output target (22% now?) but the need to vendor in an amalgamation into existing large C infrastructures where CM refuses new tools in build-chains is real.
                                      2. 5

                                        Just because I know it sucks when someone shits on your work, I’ll say this: ignore that guy, they’re just being an asshole. Your library looks awesome, and I’m excited to see how it develops.

                                        1. 12

                                          I mean, the critique is valid, only the presentation is bad ;)

                                          This is actually all stuff i will incorporate in improving the spec.

                                          1. 1

                                            Awesome. I was going to suggest something similar (CMYK support). If this could work for print, it could really take off.

                                            Regardless, I think this is really cool.

                                    2. 2

                                      All of this stuff made me realize: XML is meant to be an authoring format for vector graphics, like xcf or psd, which contains not only the final graphic information but also how that graphic is constructed piece by piece.

                                      Markup languages in general were never meant for anything other than annotating a corpus of text.

                                      1. 1

                                        Don’t see any use case for this. Graphics software and browsers won’t support it, and if it has to be converted to SVG to be usable, then it’s basically just one more step. All old technologies can probably be done in a better way if started from scratch, but the question is whether it’s worth it.

                                        1. 3

                                          Have you noticed how many novel raster image formats have gotten browser support in the last few years? It actually isn’t that hard a bar to pass. There is some bureaucracy involved, but the hardest part is usually getting enough developer buy-in and agreement on the details of a format spec. Once enough developers like the format and agree on how it should work, submitting an implementation to one browser vendor and getting buy-in isn’t an impossible task, and once you get one, the others have been following suit pretty quickly.

                                          1. 8

                                            Oh no. For image formats the bar is very high. In the last 25 years we’ve got:

                                            • WebP that came to existence only because of the enthusiasm for the VP8 codec (which in retrospect was overhyped and too early to get excited about). It took several years of Google’s devrel marketing and Chrome-only website breakages before other vendors relented.

                                            • APNG, because it was a relatively minor backwards-compatible addition. Still, it was a decade between when it was first introduced and became widely supported.

                                            • AVIF. It’s still not well supported. It got in only because of enthusiasm for the AV1 codec (the jury is still out on whether it’s a repeat of the WebP mistake). It’s an expensive format with legacy ISO HEIF baggage, and nobody would touch it if it wasn’t an “adopt 1 get 1 free” deal for browsers with AV1 video support, plus optimism that maybe it’d be easy for Apple to replace HEIC with it.

                                            Video codecs got in quicker, but market pressures are quite different for video. Video bandwidth is an order of magnitude more painful problem. Previous codecs were patented by a commercial entity that was a PITA for browser vendors. OTOH existing image codecs are completely free and widely interoperable. While not perfect, they work acceptably well, so there isn’t as much appetite for replacing them.

                                            The future of JPEG XL in browsers is uncertain, because AVIF may end up being a good-enough solution to WebP’s deficiencies. AV1 support is a sunk cost, and browser vendors don’t want more attack surface.

                                            JPEG 2000 is dead. JPEG XR is dead. JPEG XT wasn’t adopted. Even arithmetic-codec old JPEG wasn’t enabled after the patents expired.

                                            1. 1

                                              JPEG XL has a really strong chance thanks to JPEG compatibility and best-in-class lossless compression.

                                              https://cloudinary.com/blog/time_for_next_gen_codecs_to_dethrone_jpeg

                                              1. 3

                                                That’s what authors of JPEG XL say, not what browser vendors say. And in this case browser vendors are the ones making the decision.

                                                JPEG XL does have very good compression and a bunch of flashy features, but browser vendors aren’t evaluating it from this perspective. They are looking at newly exposed attack surface (which for JPEG XL is substantial: it’s a large C++ library). They are looking at risk of future problems (there’s only a single implementation of JPEG XL, and vendors have been burned by single implementations becoming impossible to replace/upgrade/spec-compliance-fix due to “bug-compatible” users). They are weighing benefits of new codec vs cost of maintaining it forever, and growth of code size and memory usage, and growth of the Accept header that is always sent everywhere. You could say the costs are small, but with AVIF already in, the benefits are also small.

                                                Here are my bets:

                                                1. If Safari adds AVIF, then AVIF wins, and JPEG XL is dead. This is because AVIF will become usable without content negotiation, which will mean it will be a permanent requirement of the web stack, and browsers won’t be able to get rid of it. Supporting JPEG 2000 when everyone else supported WebP didn’t work out well for Safari, so I don’t expect Safari to add JPEG XL first.

                                                2. OTOH if AV1 flops, or gets obsoleted by AV2 before AVIF becomes established, then we could see browser vendors drop AVIF and add JPEG XL instead (unless they keep AV1 anyway, and maybe go for a lazy option of AVIF2).

                                                1. 1

                                                  Chrome & Firefox have JPEG XL implemented behind flags in deployment. (I can look at JPEG XL images in Firefox Nightly on Android right now.) WebKit is currently implementing Bug 208235 - Support JPEG XL [NEW]:

                                                  • Bug 233113 - Implement JPEG XL image decoder using libjxl [RESOLVED FIXED]
                                                  • Bug 233325 - [WPE][GTK] Allow enabling JPEG-XL support at build time [RESOLVED FIXED]
                                                  • Bug 233364 - JPEG XL decoder should support understand color profiles [RESOLVED FIXED]
                                                  • Bug 233545 - Support Animated JPEG-XL images [RESOLVED FIXED]
                                          2. 2

                                          Maybe it’s not a perfect match for the browser, but, for example, Qt applications can benefit largely from this by reducing the complexity of the icon rendering implementation.

                                          The same goes for embedded or, in general, memory-constrained applications. TinyVG graphics can be rendered with as little as 32k of RAM, so there is also a speed benefit in that (less memory usage => faster)

                                          3. 1

                                            Hm… It seems like it still struggles with rounded ends on line segments. I wonder how that could be fixed.

                                            Also, what on earth is inverting the color on the GNOME and KDE folder icons?

                                            1. 3

                                              Ah, someone that actually looked at the PDF data!

                                            This is actually some magic from the Papirus icon theme, which has magic color values like #dark_theme that I just replace with white, gray and dark gray in the conversion process. Otherwise, I could also just yeet those images, as they need SVG preprocessing. That would probably be the better way.

                                              And yeah, line ends are specified as “round” right now. More discussion here: https://github.com/TinyVG/specification/issues/4