Personally, I find the title of the article somewhat misguided, but it's an interesting historical summary of how the PostScript language was created, and how it became ingrained in publishing-style 2D graphics APIs (including OS-level APIs and SVG).
Where I think the article misses some important insight is in comparing those 2D APIs to the triangle-based 3D APIs of GPUs. In my opinion, a better parallel would be comparing the 3D APIs with the 2D APIs prevalent in game programming (not desktop publishing) in the '90s and earlier, before 3D accelerators became popular. Things like: sprites (~textures in 3D?), horizontal/vertical lines and single pixels (~triangles in 3D?).
Though OTOH, the desktop-publishing 2D APIs do seem to have become the only 2D APIs available to many games today, IIUC (i.e. mobile & web ones). Though I'm not in that community, so I may be wrong here.
So there are more political / technical details as to why it hasn't progressed further on the GPU side, and it is not for a lack of trying. Quick and flawed interpretation: part of the issue lies with Khronos OpenVG adoption on both the driver and application sides. Then we have extensions like NV_path_rendering ( https://developer.nvidia.com/nv-path-rendering ) not being adopted by other vendors. In the end, if you wanted accelerated drawing you would need multiple code paths based on the hardware you were running on, ranging from OpenGL with tricks (distance fields, heavy stencil buffer use) down to a software fallback. With 3D a software fallback was never a realistic option, so the pressure to cooperate was a lot stronger.
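To make the "heavy stencil buffer use" trick concrete: GPU path renderers (including NV_path_rendering's stencil-then-cover approach) fill an arbitrary path by fanning it into triangles and letting a per-pixel stencil toggle compute even-odd coverage, so concave shapes come out right even though the GPU only ever rasterizes triangles. Here is a minimal CPU sketch of that parity idea, evaluated per query point rather than per framebuffer pixel (all names are mine, purely illustrative):

```python
def _side(ax, ay, bx, by, px, py):
    # Sign of the cross product: which side of edge a->b the point lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def point_in_triangle(tri, px, py):
    # Point is inside (or on the edge of) the triangle if it is not on
    # strictly opposite sides of any two edges.
    (ax, ay), (bx, by), (cx, cy) = tri
    d1 = _side(ax, ay, bx, by, px, py)
    d2 = _side(bx, by, cx, cy, px, py)
    d3 = _side(cx, cy, ax, ay, px, py)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def stencil_fill(polygon, px, py):
    # Fan the polygon from its first vertex and toggle a "stencil" bit for
    # every fan triangle covering the point; odd parity means inside.
    # This is what the GPU stencil pass does for every pixel at once.
    v0 = polygon[0]
    parity = 0
    for a, b in zip(polygon[1:], polygon[2:]):
        if point_in_triangle((v0, a, b), px, py):
            parity ^= 1
    return parity == 1

# A concave L-shaped path: some fan triangles cover the notch, but the
# parity toggle cancels that spurious coverage out.
L = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]

print(stencil_fill(L, 3, 1))      # in the bottom arm -> True
print(stencil_fill(L, 3, 2.5))    # in the notch -> False
```

The second GPU pass ("cover") then draws one bounding quad with the stencil test enabled, shading only the pixels the first pass marked; distance fields are the complementary trick, trading the multi-pass stencil work for an antialiased edge computed in the fragment shader.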