I’ve got a bad feeling about this.
I’ve been wanting this for a long time now, and had no idea there was an effort actually being made towards it.
I think this author does a really good job of explaining why, especially in the parts under “But why would I ever want…?” I don’t have a lot to add to that, other than to say that this is “document and open up the pipeline” work. It’s not adding complexity; it’s giving web developers access to complexity that was already there but has been a black box.
This is exactly the strategy that was behind OpenGL’s initial and continued success. There are newer APIs, such as Metal, which are arguably making progress on unseating OpenGL after a decade. But they’re only doing so because they promise to open a black box which had grown over time; the hardware’s capabilities have diverged from the API’s, and there has been a gradual growth of implementation complexity beneath what used to be a very-low-level API. So opening up the pipeline isn’t only the strategy that worked for OpenGL; it’s the strategy that is working for OpenGL’s replacements, too.
I think it’s completely reasonable to be afraid of what looks like new complexity, but in this case it isn’t. This has the potential to clear up a lot of portability issues, and I think that’s exciting.
This is a good perspective. I fear it has great potential for abuse, but it’s within the bounds of my “acceptable” uses of browsers as intelligent document viewers, so I guess I must accede. Still, it’s another step towards forgetting that we once called browsers user agents. As in, agents for the user. I haven’t felt like a browser has been my agent for some time. Once upon a time I could tweak the default fonts and such to customize how a page rendered. But now I’m going to be forced to use increasingly low-level Greasemonkey-style hackery to achieve the same effect.
That’s an interesting guiding principle, about which it would be possible to have a tangential discussion - I agree in the abstract but suspect that in the specifics, I’d categorize a lot of borderline things as being documents rather than applications. I also think the conceptual simplicity is a significant argument for allowing the case where a document and its editing interface are both in the browser.
My sympathies as far as the loss of agency. I’ve felt it too, especially with regard to accessibility issues - I’m reading this in what is physically a 36pt font, although it’s probably more like 18pt measured in “font files are full of lies” points. Site designers tend to test, at best, whether their stuff looks okay in 10pt, 12pt, and 14pt… Suffice to say the zoom level I need breaks most of the web.
So the plan is to introduce even more complexity into an already convoluted system. Maybe it’s a knee-jerk reaction to having way too many unnecessary problems with CSS, but I don’t feel good about this.
I think this is a situation where we need to do some damage on the way to undoing prior damage. Yes, adding more hooks into the existing system will create more complexity. Yes, having access to the internals means there’s more stuff to know about. However, those hooks, and the machinery that powers them, are built upon internals that were already there.
In the future, that lets you move the “user land” boundary closer to the internals. You could imagine a next-next-next-gen browser that implements only the internals and provides all the higher-level stuff as a library. If you ever wanted to be really free of CSS, you’d simply not depend on the CSS engine, and instead use the render tree, which abstracts over hardware rendering, directly.
We can’t shrink the surface area of browsers today, but we can make shrinking it possible: strengthen the divide between high-level constructs and low-level ones, then build the high-level functionality in terms of the low-level interface.
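A minimal sketch of that layering idea, with every name invented for illustration (this is not any real browser API): the “engine” exposes only one low-level primitive, and a CSS-like layout helper is written purely in terms of it, so it could ship as a library instead of as privileged internals.

```javascript
// Low-level layer: the only primitive the hypothetical engine must implement.
// It just records draw operations against an abstract render tree.
class RenderTree {
  constructor() {
    this.ops = [];
  }
  drawRect(x, y, w, h, color) {
    this.ops.push({ x, y, w, h, color });
  }
}

// High-level layer: a tiny "row layout" written entirely against the
// low-level interface. Nothing here needs access to engine internals.
function layoutRow(tree, boxes, gap = 10) {
  let x = 0;
  for (const box of boxes) {
    tree.drawRect(x, 0, box.width, box.height, box.color);
    x += box.width + gap;
  }
  return x - gap; // total row width, without the trailing gap
}

const tree = new RenderTree();
const width = layoutRow(tree, [
  { width: 100, height: 40, color: 'red' },
  { width: 50, height: 40, color: 'blue' },
]);
console.log(width); // 160
```

The point isn’t the layout algorithm; it’s that once the boundary is drawn like this, the high-level code can be swapped out without touching the engine.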
Maybe this will make http://gridstylesheets.org/ more viable. Would be nice. (Although I’d still love to see native support.)