I must be some kind of dinosaur to the Wayland devs. I don’t use a compositor. I avoid sending images down the wire in most of my programs (unless there’s a specific need). And I do very much enjoy my separate window manager and other utilities. I even use programs over the network - and got some of mine to jump windows from one display to another on demand (though in practice I usually just use terminal programs that way and pop up new copies of X windows as needed).
I know I’m in the minority of both users and developers, but X works very well for me. I am not looking forward to the day when my distro update forces all of this to break for no real personal benefit.
I have two specific beefs with Wayland:
It doesn’t work well, if at all, without a GPU. It’s glacial on a 2D framebuffer. If you want to work with a system with as few firmware blobs as possible, you probably can’t use Wayland on it. For that matter, I saw no improvement, and probably a slight performance degradation, on my system with a GPU (WX7100). I’m told Wayland will eventually work better on such systems, but X11 works now.
I use a custom background program that transparently watches what X window is on top and remaps certain modifier keys automatically. On X11 it “just works,” but it’s currently impossible in Wayland without hacking the window manager, AIUI. I’m on Fedora with GNOME and Mutter is apparently in no hurry to provide such functionality.
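For anyone curious what the X11 side of that looks like, here is a minimal Xlib sketch (not the commenter’s actual program; the remap-for-window command is a made-up placeholder for whatever per-application xmodmap/xkb tweak you would run). It just asks the server to push PropertyNotify events for the root window and reacts whenever _NET_ACTIVE_WINDOW changes - no polling involved.

    /* Sketch: watch the EWMH active-window property and run a command
       whenever the focused window changes. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;
        Window root = DefaultRootWindow(dpy);
        Atom net_active = XInternAtom(dpy, "_NET_ACTIVE_WINDOW", False);

        /* Ask the server to push PropertyNotify events for the root window. */
        XSelectInput(dpy, root, PropertyChangeMask);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);               /* blocks until an event arrives */
            if (ev.type == PropertyNotify && ev.xproperty.atom == net_active) {
                Atom type; int fmt; unsigned long n, left;
                unsigned char *data = NULL;
                if (XGetWindowProperty(dpy, root, net_active, 0, 1, False,
                                       XA_WINDOW, &type, &fmt, &n, &left,
                                       &data) == Success && data) {
                    Window active = *(Window *)data;
                    XFree(data);
                    /* "remap-for-window" is a placeholder for the actual
                       per-application modifier remap. */
                    char cmd[128];
                    snprintf(cmd, sizeof cmd, "remap-for-window 0x%lx", active);
                    system(cmd);
                }
            }
        }
    }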
As the Wayland maintainer, I’d be the first to say that nothing forces you to migrate to Wayland. Pick the tools that work best for you.
I don’t blame the Wayland devs for forcing a switch; that’s on the distros and other programs. (My comment about the Wayland devs seeing me as a dinosaur is that every Wayland explainer I’ve read or watched describes X in a way pretty alien to the way I actually use it.) For example, I don’t like PulseAudio. It doesn’t work very well even today, and I resisted touching it for years… but then Firefox dropped their support for ALSA and kinda forced my hand.
For a lot of programs, if upstream does something I hate, I’ll rewrite it myself or fork it or something like that. Even with Firefox I considered switching browsers, but that wasn’t really a good solution either. I switched sound systems instead of browsers, and PA continues to annoy me. At least my ALSA programs still work, though.
And I fear the same thing is going to happen with Wayland in the next several years. There’s an X server on it too, so that’s good… but my custom window manager won’t run. Possibly some day I’ll just write my own compositor, but I’d really rather not, so I’m not looking forward to it.
(Additionally, users of my graphics libraries keep asking me for Wayland support, but at least I can say “sorry, go use a different lib” to them. I doubt I’ll ever move my graphics lib over, since normal X clients work fine over there. But we’ll see.)
First it should be said that the Compositor, just like the Window Manager (they don’t need to be the same!), is just a regular X Client itself. Designating the Compositor as something “special” is just wrong.
I think this is the part where everyone’s going to disagree, because the core disagreement seems to center around the question of whether X’s system of window managers is truly elegant or not.
If the Compositor really was “just an X client,” then why can’t you run more than one of them on the same X server? Truly normal X clients, the sort that get window frames drawn around them or that draw such a frame themselves, can be combined, but a compositor or a window manager will always be a singleton.
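For what it’s worth, the singleton isn’t just a convention; the X server enforces it. A window manager “claims” the root window by selecting SubstructureRedirectMask on it, and only one client at a time may do so; a second attempt fails with a BadAccess error. A minimal sketch of that startup check (the same trick dwm-style window managers use; error handling is simplified):

    #include <stdio.h>
    #include <X11/Xlib.h>

    static int wm_already_running = 0;

    static int on_error(Display *dpy, XErrorEvent *e) {
        if (e->error_code == BadAccess)
            wm_already_running = 1;     /* someone else owns the root window */
        return 0;
    }

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        XSetErrorHandler(on_error);
        /* Claim the root window: map/configure requests get redirected to us. */
        XSelectInput(dpy, DefaultRootWindow(dpy),
                     SubstructureRedirectMask | SubstructureNotifyMask);
        XSync(dpy, False);              /* force the error, if any, to arrive now */

        if (wm_already_running) {
            fprintf(stderr, "another window manager is already running\n");
            return 1;
        }
        /* From here on, MapRequest/ConfigureRequest events arrive and this
           client alone decides what actually happens on screen. */
        return 0;
    }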
A good detailed look at both protocols was spelled out in this experience report on implementing the same hybrid tiling/stacking window manager in both X11 and Wayland. Some important things to note:
Window managers, even simple ones, want to be able to forcibly draw things on top. Apparently, this is ugly in X11 (after all, if two windows wanted to forcibly draw things on top, you’d get a logical contradiction). The same is true of lock screens. If the window manager is treated as a singleton, then the tiebreaker is obvious: the user chooses the winning window by configuring their window manager, things like the window manager’s own pop-ups just always win, and if the lock screen wants special privilege, then either it negotiates with the window manager or the end user just configures it that way. (Or you do what Hikari did and just implement the lock screen within the window manager.)
Window managers want things like keyboard commands. Hikari for X11 uses very inelegant polling to achieve this, and so does awesomewm (shown on page 11 of the Hikari slides). As with rendering things on top, if the window manager is treated as a true singleton, then it’s obvious that it should always win when trying to get at the keyboard; but if it’s “just another window,” then it’s not clear what should happen when the window manager and the lock screen implementation are both trying to grab it.
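As a sketch of the non-polling route in X11: a window manager can take a passive grab on the root window with XGrabKey, and the server then delivers that key to it no matter which window has focus (and a second client grabbing the same combination on the same window gets an error, so again there is exactly one winner). The key choice and the action below are arbitrary placeholders:

    #include <X11/Xlib.h>
    #include <X11/keysym.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;
        Window root = DefaultRootWindow(dpy);

        /* Grab Super+Return globally. */
        KeyCode kc = XKeysymToKeycode(dpy, XK_Return);
        XGrabKey(dpy, kc, Mod4Mask, root, True, GrabModeAsync, GrabModeAsync);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == KeyPress && ev.xkey.keycode == kc) {
                /* placeholder: spawn a terminal, cycle layouts, ... */
            }
        }
    }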
Draxinger’s fear doesn’t seem to have come to pass, exactly. There isn’t one super-Wayland compositor that everybody uses (this is what Mir is supposed to be, but Mir isn’t very popular). And everybody isn’t reimplementing Wayland from scratch, either. Instead, wlroots seems to have become the “Rails of Wayland,” the default starting point for most Wayland compositors that provides the mechanism to go with the compositor’s policy.
In other words, I’m fine with separating policy and mechanism, as long as there’s actually a place to set the policy, rather than just having everyone “play walls and ladders” in a big global namespace.
If the Compositor really was “just an X client,” then why can’t you run more than one of them on the same X server? Truly normal X clients, the sort that get window frames drawn around them or that draw such a frame themselves, can be combined, but a compositor or a window manager will always be a singleton.
Alright, then think of it like this. The compositor is an X client which has the property that, whenever there is a conflict, it wins. There can only ever be one client with that property. But it speaks the same API as all the other programs and generally has the same sorts of interactions. Composition is elegance: no reason to have two separate APIs when you could just have one.
Do they generally have the same sorts of interactions? I don’t think there’s very much similarity between a window manager and a regular application, and the cases where they do overlap are already covered by reusing the raw drawing API (GL, Vulkan, DPS).
Naively, I would expect a window manager protocol to be the opposite of a window client protocol. A compositor deals with incoming frames from many applications, while one of Wayland’s other goals is to not grant this ability to all apps. A compositor deals with one display buffer per monitor, while applications create as many logical display surfaces as they like. A compositor / window manager never loses focus, but decides on focus for all other apps. A window manager needs to be aware of the position of all windows all the time, but even if you think regular apps should be able to get this information, a regular app really doesn’t need to be told about these things unless it asks.
And if a compositor wants to render a regular window, why couldn’t it spawn another thread, or even process, and connect to itself?
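In X11 terms the asymmetry looks roughly like this (just a sketch to make it concrete, not anyone’s actual code): the window-manager side subscribes once and is then told about every top-level window change, while an ordinary client only asks when it happens to care.

    #include <X11/Xlib.h>

    /* Window-manager side: pushed. ConfigureNotify / MapNotify / UnmapNotify
       arrive for every child of the root window from now on. */
    void wm_side(Display *dpy) {
        XSelectInput(dpy, DefaultRootWindow(dpy), SubstructureNotifyMask);
    }

    /* Ordinary-client side: pulled. Enumerate top-level windows and their
       geometry only on demand. */
    void client_side(Display *dpy) {
        Window root, parent, *children = NULL;
        unsigned int i, n;
        if (XQueryTree(dpy, DefaultRootWindow(dpy), &root, &parent,
                       &children, &n)) {
            for (i = 0; i < n; i++) {
                XWindowAttributes a;
                XGetWindowAttributes(dpy, children[i], &a);
                /* use a.x, a.y, a.width, a.height ... */
            }
            if (children) XFree(children);
        }
    }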
I hope that one day we’ll work with displays using the tightest circle-packing arrangement for pixels (honeycombs), with >200 pixels per cm.
Reminds me of this pixel arrangement by Samsung. Still a rectangular grid, but interesting nevertheless.
There are some attempts for high quality font rendering with OpenGL (Google “Vector Texture Maps” or “Valve Distance Maps”), but they’re not very efficient: A 600kB vector font file blows up to several MB of texture data for just one single glyph size.
Rendering 2D graphics on the GPU is still not super awesome (compared with 3D), but it’s progressing and it’s much better than it was. Signed Distance Fields (SDFs, what TFA calls ‘valve distance maps’) have been succeeded by Multi-channel Signed Distance Fields (MSDFs, see here and here). Slug and Pathfinder probably represent the current state of the art, and Raph Levien is working on exciting things.
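For the curious, the sampling side of an MSDF is tiny: the texture stores three signed distances per texel, and taking their median is what reconstructs the sharp corners that a single-channel SDF rounds off. A sketch in plain C (in practice this runs in a fragment shader; the function names and the px_range parameter are mine, following the usual msdfgen-style convention):

    #include <math.h>

    static float median3(float r, float g, float b) {
        return fmaxf(fminf(r, g), fminf(fmaxf(r, g), b));
    }

    /* rgb: the three channels sampled from the MSDF texture, in [0,1];
       px_range: how many screen pixels the [0,1] distance range spans.
       Returns an opacity in [0,1] for the current pixel. */
    float msdf_alpha(const float rgb[3], float px_range) {
        float sd = median3(rgb[0], rgb[1], rgb[2]) - 0.5f;  /* signed distance */
        float screen_dist = sd * px_range;
        float a = screen_dist + 0.5f;                       /* ~1px AA band */
        return a < 0.0f ? 0.0f : (a > 1.0f ? 1.0f : a);
    }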
Many of the aspirational ideas presented at the bottom, especially with regard to network transparency, have been brought to life in Arcan (hi @crazyloglad).
Here I try to stay out of these discussions, as I have nothing at all positive to say about Wayland; the technical breakdown of all the things gone wrong would put me in a worse mood, and the attention a serious walkthrough of all the insane stuff in there would attract risks pushing me into depression. Use it to profit off exploiting a few million IVI and Tizen-empowered devices and move on.
“Imagine you’d simply hold your smartphone beside your PC’s monitor; an NFC (near field communication) system in phone and monitor detects the relative position, and you flick the email editor over to the PC, allowing you to continue your edit there. Now imagine that this happens absolutely transparently to the programs involved, that this is something managed by the operating system.”
Yeah, imagine that. The thing is, you don’t want it transparent to the programs involved; you want the right mechanisms to be dynamic, so that behaviour and visuals match the system you are presenting on rather than the one you are executing on. Transparent to the user. The client should just see it as crash recovery.
Even more so, though, Arcan covers all the good bits from X12.
Incidentally, if someone is interested in helping out with more detailed work on MSDFs for server-side text rendering, message me.
On the more fun stuff - if VR gets slightly more momentum (which looks to be the case, thank you Half-Life: Alyx), we will probably get more round displays with varying density, or affordable takes on the crazy cool stuff that Varjo does.
Not only do I have to agree; I’ll also add that the entire Wayland protocol is built like “we only use unsafe C pointers and callbacks to do stuff”. When the most common operations (like putting up a window and painting something on it) require an amusing number of event loops, callbacks, ping-pongs, etc., it means the whole protocol has been somewhat over-engineered.
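To make that concrete, here is roughly what the very first steps of a libwayland-client program look like before a single pixel is drawn: connect, fetch the registry, install callbacks, and round-trip so the compositor can announce its globals. A stripped-down sketch with error handling omitted:

    #include <stdio.h>
    #include <stdint.h>
    #include <wayland-client.h>

    static void on_global(void *data, struct wl_registry *reg, uint32_t name,
                          const char *interface, uint32_t version) {
        printf("global %u: %s v%u\n", name, interface, version);
        /* A real client would wl_registry_bind() wl_compositor, xdg_wm_base,
           wl_shm or linux-dmabuf, wl_seat, ... here, then create a surface,
           attach a buffer, and keep servicing frame callbacks. */
    }

    static void on_global_remove(void *data, struct wl_registry *reg,
                                 uint32_t name) { }

    static const struct wl_registry_listener registry_listener = {
        .global = on_global,
        .global_remove = on_global_remove,
    };

    int main(void) {
        struct wl_display *dpy = wl_display_connect(NULL);
        if (!dpy) return 1;

        struct wl_registry *reg = wl_display_get_registry(dpy);
        wl_registry_add_listener(reg, &registry_listener, NULL);

        /* Block until the compositor has answered; the callbacks above fire
           during this round trip. */
        wl_display_roundtrip(dpy);

        wl_display_disconnect(dpy);
        return 0;
    }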
The C library uses void * data pointers and callbacks, but the protocol itself has nothing to do with this. For instance there is an implementation of the Wayland protocol in Rust that does things differently.
The protocol is very simple, in fact. A few resources:
Sun and NeXT had Display Postscript, and OSX had Display PDF with Quartz – so it’s still possible to add a high level graphics layer on top, including color space conversion. The new parameter that the article introduced is the need for device-specific rasterisation. In a multi-screen desktop where a window might span more than one device, we would need to have the corresponding subsets rasterised to match the specific device.
DPS was great and technically standardized as part of PostScript, but more interesting IMHO was NeWS. It was almost like the modern web: a graphical application written in object-oriented PostScript ran in a separate process and communicated with a backend process. Stuff that could be done entirely in the GUI could run without having to round-trip to the “application logic” side of things.
I’d love to see a modern reimagining of NeWS. Keep the PostScript drawing model (everyone does anyway), provide a compositing model, a resource management layer (for data movement to the display server), an audio interface, and an interface to run GPU shader programs on top. Use WebAssembly for distributing the bits of client-side code. You’d end up with something that could be implemented in a web browser with WebSockets, Canvas, audio, and WebGPU, so you get a remote display interface that anyone can connect to with software they already have installed, but where people using it as their main display server can run something a lot more lightweight than a web browser.
NeWS always sounds interesting, but I’ve never been able to find much information online about what the specific API was. E.g., did it allow applications to ‘claim’ parts of the screen?
This may be interesting.
Ha, did that pop up on the orange site? I downloaded it today but forgot from where. A piece of gold, like Taligent’s OpenDoc environment.
Posting a cached version (sadly missing images) … the Wayback archive has visibility:hidden set on the entire content.
Archive.org version. Go to inspector: body > div.blog > div.blog_entry > div.blog_entry_text, and remove the visibility:hidden style. (I also recommend removing text-align:justify, as it doesn’t look good with monospace text.) Or, here are the first and second images.
My philosophy: Why bitch about a project from afar when you can bitch about it in dialogue with its maintainers? Make your bitching useful ;)
Except when the maintainers hold all the cards. It’s primarily Red Hat folks working on Xorg and Wayland, and they don’t want to be working on Xorg much longer, and they’re not going to keep working on Xorg even if Wayland doesn’t work for you. Scratching your itch isn’t their problem, and it doesn’t have to be, because you’ll have to use it eventually, itch scratched or not, once it’s the only game in town. Look how much trouble people have had maintaining a systemd-free distribution. The same thing will happen with Wayland, and I think that’s where a lot of the resistance is coming from.
The maintainers don’t hold the cards, and Red Hat isn’t the only player. Wayland is an open-source project, and as always, if you contribute then you get to have a say in discussions.
I originally had no say in Wayland development, but contributed server-side decoration support (a Red Hat employee has merged it, btw) and pluggable desktop components (still wip, but getting there, with adoption from many compositors).