> Thus, in this particular case, blaming Nvidia for not working with Wayland
I mostly agree with the article but this part is just plain stupid. No one is blaming Nvidia for not supporting Wayland; they blame Nvidia for refusing to release documentation so that other people can make Wayland work. That’s a 100% legitimate criticism, and it’s a criticism that Xorg people have just as much reason to make as Wayland people. Any hardware vendor that refuses to release documentation for their hardware sucks!
The security thing is slightly misrepresented here: it’s not just ‘don’t run untrusted applications’; it’s that the X11 protocol has no way of differentiating between a 100% trusted-and-may-take-complete-control-of-your-system app and any less-trusted app.
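To make the threat model concrete: under core X11, any client can observe global input. A minimal demonstration with the stock xinput tool (the device id is illustrative; find yours with the list subcommand):

```
xinput list       # locate your keyboard's device id
xinput test 11    # id is an example; prints every key event,
                  # regardless of which window has focus
```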
It’s also misrepresented by the Wayland folks: this isn’t intrinsic to X11. It would be fairly simple to add a capability mechanism to the X11 protocol and allow clients to be restricted. In fact, that’s more or less possible today with something like Xephyr: you run an X server inside the X server, and clients connected to the nested server can see only that server’s resources. Xnest did something similar as a lighter-weight proxy. If you wanted to put in the engineering effort, it would be possible to add to Xorg the equivalent of Jails: isolated namespaces for different clients or groups of clients, so that they can’t see each other. It would be a lot of work, but it would be a lot less work than writing an entirely new display server and getting buy-in from everyone running things on top of it.
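A rough sketch of the Xephyr approach (the display number, window size, and application name are all placeholders):

```
# Start a nested X server; clients pointed at it cannot see the
# windows, input, or selections of the outer server.
Xephyr :2 -screen 1280x800 &

# Run the untrusted client against the nested display only.
DISPLAY=:2 some-untrusted-app
```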
It would also be less effort to do an X12 that removes all of the extensions that no one uses anymore (and core behaviours, such as drawing non-antialiased lines, that don’t map well onto modern GPU architectures) and builds Render, Composite, and Damage into the core. This could provide an Xlib that abstracts over the differences, so that only things that use XCB directly would need to be modified.
> the X11 protocol has no way of differentiating between a 100% trusted-and-may-take-complete-control-of-your-system app and any less-trusted app.
That’s only true if you look at core X11 alone. There’s been a SECURITY extension around for a long time that does exactly this; it’s the reason ssh -X and ssh -Y are different things.
It isn’t amazing, but it would definitely be less effort to just fix the bugs in there than to throw it all out…
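For the curious, a hedged sketch of what the two ssh flags roughly correspond to under the hood (the display and timeout values are examples; OpenSSH drives xauth for you):

```
# ssh -X requests an *untrusted* authorization, which the SECURITY
# extension uses to restrict the client; ssh -Y requests a *trusted* one.
xauth generate :0 . untrusted timeout 1200   # restricted, expiring cookie
xauth generate :0 . trusted                  # full classic X access
```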
Thanks! I thought they just provided different ways of setting the MIT magic cookie. Today I have learned about the X11 SECURITY extension. It looks as if it’s not a great design (in particular, the set of features is very static and it doesn’t provide a good mechanism for limiting communication with the WM, which can act as a confused deputy), but it also isn’t too far off what I’d want.
For embedded, mobile, and high-performance computing, Linux-based systems have already won in terms of popularity.
Ruling the desktop OS world and providing a system for people that just want to watch movies and play games has been an implicit non-goal for the Linux community for several decades now, with the exception of a few distros like Ubuntu and Pop!_OS. And that is completely fine!
Linux Desktop users should ideally be part of a whole where the “payment” for using the system is to try to contribute back. This way, everyone benefits, and the community thrives.
Let Android, ChromeOS, Windows, macOS, iOS, Fuchsia, Haiku and others deal with the support burden that these users represent. People who want to contribute and participate can help each other over to Wayland (and possibly Sway as well). People who just want a stable Linux desktop can use Xfce4 + Xorg now and forever and ever.

Triple win.
Functionality > Security, Incomplete Product… there’s an elephant in the room that the author is oblivious to, and which readily reveals itself as soon as someone utters the magical question: why is Wayland practically omnipresent on appliances at this point, but not on desktops? I mean, most of the BSPs I was carving out in 2016 used Wayland, at a time when most Wayland compositors that could do more than tile windows on a screen were a dysfunctional, crashing mess on like 90% of the desktop computers in the real world.
That elephant is that there are extraordinarily few people working on Wayland-related desktop technologies these days. Actually, there are extraordinarily few people working on Linux-related desktop technologies these days, especially if you take into account how complex the machinery they’ve created over the years is. There were barely enough people working on X11-related stuff, and they had 30+ years’ worth of technology behind them, with enough inertia to keep going on its own for a while. If you look at how few people are doing any desktop stuff with Wayland and at how much they’ve achieved, it’s hard not to think that Wayland got at least some things right, because there’s no way a bad technology would allow so few people to get so many things working.
The current state of affairs has very little to do with Wayland making the right or wrong trade-offs. Lots of people look at X11’s functionality and laugh at Wayland’s, but forget that X11, and a lot of what eventually became Xorg, was sponsored, or outright developed, by large (and small) commercial vendors, who poured a lot of money into it and explicitly cared about the desktop side very much. This just isn’t the case anymore. The golden age of Unix workstations is long past, and most of the vendors who fund and drive Linux development would just as soon have you running WSL2, now that it’s an option.
Things haven’t died out completely because there are some vendors who still have stakes in the game (Nvidia, Valve to some degree), but the Unix desktop market isn’t exactly a thriving field anymore. There ain’t enough money in it to motivate the big players, so most of the work ends up being done by independent developers in their spare time. Of course it’s lagging and never finished. People won’t send patches and don’t have money to spare – so how long do you think it’s going to take to catch up with Xorg, which had a bunch of people working full-time on it for years?
Not if you count OS X. And I think it fits the bill pretty well. Pretty much everything you could point to and say, “But it can’t be a Unix workstation because XYZ!” was true of plenty of golden-age systems that everyone agrees were Unix workstations.
Proprietary desktop rendering not built on X11? Just like pre-Solaris SunOS.
Closed-source? I don’t remember being able to download the HP-UX source for free.
Weird almost-but-not-quite-compatible command-line tools that break your shell scripts? Hmm, was that ps -ef or ps -aux in those scripts?
Tied to a specific vendor’s non-commodity hardware? Sure, just like IRIX.
Not built on top of an “official” Unix distribution? See BSD vs. SysV.
To add to what @x64k says: You can buy a Mac and get a decent UNIX workstation, but it’s a mostly proprietary UNIX and it’s probably not the one that you’d use for any large-scale deployments (if you are targeting macOS or iOS, it’s almost certainly because of the user-facing features and not because of the XNU kernel). xhyve is great and you can run whatever open source *NIX systems you want on it (if you use Docker, it will manage xhyve for you).
You can also buy a high-end Dell (or whoever) desktop running Windows and run as many *NIX systems as you want on Hyper-V (and if you care about only Linux, WSL2 or Docker will manage orchestration of a load of them for you).
In both cases, you get a workstation that probably isn’t running the same OS that you’re deploying on the server but can happily run it in virtualisation and can also run all of the business apps that you may need. Other than the somewhat difficult to quantify benefits of software freedom, there’s very little tangible advantage to running an open-source *NIX OS on a workstation.
I’m running Windows on my work-provided laptop and desktop, macOS on my personal laptop, and both of them are great for the development work I do, which involves remote access to a FreeBSD [virtual] machine somewhere.

Virtualisation in general and the cloud in particular have removed most of the need to run the same OS locally that you run on your target system. That’s taken away a lot of the need for UNIX workstations, to the extent that the only major vendor still in existence has carefully removed all mention of UNIX from their marketing material, in spite of the fact that their latest machines are officially certified UNIX workstations.
I don’t mean to say you can’t get a good Unix workstation anymore – and I am counting OS X (as in, I’m counting the Mac Pro). You can still get a good Unix workstation from some manufacturers, it’s just that it’s a footnote in their offering, even for them, and it’s a domain that’s largely neglected by hardware manufacturers, software developers, and especially UX/UI devs.
> That elephant is that there are extraordinarily few people working on Wayland-related desktop technologies these days. Actually, there are extraordinarily few people working on Linux-related desktop technologies these days…
I think that there are relatively few people working on desktop technologies in general these days. It seems to me that since smartphones and web apps took off in the late 2000s, desktop GUI toolkits and desktop-only applications have kinda plateaued. This has just hit Linux harder than Windows/macOS, since there isn’t as much money behind fundamental improvements. But you can also look at releases of macOS over the 2010s and, even weeding out the haters who bitch about every UI change (or lack thereof), it seems from the outside like things have generally stagnated.
Improving desktop technologies doesn’t make people money anymore, and isn’t where the exciting innovation happens.

There’s a lot wrong with this rant.
First and foremost, the post is largely framed as if there are two groups of people: the producers and the consumers of Xorg/Wayland. You can see that framing in its appeals to “supply and demand”, in its focus on what is most valuable to users, and in its dismissal of ease of development as something to care about. Essentially, the post treats FOSS display servers as a product to consume.
But this is free software. There’s no fundamental difference between producers and consumers. You aren’t paying for the development of a display server. You aren’t owed anything. If the maintainers of Xorg no longer enjoy maintaining it, they should stop. And they did. If that doesn’t work for you, the onus is on you to pick up development where it left off.
If that model doesn’t work for you, go pick up Windows or macOS, where you are literally paying money to not have these problems. That’s awesome. But people who use Linux as a desktop are opting into a different model where they are using software distributed “WITHOUT WARRANTY”. It’s unreasonable to then complain that the people who screw around with this software in their free time have decided to stop or switch gears to something they enjoy more.
And then there are some small nitpicks.
> Words like DPI, display scaling, framebuffer and such are meaningless
Plenty of users want different DPI and display scaling on different monitors. I’m one of them. This is a very real feature of Wayland that impacts my life, and I’m sure many would agree.
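For instance, under a wlroots compositor such as sway this is a one-liner per output (the output names are illustrative; swaymsg -t get_outputs lists yours):

```
swaymsg output eDP-1 scale 2   # HiDPI laptop panel
swaymsg output DP-1 scale 1    # external 1080p monitor
```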
> Nvidia makes hardware. Their job is not to sabotage their product so it can work with arbitrary bits of code out there. It is the job of software vendors to make their software work with the hardware in the best fashion (if they want to)
Open sourcing a driver (or even just making it source-available) is not “sabotaging their product”. AMD does it just fine with their GPUs. I won’t speculate on why Nvidia doesn’t cooperate with open source, but I suspect that the reason would frustrate me.
> If that model doesn’t work for you, go pick up Windows or macOS, where you are literally paying money to not have these problems. That’s awesome. But people who use Linux as a desktop are opting into a different model
This claim is fundamentally incompatible with… something that I don’t know if there’s a name for.
Basically, there are two camps in Linux: the “LibreApple” camp (who want to replace Windows/etc with a FOSS equivalent and improve computing for everyone) and the “PowerUserOS” camp (who want an OS by power users for power users, a lot of whom see Linux’s niche status as a benefit, and who like to say “Linux is like Lego”).
By definition, LibreApple can’t be exclusive to hackers (“everyone” means “everyone”), which means you either literally teach the entire world to code, or you build Linux with a fundamental difference between producers and consumers.
Point is, you come down firmly in the “PowerUserOS” camp, to the point where you’re not even acknowledging the existence of LibreApple.
You could claim that “LibreApple means FSF and PowerUserOS means the open source crowd”, but 1) I’m not so sure that’s always true (see below), 2) you used the term “Free Software” so me describing you as in the “open source crowd” would make things less clear, and 3) there’s enough semantic confusion around the phrases “free software” and “open source” that I wanted to exclude them from my comment if at all possible.
From point 1 above: For instance, a lot of “LibreApple” people see Steam as a good thing for Linux, despite being a proprietary platform for (mostly) proprietary games. I don’t really want to get into this semantic discussion about FOSS though, I’m just preemptively responding to an expected response.
The problem is that “LibreApple” doesn’t work. Apple has funds to be Apple. The Linux ecosystem is developed by volunteers for fun. For the majority of FOSS, development will always be by volunteers for fun.
Maybe if there were a company out there whose business model was “buy our Linux-compatible software and we promise it will just work”, then there could be a LibreApple. But I don’t know of any such company. System76 is the closest I can come up with. Buy their stuff and it’ll probably just work. But it’s not like they have tons of control over Wayland/X11, since the display servers are written by volunteers for fun. So unless they want to take on the whole stack, there will still be problems.
I don’t know if the Linux ecosystem can house a company that’s paid to make the whole software stack Just Work. I hope it can. I know I’d pay for it.
> The Linux ecosystem is developed by volunteers for fun. For the majority of FOSS, development will always be by volunteers for fun.
This hasn’t been true for a long time. Most of the big projects (for example the kernel, glibc, systemd, GNOME) are backed by big companies. Red Hat is now part of IBM and employs a load of the core developers for key parts of the system. It’s difficult for a volunteer-run project to keep up with these.
I guess that’s fair. But I think the foundation of what I was saying is still true. If you’re not paying for a product, and instead are essentially relying on the enthusiasm of a volunteer (or a company donating to FOSS, which is not unlike volunteering), then we can’t look at things from the perspective of the software being a product. It’s not a product because no one is buying it.
That said, I admit the situation becomes more complicated with companies involved. They have more funds to throw at developers to maintain the small details that individual volunteers often don’t want to. But the incentive structure to do things like “support Xorg forever” is still missing. If there were a giant user base paying for Xorg that might stop paying if all their programs stopped working, maybe Wayland wouldn’t exist, or would have better backwards compatibility.
‘No end in sight’ seems wrong to me. I’ve been using Linux as a daily driver desktop for about 3 years now, and in that time Wayland has gone from completely nonviable for consumers to the default protocol for my distribution. We also now have Nvidia acceleration on Wayland and XWayland. The only major thing left I believe is screen sharing, and that gap is closing quickly.
I’ve often thought how neat it would be to write “XorgLite”: the Xorg codebase with network connectivity removed (supporting only Unix domain sockets), a single fixed font compiled in to more or less force the use of Xft, UTF-8 declared the only supported character encoding, and maybe even going so far as to hardcode 32bpp only and remove support for palettes and whatever.
It would still be X: nothing in the “standard” requires remote network connectivity, more than a single font, etc. It would just be only the parts of X used by the vast majority of desktop users.
If you need server-side fonts and remote windows, by all means use Xorg Classic…but I think there’d be a userbase for this.
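For what it’s worth, one slice of this is already reachable with stock Xorg (the display number is an example; -nolisten tcp is the default on most modern distros anyway):

```
# Listen only on the local Unix domain socket, no TCP.
Xorg :1 -nolisten tcp
```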
What benefit would this bring? It’s already quite practical for applications to be written that way today. There’s nothing to really gain by breaking the others.

It would be neat.

It would remove a large security vulnerability and bug surface area.
It’s really stupid to just dismiss the security angle like this post does. The goal of a huge portion of security work in the past decades has been to get away from the model where it’s “game over” once you have executed untrusted code. With Flatpak and Snap, the goal is to be able to execute applications from vendors you don’t necessarily trust, knowing they can’t do anything dangerous without your permission. Exactly like how iOS and browsers have worked since their inception.
Any post whose take on security is, “once you have executed an untrusted line of code, it’s game over”, can be completely disregarded in these discussions IMO.
It is important to note though that X11 is actually compatible with sandboxing. It isn’t even that hard to do a decent job today, and with some work I’m sure it could become very simple and reliable to use without any breaking changes.
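As a hedged illustration of how little work a decent job takes today: sandboxing tools can already pair an X11 client with a nested server (assumes firejail and Xephyr are installed; the application name is a placeholder):

```
# firejail spawns an Xephyr instance and points the jailed client at it,
# so the client cannot snoop on the rest of the session.
firejail --x11=xephyr some-untrusted-app
```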