So, a stupid question. In the cold new Wayland world, what’s to stop you from running Xwayland rootful and just running all your apps in that? That should support everything that you’d want from X but doesn’t require you to run Xorg directly … right?
If so, then why did we throw the baby out with the bathwater?
You could run Xwayland in a single-app compositor like cage, yes. That’d give you the ability to run on hardware X doesn’t support (e.g. Apple Silicon and some other ARM SoCs).
Yes, you could do that. It kinda sorta works. Just don’t use anything modern with that approach, such as GTK4, because it introduces visual bugs in window decorations. I don’t remember the details, but it has something to do with GTK4 introducing a new renderer that tries to make more direct use of modern hardware.
The parts of X11 that had to be thrown out revolve mostly around display management and relating visual output to input events. The X11 model for display management was to define a virtual screen and let the video drivers find a way to make the actual displays map to the virtual screen. This precludes any conventional, reliable model for handling changes in display topology, handling heterogeneous displays, vsync, etc.
So if you run an X11 program in Xwayland, you get to benefit from all of those display management improvements.
The potentially interesting question is whether you can manage Wayland windows with an X11 window manager using this approach. I wonder if it would be possible for a Wayland compositor to delegate window decoration and similar to an X11 window manager running in XWayland. This would require handling reparenting events and probably hooking into the composite / render / damage extension to identify some of the window management operations, but it might be possible. I bet a lot of objections to Wayland would go away if it provided a way for people to run their favourite existing window manager.
Slightly absurd in a wholeheartedly wonderful way, but I can’t find info on whether this is an actual re-implementation of the C64 or just a similar system on similar hardware. I.e., if I had an old C64 disk or cartridge and popped it in, would it work correctly?
Edit: Ah, found the answer in the FAQ (https://cx16forum.github.io/faq.html): “No, this is not an emulation of any prior computer. The X16 is a unique product with its own memory layout and devices. It can run simple BASIC programs written for BASIC 2, and it can even run machine language programs that strictly use KERNAL calls for input and output. It will not, however, run Commodore games without re-writing the graphics, keyboard, joystick, and storage routines for this platform.” Alas!
Yeah, this is less interesting to me personally for that reason. Part of my enjoyment with vintage computing is making an old system do something it was never intended (or built) to do. It’s not just the 8-bit aesthetic, whatever that is, and it’s not just a system that can be fully understood, even though I like that too. This is an incompatible small-scale system that’s inspired by old machines rather than actually being one.
Would I buy it? I salute all the work that went into this but I’m probably not its core market. It lacks the historicity that I find appealing, and it’s making a system more limited rather than making a limited system more capable (which as a hobby I find far more compelling). But I’m glad it’s coming to fruition and I certainly won’t turn up my nose at more 6502s out there. I’m just not sure I’d spend my retro nerd budget on this rather than, say, another peripheral for my beloved Commodore 128DCR.
There are some other projects that provide modern reimplementations of the C64. The Ultimate64 Elite is the one I’m keeping an eye on. Hopefully it will be in stock someday soon.
Those projects do support old peripherals, cartridges, software, etc.
I’m guessing this is just a repackaged POWER10 (not even modified) with OMI controllers on-package (either somehow with better licensing, or using the old RYF switcheroo) and the PCIe controller made amenable somehow. None of those specs are incongruent with POWER10.
Nothing else makes sense in terms of timeline and capital costs - modern chips are eye-wateringly expensive to make, even if it’s, say, a POWER10 derivative.
Exciting. I’m guessing that it’s somewhat of a spin-out of a bunch of hobbyists, which would explain working on it for a long time without having a registration sorted yet. Hopefully they get their paperwork sorted and filed soon enough (hard to make a trusted supply chain if you don’t know who you are trusting!)
The potentially lower entry pricing is exciting too. Would love to experiment with a workstation and a home-lab-sized server.
Yeah, I’ve got an inquiry in to talk to these guys. They really came out of nowhere and you want to make sure you’re not betting the store on yesterday’s technology in a new wrapper. But I trust Raptor and they seem very enthusiastic about the S1, so I’m hopeful.
This is my first time seeing anyone ever touch SOM, let alone OpenDoc. Knowing the boilerplate required to make it work and the UX confusion it caused, it’s no wonder it didn’t take off. Even if it got better, it’s one hell of a bad first impression.
The Mac IIci and other systems that use that case have a great solution to this problem: the power button can be rotated to lock in the “on” position. Otherwise everything here that doesn’t have hardware power switches or isn’t “always on” is set to automatically power back on after the UPS goes down (mostly repurposed Power Macs plus the POWER6). That’s saved my butt a few times.
Remote reboot is more interesting. I have remotely accessible power cyclers for that.
As a Palm hobbyist I love still holding the actual devices, but this is almost as much fun as PumpkinOS, which is itself making leaps and bounds. https://github.com/migueletto/PumpkinOS
Please let me know if you do! The latest UI update stole the tab navigation keyboard shortcuts in Safari, so you skip through a few tabs and get stuck on the GitHub one and have to reach for the mouse. And every time it happens, I start to hate GitHub just a little bit.
DuckDuckGo rolled out an update around 15 years ago that stopped the normal keyboard navigation shortcuts from working in their search box. I emailed a report and a couple of hours later got a reply from their CEO with a link to a test site to see if it fixed the problem. A couple of iterations later, it was fixed and they rolled it out. That kind of thing is the reason that I’m still using them so many years later. I wouldn’t be surprised if I start migrating away from GitHub in a few years.
My situation is (as usual) weird. I use Linux, but with a Mac keyboard. I have Firefox set to use Mac key combinations for copy, paste, etc. so that my muscle memory still works, but Github sees Linux, and overrides them with its own. I wonder if I should just bite the bullet and use a Mac user agent.
Having great extensions suddenly become incompatible really ground my gears. This change only seems to worsen the situation - I don’t see how changing the way modules are loaded will stop GNOME breaking extensions regularly any time soon, and if anyone finds a way to meaningfully take advantage of “allowing greater compatibility with JavaScript ecosystem” I will be pleasantly surprised.
Yeah, they migrated to a different import mechanism and switched from a utility module to inheritance at the same time? For what? They could have easily kept the utility module for a while and let extension authors deal with only one change at a time.
Don’t even try to get away with “well, it just works for me” or “but Wayland no worky”
Wow, amazing rhetoric right there. Paraphrasing an argument in dumb language and just stating it’s invalid without any counterargument.
From a technical point of view, great article. But “X works, Y does not” is as valid an argument as they come. If using X breaks Z and you need Z, fuck using X, I will use Y. Even if Z is “bad software”, I may need it.
If using X breaks Z and you need Z, fuck using X, I will use Y. Even if Z is “bad software”, I may need it.
The end of the article directly acknowledges it, mostly with respect to accessibility.
Don’t even try to get away with “well, it just works for me” or “but Wayland no worky”
Wow, amazing rhetoric right there.
There’s an inverse rhetoric that’s just as bad, if not worse because it’s never spelled out explicitly:
If a piece of hardware doesn’t work on Windows, it’s the hardware vendor’s fault.
If a piece of hardware doesn’t work on Linux, it’s Linux’s fault.
Why? Because my webcam/printer/wifi works in Windows, so if it doesn’t work on this new Linux thingy, it must be the new thingy’s fault, right? Don’t try to muddy the waters with “the vendor didn’t support Linux” or “the vendor didn’t write a driver”: the thing works on Windows, it doesn’t on Linux, it should be obvious to any child that Linux doesn’t work. Now stop bothering me, I have real work to do.
Or something. Actual late adopters are never that blatant, they tend to have shorter, less refutable quips, like “Linux isn’t serious”, or “why would I ever use RISC-V?”, or “this new language lacks tooling”. But the running theme is the same: holding the new thing to an unfair burden of proof. Here there is a case for X’s API being fundamentally outdated. Making something better (like Wayland is attempting) requires breaking it. Obviously that comes with growing pains, but blaming Wayland for this feels either ignorant or disingenuous, to be honest.
Now the article could explain all this in that many words, or it can use the shortcut it did: Don’t even try to get away with “well, it just works for me” or “but Wayland no worky”.
It’s a difficult balance to strike, on two accounts:
Nuance requires more text, which in many cases detracts from the main point. I routinely write nuances and caveats in my own essays, only to axe many of them just to keep it short and to the point. Sometimes those deliberate omissions are mistaken for ignorance.
Sometimes the best response is a dismissive coarse laugh. It’s a way of conveying that the attacked opinion is so ridiculous or outlandish that whoever holds it should quietly change their mind in shame. It’s a polarising double-edged sword, however, to be used sparingly and carefully. In the context of this post it’s a harsh introduction, but it does help set the tone of the entire post, which is about depicting X as a crumbling old dinosaur we should stop feeding as a matter of mercy.
There’s also the target audience to keep in mind. Public writings are accessible to anyone, but they’re rarely intended for everyone. Some people would be more swayed by a nuanced take devoid of mockery, but for me the tone was very effective at conveying how strongly the author feels about how bad the X situation really is.
And I didn’t know about battery life. Crap, I just set up XFCE with XMonad, which I’m assuming are under X… it would be nice to have a Wayland alternative. (And I’m still undecided on how much I actually need a desktop environment. Because ultimately I don’t need much: Ethernet, WiFi, mount USB drives, a way to transfer files to and from my Android phone, sound, screen brightness… the list isn’t short, but it’s still pretty bounded.)
A burned out OS dev is ranting with their friends after work. They’re just off a long shift. They’re being acerbic. They’re saying the things nobody says but a lot of people think. Now we’re doing a close reading of the transcript as if it’s a corporate press release. This is how it feels to me.
Is it actually true? Many of the things said in this post aren’t, so I’m skeptical of this too. Would be interesting to run a test.
I personally find it somewhat hard to believe… but plausible; there are times when I see the X server on my box chewing through more CPU than I’d expect to be necessary.
I’m talking more about factual errors than fallacies, but here are three examples:
Multi-monitor scaling / Nonexistent on Xorg
Just plain false, I’m using it right now. (But the later point about getting the desktop settings being unstandardized is a fair point.)
the [fractional scaling] story with Xorg is still in integer scales.
Again, not true.
You’ll be spending more time in a fetal position sobbing than doing anything productive if you even try and interact with X.
lol. This is kinda subjective so maybe it isn’t fair to put it next to two factually false statements, but it sounds like she doesn’t actually know what she’s talking about - most Wayland proponents obviously know very little about X, so that’s unsurprising - and the later thing about “To get the full experience, I’ll be writing a mini Wayland server and X client, which should teach me a fair bit on how all of this works” sure indicates she probably doesn’t even know the basics.
Oh thanks, I know I shouldn’t assume, but whelp. I’ll edit that.
I actually would be interested in reading the experience report from writing that program, though. A lot of newbie X tutorials and rants make a big deal out of the various steps in a hello world, so you go into it thinking it is going to be a mess… but it actually isn’t much different than a hello world on Windows, for example. And you can skip some of the steps on computers nowadays since you can pretty safely assume there’s true color support, etc. (until someone plugs in one of those new e-ink things that are all the rage :P but still)
And people criticize the Windows hello world too. There is a fair argument to be made that you can skip the steps and use the defaults instead - indeed, this is the case with most of the libraries on top. auto window = new SimpleWindow; window.eventLoop(); for example with my lib… but there are times when you actually do need to know how to break down the tasks. Maybe you actually do want the window hidden until you’re ready to show it. Maybe you do want to create a subclass instead of taking default behavior. Maybe you actually do want to support a 16 color display. Then new SimpleWindow(); isn’t going to cut it, and the lower level apis show their value.
Xlib itself is a lower level api (not quite as low as it gets, it has some conveniences built in, but not many), and X11 is a lower level protocol - explicitly being “mechanism, not policy”. Then you gotta read the other manuals on top of it. But… again this isn’t that different than other platforms. There’s the Win32 api which does provide some policy but you are still supposed to read the human/computer interaction guidelines and apply them.
I’ve never personally programmed something on Wayland, but I wouldn’t be surprised if its low level functions take some boilerplate you don’t need to edit 95% of the time either.
Here there is a case for X’s API being fundamentally outdated. Making something better (like Wayland is attempting) requires breaking it.
The article disagrees with you in its conclusion: “You could probably add all (well, most) of these to Xorg, but not without some pretty fundamental changes, rewrites, and extensions.”
The wayland faq also disagrees with you. https://wayland.freedesktop.org/faq.html#heading_toc_j_5 “Why not extend the X server? Because for the first time we have a realistic chance of not having to do that. It’s entirely possible to incorporate the buffer exchange and update models that Wayland is built on into X. “
In other words, they broke things because they wanted to, not because they had to.
At that point, it is 100% their fault when things that used to work on Linux no longer work on Linux. They deliberately broke it.
What’s different now is that a lot of infrastructure has moved from the X server into the kernel […] or libraries […] and there is very little left that has to happen in a central server process.
But we can’t ever get rid of the core rendering API and much other complexity that is rarely used in a modern desktop. With Wayland we can move the X server and all its legacy technology to an optional code path.
Simplicity requires breaking APIs at some point. Even I couldn’t make the simplest thing right out of the box, even for something as easy as a cryptographic signature API. I had to break the API and bump the major version to get from a vtable abomination to the simpler (and more flexible!) solution I have now. I can imagine the constraints of a much bigger, much older project like X.
And to be honest, I wouldn’t be surprised if Wayland has to suffer the same fate a couple decades from now.
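For a concrete sense of what that kind of simplification looks like, here’s a toy Python sketch using the stdlib’s hmac as a stand-in signature primitive. Everything here is hypothetical illustration - none of these names come from the library discussed above. The vtable-style design forces a class hierarchy on every backend; the flat replacement is just functions:

```python
import hmac
import hashlib

# Before: vtable-style interface -- every backend subclasses and
# overrides, and callers juggle objects (hypothetical names).
class Signer:
    def sign(self, key: bytes, msg: bytes) -> bytes:
        raise NotImplementedError

class HmacSha256Signer(Signer):
    def sign(self, key: bytes, msg: bytes) -> bytes:
        return hmac.new(key, msg, hashlib.sha256).digest()

# After: the flatter API -- plain functions. Callers that need
# pluggability can just pass a callable instead of an object.
def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, msg), sig)
```

Both shapes compute identical results; the break is purely in the shape of the API, which is exactly why it costs a major version bump.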
I don’t think they’re saying you’re not allowed to do a different thing, rather that at this point any interoperability issue between Wayland and Z is probably on Z. You aren’t considering the rest of the paragraph, which frames the discussion as
X11 is, to put it simply, not at all fit for any modern system.
A piece of software being incompatible with Wayland does not improve the standing of X11; it introduces the constraint that its use requires X11.
X11 being unfit for any modern system does not make Wayland magically fit for a modern system. Wayland proponents need to understand that the introduction of superseding technical requirements (e.g. around rendering and input – huge improvements by Wayland) does not invalidate adjacent functionality that applications have come to depend on. For example, applications that need to know their own location on the display are not bad software. It is just software that is inconvenient for Wayland developers to support on some platforms. It does not matter if there is a Wayland protocol for it because the most-used desktop environment has no intention of supporting it. At the same time they have broken Xwayland rendering so compatibility shims like GDK_BACKEND=x11 don’t work reliably either. So now huge swaths of Wayland users cannot reliably use software that requires functionality that is available on nearly every other modern platform. Users are now left with two options that are both unfit for a modern system.
For example, applications that need to know their own location on the display
What kind of application needs that? I’m not sure even a screen recorder would need to know where other applications’ windows are located, as long as it can record their contents.
This article has a good breakdown of window types and IMO gives a fair description of the conflict. The main one that still doesn’t work broadly on Wayland is what they call satellite or toolbox windows. These used to be common but got a bad reputation, probably for the better, because they usually got sloppy (e.g. GIMP until about 10 years ago). However, there are a lot of applications that make better use of knowing and setting window coordinates. One I’m partial to is what I call “buddy” programs. I maintain my own diagramming software that is essentially gvim w/ a plugin that collaborates with a rendering server and an image viewer. It originally used Xembed to have gvim embedded in a frame along with the image viewer. Xembed is not supported on Wayland, and while there is a protocol for sharing rendering surfaces, the GTK maintainers have no intention of refreshing their “socket” feature to use it. The program has an alternative mode where it can collaborate with an external viewer. They are independent processes and windows, but it is helpful for them to know their positions relative to one another because they would like to reposition toolbars to be on the side nearest the other application. Wayland designers use inter-application window position as a security bogeyman, but this use case doesn’t even really need to know the other application’s position. It can do a pretty good job if it just knew its own coordinates in the desktop. I.e. if it is on the left edge with room to the right, then the other application is most likely to be to its right side.
Anything that has floating panels benefits from being able to have policies for placing the panels, such as: put it close to the focused document window, but not overlapping, unless that would be off the edge of the screen. Applications that manage multiple windows in different contexts benefit from being able to create new ones near others.
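That placement policy is easy to state in code. A minimal sketch, with made-up tuples for geometry (nothing here is a real toolkit API): prefer the side of the focused window that has room, fall back to the other side, and clamp on-screen as a last resort.

```python
def place_panel(focused, panel_size, screen):
    """Place a floating panel beside the focused window.

    focused: (x, y, w, h) of the focused document window
    panel_size: (w, h) of the panel
    screen: (w, h) of the screen
    Returns (x, y) for the panel: to the right of the window if it
    fits, otherwise to the left, otherwise clamped on-screen.
    """
    fx, fy, fw, fh = focused
    pw, ph = panel_size
    sw, sh = screen
    gap = 8  # small margin so the panel doesn't touch the window
    if fx + fw + gap + pw <= sw:      # room on the right?
        x = fx + fw + gap
    elif fx - gap - pw >= 0:          # room on the left?
        x = fx - gap - pw
    else:                             # no room either side: clamp
        x = max(0, min(fx, sw - pw))
    y = max(0, min(fy, sh - ph))      # align tops, keep on-screen
    return (x, y)
```

The point is that an application can only implement a policy like this if it knows the focused window’s position in screen coordinates - which is exactly what’s withheld.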
I’m not familiar with such applications, to be honest. I would guess that in most cases you want to create the new window near the focused window, and that’s a job window managers do right now. The one case it would break down is when we want to create the window near a window that is not focused (and possibly in another workspace while we’re at it). I guess it has its uses, but to be honest this feels fairly niche (though, never having needed that, I do reckon I speak from ignorance).
It’s a UI pattern for a pop-up thing that inspects another thing. Most things on OpenStep can be inspected by hitting meta-I. Ideally, the inspector window should appear close to the thing being inspected.
Applications whose purpose is to move other windows, like devilspie2. I have a peculiar way I like my apps arranged, and devilspie2 lets me achieve that, without having to find a (non-tiling) window manager that supports the kind of rules I can do with devilspie.
So you need to do window management without relying on the window manager… seems to me the real problem is that existing window managers don’t solve your problem, and you ended up working around that.
This is super-niche, but I wrote a 3D viewer application that displays a composite image from two USB cameras (https://github.com/classilla/camaglyph, for devices like the Minoru). Because of the fixed way passive 3D displays are polarized, the window needs to adjust which line it displays where based on its absolute Y position; otherwise the polarization order is wrong and the wrong image is sent to each eye. I can’t do this in Wayland.
I imagine the Wayland developers would respond with something like “add this yourself” or “nobody does this.”
Niche or not, there’s no substitute for knowing at least the polarity parity of your absolute position. And I don’t think ignoring 3D monitors altogether is a good move for the future. They may get eaten by VR goggles, but I’m not sure the writing is on the wall yet.
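The parity dependence is easy to make concrete. A minimal Python sketch, assuming the common convention that even physical rows are polarized for the left eye (the function names are made up; a real viewer would do this per scanline while compositing):

```python
def interleave_order(window_y):
    """Which eye gets the window's first (top) scanline.

    Physical even rows are assumed polarized for the left eye, odd
    rows for the right. The window's top row lands on physical row
    window_y, so the assignment flips with the parity of the
    window's absolute Y position -- the one piece of information a
    Wayland client can't obtain.
    """
    return ("L", "R") if window_y % 2 == 0 else ("R", "L")

def interleave(left_rows, right_rows, window_y):
    """Interleave the two per-eye images in the order the display
    needs, given where the window actually sits on screen."""
    first, _ = interleave_order(window_y)
    rows = []
    for l, r in zip(left_rows, right_rows):
        rows.extend((l, r) if first == "L" else (r, l))
    return rows
```

If the compositor won’t tell the client its absolute Y (or at least its parity), there is no way to pick the right order, and moving the window one pixel vertically swaps the eyes.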
Moreover, letting applications know where their windows are (after all, they already know how big they are) seems much less disruptive than letting them choose where to place them. You may have a case.
Got a Mac IIci here running NetBSD, which it’s done since 1999. It had to get a recap and a new hard disk, but it’s still going, doing its thing (printing, internal DNS, internal DHCP, internal AppleShare).
Wow. Nice work! I remember using MacLynx with SLiRP in school to connect my Mac to the web over the university’s xyplex terminal server. (We didn’t have PPP or SLIP yet.)
The screenshots of your development environments remind me:
I really liked CodeWarrior. It’s entirely possible that the things I was learning to do with it play a very large role in that, though.
GUIs from Mac OS 8.0 - 9.1 are still among my favorites ever. That skin, with window shades, those drawers at the bottom of the screen, and the control strip, on top of modern memory management, multitasking and hardware support, still seems like it’d be a great system to use, to me.
I used O.G. MacLynx back in the day too on my first Mac, which was a used IIsi. It was a far better fit for the hardware than Netscape. It’s a lot of fun hacking on it now.
CodeWarrior is an uncommonly pleasant IDE to use. But I do really like the immediacy and HIG thought that went into the classic Mac OS generally; there were certainly crummy Mac apps, but you had to work at that. In that sense CodeWarrior was building on what was already a very pleasant user environment (the dog’s breakfast of the underlying OS politely handwaved away). If I could do my work productively in 9.2.2, I’d never leave.
The “lookup ALU” reminds me of the IBM 1620 “CADET” [0] which also used lookup tables, and got the unofficial acronym expansion Can’t Add Doesn’t Even Try.
The really fun thing on the 1620 was that it stored those tables in core memory and had no memory protection. This meant that it was possible for a memory safety bug (the machine was typically programmed in assembly, so really an address arithmetic bug) to alter the results of later addition and multiplication. I’m told by people who operated the machine (it was the first computer that Swansea university ever bought) that this was moderately common and resulted in pages of nonsense printout until someone hit the stop button.
It’s a fascinating machine in that it so consistently made the wrong choice about almost everything. Dijkstra wrote a deeply unflattering review of it, though it’s amusing in hindsight that the thing that he focused on was having a dedicated call instruction, which he considered premature optimisation and is probably about the only thing the 1620 got right.
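The failure mode described above is easy to demonstrate with a toy model. This Python sketch (an illustration of table-lookup arithmetic, not a faithful 1620 simulation) keeps the digit tables in plain "memory" where any stray store can overwrite them:

```python
# The "lookup ALU": digit arithmetic is not wired into hardware but
# read from tables in memory. Index for digits a, b is a*10 + b.
add_table = [(a + b) % 10 for a in range(10) for b in range(10)]
carry_table = [(a + b) // 10 for a in range(10) for b in range(10)]

def add_digits(x, y):
    """Add two equal-length numbers given as digit lists, least
    significant digit first, using only table lookups."""
    result, carry = [], 0
    for a, b in zip(x, y):
        s = add_table[a * 10 + b]       # digit sum via lookup
        c = carry_table[a * 10 + b]     # carry via lookup
        s2 = add_table[s * 10 + carry]  # fold in the incoming carry
        c2 = carry_table[s * 10 + carry]
        result.append(s2)
        carry = c + c2
    result.append(carry)
    return result
```

With no memory protection, a single stray store like add_table[3 * 10 + 4] = 9 makes every later addition of a 3 and a 4 (in that order) quietly come out wrong - pages of nonsense printout until someone hits stop, exactly as described.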
To me his complaint seems to be more about the fact that it supports only one level of subroutines (emphasis mine):
[…] Besides a general mechanism for calling subroutines, it has a special instruction (“Branch and Transmit”) to call in subroutines, an instruction by which control is transferred to the subroutine after the return address has been saved: for the benefit of the end of such a subroutine the order code contains a complementary instruction (“Branch Back”) which takes care of the return jump. So far, so good, but the blunder is that the return address is saved in a special return address register, the contents of which are accessible only via the instruction “Branch Back” and in no other way! […]
That makes it usable for leaf functions which are about half of functions, so it still ended up being a win. The instruction is almost identical to the jump and link instruction in all RISC architectures, except for the fact that they can move things out of the link register. I see it as a useful step towards general support, which still provides a big win, rather than an outright failure.
It does rely on knowing which functions were leaf functions at their call site, but since these systems predated shared libraries that was quite feasible.
The leaf function case is part of why RISC architectures favoured a link register rather than the established CISC style of pushing the return address onto the stack. Leaf functions typically don’t need to spill the link register and so can be faster. They are a sufficiently common case that it’s worth optimising them.
I’m sorry about your kitty. Thank you for posting about all the weird old hardware: Brother made some really underrated hardware that punched above its price class, but it’s always cheap and infuriating in places it doesn’t have to be.
Yes, their object extensions are quite arresting. Though they’re better than Magic Cap’s, which requires #defines and #ifdefs for class and segment management.
I inherited a DECtalk from a previous job and even replaced the bad ROM chip, but I never figured out anything to do with it, which I regret, and gave it away. Good memories!
Cage looks perfect for my needs. Thanks for the pointer.
It certainly would for me. A Wayland that didn’t upend everything would have been much more welcome, I suspect.
That would actually be a good thing, if so, because then it would be directly comparable. Power10 minus the bad stuff would be fine with me.
Getting all the pieces together for xa65 version 2.4. It’s about time.
This is my first time seeing anyone ever touch SOM, let alone OpenDoc. Given the boilerplate required to make it work and the UX confusion, it’s no wonder it didn’t take off. Even if it got better, it’s one hell of a bad first impression.
Yeah, it was a mess. I can’t imagine trying to write a component from scratch. Working from a template was bad enough.
The Mac IIci and other systems that use that case have a great solution to this problem: the power button can be rotated to lock in the “on” position. Otherwise everything here that doesn’t have hardware power switches or isn’t “always on” is set to automatically power back on after the UPS goes down (mostly repurposed Power Macs plus the POWER6). That’s saved my butt a few times.
Remote reboot is more interesting. I have remotely accessible power cyclers for that.
As a Palm hobbyist I love still holding the actual devices, but this is almost as much fun as PumpkinOS, which is itself making leaps and bounds. https://github.com/migueletto/PumpkinOS
I want a way to turn off GitHub’s key combinations. Is there a way to do it?
Please let me know if you do! The latest UI update stole the tab navigation keyboard shortcuts in Safari, so you skip through a few tabs and get stuck on the GitHub one and have to reach for the mouse. And every time it happens, I start to hate GitHub just a little bit.
DuckDuckGo rolled out an update around 15 years ago that stopped the normal keyboard navigation shortcuts working in their search box. I emailed a report and a couple of hours later got a reply from their CEO with a link to a test site to see if it fixed the problem. A couple of iterations later, it was fixed and they rolled it out. That kind of thing is the reason that I’m still using them so many years later. I wouldn’t be surprised if I start migrating away from GitHub in a few years.
My situation is (as usual) weird. I use Linux, but with a Mac keyboard. I have Firefox set to use Mac key combinations for copy, paste, etc. so that my muscle memory still works, but GitHub sees Linux and overrides them with its own. I wonder if I should just bite the bullet and use a Mac user agent.
Use the developer tools to find out which element has the event listeners and make a userscript to remove them?
The extension churn in GNOME was one of the big reasons I’m on KDE Plasma now.
Having great extensions become suddenly incompatible really ground my gears. This change only seems to worsen the situation - I don’t see how changing the way modules are loaded will stop GNOME breaking extensions regularly any time soon, and if anyone finds a way to meaningfully take advantage of “allowing greater compatibility with JavaScript ecosystem” I will be pleasantly surprised.
Yeah, they migrated to a different import mechanism and switched from a utility module to inheritance at the same time? For what? They could have easily kept the utility module for a while and let extension authors deal with only a single thing at a time.
Agree with this.
Not sure what the extension story is like on KDE because I almost never use them. But the vanilla experience has pretty much everything you need.
What’s frustrating with GNOME is that new versions can break extensions that are pretty key to making the desktop experience work.
GNOME’s dynamic workspaces are better than the fixed ones other DEs have. Well, if the vertical workspaces extension is used.
I only want those, a good exposé, and a fast app launcher doing full-text search over .desktop files.
No other DE comes close.
Wow, amazing rhetoric right there. Paraphrasing an argument in dumb language and just stating it’s invalid without any counterargument.
From a technical point of view, great article. But “X works, Y does not” is as valid an argument as they come. If using X breaks Z and you need Z, fuck using X, I will use Y. Even if Z is “bad software”, I may need it.
The end of the article directly acknowledges it, mostly with respect to accessibility.
There’s an inverse rhetoric that’s just as bad, if not worse because it’s never spelled out explicitly:
Why? Because my webcam/printer/wifi works in Windows, so if it doesn’t work on this new Linux thingy, it must be the new thingy’s fault, right? Don’t try to muddy the waters with “the vendor didn’t support Linux” or “the vendor didn’t write a driver”: the thing works on Windows, it doesn’t on Linux, it should be obvious to any child that Linux doesn’t work. Now stop bothering me, I have real work to do.
Or something. Actual late adopters are never that blatant; they tend to have shorter, less refutable quips, like “Linux isn’t serious”, or “why would I ever use RISC-V?”, or “this new language lacks tooling”. But the running theme is the same: saddling the new thing with an unfair burden of proof. Here there is a case for X’s API being fundamentally outdated. Making something better (like Wayland is attempting) requires breaking it. Obviously that comes with growing pains, but blaming Wayland for this feels either ignorant or disingenuous to be honest.
Now the article could explain all this in that many words, or it can use the shortcut it did: Don’t even try to get away with “well, it just works for me” or “but Wayland no worky”.
Great reply, thanks for providing the perspective that the article is missing.
When I read
I don’t really read the nuanced story you provide. I read a mocking remark that has no argument behind it.
So maybe in this case it would be better to provide a more nuanced story or just omit the mockery altogether.
It’s a difficult balance to strike, on two counts:
Nuance requires more text, which in many cases detracts from the main point. I routinely write nuances and caveats in my own essays, only to axe many of them just to keep it short and to the point. Sometimes those deliberate omissions are mistaken for ignorance.
Sometimes the best response is a dismissive coarse laugh. It’s a way of conveying that the attacked opinion is so ridiculous or outlandish that whoever holds it should quietly change their mind in shame. It’s a polarising double-edged sword, however, to be used sparingly and carefully. In the context of this post it’s a harsh introduction, but it does help set the tone of the entire post, which is about depicting X as a crumbling old dinosaur we should stop feeding as a matter of mercy.
There’s also the target audience to keep in mind. Public writings are accessible to anyone, but they’re rarely intended for everyone. Some people would be more swayed by a nuanced take devoid of mockery, but for me the tone was very effective at conveying how strongly the author feels about how bad the X situation really is.
And I didn’t know about battery life. Crap, I just set up XFCE with XMonad, which I’m assuming are under X… it would be nice to have a Wayland alternative. (And I’m still undecided on how much I actually need a desktop environment. Because ultimately I don’t need much: Ethernet, WiFi, mounting USB drives, a way to transfer files to and from my Android phone, sound, screen brightness… the list isn’t short, but it’s still pretty bounded.)
Agree with you 100%.
A burned-out OS dev is ranting with their friends after work. They’re just off a long shift. They’re being acerbic. They’re saying the things nobody says but a lot of people think. Now we’re doing a close reading of the transcript as if it’s a corporate press release. This is how it feels to me.
Is it actually true? Many of the things said in this post aren’t, so I’m skeptical of this too. Would be interesting to run a test.
I personally find it somewhat hard to believe… but plausible; there are times when I see the X server on my box chewing through more CPU than I’d expect necessary.
That bad? I don’t have the expertise to know, can you cite 3 fallacies?
I’m talking more about factual errors than fallacies, but here are three examples:
Just plain false, I’m using it right now. (But the later point about getting the desktop settings being unstandardized is a fair point.)
Again, not true.
lol. This is kinda subjective so maybe it isn’t fair to put next to two factually false statements, but it sounds like she doesn’t actually know what she’s talking about - most Wayland proponents obviously know very little about X so unsurprising - and the later thing about “To get the full experience, I’ll be writing a mini Wayland server and X client, which should teach me a fair bit on how all of this works” sure indicates she probably doesn’t even know the basics.
Thanks, that’s what I was asking for.
We wouldn’t guess from the writing style, but it’s “she”.
oh thanks, i know i shouldn’t assume but whelp. i’ll edit that.
I actually would be interested in reading the experience report from writing that program though. A lot of newbie X tutorials and rants make a big deal out of the various steps in a hello world, so you go into it thinking it is going to be a mess… but it actually isn’t much different than a hello world on Windows, for example. And you can skip some of the steps on computers nowadays since you can pretty safely assume there’s true color support, etc. (until someone plugs in one of those new e-ink things that are all the rage :P but still)
And people criticize the Windows hello world too. There is a fair argument to be made that you can skip the steps and use the defaults instead - indeed, this is the case with most of the libraries on top.
auto window = new SimpleWindow;
window.eventLoop();

for example, with my lib… but there’s times when you actually do need to know how to break down the tasks. Maybe you actually do want the window hidden until you’re ready to show it. Maybe you do want to create a subclass instead of taking default behavior. Maybe you actually do want to support a 16 color display. Then

new SimpleWindow();

isn’t going to cut it, and the lower level APIs show their value.

Xlib itself is a lower level API (not quite as low as it gets, it has some conveniences built in, but not many), and X11 is a lower level protocol - explicitly being “mechanism, not policy”. Then you gotta read the other manuals on top of it. But… again this isn’t that different than other platforms. There’s the Win32 API, which does provide some policy, but you are still supposed to read the human/computer interaction guidelines and apply them.
I’ve never personally programmed something on Wayland but i wouldn’t be surprised if its low level functions take some boilerplate you don’t need to edit 95% of the time either.
The article disagrees with you in its conclusion: “You could probably add all (well, most) of these to Xorg, but not without some pretty fundamental changes, rewrites, and extensions.”
The wayland faq also disagrees with you. https://wayland.freedesktop.org/faq.html#heading_toc_j_5 “Why not extend the X server? Because for the first time we have a realistic chance of not having to do that. It’s entirely possible to incorporate the buffer exchange and update models that Wayland is built on into X. “
In other words, they broke things because they wanted to, not because they had to.
At that point, it is 100% their fault when things that used to work on Linux no longer work on Linux. They deliberately broke it.
Good link, that FAQ. I can read from it:
Simplicity requires breaking APIs at some point. Even I couldn’t make the simplest thing out of the box, even for something as easy as a cryptographic signature API: I had to break the API and bump the major version to get from a vtable abomination to the simpler (and more flexible!) solution I have now. I can imagine the constraints of a much bigger, much older project like X.
And to be honest, I wouldn’t be surprised if Wayland has to suffer the same fate a couple decades from now.
I don’t think they’re saying you’re not allowed to do a different thing, rather that at this point any interoperability issue between Wayland and Z is probably on Z. You aren’t considering the rest of the paragraph which frames the discussion as
A piece of software being incompatible with Wayland does not improve the standing of X11; it introduces the constraint that its use requires X11.
X11 being unfit for any modern system does not make Wayland magically fit for a modern system. Wayland proponents need to understand that the introduction of superseding technical requirements (e.g. around rendering and input – huge improvements by Wayland) does not invalidate adjacent functionality that applications have come to depend on. For example, applications that need to know their own location on the display are not bad software. It is just software that is inconvenient for Wayland developers to support on some platforms. It does not matter if there is a Wayland protocol for it because the most-used desktop environment has no intention of supporting it. At the same time they have broken Xwayland rendering so compatibility shims like
GDK_BACKEND=x11
don’t work reliably either. So now huge swaths of Wayland users cannot reliably use software that requires functionality that is available on nearly every other modern platform. Users are now left with two options that are both unfit for a modern system.

What kind of application needs that? I’m not sure even a screen recorder would need to know where other applications’ windows are located, as long as it can record their contents.
This article has a good breakdown of window types and IMO gives a fair description of the conflict. The main one that still doesn’t work broadly on Wayland is what they call satellite or toolbox windows. These used to be common but got a bad reputation, probably for the better, because they usually got sloppy (e.g. GIMP until about 10 years ago). However, there’s a lot of applications that make better use of knowing and setting window coordinates. One I’m partial to is what I call “buddy” programs. I maintain my own diagramming software that is essentially gvim w/ a plugin that collaborates with a rendering server and an image viewer. It originally used Xembed to have gvim embedded in a frame along with the image viewer. Xembed is not supported on Wayland, and while there is a protocol for sharing rendering surfaces, the GTK maintainers have no intention of refreshing their “socket” feature to use it. The program has an alternative mode where it can collaborate with an external viewer. They are independent processes and windows, but it is helpful for them to know their positions relative to one another because they would like to reposition toolbars to be on the side nearest the other application. Wayland designers use inter-application window position as a security bogeyman, but this use case doesn’t even really need to know the other application’s position. It can do a pretty good job if it just knew its own coordinates in the desktop. I.e., if it is on the left edge with room to the right, then the other application is most likely to be to its right side.
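That last heuristic (infer the buddy window’s likely side from your own absolute coordinates alone) is simple enough to sketch. A hypothetical Python fragment for illustration - the function name and the “more room wins” rule are my own invention, not the diagramming tool’s actual code:

```python
def preferred_side(win_x, win_w, screen_w):
    # Guess which side a collaborating "buddy" window most likely sits on,
    # using only our own absolute position and width - no need to query
    # where other applications' windows actually are.
    room_right = screen_w - (win_x + win_w)
    room_left = win_x
    return "right" if room_right >= room_left else "left"
```

With just its own coordinates, the app can flip its toolbars toward the side where the viewer almost certainly lives, which is exactly the information Wayland withholds.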
Anything that has floating panels benefits from being able to have policies for placing the panels, such as: put it close to the focused document window, but not overlapping, unless that would be off the edge of the screen. Applications that manage multiple windows in different contexts benefit from being able to create new ones near others.
I’m not familiar with such applications, to be honest. I would guess that in most cases you want to create the new window near the focused window, and that’s a job window managers do right now. The one case where it would break down is when we want to create the window near a window that is not focused (and possibly in another workspace while we’re at it). I guess it has its uses, but to be honest this feels fairly niche (though never having ever needed that, I do reckon I speak from ignorance).
It’s pretty common for implementing the inspector UI pattern in OpenStep / Mac apps.
Inspector UI? First time I ever heard of it. DuckDuckGo seems to point to GUI testing, is that it?
It’s a UI pattern for a pop-up thing that inspects another thing. Most things on OpenStep can be inspected by hitting meta-I. Ideally, the inspector window should appear close to the thing being inspected.
Applications whose purpose is to move other windows, like devilspie2. I have a peculiar way I like my apps arranged, and devilspie2 lets me achieve that, without having to find a (non-tiling) window manager that supports the kind of rules I can do with devilspie.
So you need to do window management without relying on the window manager… seems to me the real problem is that existing window managers don’t solve your problem, and you ended up working around that.
Yes. But I can solve it under X11, and I can’t solve it under Wayland.
It doesn’t matter that this isn’t a Wayland problem. It is a problem that prevents me from moving to Wayland nevertheless.
This is super-niche, but I wrote a 3D viewer application that displays a composite image from two USB cameras ( https://github.com/classilla/camaglyph for devices like the Minoru). Because of the fixed way passive 3D displays are polarized, the window needs to adjust which line it displays where based on its absolute Y position, or otherwise the polarization order is wrong and the wrong image is sent to each eye. I can’t do this in Wayland.
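To make the parity dependence concrete, here is a hedged Python sketch (the even-scanline-goes-to-the-left-eye convention is an assumption for the example; camaglyph’s actual mapping may differ):

```python
def eye_for_scanline(window_abs_y, local_row):
    # Passive 3D displays polarize alternating physical scanlines for the
    # left and right eyes. Which eye a window-local row lands on therefore
    # depends on the window's absolute Y position on the screen.
    # (Assumes even physical lines go to the left eye - illustrative only.)
    return "left" if (window_abs_y + local_row) % 2 == 0 else "right"
```

Moving the window down by a single pixel flips the eye assignment of every scanline, which is why the renderer has to know its absolute Y position - information Wayland doesn’t expose.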
I imagine the Wayland developers would respond with something like “add this yourself” or “nobody does this.”
Niche or not, there’s no substitute for knowing at least the polarity parity of your absolute position. And I don’t think ignoring 3D monitors altogether is a good move for the future. They may get eaten by VR goggles, but I’m not sure the writing is on the wall yet.
Moreover, letting applications know where their windows are (after all, they already know how big they are) seems much less disruptive than letting them choose where to place them. You may have a case.
Got a Mac IIci here running NetBSD, which it’s done since 1999. It had to get a recap and a new hard disk, but it’s still going, doing its thing (printing, internal DNS, internal DHCP, internal AppleShare).
Wow. Nice work! I remember using MacLynx with SLiRP in school to connect my Mac to the web over the university’s xyplex terminal server. (We didn’t have PPP or SLIP yet.)
The screenshots of your development environments remind me:
I used O.G. MacLynx back in the day too on my first Mac, which was a used IIsi. It was a far better fit for the hardware than Netscape. It’s a lot of fun hacking on it now.
CodeWarrior is an uncommonly pleasant IDE to use. But I do really like the immediacy and HIG thought that went into the classic Mac OS generally; there were certainly crummy Mac apps, but you had to work at that. In that sense CodeWarrior was building on what was already a very pleasant user environment (the dog’s breakfast of the underlying OS politely handwaved away). If I could do my work productively in 9.2.2, I’d never leave.
The “lookup ALU” reminds me of the IBM 1620 “CADET” [0] which also used lookup tables, and got the unofficial acronym expansion Can’t Add Doesn’t Even Try.
https://en.wikipedia.org/wiki/IBM_1620
The really fun thing on the 1620 was that it stored those tables in core memory and had no memory protection. This meant that it was possible for a memory safety bug (the machine was typically programmed in assembly, so really an address arithmetic bug) to alter the results of later addition and multiplication. I’m told by people who operated the machine (it was the first computer that Swansea university ever bought) that this was moderately common and resulted in pages of nonsense printout until someone hit the stop button.
It’s a fascinating machine in that it so consistently made the wrong choice about almost everything. Dijkstra wrote a deeply unflattering review of it, though it’s amusing in hindsight that the thing that he focused on was having a dedicated call instruction, which he considered premature optimisation and is probably about the only thing the 1620 got right.
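To make that failure mode concrete, here is a hedged Python sketch of table-lookup addition (heavily simplified; the real 1620 worked digit-serially on BCD values in core, and this table layout is invented for illustration). Because the tables live in ordinary writable memory, a single stray store quietly changes what “addition” means:

```python
# Sum and carry lookup tables for single decimal digits, playing the role
# of the 1620's core-resident addition tables (layout is illustrative).
add_table = [[(a + b) % 10 for b in range(10)] for a in range(10)]
carry_table = [[(a + b) // 10 for b in range(10)] for a in range(10)]

def add(x, y):
    """Multi-digit decimal addition done purely by table lookup."""
    xs = [int(d) for d in str(x)[::-1]]  # least-significant digit first
    ys = [int(d) for d in str(y)[::-1]]
    out, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        a = xs[i] if i < len(xs) else 0
        b = ys[i] if i < len(ys) else 0
        s, c = add_table[a][b], carry_table[a][b]
        out.append(add_table[s][carry])       # fold in the carry, also by lookup
        carry = c + carry_table[s][carry]
    if carry:
        out.append(carry)
    return int("".join(map(str, reversed(out))))

print(add(58, 67))   # behaves like ordinary addition

add_table[8][7] = 9  # one errant store clobbers a single table entry...
print(add(58, 67))   # ...and every sum touching 8+7 is now silently wrong
```

No trap, no fault: the machine keeps running and cheerfully prints pages of nonsense, just as described above.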
To me his complaint seems to be more about the fact that it supports only one level of subroutines (emphasis mine):
That makes it usable for leaf functions which are about half of functions, so it still ended up being a win. The instruction is almost identical to the jump and link instruction in all RISC architectures, except for the fact that they can move things out of the link register. I see it as a useful step towards general support, which still provides a big win, rather than an outright failure.
It does rely on knowing which functions were leaf functions at their call site, but since these systems predated shared libraries that was quite feasible.
The leaf function case is part of why RISC architectures favoured a link register rather than the established CISC style of pushing the return address onto the stack. Leaf functions typically don’t need to spill the link register and so can be faster. They are a sufficiently common case that it’s worth optimising them.
Still use C-Kermit for some automation tasks, and I used it heavily on a Commodore 128 when all I had for Internet access was a dialup shell account.
I’m sorry about your kitty. Thank you for posting about all the weird old hardware: Brother made some really underrated hardware that punched above its price class, but it’s always cheap and infuriating in places it doesn’t have to be.
Thank you, and I think you hit it on the head: it’s a weird machine that does more than you think it would and less than you think it should. ;)
Relatedly, I looked at the GEOS SDK and I’m terrified - they have a language that looks like Objective-C for Win16 developers.
Yes, their object extensions are quite arresting. Though they’re better than Magic Cap’s, which requires #defines and #ifdefs for class and segment management.

These are useful tools, but it seems more like using C to access modern image processing algorithms implemented by SOD.
Can’t find anything about Environ-V (could be that search engines are also bad nowadays); can anyone ID the UI font?
Do you mean the window titles? Looks like italic Univers.
Thank you, it does indeed look like it!
I inherited a DECtalk from a previous job and even replaced the bad ROM chip, but I never figured out anything to do with it and regretfully gave it away. Good memories!
Which model, the big O.G. one?