After @PhantomZorba did the interview with me last month we decided to try to make the lobste.rs interview series a relay, where each interviewee then interviews someone else. Picking a single person from this community is hard, but Arcan is the project that I wish I had the time, expertise, and patience to have written, so I had to choose @crazyloglad.

Introduce yourself, describe what you do for work and how long you’ve been at it.

Hello! For the last few years I have been an independent contractor working with smaller startups in information security. I have just finished a longer engagement into hardware reversing and am taking something of a breather doing more playful things. On the open-source side of the world, I am the architect and lead engineer of the Arcan project.

There are paper trails that say I am a software engineer, yet others that say I am a computer scientist. For me, I tend to stick with the term ‘reverser’; it means little to most but much to a few and the few tend to interest me more than the many.

My background in computing is life-long. I am a child of the personal computing era. My conversations with the Commodore 64 happened much too early, starting with the Usborne Publishing ‘Computer Battlegames’ book and continuing into devouring anything the local library could provide thereafter, picking up English in the process. This took place in a small village close to the 56th parallel north, where computers were mostly dismissed as witchcraft, and being accused of performing witchcraft didn’t make life any easier.

I will try and fail to keep this short, condensing the last three decades into three pivotal moments. The first moment from my preteens-into-teens was emulation. With home consoles becoming more prevalent I got curious as to why games on one system wouldn’t run on another, especially not on my own beloved computer: Isn’t it all just code? I started asking around in the circle of much older explorers and penpals and got a cursory explanation that it was difficult and unreasonable. That sounded like just my thing. Someone handed me a printout of phone numbers to ‘Bulletin Boards’ which led to heavier drugs like FidoNet and a small world suddenly became very large and down the rabbit hole I went.

The second pivot was the PhD thing. I enrolled at university to try and get some structure to the chaos of being primarily autodidactic. It was a small university that had the reputation of being a place for ‘hackers’ under the ‘software engineering’ disguise. For a small pocket of time the student housing area was a sight to behold, and every scene from cracking to osdev to demos to gamedev was represented in abundance. There was an offer of a place to work in a lab, coding 3D engines and visualizations for demonstrators of autonomous coordination of naval vessels, submarines and things like that. I built something of a rapport with the professor who was pulling the strings, which started the doctoral journey. My main interest was (and remains) debugging when things grow large, complex and distributed. Following the money meant that the best fit at the time was SCADA for critical infrastructure, and the consequences of energy systems needing to be restructured to deal with consumers also being producers, where you can no longer hide behind the lie of control systems being ‘air-gapped’. In that context nobody would listen if you talked debugging, so I had to flip the switch and talk about security.

The third pivot started as an industrial collaboration that turned into employment on a tiger team (the ‘Men in Black’) at Sony Ericsson, now Sony Mobile. This happened just before the transition from feature phones to smartphones. The group dealt with putting out fires, meaning critical bugs that others had given up on but that meant the death of a product, or possibly the company, if they caused device returns. The margins were razor thin; if you could save a few cents on the BOM by not having an MMU and process isolation, you saved a few cents on the BOM. A close friend and I tried to distil some of these lessons into training and a university course on the subject, with the ‘NDA-friendly’ version still around on the Internet somewhere under the title of ‘Systemic Software Debugging’, which I hope to revisit someday in a more approachable format.

What is your work/computing environment like?

Very compartmentalised.

My house is less of a house and more of a daycare for forever young techies. I have one lab for coding with a big desktop and a lot of ‘one purpose devices’, novelty displays and input tech. Of note is a small cluster of ephemeral devices that boot from preset images and run some software that I need but deeply distrust (like web browsers). These reset and re-image (saving raw-disk diffs if something strange happens) and map into tabs, though the windowing scheme used is quite different.

I work offline with a ‘bare minimal’ editor and man pages instead of IDEs and autocompletion, as I tend to, paraphrasing James Mickens, break my tools with my tools.

Another lab doubles as a home theater. This is used for VR and general controlled-environment ‘audio/video’ testing. The more playful things happen in the basement area around pinball machines and DIY arcade games. Pinball machines are great fun for computer-vision-like experiments. The latest addition is a more controlled space for lower-level hardware work, with the usual scopes, some laser ‘cutters’, CNCs, 3D printers and so on.

I also rely on printing things on paper and scribbling on them. I always carry at least one notebook or e-Ink notetaker. There is also something almost purifying about later shredding and incinerating the evidence. When travelling, commuting or just hiding in a pub or bar for a change of scenery, I nowadays tend to use AR glasses hooked up to a Steam Deck or laptop.

At some point it appears that you decided to rewrite the *NIX graphics stack from scratch; how did that happen?

Reluctantly and regrettably. Knowing what I know today I probably would not have bothered. I approach ‘system graphics’ from an angle of signal processing and queueing theory (IPC systems) rather than what most think of as ‘graphics’. In that sense we are mixing and resampling signals much more than drawing pixels. Very little of the actual work is modern graphics, and I am artificially constrained by the capability ‘sweet spot’ of the first-generation Raspberry Pi.

This is done in order to get away from the influence of GPUs that only a few can build and comprehend, and to be able to work on simpler dedicated processors that many more can design and build. Many architectural decisions in Arcan are explicitly to begin the painful decoupling from proprietary GPUs. The modern GPU ecosystem is highly abusive and deliberately user unfriendly yet is in a place where it is actually running the show. I don’t like that.

There is also a personal story of trying to build the panopticon of debugging. With that I mean ‘observing and experimenting’ with the Rube Goldberg contraption of what we unknowingly build by smashing pieces of software together. Even though the individual pieces can sometimes be formally proven to work and then actually made to work, the solutions that we end up with often solve some problem in a very convoluted way, but we lack the faculties to see it. Part of the reason for that is a tooling problem.

This coincides with the other side of the same coin. What I am trying to achieve with the Arcan project, and the sole reason any of this is public, is that I want to string together some of the frayed threads of our history. With that I mean to resurrect some of what we lost from the personal computer era where you just ‘powered on and started to code’ as a means of exploring computing. This is not meant as a retrocomputing romance. The output should be more capable than what we already have, not less. There are other and far more interesting and intimate webs to uncover than the ones we have; Ted Nelson even said as much.

This is also why I keep aiming for every layer to follow the model of having agency defined and scriptable in Lua, which I find to be a more civilized take on BASIC. Lua is a fair vantage point for working towards the abstract if that is your thing, but also for digging further down the stack. The target audience is not so much the developers that are, as much as the ones that should come. For that aim to have any chance of success at all, controlling the tactile, the aural and the visual needs to be much more approachable, and many still start their journey programming games.

You mentioned that you sometimes regret going down the path of building a new display system; if you could give one piece of advice to your younger self, what would it be?

Because you asked for one, I will provide two – mainly because the most important one is really short: disseminate earlier. The gap between what has been done and where my mental model currently sits, and what is written down or published, is several years. I was afraid of stepping on toes even though I should be crushing feet. There are good reasons as to why I couldn’t, but that doesn’t invalidate the fact that I was overly careful and have been paying the price ever since.

The real ‘one’ would be to stick to the first plan for the integration story. For a long time I simply used Xorg as the user-space portion of the display driver and had all my bits and pieces running as the one and only full-screen surface, just disabling its native socket after setup, and done. There was simply not much to gain from trying to use the lower-level interfaces as they were being split out and refactored away from Xorg. I have easily wasted years trying to untangle issues caused by subtle changes in that neverending refactoring story, and the building blocks are still woefully incomplete.

At the same time I maintained a private, less than friendly version of Android that swapped out parts of their stack for mine, using all the tricks in the offensive software engineering playbook. This was abandoned a long time ago and I shouldn’t have done so. Part of the reason I did was the fallout from witnessing FirefoxOS from a front-row seat. The architecture I was trying to push would have had a much better chance (still dangerously close to zero). In the end that would have been a much more fruitful path than the FreeDesktop one, which could have been retrofitted much later. The strategic position of having a seamless transition away from the Android we know today that isn’t hinged on whole-system virtualization would have been very valuable.

You said that part of your goal is tying together the frayed threads of history. In computing, there are a lot of paths that were abandoned for various reasons that may no longer apply. What idea would you most like to see resurrected from computing history?

The old quip about the future already being here, just not evenly distributed applies to the past as well. In that sense I think there is little that has actually gone away, merely drifted out of sight until the next opportune cycle. Just consider how many rounds of ebb and flow we have had for AI, Virtualization and Centralized-shared versus Distributed-single user.

The framing I would like to see is webs of personal computing. In this sense the ‘app’ is a topic of interest originating at the individual by default, but invites, or even mandates, collaboration around said topic (or a slice thereof).

Here, the economy should be around active participation rather than the current default of a quiet (or, on twitch.tv, loud but saying nothing) mass which observes, takes note and punishes or rewards the central figurehead while simultaneously fighting to take the spotlight, a dynamic that encourages censorship over attenuating a message.

With modern hardware and communications, local content and links can be much more dynamic and interactive. Within it I want to see the ‘living document’ revisited as a communal scratchpad.

This would land closer to a continuation of where the BBS model meets Xanadu with an internal representation closer to that of NeWS and Display Postscript with overlays for alternate presentation to account for accessibility. It would warrant other solutions than ‘cloud central’ web for a number of interesting areas across the board: from authenticated identity to search and retrieval, but especially for the principal building block that is ‘the link’.

Done wrong, the link is merely a web-ABI-conforming RPC call (the URL) or a reference to a data assumption (the symlink, the “everything is a file” schtick). If so, ancillary systems chime in and compensate for the abandoned opportunity. That is how you get “advertisement and 3rd party cookies”, man-in-the-middle “shorteners” and so on. Just as the dynamics would change if the one being linked to contractually knew who linked to them, the rules we bake in here have a profound systemic effect.

If you could make all OS kernels include one feature, and have compatible implementations supported everywhere, what would it be?

I think the pragmatic reality of dealing with, and sustaining, legacy while also paving the way for new things is the most important part for the kernel to play and curate, so this is really about the popular user-facing kernels and not the more research-, nostalgia- or server-specialized ones. The biggest problem I have right now is not the lack of any singular feature, but how the current ones are exposed.

The single most expensive and needlessly painful parts have been working with capabilities and resource identifier tokens, whatever we call them (HANDLEs, file descriptors). The POSIX ones are worst in class, largely thanks to the sparse allocation requirement but also thanks to the staggered evolution of all the support functions that come with it. This is worse in graphics, where you might need to juggle tens of tokens to cover even a single image frame. Connected to this is process and thread creation, but that is only part of the story. Inheriting a complex set of states as per fork() is bad, but the other side of the very verbose ‘CreateProcessEx’ or ‘posix_spawn’ is not that much better.
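
To make that verbosity concrete, here is a minimal, hedged sketch of the posix_spawn() side of the trade-off; the child program and output file are arbitrary placeholders, not anything from Arcan:

```c
/* Minimal posix_spawn() sketch: state that fork() would have inherited
 * implicitly has to be spelled out through separate file-action and
 * attribute objects. The child program and output file are placeholders. */
#include <fcntl.h>
#include <spawn.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    posix_spawn_file_actions_t fa;
    posix_spawnattr_t attr;
    pid_t pid;
    char *argv[] = { "cat", "/etc/hostname", NULL };

    posix_spawn_file_actions_init(&fa);
    /* Redirect the child's stdout to a file: open-then-dup expressed as an action. */
    posix_spawn_file_actions_addopen(&fa, 1, "out.txt",
                                     O_WRONLY | O_CREAT | O_TRUNC, 0644);
    posix_spawnattr_init(&attr);

    if (posix_spawn(&pid, "/bin/cat", &fa, &attr, argv, environ) != 0) {
        perror("posix_spawn");
        return EXIT_FAILURE;
    }

    waitpid(pid, NULL, 0);
    posix_spawn_file_actions_destroy(&fa);
    posix_spawnattr_destroy(&attr);
    return EXIT_SUCCESS;
}
```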

What I want from the OS is a better and coherent interface for specifying short-lived, language-runtime-agnostic and specialized compute, especially as we are getting more FPGAs, DSPs, and so on. Having one interface for FPGAs, a handful for GPUs and others for tracing, and then trying to specify the compute, lifecycle, resource access permissions, data transfer and error handling conditions over things as unrefined and inefficient as kill, mmap, mprotect, madvise, read, write, exec etc. is soul-crushing to say the least.
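
And a rough sketch of the kind of ‘unrefined’ plumbing in question: handing a buffer-backing file descriptor to another process over a UNIX domain socket with SCM_RIGHTS, with memfd_create standing in for whatever a GPU stack would actually export (dma-buf and friends); error handling is omitted for brevity:

```c
/* Rough sketch: hand a buffer-backing file descriptor to another process
 * over a UNIX domain socket with SCM_RIGHTS. memfd_create() stands in for
 * the buffer object a GPU stack would export (dma-buf and friends); the
 * receiving side would use recvmsg() with a matching control message. */
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send one file descriptor over an already-connected UNIX domain socket. */
static int send_fd(int sock, int fd)
{
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align; /* ensures proper alignment of the control buffer */
    } u;
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof(u.buf)
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}

int main(void)
{
    int pair[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, pair);

    int buf = memfd_create("frame", 0); /* placeholder for a real buffer object */
    ftruncate(buf, 4096);

    return send_fd(pair[0], buf); /* the 'token' crosses the process boundary here */
}
```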

I found a lot of the low-level bits of the open-source graphics stacks very hard to understand; what would you recommend for someone wanting to get started in this area?

A challenge in unpacking the graphics stack is that it is not in a single discrete place, but smeared all over. Surprisingly little is actually ‘inside’ the display server API or the display server itself.

Much more lives in the accelerated graphics API and how it talks to kernel devices (this is Mesa and, through it, KMS/GBM, or HWC/Gralloc on Android). That said, Mesa has many faces, and while the public ones relate to the many versions and extensions of GL and Vulkan, that is only part of the story. For the actual ‘graphics’ part, that is the place to study or use as a reference. Note that the Mesa codebase started around 1993, with all that entails. It’s not an easy read.
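
For a concrete feel of the kernel-facing side mentioned here, a minimal sketch that opens a KMS device node and lists its connectors (assumes Linux with libdrm installed; the device path is just a common default):

```c
/* Minimal KMS sketch: open a DRM device node and list connected outputs.
 * Assumes Linux with libdrm; the device path is a common default, not a given.
 * Build with: cc kms_list.c -o kms_list $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    drmModeRes *res = drmModeGetResources(fd);
    if (!res) {
        fprintf(stderr, "not a modesetting-capable node\n");
        close(fd);
        return 1;
    }

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        printf("connector %u: %s, %d modes\n",
               conn->connector_id,
               conn->connection == DRM_MODE_CONNECTED ? "connected" : "disconnected",
               conn->count_modes);
        drmModeFreeConnector(conn);
    }

    drmModeFreeResources(res);
    close(fd);
    return 0;
}
```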

What I think is the best course of action depends a bit on where your current intuition took a wrong turn. The most common one in my experience is the mistake of thinking of a ‘framebuffer’ that you batch-write ‘pixels’ into, binned by some discrete synchronization signal (‘vsync’, ‘vblank’, …). This is common because that is often at least part of what higher-level graphics APIs used to offer the developer.

The printer was, and is, a more accurate model. Just as there was good reason for why the printer server part of Xorg fell out of favour, there was good reason for why it was there in the first place.

We still queue deeply paginated batches of draw commands to a strange and foreign device that looks at colours very differently from how we do; papers still get stuck; often we want to cancel a dispatched job because it contained the wrong thing or something changed; render times vary wildly for unclear reasons; and so on. The display server does to render jobs what the printer spooler did to print jobs.
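
The misleading ‘bucket of pixels plus vsync’ model is easy to spell out; a rough sketch against the legacy Linux fbdev interface looks like this (the 32-bit pixel format is an assumption, and real systems queue buffers to a GPU instead):

```c
/* The naive 'framebuffer' mental model spelled out against the legacy Linux
 * fbdev interface: map /dev/fb0 and batch-write pixels into it. Assumes a
 * 32 bits-per-pixel mode; modern systems queue buffers to a GPU instead. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0) {
        perror("ioctl"); close(fd); return 1;
    }
    if (vinfo.bits_per_pixel != 32) {
        fprintf(stderr, "sketch assumes 32bpp, got %u\n", vinfo.bits_per_pixel);
        close(fd); return 1;
    }

    size_t len = (size_t)finfo.line_length * vinfo.yres;
    uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* 'Batch write pixels': fill the visible area with a mid grey, row by row. */
    for (uint32_t y = 0; y < vinfo.yres; y++) {
        uint32_t *row = (uint32_t *)(fb + (size_t)y * finfo.line_length);
        for (uint32_t x = 0; x < vinfo.xres; x++)
            row[x] = 0xFF808080; /* assumes an XRGB-style layout */
    }

    munmap(fb, len);
    close(fd);
    return 0;
}
```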

For the real system graphics experience, however, pick a single use case that is common in normal desktop use. The one that I prefer when comparing is ‘drag-resizing’ a window, as it really gives every subsystem a good shake. This is where you will find the biggest divergence between solutions, and nearly every design decision comes under scrutiny.

    1. 27

      I really love the concept of having this be a relay thing. Really cool from a community perspective :)

      1. 8

        With home consoles becoming more prevalent I got curious as to why games on one system wouldn’t run on another, especially not on my own beloved computer: Isn’t it all just code?

        I remember this question haunting my childhood as well. We had a family DOS PC and a SNES but the games on the DOS PC looked like crap in comparison and the DOS PC was thousands of dollars. The need to understand why did indeed motivate my interest in programming.

        I eventually learned the answer.

        1. 7

          Neato! Great interview. :)

          +1 for the relay idea.

          1. 6

            Hah, this is better than any tech podcast I’ve been forced to watch because I couldn’t just tell people who do podcasts for a living that I haven’t really listened to a podcast since 2007, and the relay format is really nice!

            1. 5

              I like this, thank you both!

              Small accessibility request: would you consider using block quotes or headings or prefixing questions with Q: or something to make them more visually distinct?

              The bold style doesn’t stand out very clearly for me when it’s used for a whole paragraph.

              1. 1

                One option is to use

                ---

                before each bold question to separate it from the last, which draws a horizontal rule <hr> even though this is not documented under “Markdown formatting available”. It requires a newline above and below and looks like this:


                Also undocumented is that putting a --- directly below a line is another way to surround the line above with <strong>.

                There may be other undocumented markdown features as well.

                As a workaround for the parent poster, you could create a custom style sheet that does something like adding colour in addition to font-weight to <strong> elements on this site. This would be helpful on any post which uses bold.

              2. 4

                Right, now that I’ve recovered from the man flu somewhat I can come back and say something more useful than hey man, cool interview! Sorry to spam the thread but I can’t edit my old one by now.

                I think the most interesting and useful takeaway from this interview is this:

                The most common one in my experience is the mistake of thinking of a ‘framebuffer’ that you batch-write ‘pixels’ into, binned by some discrete synchronization signal (‘vsync’, ‘vblank’, …). This is common because that is often at least part of what higher-level graphics APIs used to offer the developer.

                The printer was, and is, a more accurate model. Just as there was good reason for why the printer server part of Xorg fell out of favour, there was good reason for why it was there in the first place.

                The framebuffer abstraction is super straightforward but if you peek under the hood of graphics systems for a bit you’ll find that, like, half the history of the development of modern graphics hardware interfaces basically consists of trying to figure out how to build (what amounts to) a good framebuffer API -> vector operations pipeline transpiler. From a distance, it looks like it’s not a solved problem at all.

                If @david_chisnall and @crazyloglad don’t mind expanding on the relay thing:

                1. In most modern systems there’s an additional layer between the application and the drivers – the display server. These tend to be the most constrained in terms of API design choices, because most 3rd party developers will likely hate anything that doesn’t eventually boil down to “here’s a bucket of pixels, draw on it”. So most of them have no choice but to implement some framebuffer interface over not-quite-a-framebuffer hardware. What would you say are the most common pitfalls in designing such a system? Or, to put it another way, if you guys had a senior-year student to advise on their final project, what are the first quirks you just know their first design won’t deal with properly?
                2. Other than Arcan :-P what is some good prior art on securely managing shared graphical resources (like, remember GEM-Flink?)
                1. 3

                  I was otherwise engaged until this morning so didn’t have time to notice :-)

                  For 1 - the biggest pitfall, locally and systemically, that damn near everyone runs into is synchronisation. That’s why I suggested the resize case as a mental model for unpacking graphics, as the system needs to converge towards a steady state while invalidating requests might still be in-flight. There is a lot to unpack in that, and whatever you choose you get punished for something. Even if you are a single source/sink /dev/fb0 kind of a deal, modeset is another one that’ll get you.

                  Then comes colour representation and only thinking ‘encoding implies equivalent colour space’. This gets spicy when you only have partial control of legacy and clients. There will be those sending linear RGB for a system that expects sRGB and vice versa, and that goes for the hardware chain as well. On top of this is blending. On top of that is calibration and correction.
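
                  A small sketch of the standard sRGB transfer functions (for a single channel normalised to [0,1]) shows why blending in the encoded form, instead of decoding to linear first, goes wrong:

                  ```c
                  /* sRGB <-> linear transfer functions for a single normalised channel.
                   * Blending should happen on the linear values; doing it on the encoded
                   * sRGB values (the easy mistake referred to above) darkens the result. */
                  #include <math.h>
                  #include <stdio.h>

                  static double srgb_to_linear(double c)
                  {
                      return c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
                  }

                  static double linear_to_srgb(double c)
                  {
                      return c <= 0.0031308 ? c * 12.92 : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
                  }

                  int main(void)
                  {
                      /* 50/50 blend of black and white, done two ways. */
                      double wrong = 0.5 * (0.0 + 1.0);                            /* blend in encoded sRGB */
                      double right = linear_to_srgb(0.5 * (srgb_to_linear(0.0)
                                                          + srgb_to_linear(1.0))); /* blend linearly */
                      printf("encoded-space blend: %.3f, linear-space blend: %.3f\n", wrong, right);
                      return 0;
                  }
                  ```

                  The encoded-space blend of black and white comes out at 0.5, while the correct linear-space blend encodes to roughly 0.74 - exactly the kind of subtle wrongness that survives when you only have partial control of legacy and clients.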

                  Then comes device representation that ties all of this together, e.g. what the output sink actually can handle versus what you can provide, and the practical reality that any kind of identity+capability description we’ve ever invented for hardware even as static as monitors gets increasingly creative interpretations as manufacturing starts.

                  For 2 - I well remember flink and might have good reason not to trust the current system as well. The heavy investment into hardware compartments and network transparency is personally motivated by that distrust. Are you thinking of the whole stack (i.e. specific window contents, sharing model between multiple-source-multiple-sinks, deception like a full screen client that looks just like the login-screen?) or the GPU boundary specifically (i.e. GEM/dma-buf/…)?

                  1. 1

                    I was otherwise engaged until this morning so didn’t have time to notice :-)

                    It’s okay, Paracetamol makes me drowsy as hell so I was “otherwise engaged”, too, as in I slept through most of Sunday :-D.

                    That list of common pitfalls is pretty disappointing to read in a Linux context. I actually recognize some of those from my Linux BSP sweatshop days. Eek!

                    Are you thinking of the whole stack (i.e. specific window contents, sharing model between multiple-source-multiple-sinks, deception like a full screen client that looks just like the login-screen?) or the GPU boundary specifically (i.e. GEM/dma-buf/…)?

                    The GPU boundary specifically, the former covers way too much ground to make it a useful question IMHO.

                    1. 3

                      That list of common pitfalls is pretty disappointing to read in a Linux context. I actually recognize some of those from my Linux BSP sweatshop days. Eek!

                      If you contrast them with the state of some other project, not to invite the keyword, how well did their past experience of maintaining a popular display server help them avoid these basic pitfalls?

                      The GPU boundary specifically, the former covers way too much ground to make it a useful question IMHO.

                      So Arcan has a systemic opinion in that ‘very large ground’ space, as somehow all the layers need to be tied together for the boundary to have more bite than the equivalent of ‘fork + unveil/pledge’-like short compartments for things. Do note that I have basically ignored all of CUDA etc. for ‘other GPU uses’.

                      With the work it takes, the open-source spectrum only really leads to that one coarse-grained viable interface that everyone copies near verbatim post the render-node/dma-buf change; they are just in different stages of being synched to it (can’t say for Haiku though, it has been a while since I last poked around in there, maybe @waddlesplash can).

                      There are interpretations for how you can leverage them (afair Genode handles it slightly differently, a good experimental outlier in general), but in the end there’s only so much you can ‘do’ – opaque interchangeable sets of tokens representing work items that piggyback on some other authentication channel, or a negotiated/authenticated initial stream setup that gets renegotiated when boundary conditions (resize) change. Even Android isn’t much different in this regard outside of petty nuances.

                      In the proprietary space, although you’ll find little documentation on the implementation (or at least I did back when I hadn’t just given up on the platform entirely), IOSurfaces in the OSX sense have a more refined take on the opaque model, in contrast to EGLStreams.

                      1. 3

                        (can’t say for Haiku though, a while since I last poked around in there, maybe @waddlesplash).

                        Haiku doesn’t have GPU acceleration yet (with the exception of one experimental Vulkan-only driver for Radeon Southern Islands that one contributor wrote), so we haven’t settled on a design for that API; doubtless we will just use Mesa so something under the hood will still provide dma-bufs or an equivalent, eventually.

                        But one thing which is notable here is that Haiku still does server-side drawing, for all applications that use the native graphics toolkit, anyway. Stuff like Qt and GTK of course just grabs a shared-memory bitmap and draws it repeatedly, but native applications send drawcalls to the server, specifying where they’re to be drawn into (a window, a bitmap, etc.) So, there’s a lot of leeway here, should we eventually get GPU acceleration and decide to experiment with GPU rendering, for the server to “batch” things, share contexts efficiently, etc. which applications on Linux can’t do anymore in the Wayland era (and didn’t do for a long time before that, usually, because X11’s rendering facilities didn’t really keep pace with what people expected 2D graphics drawing APIs to be.)

                        1. 1

                          Well we can coquettishly suggest HVIF or SVG as a new entry to wl_shm::format and they’d tick the server side graphics box about as well as many other ones.

                          1. 1

                            Neither of those are really designed for on-the-fly generation, though. HVIF in particular has a lot of features which make it very compact at the expense of writing and decoding time. The graphics protocol that Haiku uses for real-time drawing isn’t related to it. (Though it also has an “off the wire” form: BPicture files.)

                2. 3

                  I am trying out Arcan, but I think it’s really hard to figure out what I could contribute, given that so much is in flux. I watch the videos on the blog, and I think I understand and agree with the ethos. But if I were to build my own widget for Durden, it sounds like there is no chance it’s gonna keep working. I can understand why you can’t make promises about stability, I just don’t understand how it’s supposed to gain traction if people can’t grasp how to solve their immediate problems.

                  For X11, I could read an XCB tutorial and I’d be up and running. For Arcan, all the documentation is philosophical. I open up durden/widgets/colorpick.lua to understand how widgets work, and the code itself seems all right but I fail to understand the big picture. What are the patterns used, which things are in flux and which things aren’t? Maybe I’ve gotten too used to typed programming, I suppose I am bad at teasing out the concepts from example imperative code. What’s a stepframe_target? Should I be able to guess this?

                  If Durden is the main desktop environment, why does it seem like widgets are not concerned with tabbing or windowing? How do I make extensions to the tree navigation using an API (not within the Durden codebase)?

                  1. 3

                    Durden is more of an evolutionary playground; it took a lot of beatings while figuring out where the rough spots were.

                    https://github.com/letoram/arcan/wiki/Exercises -> the first one points you to the arcan/doc folder of the source checkout, where every function has a corresponding .lua file that acts as documentation, test cases and examples of use and misuse. There’s even a (very hacky) docgen.rb man-page generator for converting them to manpages or editor highlights.

                    There is a slow generational shift towards also specifying the types of each overloaded argument form so that it can be used to generate a serializable command-stream format and bindings from them as well.

                    Any breaking changes or quirks have been added as note: entries over the years. In total there have been fewer than 4(?) API breaks since 2013. That interface is stable. On the other hand it doesn’t have a corresponding form in the X11/XCB mental model. For that there is the low-level C API (SHMIF) from libarcan-shmif and its TUI-specialised counterpart (libarcan-tui). That one is formally in flux, but has practically been stable except for very fringe parts like eccentric input devices and VR.

                  2. 3

                    Great interview, these points really resonate with me:

                    Many architectural decisions in Arcan are explicitly to begin the painful decoupling from proprietary GPUs. The modern GPU ecosystem is highly abusive and deliberately user unfriendly yet is in a place where it is actually running the show. I don’t like that.

                    This is the great sadness of F/OSS, at least the story that was told in the 90’s.

                    We did “get” a free operating system, but meanwhile most hardware is now software, and it’s proprietary software. We can no longer see how it works, and it often works poorly.

                    There is also a personal story of trying to build the panopticon of debugging. With that I mean ‘observing and experimenting’ with the Rube Goldberg contraption of what we unknowingly build by smashing pieces of software together. Even though the individual pieces can sometimes be formally proven to work and then actually made to work, the solutions that we end up with often solve some problem in a very convoluted way, but we lack the faculties to see it.

                    Agree here too, to me it is the interactions that are important, not the individual components. And I’m still trying to understand the interactions on a modern Linux desktop :-/

                    Making a wild accusation, I’m going to make a guess that Bjorn is left-handed :-) At least I am, and I feel it’s the curse of the left-handed to see the interactions and large-scale view of systems more than the parts … to lament the big mess, and to scribble across established boundaries and APIs. Everyone is optimizing one part, but the whole is incoherent.

                    1. 4

                      Agree here too, to me it is the interactions that are important, not the individual components. And I’m still trying to understand the interactions on a modern Linux desktop :-/

                      I have a very strong opinion in this regard, and that mainly comes from the debugging experiences on Android and the infinitely many ‘task forces’ for fixing audio latency, then fixing the increased perceived latency because we actually synch audio to video, then fixing the video latency causing a regression in audio latency, … ad nauseam.

                      Android has, on paper, a single de facto IPC system, Binder. The properties of the IPC system are axiomatic; everything will work with, or work around, its limitations. That helps them somewhat. On the other hand they lack an integration authority. The delivery path is split up through infinitely many ‘manager’ and ‘flinger’ processes, and then a whole lot of duct tape multiplexes them together at the perceiving end.

                      Something akin to a GNOME installation has what, 4+ IPC systems? D-Bus for ‘some meta’, but its performance and security characteristics are such that it can’t be used for much of anything; X11 for ‘mainly graphics but we threw everything in there except audio’; Wayland for ‘mainly graphics but also some of everything else, seemingly choosing at random what to include or exclude based on who was inconvenienced by it in the committee’; Gstreamer and Pipewire (as a workaround) for ‘graphics and audio but not the other stuff’. Then ‘portals’ as a meta-orchestration layer for how permissions and sharing should work across all of these. Then comes all the terminal jazz. Now stitch this into a desktop and, even before the game of performance whack-a-mole, you get a neverending stream of different “1%” bugs …

                      Making a wild accusation, I’m going to make a guess that Bjorn is left-handed :-) At least I am, and I feel it’s the curse of the left-handed to see the interactions and large-scale view of systems more than the parts.

                      Left-handed, musical instruments, arts etc.: all the stereotypes, with a never-ending string of little accidents from the subtle consequences of most things being mirrored from where they ‘logically’ should be; affordances are askew and all that.

                    2. 2

                      I love love love this concept, and great interview. Thank you both!

                      1. 1

                        The printer was, and is, a more accurate model. […] The display server does to render jobs what the printer spooler did to print jobs.

                        Is this to suggest that he views something like GKS, PHIGS and OpenGL deferred mode as a preferred model?

                        1. 1

                          I am thinking of it in a more abstract graphics sense than that, otherwise ‘streaming SVG’ and the like would go in the same category. Not that that would be wrong per se, but these are of course APIs towards a vector-defined graphics model with texturing for including rasterised assets. Fine as a starting point, but then mentally move towards ‘how do these synchronise, merge with other queued jobs and then specialise to displays’ in ways that avoid conflict and corruption and optimise for resource utilisation and processing times.