On FreeBSD, a lot of the command-line tools support libxo, which allows them to output human-readable output or the same thing in JSON or XML. The big problem with it is that you have a single output stream and so you have to pick one. I have a modified kernel that allows multiple output streams and the ability to do content negotiation over them, but with this as a primitive you should be able to simply write the structured data and formatting information and have different lenses to view it through. You should also be able to do things like filter out terminal control sequences automatically when feeding one of the output buffers to another tool and so you can always provide the rich output even for pipelines.
I’m looking forward to seeing deeper integration with the GUI. For example, there’s no reason that the output of ls in such a shell needs to be a load of text; it could easily be a sortable / filterable table view as in a graphical file manager, with the ability to click on any of the files to open them, without leaving the command line window (or select a set for use in your next command line, and so on). The possibilities for this are huge.
I’m a bit nervous about the unlimited output capturing. I generally set my scrollback to some large-but-finite amount, having once left some tests running overnight and come back to a multi-gigabyte in-memory scrollback that made my machine suffer (it was a Mac, so it sent SIGSTOP to a load of processes until memory pressure stopped increasing, and I had to slowly send SIGCONT to them to get it back under control). I’d like some heuristic where large command output was kept for less long, but maybe longer if the command took a long time, and things were gradually aged out if they weren’t referenced.
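The aging policy described in the last paragraph is concrete enough to sketch. Everything below (names, constants, the shape of the formula) is invented for illustration; it is not anything the shell actually implements:

```python
# Hypothetical retention score for a job's output buffer: output that
# took long to produce scores high, bulky output scores low, and the
# score decays while the buffer sits unreferenced. All constants are
# made up for illustration.
def retention_score(out_bytes, runtime_s, idle_s, half_life_s=3600.0):
    effort = runtime_s / max(out_bytes / 4096.0, 1.0)  # seconds of work per page of output
    decay = 0.5 ** (idle_s / half_life_s)              # halves for every hour unreferenced
    return effort * decay

# An eviction pass would then drop the lowest-scoring buffers first
# whenever total scrollback exceeds the memory budget.
```

A quick overnight test run that dumps gigabytes would score orders of magnitude below a slow build whose log you just scrolled through.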
This is absolutely gorgeous
Not compared to what is hidden behind the surface here, but all in due time and thanks ;)
The big problem with it is that you have a single output stream and so you have to pick one.
I have yet to expose libarcan-shmif-server to the scripting runtime. When that happens it can host native arcan applications, including itself. This allows decomposable nesting (any client can detach and reattach to another at any point, by default as a crash response but more useful to switch between networked and local). It also allows the client to dynamically announce in-out support for a set of stream extensions (similar to how X11 would mediate drag and drop, but less painful). This works in a push style, the server creates the input/output descriptor and transfers it with the desired extension. There’s the negotiation mechanism then for picking stream types.
You should also be able to do things like filter out terminal control sequences automatically when feeding one of the output buffers to another tool and so you can always provide the rich output even for pipelines.
The state machine implementation (base/vt100.lua) is still really incomplete, but it can be spotted as part of the legacy integration (enabled by default for the p! command).
view #0 wrap vt100
copy #0(1-10)
#0 | new_command
It is the currently active view that determines the output “slice” (a subset of the stored data). With the form above, both the ‘copy’ command and the pipe (“run new_command in the context of job #0”) would get the vt100 view slice that strips vt/ecma; swap the view and you get the raw form, etc.
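For the curious, the effect of the stripping view can be approximated with a couple of patterns. This Python sketch is only an approximation: the real implementation is the state machine in base/vt100.lua, which handles cases a single regex cannot (truncated sequences, other C1 controls, and so on):

```python
import re

# CSI sequences: ESC [ parameters intermediates final-byte
CSI = rb'\x1b\[[0-?]*[ -/]*[@-~]'
# OSC sequences: ESC ] payload, terminated by BEL or ST (ESC \)
OSC = rb'\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)'
ECMA = re.compile(CSI + rb'|' + OSC)

def strip_ecma(data: bytes) -> bytes:
    """Rough equivalent of viewing a job through the vt/ecma-stripping slice."""
    return ECMA.sub(b'', data)
```

A “raw” view would simply skip the substitution and hand back the stored bytes unchanged.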
I’d like some heuristic where large command output was kept for less long, but maybe longer if the command took a long time, and things were gradually aged out if they weren’t referenced.
I went a bit overboard with features as is, but a lot more is lying around here waiting to be pushed on a rainy day. Among the pending are things like:
I have yet to expose libarcan-shmif-server to the scripting runtime. When that happens it can host native arcan applications, including itself. This allows decomposable nesting (any client can detach and reattach to another at any point, by default as a crash response but more useful to switch between networked and local). It also allows the client to dynamically announce in-out support for a set of stream extensions (similar to how X11 would mediate drag and drop, but less painful). This works in a push style, the server creates the input/output descriptor and transfers it with the desired extension. There’s the negotiation mechanism then for picking stream types.
Have you given any thought as to how to retrofit this to existing binaries? Some of the FreeBSD utils, for example, rejected libxo because they are statically linked (at least, in the rescue versions) and the increase in code size was a show stopper. ls was one of these, where it’s actually quite useful to have the ability to send the equivalent of ls -l to jq or similar for additional filtering (e.g. to list all files that are more than 1 MiB but less than 1 GiB).
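As a sketch of the kind of query that becomes possible once the output is structured, here is the size-window filter done directly in Python over a directory; a jq expression over the JSON form of ls -l would express the same predicate:

```python
import os

def files_between(path, lo=1 << 20, hi=1 << 30):
    """Names of regular files in `path` strictly between `lo` and `hi` bytes,
    i.e. the 'more than 1 MiB but less than 1 GiB' query from above."""
    return sorted(e.name for e in os.scandir(path)
                  if e.is_file() and lo < e.stat().st_size < hi)
```

The point of the structured stream is that the consumer filters on a named size field instead of counting whitespace-separated columns in ls -l output.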
The mechanism that I added to the kernel to experiment with this was in two parts:
A content negotiation protocol over pipes. The receiver did a blocking ioctl to request a list of types that the sender could provide. If the sender did a write, this returned an error. The sender then did a blocking ioctl to provide a list of types. If the receiver did a read, this returned an error. The receiver then did an ioctl to select the desired type, which caused the sender’s ioctl to return that value. The pipe could then be used for sending typed data.
A ‘pipe peeling’ mechanism in the pty layer, where the owner of the server side of a pty could advertise that it supported the mechanism and a client could request additional pipes to be opened to whatever program owned the server end of the pty. This let you write human-readable text to the pty and also receive a parallel stream for writing the structured equivalent.
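To make the first mechanism’s ordering concrete, here is the same handshake mocked up in userspace Python over a socketpair. The real thing lived in a modified kernel and used blocking ioctls on the pipe itself (with a premature read or write returning an error); the message framing below is invented purely for illustration:

```python
import socket, threading

def sender(sock, offered):
    # Blocks until the receiver asks for the type list (the kernel version
    # blocks in an ioctl, and a premature write() returns an error instead).
    assert sock.recv(16) == b'REQ_TYPES'
    sock.sendall(b','.join(t.encode() for t in offered))
    return sock.recv(64).decode()          # the receiver's selection

def receiver(sock, preference):
    sock.sendall(b'REQ_TYPES')             # ask what the sender can provide
    offered = sock.recv(256).decode().split(',')
    for want in preference:
        if want in offered:
            sock.sendall(want.encode())    # select; the sender's call returns it
            return want
    raise RuntimeError('no common type')

def negotiate_demo():
    a, b = socket.socketpair()
    out = []
    t = threading.Thread(target=lambda: out.append(
        sender(a, ['text/plain', 'application/json'])))
    t.start()
    chosen = receiver(b, ['application/json', 'text/plain'])
    t.join()
    a.close(); b.close()
    assert out[0] == chosen                # both ends now agree on the type
    return chosen
```

After this exchange the pipe carries typed data, which is the step the plain write/read interface has no way to express.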
The idea was that a terminal emulator would advertise that it supported typed streams and then every command running in the terminal could easily negotiate an additional stream with it. I extended libxo to try to use the content negotiation protocol on stdout and to try to use pipe peeling if supported and then do content negotiation over the additional stream and send typed data there, human-readable text elsewhere. I had a small demo that ran commands as normal, but for anything that sent an additional HTML stream, opened the output in a browser (in addition to displaying it in the terminal).
I’m curious whether something like this (or something different) from the kernel would help Arcan.
That seems like a reasonable strategy if your goal is to interoperate at the level of pipes. If, on the other hand, you think pipes are the worst, and aim to replace them, it seems much more sensible to have a gold standard for initial discovery, and then do all additional negotiation at the higher level.
That’s not to say that, e.g. if an application is already using libxo, an alternate implementation of the latter could not be provided that behaved more sensibly. (For flavour: I looked into doing a curses->arcan-tui implementation at one point, but once I had set up a really basic set of stubs, I tried a curses application and found that it also poked the underlying termios junk itself. So that was the end of that.) But adding a completely new kernel interface for this seems counterproductive.
Have you given any thought as to how to retrofit this to existing binaries?
Lacking other primitives, arcan-shmif picks up its connection parameters and argv(!) from env. The connection point is named, with the name doubling as an unnamed type. In X11 you have DISPLAY= that just points out the local TTY Xorg owns, or the IP in the case of its networked version. In arcan the WM (and by extension, Lash#Cat9 as it fills that role) picks a name for each connection. This mechanism is used throughout to add denial of service protection, to allow limited sets of special clients for trays, external input drivers and so on. It is also used as an address for crash recovery and runtime network redirection.
With Cat9 this is intended to be used as a probe mechanic - generating a unique connection point for a command that is to be run, and if a connection is activated over it I know that it speaks the language and can communicate its set of streams accordingly. If that fails, the frontending part kicks in, i.e. defining command aliases that specify explicitly which command-line toggles change the type of stdin/stdout. Finally the last part would be to actually wrap the binaries themselves in a chain-loader-like fashion. It is probably there I’d try to leverage something like ioctl-announced types.
A content negotiation protocol over pipes.
In principle I love the idea of type-annotating descriptors, and ioctl looks to be one of the few ways to go there. Did you also go into having kqueue be able to notify on the set of types changing after the first “announce”?
I’m curious whether something like this (or something different) from the kernel would help Arcan.
For Arcan in general I think the biggest primitive I miss is actually DuplicateHandle, the socket passing method is just so fundamentally painful. I try to get by with user space scraps simply for portability constraints.
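For readers who have not had the pleasure, the socket-passing dance in question looks roughly like this (using Python 3.9+’s send_fds/recv_fds wrappers around SCM_RIGHTS; the C version adds the full cmsg boilerplate on top). A DuplicateHandle-style primitive would install the descriptor in the target directly, with no cooperating socket required:

```python
import os, socket

def pass_fd_demo():
    a, b = socket.socketpair()   # the pre-arranged rendezvous both sides need
    r, w = os.pipe()
    # Ship the write end of the pipe across the socket as ancillary data.
    socket.send_fds(a, [b'take this'], [w])
    msg, fds, _flags, _addr = socket.recv_fds(b, 32, maxfds=1)
    os.write(fds[0], b'hello')   # use the received duplicate
    for fd in (w, fds[0]):
        os.close(fd)
    a.close(); b.close()
    data = os.read(r, 16)
    os.close(r)
    return msg, data
```

Note that even in this minimal form, both processes have to agree on a socket, interleave a normal message with the ancillary payload, and track two copies of the descriptor's lifetime; that bookkeeping is the painful part being referred to.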
Lash/Cat9/Tui subproject is a bit more special as it is designed decoupled from the larger system/vision. It would be theoretically possible to reimplement libarcan-tui over X11 (wayland is in a worse spot) albeit painful. As such it relies on a subset. I figured that would perhaps marginally help adoption and constraining the feature space a little bit might provide some insight.
In principle I love the idea of type-annotating descriptors, and ioctl looks to be one of the few ways to go there. Did you also go into having kqueue be able to notify on the set of types changing after the first “announce”?
I didn’t, for two reasons:
1. I was mostly thinking about non-interactive commands, where they just consume some input and spit out some output.
2. It was a proof-of-concept to see if the idea could work.
For interactive commands, this is definitely an interesting idea. The pipe peeling mechanism could provide something similar, if the client just requests a new pipe from the pty and runs content negotiation over this. I’d have to think a bit about the interaction between kqueue and renegotiation because you want to be able to send an explicit marker so that the receiver would get the end of the old-type stream, the notification, and then the start of the new-type stream, and if the receiver just did two reads without a kevent in the middle then this might be a problem.
For Arcan in general I think the biggest primitive I miss is actually DuplicateHandle, the socket passing method is just so fundamentally painful. I try to get by with user space scraps simply for portability constraints.
DuplicateHandle doesn’t play very nicely with a couple of UNIX features, unfortunately. With per-process FD limits, the ability to send a file descriptor to another process without its consent would be a trivial DoS, allowing you to just send copies of an FD into the target until you hit the limit in another process and then cause it to fail in any FD-creating call. It also doesn’t play nicely with the requirement that new FDs are always the lowest number: single-threaded *NIX programs can be confused if they close a low-number FD and then someone else puts something in that slot in between the close and open.
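The lowest-number rule is easy to demonstrate: POSIX open() must return the lowest unused descriptor, so in a single-threaded program a close immediately followed by an open reuses the same slot, which is exactly the window an externally injected descriptor could land in:

```python
import os

def lowest_fd_reuse():
    a = os.open('/dev/null', os.O_RDONLY)
    os.close(a)
    # POSIX: open() returns the lowest-numbered free descriptor, so the
    # slot just vacated is reused. A descriptor injected from outside,
    # between the close and the open, would occupy that slot instead.
    b = os.open('/dev/null', os.O_RDONLY)
    os.close(b)
    return a == b
```

Plenty of single-threaded daemons deliberately rely on this (e.g. close fd 0 and immediately open a file to make it stdin), which is what a remote dup would silently break.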
How would you feel about making this an operation on a process descriptor, gated on a Capsicum permission on the PD? An ioctl on PDs that lets you do remote dup and dup2 operations (local to remote or remote to local, implicit or explicit target FD number) would be feasible, and if only the parent process, or a process that the parent handed the rights to, has the ability to inject FDs then this might be a fairly safe primitive. It might also want to be coupled with a procctl flag to disable it from the target side.
Lash/Cat9/Tui subproject is a bit more special as it is designed decoupled from the larger system/vision. It would be theoretically possible to reimplement libarcan-tui over X11 (wayland is in a worse spot) albeit painful. As such it relies on a subset. I figured that would perhaps marginally help adoption and constraining the feature space a little bit might provide some insight.
Totally unrelated question that this has made me ponder: has anyone looked at a Windows port of Arcan? I’d love to be able to use all of this stuff from a Windows client machine connected to a *NIX server.
For interactive commands, this is definitely an interesting idea. The pipe peeling mechanism could provide something similar, if the client just requests a new pipe from the pty and runs content negotiation over this. I’d have to think a bit about the interaction between kqueue and renegotiation because you want to be able to send an explicit marker so that the receiver would get the end of the old-type stream, the notification, and then the start of the new-type stream, and if the receiver just did two reads without a kevent in the middle then this might be a problem.
I guess synchronisation with the handover and reads across type boundaries would become a substantial pain (it basically becomes a sort of API for demultiplexing streaming media container formats), then there are all the interactions with linuxisms like splice…
How would you feel about making this an operation on a process descriptor, gated on a Capsicum permission on the PD?
Ioctl on PDs makes sense; maybe it could set a cap on the FD receive buffer window (=0 would disable) and a sparse base so you could segment yourself? Regardless, the actual target-local descriptor still needs to be returned from the dup2 on the sender end for pairing with sideband data.
Totally unrelated question that this has made me ponder: has anyone looked at a Windows port of Arcan? I’d love to be able to use all of this stuff from a Windows client machine connected to a *NIX server.
I used to maintain a port many years ago; I stopped around 2013 or so as the pains of the time were just too great. Multiprocess, C99, MinGW, shared memory, atomics, … it was nigh impossible to separate bugs introduced by the toolchain from those from my own cluelessness. The code base is better layered now, though some POSIXisms have snuck in; it’s the IPC system that is mainly in need of cleanup / refactoring. I’ve been holding that off until after the “network focus” branch is over, also hoping that’d reduce the amount of bugs with GPU VK/GL4.5 interop. Still no “not-malware” inspired way of cuckooing dwm.exe though …
Before then there’ll likely be a port / client for the networking protocol (“A12”) as that can be justified within my day to day, also grant applications pending.
On FreeBSD, a lot of the command-line tools support libxo, which allows them to output human-readable output or the same thing in JSON or XML.
Oh, that’s interesting! Structured I/O is, I think, the main upgrade the Unix small-composable-tools model needs; but of course it needs the tools to support it. I didn’t know there was a major(ish) OS with that.
At a UI level, the two big things I want in a terminal are:
1. A split view with a Finder (file browser) pane, both panes sharing the same current directory, and some variable like $sel bound to the GUI selection;
2. Easily-configurable pull-down menus for invoking or retyping shell commands.
I’ve seen several Mac apps that partially do 1 but they never synchronize the pwd or selection.
Oh, that’s interesting! Structured I/O is, I think, the main upgrade the Unix small-composable-tools model needs; but of course it needs the tools to support it. I didn’t know there was a major(ish) OS with that.
The code was contributed ages ago by Juniper. JunOS (the FreeBSD derivative that runs on Juniper routers) has a load of admin interfaces that relied on parsing the output of various commands. Whenever the output formats changed slightly, they ran the risk of breaking the JunOS GUI, which cost them a lot of validation effort on merges from upstream. With the libxo stuff, they just parse the XML or JSON output. If this changes, it will be by adding new fields, not by breaking existing things. It’s much easier to validate because they can just check for new fields and add handlers for anything that they care about (ideally these tools would produce a schema for their output, but that’s harder).
I do have to say I was very skeptical about the title, but this really does deliver on some new kind of shell that leaves behind the mess we have, without breaking backwards compatibility, and with enough features to make the upfront investment worth it.
I’m on a really bad wifi connection here so I can’t actually see the videos, but the descriptions sound so sweet!
Random things I really like just by the sound of them – ’cause, like I said, I can’t see them :-D:
Shell and window management interaction. Way back when I was still bothering with tiled WMs I’d made a wrapper for launching some of the applications that I commonly used (some of which I was frequently spawning from the terminal) which did some window management magic in order to not do really stupid things, like launch a 2560x1440 xterm window. It was extremely clunky and felt only slightly better than launching X11 apps from a VAX through an rsh connection; there’s just no way the computer industry peaked before I was born. Things like focused vs. unfocused prompts were nowhere near feasible.
Saving and tracking job context. That just, like, makes sense.
Better classification of jobs and ability to reference them. I promptly forgot how to use bg and fg the moment I discovered terminal multiplexers because man I hated job multiplexing. The downside to it is that every couple of days or so I go on a tmux killing spree where I manually go through the tmux panes and kill the ones that are idling now, because I no longer Ctrl-Z anything, I C-o c a new pane instead. Sue me. I’d love to go back to a life where my only use for tmux (or screen) is terminal persistence.
You should be more excited about the last part (but also, watch the videos, though the youtube-dl version is 600ish MB). Project approval is pending, but there is a student or two about to look at a gdbserver-MI-friendly set of builtins …
Durden is actually closest to being “finished”; it is just really rough in its default packaging, in an attempt to try and get others to do that and take the minuscule glory and related attention. What really holds things back is client application compatibility. I held out for the longest time hoping that the Wayland/X11 path would become workable. That is unlikely to happen… I’m close to opening my wallet further and just hiring to deal with porting things to shmif.
The coolest thing in public is still Pipeworld. The coolest thing no longer in public is Senseye, but perhaps one day that’ll make a return…
I do think it is time for the shell to free itself from the terminal, but I can’t help but wonder how much of these demos could be achieved with existing OSC sequences such as OSC 133. Unfortunately, backwards-compatible improvements (that do not require throwing the shell away) might be more successful in seeing adoption in the long run.
Barely anything can just be plugged by adding more instructions; you’re irreversibly stuck behind architectural flaws from the ’50s view of serial communication, and the best band-aid you have is running more terminal emulators inside a terminal emulator (which is what a multiplexer à la screen/tmux is). Some visuals, like the compartmentation, can be imitated, sure, but otherwise everything shown relies on building blocks for synchronisation and data transfer that just did not exist.
There’s been tons of band-aid OSCs introduced, just look at the emulator in Enlightenment for a creative set.
They practically don’t get implemented or used elsewhere, or rely on assumptions that are deeply flawed. Pasting a binary blob from the clipboard would have the OSC sequence provide a file path on the local system (you can’t do large data transfers as they block the line). The feature runs fine on your machine and you expect it to continue running, but ssh:ing to a machine, chrooting, jumping into a container, … and it is suddenly broken.
I’d like some heuristic where large command output was kept for less long, but maybe longer if the command took a long time, and things were gradually aged out if they weren’t referenced.

Since job creation time is tracked, a slight modification to the overflow trigger should suffice.
I didn’t know there was a major(ish) OS with that.

Windows isn’t major(ish) enough? :P
PowerShell has done structured I/O for native commands for ~20 years, and falls back to text streams only if you pipe to non-native executables.
Well, that’s awesome! (Not awesome enough to get me to use Windows, but still.)
You can use PowerShell on Linux/Mac if you’d like :) https://github.com/powershell/powershell
The code was contributed ages ago by Juniper. JunOS (the FreeBSD derivative that runs on Juniper routers) has a load of admin interfaces that relied on parsing the output of various commands. Whenever the output formats changed slightly, they ran the risk of breaking the JunOS GUI, which cost them a lot of validation effort on merges from upstream. With the libxo stuff, they just parse the XML or JSON output. If this changes, it will be by adding new fields, not by breaking existing things. It’s much easier to validate because they can just check for new fields and add handlers for anything that they care about (ideally these tools would produce a schema for their output, but that’s harder).

Yes, this is exactly what I’ve been wanting for many years, kudos for delivering that!
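To sketch why that consumption pattern is robust: a consumer written against a hypothetical libxo-style JSON document (the exact schema is tool-specific, so this shape is only illustrative) can pull out the fields it knows about and silently ignore anything added later:

```python
import json

# Illustrative sample resembling `wc --libxo json` output on FreeBSD;
# the real schema varies per tool, so treat this shape as hypothetical.
sample = '{"wc": {"file": [{"lines": 12, "words": 41, "characters": 270, "filename": "notes.txt"}]}}'

def parse_wc(doc: str):
    data = json.loads(doc)
    # Only extract the fields we care about; unknown fields are ignored,
    # so upstream adding new ones does not break this consumer.
    return [(f["filename"], f["lines"]) for f in data["wc"]["file"]]

print(parse_wc(sample))  # [('notes.txt', 12)]
```

A screen-scraping consumer, by contrast, breaks whenever column widths or wording shift, which is exactly the validation burden described above.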
Eagerly awaiting the moment I can install and start using that as my daily driver!
I do have to say I was very skeptical about the title, but this really does deliver on some new kind of shell that leaves behind the mess we have, without breaking backwards compatibility, and with enough features to make the upfront investment worth it.
I’m on a really bad wifi connection here so I can’t actually see the videos, but the descriptions sound so sweet!
Random things I really like just by the sound of them – ‘cause, like I said, I can’t see them :-D:
Shell and window management interaction. Way back when I was still bothering with tiled WMs, I’d made a wrapper for launching some of the applications that I commonly used (some of which I was frequently spawning from the terminal) which did some window management magic in order to not do really stupid things, like launch a 2560x1440 xterm window. It was extremely clunky, and the feeling was that this was just slightly better than launching X11 apps from a VAX through an rsh connection, and there’s just no way the computer industry peaked before I was born. Things like focused vs. unfocused prompts were nowhere near feasible.

Saving and tracking job context. That just, like, makes sense.
Better classification of jobs and the ability to reference them. I promptly forgot how to use bg and fg the moment I discovered terminal multiplexers, because man, I hated job multiplexing. The downside is that every couple of days or so I go on a tmux killing spree, where I manually go through the tmux panes and kill the ones that are idling, because I no longer Ctrl-Z anything; I C-o c a new pane instead. Sue me. I’d love to go back to a life where my only use for tmux (or screen) is terminal persistence.
You should be more excited about the last part (but also, watch the videos, though the youtube-dl version is 600-ish MB). Project approval pending, but there is a student or two about to look at a gdbserver/MI-friendly set of builtins …
This, Arcan, and Durden, which I didn’t know about before, are the coolest things I have seen in a while.
I use keyboard-only and have been wondering what an actual modern CLI would be like, and this feels like a peek at what that could be.
Going to try Durden and see if I can make it work for me. :)
Durden is actually closest to being “finished”; it is just really rough in its default packaging, in an attempt to get others to do that part and take the minuscule glory and related attention. What really holds things back is client application compatibility: I held out for the longest time hoping that the Wayland/X11 path would become workable. That is unlikely to happen… I’m close to opening my wallet further and just hiring someone to deal with porting things to shmif.
The coolest thing in public is still Pipeworld. The coolest thing no-longer in public is Senseye but perhaps one day that’ll make a return…
I do think it is time for the shell to free itself from the terminal, but I can’t help but wonder how much of these demos could be achieved with existing OSC sequences such as OSC 133. Unfortunately, backwards-compatible improvements (ones that do not require throwing the shell away) might be more successful in seeing adoption in the long run.
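For reference, the OSC 133 markers themselves are tiny: a shell just brackets its prompt, the typed command, and the output with escape sequences, and reports the exit code at the end. A minimal sketch of emitting them (terminal support is assumed, e.g. kitty, WezTerm, or iTerm2; without it the sequences are simply ignored):

```python
# Semantic-prompt markers per the OSC 133 convention:
#   A = prompt start, B = command input start,
#   C = command output start, D;<code> = command finished.
ESC, BEL = "\x1b", "\x07"

def osc133(code: str) -> str:
    # Build one OSC 133 escape sequence, BEL-terminated.
    return f"{ESC}]133;{code}{BEL}"

line = (
    osc133("A") + "$ "      # mark where the prompt begins
    + osc133("B") + "ls"    # mark where user input begins
    + osc133("C")           # mark where command output begins
)
done = osc133("D;0")        # command exited with status 0

print(repr(line + done))
```

That gives a terminal enough to jump between prompts or re-select a command’s output, but it says nothing about synchronisation or typed data transfer, which is the gap the reply below points at.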
Barely anything can just be plugged in by adding more instructions; you’re irreversibly stuck behind architectural flaws from the 1950s view of serial communication, and the best band-aid you have is running more terminal emulators inside a terminal emulator (which is what a multiplexer à la screen/tmux is). Some visuals like the compartmentation can be imitated, sure, but otherwise everything shown relies on building blocks for synchronisation and data transfer that just did not exist.
There’s been tons of band-aid OSCs introduced; just look at the emulator in Enlightenment for a creative set. They practically don’t get implemented or used elsewhere, or rely on assumptions that are deeply flawed. Pasting a binary blob from the clipboard would have the OSC sequence provide a file path on the local system (you can’t do large data transfers inline, as they block the line). The feature runs fine on your machine and you expect it to continue running, but ssh:ing to a machine, chrooting, jumping into a container, … and it is suddenly broken.
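OSC 52 is the contrasting design: the clipboard payload is base64-encoded inline in the escape sequence itself, so it survives ssh, chroots, and containers, at the exact cost described above, the whole payload travels on the single terminal line and large transfers stall it. A small sketch of building such a sequence:

```python
import base64

def osc52_set_clipboard(data: bytes) -> str:
    # OSC 52, selection "c" (clipboard): the data rides inline as base64,
    # which is why it works across ssh but blocks the line for big blobs.
    b64 = base64.b64encode(data).decode("ascii")
    return f"\x1b]52;c;{b64}\x07"

seq = osc52_set_clipboard(b"hello")
print(repr(seq))  # '\x1b]52;c;aGVsbG8=\x07'
```

Many terminals also cap the accepted payload size (or disable OSC 52 writes entirely for security), so even the inline approach only pushes the problem around rather than solving it.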