Throughput, CPU and memory usage are all nice, but the main thing to test is latency. It would be cool to see this compared, since going to GPU theoretically makes it worse.
Yeah, latency is something you win on by keeping it simple. And throughput seems irrelevant to me - you can’t read text flying by that fast anyway, so there’s no real benefit to even trying to show it vs just settling for a slower update speed. The only point of showing anything is to let the user know it hasn’t frozen up and you don’t need to bother showing the actual screen in real time to get that across.
Yah, it’s possible to have just as good latency with GPU rendering as CPU rendering but you need to be careful about exactly how you present frames. Terminals also vary greatly in how much they buffer input from the shell before rendering a frame, which can have a big impact on latency.
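To make the buffering trade-off concrete, here is a minimal sketch of the kind of loop being described; it is not taken from any real terminal, and parse_bytes(), draw_frame() and the budget parameter are made-up placeholders:

```c
#include <poll.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical hooks into the emulator: update the cell grid, then present. */
void parse_bytes(const char *buf, ssize_t len);
void draw_frame(void);

static uint64_t now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000u + (uint64_t)ts.tv_nsec / 1000000u;
}

/* Keep reading whatever the shell has already written, but never hold a
 * frame back for more than budget_ms; then present exactly one frame. */
void pump_pty_then_present(int pty_fd, int budget_ms)
{
    char buf[65536];
    uint64_t deadline = now_ms() + (uint64_t)budget_ms;

    for (;;) {
        int64_t remaining = (int64_t)(deadline - now_ms());
        if (remaining <= 0)
            break;

        struct pollfd pfd = { .fd = pty_fd, .events = POLLIN };
        if (poll(&pfd, 1, (int)remaining) <= 0)
            break;                       /* nothing more buffered: stop early */

        ssize_t n = read(pty_fd, buf, sizeof buf);
        if (n <= 0)
            break;
        parse_bytes(buf, n);             /* update state, don't draw yet */
    }
    draw_frame();                        /* one present per pump, CPU or GPU */
}
```

The smaller the budget, the sooner a keystroke echo reaches the screen; the larger it is, the fewer frames a big burst of output costs.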
It’s definitely a cool project, the compute shader rendering is neat. But when I think about how I use a terminal the only two things where I’ll notice speed are input latency and when I print megabytes of plain text, so I don’t care too much about efficiency of escape code handling since that’ll probably be a fraction of a millisecond per frame.
I don’t agree beyond a certain point. It’s been many years since I used a terminal (except over remote X11 and one written by someone for fun in JavaScript) where the latency from keypress to character appearing was large enough that I noticed it at all. At the 60Hz refresh rate of most LCD monitors, you’ve got over 15ms to render the update, maybe 5ms if another process is running when the keypress comes in and you don’t get scheduled immediately. On a modern system, 5ms is an astonishing amount of compute and I still probably wouldn’t notice if it took several screen refresh cycles before the character appeared. It’s hard for me to imagine anyone doing such a bad job at implementation that I’d notice.
In contrast, I have in the last six months hit a case where I typed a command on a remote SSH session that generated a few MBs of output and then had to wait for the terminal to consume it. The command took under a second to produce all of the output, it took a few seconds for ssh to transfer all of it and then a minute for the terminal to finish scrolling. That prevented me from doing any work for a minute and so is something I really, really care about.
> and then a minute for the terminal to finish scrolling
See, that’s an absurd situation. What my terminal does there is just… not scroll. It sees that a lot has changed, updates its internal data structures, then prints out the result. There’s just no benefit in showing scrolling when it knows there’s a bunch more data already in the buffer.
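As a sketch of that “don’t bother scrolling” behaviour (same made-up parse_bytes()/draw_frame() placeholders as in the earlier sketch, nothing taken from an actual emulator): while more output is already queued on the pty, keep parsing and put off presenting.

```c
#include <poll.h>
#include <stdbool.h>
#include <unistd.h>

/* Hypothetical hooks, as in the earlier sketch. */
void parse_bytes(const char *buf, ssize_t len);
void draw_frame(void);

/* Zero-timeout poll: is there more shell output already queued? */
static bool more_pending(int pty_fd)
{
    struct pollfd pfd = { .fd = pty_fd, .events = POLLIN };
    return poll(&pfd, 1, 0) > 0;
}

/* Called when the pty becomes readable.  Intermediate screen states are
 * folded into the grid but never presented, so a multi-megabyte burst
 * produces a handful of frames instead of thousands of scroll steps. */
void on_pty_readable(int pty_fd)
{
    char buf[65536];
    do {
        ssize_t n = read(pty_fd, buf, sizeof buf);
        if (n <= 0)
            break;
        parse_bytes(buf, n);
    } while (more_pending(pty_fd));

    draw_frame();                        /* show only the final state */
}
```

(A real emulator would also cap how long this loop may run, e.g. with the frame budget from the earlier sketch, so that an endless stream like yes still paints something.)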
> I still probably wouldn’t notice if it took several screen refresh cycles before the character appeared. It’s hard for me to imagine anyone doing such a bad job at implementation that I’d notice.
You’d be surprised, but I see it with my eyes in iTerm2 and Alacritty. It’s not only noticeable, it’s extremely irritating.
I’ve been trying to wrap my head around this, that “going to GPU theoretically makes it worse”. What do you mean by that? As long as the frame is complete before the 16.67ms deadline, wouldn’t that be the minimal possible latency on an LCD without tearing (assuming no gsync or freesync or similar)?
In the last few years, quite a few GPU-accelerated terminal emulators have popped up. This puzzles me, to say the least. Over the last two decades I have used xterm, rxvt, gnome-terminal, xfce-terminal, lxterm, putty, terminator, terminal.app, iterm2 and probably a handful more, and not once did I feel that any of them wasn’t immediate. I have tried a couple of these GPU-accelerated ones and I can’t notice anything different from any of the others. The only terminal-specific feature I use is copy and paste, which they all support.
What exactly is the difference in terms of everyday usage? What can you do with these that you couldn’t do 20 years ago?
I am also curious about this! One difference is that we now have 10 times more pixels on the screen, so it might be a case of having to run just to stay in the same place, but I don’t know.
If you’ve used all of those, then have you noticed a significant difference between e.g. xterm and gnome-terminal? It’s most noticeable to me when I type ls -lR in my home directory; terminals based on libvte tend to be much slower than their older counterparts.
I know that’s not useful for measuring single-keystroke input latency, but it is a useful indicator of lag. Some terminals get so backed up reading the stream of incoming data that it can be many seconds before you get your prompt back, even though the command has already finished executing, and your shell is already waiting on your input. On a smaller scale, IME, this is a major contributor to perceived latency.
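A crude way to put a number on that back-pressure, if you want to compare emulators yourself: time how long a fixed amount of output takes to push through. This is only a rough sketch, the 200,000-line count is arbitrary, and it measures bulk throughput rather than keystroke latency:

```c
#include <stdio.h>
#include <time.h>

/* Print a fixed number of lines and report how long the writes took.
 * Once the pty buffer fills, the writer is throttled by how fast the
 * terminal consumes output, so the elapsed time roughly tracks the
 * emulator's bulk throughput. */
int main(void)
{
    const int lines = 200000;            /* arbitrary, adjust to taste */
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < lines; i++)
        printf("%8d  the quick brown fox jumps over the lazy dog\n", i);
    fflush(stdout);
    clock_gettime(CLOCK_MONOTONIC, &end);

    double secs = (double)(end.tv_sec - start.tv_sec)
                + (double)(end.tv_nsec - start.tv_nsec) / 1e9;
    fprintf(stderr, "%d lines in %.2f s (%.0f lines/s)\n",
            lines, secs, lines / secs);
    return 0;
}
```

(It stops timing when the last write is accepted, so any rendering backlog left after the prompt returns, which is exactly the lag described above, still comes on top.)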
I first noticed this when libvte added support for infinite scrolling. The library authors needed somewhere to put the scrollback data, so it was synced to disk (since 2014 it’s also encrypted, which probably doesn’t help performance). You can get significantly better results just implementing your own terminal application using libvte, but not using any of the infinite scrollback features. I’ve done this; it takes fewer than 100 lines of code.
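For anyone curious what that looks like, here is a minimal sketch of such a libvte-based terminal, assuming GTK 3 and the VTE 2.91 API; the /bin/sh choice and the 10,000-line scrollback cap are arbitrary illustrations, not the commenter’s actual code:

```c
/* Build, assuming the VTE 2.91 development package is installed:
 *   cc mini-vte.c $(pkg-config --cflags --libs vte-2.91) -o mini-vte */
#include <gtk/gtk.h>
#include <vte/vte.h>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *term   = vte_terminal_new();

    /* A finite scrollback limit instead of the unlimited (-1) setting
     * the parent comment blames for the slowdown. */
    vte_terminal_set_scrollback_lines(VTE_TERMINAL(term), 10000);

    char *shell[] = { "/bin/sh", NULL };
    vte_terminal_spawn_async(VTE_TERMINAL(term), VTE_PTY_DEFAULT,
                             NULL,               /* inherit working dir */
                             shell, NULL, G_SPAWN_DEFAULT,
                             NULL, NULL, NULL,   /* no child setup */
                             -1, NULL, NULL, NULL);

    g_signal_connect(term, "child-exited", G_CALLBACK(gtk_main_quit), NULL);
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    gtk_container_add(GTK_CONTAINER(window), term);
    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}
```

Running something like ls -lR in this next to a stock VTE-based terminal is an easy way to check whether the difference described here reproduces on your setup.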
My go-to terminal emulator for speed used to be Eterm; I even submitted patches to get it to pass vttest. These days I’ve switched to kitty, which uses the GPU. It’s about twice as fast in the ls -lR test.
I was intrigued by this: according to the author of Kitty, tmux is a bad idea, apparently because it does not care about the arbitrary xterm-protocol extensions Kitty implements. Ostensibly, terminal multiplexing (providing persistence, sharing of sessions over several clients and windows, abstraction over underlying terminals and so on) is either unwarranted and seen as meddling by a middleman, or should be provided by the terminal emulator itself (Kitty being touted as the innovator here). A remarkable standpoint, to say the least.
Because this is something that I completely agree with. I have recently switched to abduco from tmux because I want my terminal to handle being a terminal and the only thing that I wanted from tmux was connection persistence. There are a load of ‘features’ in tmux that really annoy me. It does not forward everything to my terminal and implements its own scrollback which means I can’t cat a file, select it, copy it, and paste it into another terminal connected to a different machine (which is something I do far more often than I probably should).
Yeah, I do not like how some terminal emulators now are leaving everything to tmux/screen, rather than implementing useful features for management, scrollback, etc themselves. For 99% of my cases, I don’t need tmux in addition to my shell and a good terminal emulator, so idk why I’d want to introduce more complexity.
kitty honestly works very well for me, and has Unicode and font features that zutty does not seem to consider. Clearly some work needs to be done for conformance to the tests that the author raises, but for my needs, kitty works great for Unicode coverage and rendering.
> Yeah, I do not like how some terminal emulators now are leaving everything to tmux/screen,
So I think tmux and screen both suck since they don’t pass through to the terminal things like scrollback. Instead of the same mouse wheel or shift+page up, I have to shift gears to C-a [ or whatever it is.
I actually decided to write my own terminal emulator… and my own attach/detach session thing that goes with it. With my custom pass-through features I can actually use them all the same way. If I attach a full screen thing, the shift pageup/down just pass through to the application, meaning it can nest. Among other things. I kinda wonder why the others don’t lobby for similar xterm extensions or something so they can do this too.
I also love how Kitty pretty easily allows you to extend these features with other programs. Instead of Kitty’s default history, I have it enter neovim (with all of my configurations) so that I can navigate and copy my history in the same way that I write my code. I have been using Kitty for a few years and absolutely love it. The only issue I run into on occasion is that SSHing into some servers can mess the terminal up a little.
Same. I never warmed to the “tmux is all you need” approach, because, honestly, it’s just a totally unnecessary interloper in my terminal workflow. I like being able to detach/reattach sessions, but literally everything else about tmux drives me bananas.
> Yeah, latency is something you win on by keeping it simple. And throughput seems irrelevant to me - you can’t read text flying by that fast anyway
Terminal latency is not about reading text. It’s about typing text.
Yeah, I know. The first sentence is about [input] latency being a win. Then I changed the topic to [output] throughput, which I think is irrelevant.
> when I think about how I use a terminal the only two things where I’ll notice speed are input latency and when I print megabytes of plain text
I’ve found that Kitty beats Alacritty in latency at least on macOS: https://thume.ca/2020/05/20/making-a-latency-tester/
Yeah, I’m using Terminal.app because of this; see https://danluu.com/term-latency/ for measurements.
Regarding Alacritty: it also runs on Windows. I find it nice to be able to use the same terminal across Windows, macOS, Linux, and BSD.