Just wondering about the blurred rounded corners. Wouldn’t it be easier to reuse the SDF, scale/clamp it and pass it through erf? If they’re going with an approximation anyway, isn’t that very close to their result? (Or am I missing something obvious?)
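Here's roughly what I mean, as a Python sketch (the rounded-rect SDF is the standard formulation; the names and the erf mapping are my illustration, not Zed's actual shader):

import math

def rounded_rect_sdf(px, py, half_w, half_h, radius):
    # Signed distance from (px, py) to a rounded rectangle centered
    # at the origin: negative inside, positive outside.
    qx = abs(px) - (half_w - radius)
    qy = abs(py) - (half_h - radius)
    outside = math.hypot(max(qx, 0.0), max(qy, 0.0))
    inside = min(max(qx, qy), 0.0)
    return outside + inside - radius

def blurred_coverage(sdf_value, sigma):
    # Pass the signed distance through erf: exact for a Gaussian-blurred
    # straight edge, and only approximate near the rounded corners.
    return 0.5 * (1.0 - math.erf(sdf_value / (sigma * math.sqrt(2.0))))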
I don't want my editor to render at 120 FPS. I want my editor process to be completely idle when it's not receiving any input.

The two things are not mutually exclusive. GPU rendering doesn't require you to re-render the display when nothing has changed. But when things are changing, it might as well look nice. You might, for example, want visually flawless smooth scrolling, which you get if the screen is updated at 120 FPS during scrolling, with low latency on mouse/trackpad input, and you have a fast monitor.
Yes, those things are mutually exclusive. Any thread that is waiting on I/O can miss a frame due to scheduling jitter. For example, consider this linear sequence of events in your hypothetical editor:
events = get_events()
for event in events:
    process_event(event)
render_screen()        # must render the next frame within 8.33 ms
events = get_events()  # <---- this can block for longer than 8.33 ms due to scheduler granularity
The only way to guarantee actual 120 FPS is for get_events() to be non-blocking. At that point your process is spinning at 120 FPS, wasting power. So either your process is rendering at 120 FPS or it's not using any CPU when it's idle.
The editor could update at 120 Hz while some animation is active, presumably having been triggered by some input event. After the animation is done, it could just wait until the next input, or poll at a lower frequency. The app could switch between blocking and non-blocking variants of get_events() based on whether an animation is currently active.
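Concretely, something like this (get_events() with a block flag is hypothetical; real event APIs usually expose separate wait and poll calls):

def run_editor():
    while True:
        if animation_active():
            # Mid-animation: poll so the next frame goes out on time.
            events = get_events(block=False)
        else:
            # Idle: sleep in the OS event queue, 0% CPU until input arrives.
            events = get_events(block=True)
        for event in events:
            process_event(event)
        if events or animation_active():
            render_screen()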
The terminal rarely, if ever, knows if it’s animating. Animations are triggered by external events, not internal timers. Polling at a lower frequency doesn’t help much if the requirement is that it should be using 0% CPU if it’s idle.
In my gamedev experience the blocking operation is generally render_screen(); either that, or it's a callback that the OS invokes as necessary. And AFAIK render_screen() has to happen no matter what, on one level or another. With double-buffering it's easy to do nothing when you have nothing to do; you simply give the OS the same buffer you gave it last frame.
So the event loop is:
while True:
    needs_redraw = False
    events = get_events()
    for event in events:
        needs_redraw |= process_event(event)
    if needs_redraw:
        render_to_backbuffer_within_8_33_ms()
        swap_buffers()
    render_current_buffer()  # blocks until the OS says it needs another frame to draw
You can make event handling run async or in its own thread and do its own state updates, but from the render thread’s perspective that’s just a more complicated way of making it a queue that is polled every frame.
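i.e., from the render thread's side it looks something like this (a sketch; read_next_os_event() stands in for whatever blocking input call the platform provides):

import queue
import threading

event_queue = queue.Queue()

def input_thread():
    # Blocks in the OS and pushes events into the queue as they arrive.
    while True:
        event_queue.put(read_next_os_event())  # hypothetical platform call

def drain_events():
    # What the render thread actually does: poll the queue once per frame.
    events = []
    while True:
        try:
            events.append(event_queue.get_nowait())
        except queue.Empty:
            return events

threading.Thread(target=input_thread, daemon=True).start()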
In gamedev, the OS gives special scheduling privileges to the game and knows to switch back to the process immediately after the swap_buffers() call (once VSYNC completes, if VSYNC is enabled). Fullscreen game windows are treated as realtime applications, but that only works in this particular special case, not for general GUI windows. Microsoft engineered Windows this way because of the issues I have raised.
So… what you're saying is that if you design a GUI app this way, you'll have the OS helping you hit your latency targets, using an API designed to be efficient and low-overhead?

Your comment shows that you didn't fully understand my previous comment, so I can't really respond in a substantive way.
I found the 3D approach to building a text editor very compelling right up to what is probably the most important feature of an editor: text rendering. I've been down the SDF road for rendering text in a 3D context. It has some tradeoffs.

The most significant tradeoff is losing accessibility and other features that come with OS-native text. Looking at the CVs of Zed's authors, I'll give them the benefit of assuming they have a plan for this.
Artifacts are another big problem. SDF text can look “chipped.” That can be mitigated somewhat by using a multichannel SDF, but it’s still an inherently lossy representation of the glyphs’ underlying vector representations.
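For reference, the multichannel reconstruction is just a per-pixel median of the three channels, which is what lets it keep corners sharp where a single-channel SDF rounds them off (the standard msdfgen-style decode, sketched in Python rather than shader code):

def msdf_distance(r, g, b):
    # median(r, g, b): matches the plain SDF along edges, but preserves
    # corners where two edge distance fields cross.
    return max(min(r, g), min(max(r, g), b))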
Then there's the size of the font atlas. It's all well and good to claim 120 FPS with an atlas that holds only Latin alphanumerics and some limited math symbols. As you add glyphs, the texture grows. That's compounded by the temptation to increase the atlas resolution to mitigate the artifacts. In my experience, large textures tend to hog memory and bog down the GPU, but perhaps my domain (WebGL) differs from theirs.
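A back-of-envelope illustration of how fast that grows (the 64 px cell size and naive square grid packing are my assumptions):

import math

def atlas_side_px(glyph_count, cell_px):
    # Smallest square atlas (in pixels) that holds glyph_count cells
    # of cell_px x cell_px texels under simple grid packing.
    return math.ceil(math.sqrt(glyph_count)) * cell_px

print(atlas_side_px(100, 64))    # ~100 Latin glyphs -> 640 px square
print(atlas_side_px(21000, 64))  # ~21,000 CJK ideographs -> 9280 px square,
                                 # roughly 345 MB at 4 bytes per texel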
I wish the Zed team every success, but I’ll be looking hard at how Zed deals with SDF text rendering tradeoffs when it’s released. Maybe I’m not Zed’s target audience. I’m not as sensitive as others to plain text rendering performance. My performance beef is with certain very sluggish language servers, which no editor can fix.
Following up on this, I tried Zed’s beta out today and was pleasantly surprised by the text rendering quality. I didn’t see any examples of artifacts, even blowing the text up to 400%. Surprisingly, the default font has code ligatures that render. Emoji characters render as well. I’m curious how they implement those features. On the downside, code ligatures in anything but the default font don’t seem to work. Also, as I suspected, OS native text features are largely missing. One notable exception is that macOS’ speak-selection shortcut does work.
> losing accessibility and other features that come with OS-native text.
People keep talking about this, but there must be some way to hook your custom-rendered text into the OS APIs that provide this sort of functionality, right? ……..right?
This is super interesting! I wonder if this project is going to be open-sourced at some point, and if it is, whether the text renderer could be made a standalone component. It would be great to have a fast, high-quality rendering engine so that people could layer their own UIs on top and experiment with new styles and types of editors.
The FAQ says that the core will be open-sourced, and that network-based plugins will be closed source and monetized.
The same people previously created Atom, Electron and Tree-sitter.
The tech sounds similar to what Raph Levien has been working on for a number of years (in open-source Rust): the Xi editor, the Xilem UI library, and the approach to GPU rendering of 2D graphics and text.