1. 43

  2. 17

    Multi-head support on Linux has been terrible for a long time. What’s really aggravating is that for a while in the mid-2000s it was actually really great. Around the time that LCD monitors began to take hold, CRTs started getting downright cheap, and any “serious” workstation had two or more big chunky CRTs sitting side by side. Not my battlestation, but here’s the evidence: https://www.linux-user.de/ausgabe/2001/12/044-dual/dual.jpg Eventually multiple 17” or 19” 4:3 LCDs replaced those.

    Both KDE and GNOME 2 handled multiple displays very well, even better than Windows and Mac at the time. You could hot-plug a monitor into your system and your desktop would magically expand to fill it. If you ran some applications on that second monitor and then disconnected it, the windows would automatically move back to the first. Reconnect the monitor and the windows moved right back to where they were. And all of this worked great even when you added and removed displays while the system was suspended.

    However, eventually a thing happened. Two things, really. 1) Widescreen LCD displays started getting cheap. (Why bother with the fuss of two monitors when you could get almost the same horizontal resolution in one?) 2) Users and developers flocked to portable laptops instead of big powerful desk-bound workstations. From my observations working in this field, most developers work directly on their laptop with no external screen. A good percentage of those work in what I call “iPad mode,” where each application they run is maximized full screen and, instead of dealing with moving windows around, they just switch from app to app. Or, when they do plug into a screen, they close the laptop lid and just use the one screen.

    I feel that as a result of these cultural changes, multi-head Linux desktop configurations seem to have faltered. My workflow involves spending most of my time in dual-head mode on my laptop: the laptop screen plus an external monitor through a dock. When I need to go to a meeting, I undock the laptop and need the desktop to do the right thing, and the same when I come back and dock it again.

    KDE ostensibly supports multi-head configurations but last I checked it was a little buggy and not as flexible as I’d like. XFCE’s implementation has been buggy and annoying for years, although they do keep trying to improve it. Right now GNOME is the only one that seems to get it completely right, or at least right enough for me. (Which is annoying because I don’t really like GNOME that much!)

    1. 4

      In the 10+ years I’ve been running multi-head Linux (usually two monitors, sometimes three, with the third being the laptop screen) I’ve had no big issues with it. Definitely not the issues I see colleagues having with Windows or OS X, which are generally hard to debug: it either works or it doesn’t on those OSes.

      However, I run a niche distro (Void Linux, Debian before that) and do not use desktop environments: I’ve always been on i3, StumpWM or EXWM and use xrandr to configure my monitors.

      I do realize this lacks the ease of use you might be looking for, but it’s very Linux ;-)

      1. 1

        Same; I haven’t seen any multi-head issues since ~2005 on Debian.

        1. 1

          use xrandr to configure my monitors.

          If you ever want a graphical frontend to xrandr, I do suggest arandr. It’s packaged too :) and can emit shell scripts that reapply the current screen configuration, for automation purposes.
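
          In case it’s useful, the scripts arandr saves are roughly just a one-shot xrandr call that you can bind to a hotkey or run from an autostart hook. A minimal sketch, assuming a laptop panel named eDP-1 and an external monitor named HDMI-1 (check xrandr --query for your real output names):

              #!/bin/sh
              # Re-apply a two-monitor layout: laptop panel on the left, external 1440p on the right.
              # Output names are assumptions -- adjust to whatever xrandr --query reports.
              xrandr \
                --output eDP-1 --mode 1920x1080 --pos 0x0 --rotate normal \
                --output HDMI-1 --mode 2560x1440 --pos 1920x0 --rotate normal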

        2. 3

          Some anecdata, and an idea: I use Cinnamon and have not had problems with restoring external screen state in years, and I assume that is partly or mostly because of utilities shared with, or borrowed from, GNOME.

          If you dislike GNOME (or Cinnamon for that matter), but this feature is very important to you, you might want to check other GNOME-adjacent projects such as MATE or Pantheon.

          1. 1

            I used MATE on and off for many years after GNOME 2 was abandoned. I would like to keep using it, but it seems like the developers have too much on their plate just keeping up with GTK and other dependencies constantly changing out from underneath them. As a result, it seems to be getting increasingly broken for me, unfortunately. I wish I could contribute to the project somehow, but desktop development is not in my wheelhouse.

            I used Cinnamon for a while years ago but haven’t tried it lately. I’ll have to give it a fair shake again. Thanks for the suggestion. I always seem to forget about it.

            Right now I’m seeing if I can acclimate to the Ubuntu (GNOME 3) desktop with some tweaks. So far it’s quite stable. The dash-to-panel extension and the arc menu extension make it pretty close to palatable (for me) from a UI standpoint, but we’ll see if that holds in the long term.

            1. 1

              I feel like I was in a similar situation for a long time. I used Fluxbox and then Openbox for years, did the tiling thing for a bit, but ultimately decided I wanted a more traditional DE. I ended up ping-ponging between them all, never quite satisfied.

              Cinnamon passed the “works for me” threshold for me in 2016 as a function of several factors, but a lot of it seems to coincide with the Linux Mint team giving up on matching every Ubuntu release. They began basing their releases on the LTS and were thus able to focus their resources on squashing lower-tier bugs, fixing experience-defining annoyances, improving UI response times, lowering memory usage, etc., instead of forever chasing this-or-that compatibility.

          2. 1

            I use KDE with three monitors right now, have for many years, and haven’t noticed any relevant bugs at all. What bugs bother you?

            1. 1

              Most widescreen displays don’t really have more horizontal resolution. The newest ones are (roughly) half-height UHD displays: 3840 pixels across, but only 1200 pixels down.

              https://www.amazon.com/Samsung-LC43J890DKNXZA-CHG90-Curved-Monitor/dp/B07CT1T7HH

              A proper UHD (“4k”) screen is 3840 x 2160. So the ultrawide has the same horizontal resolution as a non-ultrawide UHD screen, but less vertical resolution!
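
              For concreteness: 3840 × 1200 is about 4.6 million pixels, while 3840 × 2160 is about 8.3 million, so the regular UHD panel has roughly 1.8× the pixels of the half-height ultrawide, with all of the difference in the vertical direction.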

            2. 7

              True, but there’s a lot more that doesn’t work or works weirdly on Linux. It’s not the way it’s supposed to be, but it’s how it is. Windows and Mac get you a standard, decent GUI for most things. Linux is much more customizable but you often need a lot of work to get basic things working. That’s not ideal, but in practice, that’s the tradeoff.

              Example: I recently installed MX Linux on my desktop, which has an HDMI monitor and a soundcard. The volume slider doesn’t do anything and the brightness of my monitor is not adjustable via a graphical tool (although it can be done with a bash script).
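
              In case it helps anyone in the same spot, the two usual script-level approaches for an external monitor are DDC/CI (real backlight control, if the monitor and cable support it) and a gamma-based fake via xrandr. A rough sketch, assuming the ddcutil package is installed and the output is called HDMI-1 (both are assumptions):

                  #!/bin/sh
                  # Real brightness control over DDC/CI; VCP feature code 10 is brightness.
                  # Requires ddcutil, i2c access, and a monitor that supports DDC/CI.
                  ddcutil setvcp 10 70

                  # Software-only fallback: scales the output's gamma rather than the backlight.
                  # HDMI-1 is a placeholder; check xrandr --query for the real output name.
                  xrandr --output HDMI-1 --brightness 0.7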

              1. 6

                Windows and Mac get you a standard, decent GUI for most things. Linux is much more customizable but you often need a lot of work to get basic things working.

                I recently installed MX Linux on my desktop

                If you use a more mainstream distribution, you will have fewer issues. If you use the distribution chosen by your hardware manufacturer, you will have basically zero issues (e.g. Pop on System76 machines, Manjaro on Manjaro-branded laptops). This is the #1 misconception regarding free desktops in my experience and I think it’s quite important not to perpetuate it.

                It’s basically true that “more customized” = “more weirdness”, but a lot of desktop Linux stuff is far, far to the left on that scale.

                1. 4

                  If you use a more mainstream distribution, you will have fewer issues.

                  I don’t like this argument. If he had said he was using Ubuntu there would be a whole bunch of linux users claiming he should be using a “real” distro.

                  If you use the distribution chosen by your hardware manufacturer, you will have basically zero issues (e.g. Pop on System76 machines, Manjaro on Manjaro-branded laptops).

                  So what you’re saying is that linux works ok if used on machines specifically made for it?

                  1. 2

                    I don’t like this argument. If he had said he was using Ubuntu there would be a whole bunch of linux users claiming he should be using a “real” distro.

                    I agree that that is reprehensible behaviour. The fact that some other people do reprehensible things like that doesn’t affect the validity of my argument.

                    So what you’re saying is that linux works ok if used on machines specifically made for it?

                    Much like all other popular operating systems, yes. Free desktops (including the BSDs!) are the only OSes which really have the unenviable task of running on hardware that is actively hostile to them. The exception is Windows on Apple hardware, which last I checked was doing much worse than, e.g., Linux on Lenovo hardware, and requires either an expensive vendor support package or lots of tinkering.

                    1. 3

                      Free desktops (including the BSDs!) are the only OSes which really have the unenviable task of running on hardware that is actively hostile to them

                      To some extent, this is also true for Windows. People expect to run Windows on every computer, hardware vendors produce drivers of terrible quality, and Microsoft has to deal with the fallout. I guess this is the reason Windows ships a lot of Microsoft-provided drivers by default.

                      The other side of the coin is that Intel and AMD actively develop drivers for Linux and contribute to the X.org and Wayland ecosystems, and we still have all the problems that we have. I am using GNOME, because so far it has dealt best with Wayland, HiDPI screens, and connecting additional screens. But man, the desktop experience is broken. Many applications run invisibly in the background (Spotify, Skype, Dropbox) since GNOME has decided that tray icons should be killed with fire [1]. GNOME becomes extremely laggy after a few days without a restart (e.g. there is a noticeable latency when I am typing this). None of the browsers support accelerated video playback out of the box (it’s supported by the drivers/libraries, but I guess having two APIs for video acceleration does not help). Etc. etc.

                      For me the biggest problem is that we (as the FLOSS Unix community) haven’t made much progress in the last 10-15 years. Of course, there has been a lot of awesome work in the form of Wayland, Vulkan, HiDPI support, etc. But the delta between Linux and Windows/macOS now is not really smaller than the delta between Linux and Windows/macOS in, say, 2005. The Linux desktop is still a thousand paper cuts that some of us can live with because there are other attractions.

                      [1] Sure, there is an extension, but it often does not render the menus correctly and introduces noticeably more gnome-shell crashes.

                      1. 1

                        I am using GNOME, because so far it has dealt best with Wayland, HiDPI screens, and connecting additional screens. But man, the desktop experience is broken. Many applications run invisibly in the background (Spotify, Skype, Dropbox) since GNOME has decided that tray icons should be killed with fire [1].

                        Yeah, I agree that this is a terrible decision, on par with MS’s decision to totally break the Start Menu user experience (which is what eventually pushed me completely away from Windows). In fact I’m considering going to KDE for this reason.

                        But, in a lot of ways, that’s the point, for me. I have that choice. On Windows, I have no choice but to use the new Explorer and Start Menu experience, which I really dislike.

                        For me the biggest problem is that we (as the FLOSS Unix community) haven’t made much progress in the last 10-15 years. Of course, there has been a lot of awesome work in the form of Wayland, Vulkan, HiDPI support, etc. But the delta between Linux and Windows/macOS now is not really smaller than the delta between Linux and Windows/macOS in, say, 2005. The Linux desktop is still a thousand paper cuts that some of us can live with because there are other attractions.

                        This does not comport with my experience; I started using free desktops around 2012 and, for me, even in just those 8 short years the difference is night and day. 99% of the time, my stuff “just works”.

                2. 5

                  That used to be fun when I was younger; those issues were challenges that I loved to solve, and I learned a lot while solving them. But now, after some years, those issues are starting to look more like obstacles than challenges.

                3. 7

                  I run X, Slack, and Zoom. I screen share all the time to pair program. The limiting factor for me isn’t Linux; it’s my DSL Internet. Single monitor, i3, Arch Linux, FWIW.

                  Wayland is cool… but it’s ‘Next Gen’; stuff’s going to be broken there for some time.

                  1. 5

                    Maybe not ideal, but isn’t TeamViewer a simple viable option?

                    1. 8

                      Also Google Hangouts works under Wayland.

                      1. 7

                        Zoom (not a recommendation really, just what we use at the company) also does screen-sharing with Wayland. If you’re not using the Flatpak packaged version that is.

                        1. 3

                          Also been using Zoom at two different workplaces for years now on Linux. Works well enough.

                          (This is with Void Linux and using EXWM/StumpWM/i3, so no ‘standard’ Linux.)

                          edit: this is not totally true: Zoom is kinda wonky with tiling window managers, or at least with StumpWM and EXWM.

                          1. 2

                            Zoom works fine on X too as far as I’ve seen.

                            1. 1

                              I had to use Zoom for my last job and it was… finicky on my Linux box. It’d often work just fine, but it was also liable to pop up a full-screen black window when I attempted to screen share, meaning both me and the other people on the call would only see black.

                              I hypothesized it was trying to do a transparent window to capture clicks or something, and since I don’t run a compositor (imo translucent windows are a totally pointless wtf), maybe it ended up black instead.

                              Except it wouldn’t always do it. So I don’t know.

                              I also had a weird experience with my multiple monitors but again it would sometimes just work.

                              What I ended up doing most of the time was, if I knew I had to present on the call, I’d just get out my Windows laptop and do it from there.


                              And before the company switched to Zoom (and also sometimes after, lol, it depended on who was organizing the meeting) it was all about Google Hangouts. The screen share there for a while didn’t work at all, then Google changed from the plugin to the HTML5 thingy and it actually worked pretty reliably for a while… then it just quit working again later, iirc around the time they rebranded from “Hangouts” to “Meet,” but that could be coincidence since I updated Firefox and a few libs too. So I don’t know if it was something I did, or something Mozilla or Google did, that broke it. But again, I would tend to just take calls from Windows when I knew ahead of time I had to present, just to avoid the risk of the embarrassing delay of it not working.

                              1. 1

                                Compositing isn’t just about clever transparency effects, it actually provides a really nice performance benefit by changing the way that windows are drawn. In a non-composited flow, there’s basically one buffer that all windows draw to, and that buffer is displayed directly on the screen. Whenever you raise a window, it needs to paint itself ASAP to that buffer so that you can see it on the screen. Whenever you move a window, any windows under it need to redraw themselves, because parts of them that were covered up (and weren’t painted before) are now visible. The result is a bunch of processing, a bunch of context switches, and often a bunch of flickering or sluggishness.

                                But with compositing, every window gets its own buffer to draw to, without interference from everything else. The compositor just tells the graphics card how to make them appear on screen, and they do. Yes, this enables effects like transparency, and blur, and window previews in task switchers, but it also means that apps only need to redraw themselves when they actually want to display some new content. There’s no rush to repaint whenever you raise or drag a window, because everything is already there; the compositor just has to update the display list a little bit. The result is a system that does less work and feels a whole lot snappier.

                                1. 1

                                  Those performance benefits can be surpassed by older techniques, though the result is slightly different. My system is set up like it is 1995: there are no animated effects, and window moves/resizes only appear as an outline until you release the click, at which point the move/resize is completed. As a result, there’s simply very little work to do at all. For the bulk of the operation, it is just a small rectangle outline getting XORed across the existing memory. Since XOR is reversible, when it is time to draw the next frame, you just XOR the old one again, undoing it, and then XOR the new one. No application repaint or extra buffer required.

                                  At the end, when the exposed windows do need to be redrawn, it is just one operation and there’s not a huge rush to do it (if it can’t be done in one frame, meh; since it isn’t animated it isn’t that bad). And there’s very little to do: the windowing system calculates the specific rectangles of the specific windows that need repainting, which can be done very quickly too. (Or else the application maintains its own double buffering, and it reduces to the same operation the compositor would be doing. In fact I suspect many programs end up triple buffering, because they still do their own thing AND the compositor keeps a copy!)

                              2. 1

                                I use both Zoom and https://meet.jit.si on xorg; Zoom used to crash and sometimes hide its controls, but some time in the last year or so they’ve gotten it to be pretty reliable. Jitsi has a few more dropped calls, but other than that it is more reliable in the UI and sharing aspects.

                            2. 3

                              It depends on what compositor you’re using. GNOME and KDE seem to work fine, but I couldn’t get any browser-based display capture working on sway, even with the xdg-portal-wlroots package.

                              1. 1

                                Are you sure? https://bbs.archlinux.org/viewtopic.php?id=249421

                                Seems like there is a project called PipeWire that’s supposed to bring screensharing to Wayland.
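
                                That matches my understanding: on Wayland the browser doesn’t grab the screen directly, it asks xdg-desktop-portal, which hands the capture off over PipeWire, so all of those pieces have to be installed and running. A rough sanity check, with the caveat that package and service names vary by distro:

                                    # Is PipeWire running?
                                    pgrep -a pipewire

                                    # Is the portal (plus a compositor-specific backend: wlr, GNOME, KDE) running?
                                    pgrep -a xdg-desktop-portal

                                    # Portals pick their backend based on this; under sway it should normally say "sway".
                                    echo "$XDG_CURRENT_DESKTOP"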

                            3. 3

                              I don’t often screenshare on Linux, but when I do, I’ve found the average WebRTC video chat service has functional screensharing.

                              1. 2

                                Do they do separation of multiple monitors? I test Jitsi Meet in this post.

                                1. 2

                                  Ah, admittedly, I haven’t tried that. Only shared a laptop, and a pre-multi-monitor desktop. (I realize that you said you were lamenting multi-monitor not working…)

                                2. 2

                                  I’ve read that there’s basically only one or two WebRTC implementations, and all the various screenshare/video-call programs are just different wrappers around the same tech.

                                3. 3

                                  I certainly agree that the screen sharing scene on Linux isn’t ideal, but there’s a lot more there than the author lists in their writeup.

                                  I use the public testing build of Discord, and the screen sharing scene is already waaay better than it was a month ago, and it wasn’t very long before that they launched it in the first place. They allow you to select individual windows to share, I’ve gotten no random crashes even after using it pretty regularly over the past couple of weeks, and the only remaining issue is that sound sharing doesn’t work. Given the monstrous complexity of just one of the sound servers available for Linux (as detailed by the writeup posted to Lobsters over last weekend), I really can’t blame them.

                                  Linux is so low-priority for Discord

                                  The fact that Linux is supported as a first-class platform is impressive, tbh, given the still minuscule share of users that Linux represents. As much as devs love to rip on Electron, it makes this sort of thing possible for the vast majority of languages and frameworks where cross-compatibility doesn’t “just work.”

                                  To watch my screen, viewers have to open a video client, and point it at my RTMP URL

                                  This isn’t the only way for that to work. You can create a web page that connects to the server and plays the RTMP stream. There will certainly be some configuration to do, but just googling “HTML5 RTMP player” should be a good start. I’ve set this up myself on more than one occasion.
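
                                  One wrinkle worth noting: browsers can’t play raw RTMP natively anymore (that was a Flash-era thing), so the usual approach is to repackage the RTMP feed as HLS and point an HTML5 player (hls.js, video.js, etc.) at the playlist, at the cost of a few seconds of latency. A hedged sketch with ffmpeg; the URL and output path are placeholders:

                                      # Pull the RTMP stream and repackage it as HLS segments without re-encoding,
                                      # so the CPU cost stays minimal. Serve the output directory over plain HTTP.
                                      ffmpeg -i rtmp://example.local/live/mystream \
                                        -c copy -f hls -hls_time 2 -hls_list_size 6 -hls_flags delete_segments \
                                        /var/www/html/stream/index.m3u8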

                                  One last point is that the author doesn’t mention Teamviewer. I use that program regularly both connecting from and to Linux desktops and between different operating systems with ease. It’s free for private non-commercial use, it works pretty well, and it’s low-latency enough to support real-time collaboration and remote control.

                                  That all being said, I’d certainly love to see more complete and fine-grained support for screen sharing. There’s a ton of room for improvement, but there are a lot of working solutions to build off of as well.

                                  1. 2

                                    I’ve always thought that, even if it cannot detect a screen, these apps usually can display a single window perfectly. What about creating a fully transparent window before sharing the screen, and sharing it instead? Would the transparency work when screensharing?

                                    If it does, that would actually be even better than sharing the whole screen, because you could share only a portion of your screen, change screens, …

                                    This would require a compositor running, though; that could be the only drawback. But it is 2020, right? ☺

                                    1. 2

                                      I use i3wm, with multiple monitors. Google Meet or Bluejeans works fine for screen sharing with my setup.

                                      1. 2

                                        This solution isn’t quite ready yet, but we are targeting the exact use case of sharing application windows, cross-platform and with remote-control capabilities: https://www.coscreen.co. If you are interested, please consider subscribing; we are currently working on acquiring funding and plan on opening up our beta test later this year.

                                        1. 2

                                          I’m surprised they didn’t cover VNC over SSH; that’s usually worked ok for sharing with one user.

                                          1. 2

                                            it’s not hard to imagine that a vast portion of developers have multiple monitors

                                            Is this really the case? I would be much more surprised to find out that most developers don’t work on smaller to mid-sized laptops. We have a few double-monitor computers at university, and while I get that it is cool, I don’t feel much more or less productive than with virtual workspaces.

                                            1. 1

                                              I gave up on multi-head setups a long time ago, and I don’t know any multi-head users personally anymore either, on any OS. I have keyboard shortcuts for switching virtual desktops for development; that’s all the “multiple screens” setup I need.

                                              1. 1

                                                I don’t know how you guys do it. I’ve got 3 displays and I’m constantly wishing I had more.

                                                1. 1

                                                  I tried the multihead thing in the past, but found that 1) the workflow would always change when I switched back to single display (e.g. undock laptop), and 2) I could basically do everything I could do on multiple displays using a tiling WM with multiple workspaces and the ability to move windows to the same workspace for side-by-side viewing. Sure, I don’t get the super wide resolution, but tbh it’s pretty rare that I am doing something where I feel constrained by that.

                                                  To each their own.

                                            2. 1

                                              You should try Firefox instead of Chromium. For me, Firefox’s WebRTC support lets you select individual screens. However, I am also now using Zoom, which just works and is a moderate CPU hog.

                                              1. 2

                                                Article:

                                                Both Firefox and Chrome suffer from the same issue when trying to screenshare from a multi-head setup on Linux. They do not have the XRandR integration that allows you to choose an individual monitor, despite having this same functionality on other operating systems.

                                                1. 1

                                                  Firefox requires pulseaudio, which on my machine means that the audio will randomly cut out 2-3 times a week. I hate using Chromium, but I keep it around specifically for this reason.

                                                  1. 1

                                                    I use sndio on my system. When Firefox is not emitting sound, a “pkill pulseaudio” fixes it right up.