1. 37
  1. 13

    I had a good chuckle with Avery about this on Twitter — it’s just wild to me that someone can put in all of this time and energy into auditioning, analyzing, and optimizing all of these video conferencing tools, and get so deep into the issue of latency, but when it comes to lighting and sound quality, just shrug and say

    We didn’t get all fancy with green screens and pro-quality microphones and all that stuff that other people talk about. Maybe it would be better, I don’t know, but it definitely sounds like too much work . . .

    Different strokes for different folks, I guess.

    (For the record: yes a high-quality camera, good lighting, and a reasonable microphone make an enormous, night-and-day difference and should be mandatory for every employee at a fully-remote company!)

    1. 5

      My readthrough gave me the impression that the author cared primarily about latency and reliability. From my perspective:

      • Bad latency: makes it hard to have conversations, gets very annoying.
      • Bad reliability: the video link doesn’t work at all, or drops out continuously.
      • Lighting and mic: worth considering if you are lucky enough to have reliably functional and reasonable latency vidconf in the first place :D

      I’m in Australia with businesses running off ADSL (twisted phone lines) in rainy weather. I’d be happy for any of the vidconf stuff I’ve tried to even begin to slightly work; or at least provide some form of error/feedback when they don’t work (rather than just doing nothing >:( ). Methinks the upwards bandwidth at many of my sites is an order of magnitude below what most vidconf assumes to exist.

      i.e. I think both your and the author’s perspectives are valid; there are many environments out there.

      1. 2

        Latency and A/V quality are both extremely important to having good conference calls. My point is that A/V quality is, I think, substantially more important — a good mic in particular can even mitigate latency problems to some degree — and, crucially, unlike latency, the participant has complete control over the microphone, camera, and lighting they use.

        1. 2

          Bandwidth requirements vary. In particular I’ve seen something that assumed n Mbps upstream, where n is the number of participants.

          However, I’ve also used video conferencing on a symmetric 1Mbps link. Effectively. That is, without anyone struggling to understand or even remarking on poor quality.

          If you have a problem and think it’s about bandwidth, I suggest looking for middleboxes that drop some packets or delay them for a long time (a quarter-second is a long time). Dropping ICMP or spending most of the bandwidth on sending packets that have to be discarded on arrival are sure ways to get a poor result.
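          To make the “n Mbps upstream for n participants” assumption concrete, here is a rough sanity check. The per-stream bitrates and the `upstream_kbps` helper are illustrative assumptions of mine, not any vendor’s published numbers: in a peer-to-peer mesh call each participant uploads a copy of their stream to every other peer, while a server-mediated (SFU/MCU) call needs only one upstream copy.

```python
# Rough upstream-bandwidth sanity check for a video call.
# All bitrates are illustrative assumptions, not any vendor's spec.

def upstream_kbps(participants, video_kbps=600, audio_kbps=40, mesh=True):
    """Estimate the upstream bandwidth one participant needs.

    mesh=True models peer-to-peer (send a copy to every peer), which
    is where the 'n Mbps upstream for n participants' rule comes from;
    mesh=False models a server (SFU/MCU) needing one upstream copy.
    """
    per_stream = video_kbps + audio_kbps
    copies = participants - 1 if mesh else 1
    return per_stream * copies

# A 5-way mesh call: each peer uploads 4 copies of its stream.
print(upstream_kbps(5))              # 2560 kbps -- over 2.5 Mbps up
# The same call through a server needs only one upstream copy.
print(upstream_kbps(5, mesh=False))  # 640 kbps
```

          On a symmetric 1 Mbps link the mesh numbers are hopeless, but the single-copy server model fits, which is consistent with the 1 Mbps experience above.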

        2. 3

          I agree entirely.

          FYI I’ve worked from home for many years, and in my experience the three most important factors w.r.t. video conferences are good speakers, proper camera/microphone placement, and, if I need to type during conferences, keyboard noise that can coexist with the call. Latency etc. isn’t even a top-ten factor.

          I ended up spending perhaps more than necessary on Genelec speakers and using velcro tape to fix a USB camera to the wall in the best spot. And (less importantly) I got a separate screen so I can keep the conference open eight hours per day when that makes sense, without intruding on emacs. With the camera/microphone in that spot my voice is clearly but not unpleasantly audible, my key clicks don’t sound like hammers, and the camera focuses attention on my eyes and forehead, not my unshaven chin. One wants to convey a professional appearance.

          (EDIT: Perhaps it’s weirdly imbalanced to have spent several hundred euro on speakers and 3cm of velcro tape on the camera. When I tried something that worked well, I stopped trying. Velcro is great, all hail velcro. But #$@!$#@, a microphone that’s acoustically coupled to the keyboard is HORRIBLE)

          1. 2

            Good microphones are so vital. We have a challenging space in the office – it’s essentially a large warehouse floor; cavernous and echoes abound. Instead of trying to use a central pedestal microphone amongst the couches, which was awful, I got four cardioid shotgun microphones on tall stands and positioned one within 4-6 feet of face height on each couch. They’re also all pointed away from the speakers, and have good rear rejection. They’re fed into a quad channel USB mixer with XLR inputs and phantom power, so Zoom can adjust levels on each independently. The difference is pretty amazing. For the most part you really feel like you’re in the room.

            1. 1

              My comment yesterday was bad, sorry. Imbalanced.

              What I perhaps should have said was: Microphones are essential. Speakers can compensate for bad microphones, to some degree. But fixing the microphones is best.

              When colleagues wouldn’t use decent microphones (the problem was occasional for most people, nearly constant for me) getting €600 speakers solved that for me.

          2. 9

            Webex: included only for completeness, because it’s a total tire fire. Nobody who has honestly reconsidered their conferencing system in 10+ years would choose it. If you have a subscription to Webex, cancel it right now. Your employees, suppliers, and customers will hold a festival in your honour.

            I once did a remote workshop over WebEx and had to share something. When I clicked the “share file” button, my computer crashed. After rebooting, I emailed it to someone else and had him share it. When he did, his computer crashed.

            We were on different operating systems.

            1. 15

              That’s impressive cross-platform consistency!

            2. 3

              I don’t understand the obsession with videoconferencing in all these remote work discussions. It seems like people want to replicate normal work environments remotely, which just seems to defeat the whole purpose. It’s a bit like when the dominant UI style on phones was skeuomorphic: the notes app looked like a little notebook, etc. It’s good for familiarity, I guess, but it’s not really taking advantage of the benefits of the different situation/platform/etc while still getting the downsides.

              To me the biggest advantage of remote work is surely that you don’t have to sit in synchronous face-to-face meetings. With a ‘real’ office you get the advantage of face-to-face meetings being easy, whether they’re informal or formal. So you take advantage of that by synchronising everyone’s work hours (or mostly synchronised with a bit of flexibility, perhaps).

              When working remotely, formal face-to-face meetings are hard and informal face-to-face meetings don’t exist. So why bother? Abandon the whole notion of synchronised work hours and face-to-face meetings and take advantage of the advantages of remote work, like being able to work entirely offline and distraction-free at your own pace in your own time. As long as you get the assigned work done, why should your boss care if you do it at 3am in your underwear while watching Netflix?

              If I were working remotely I would want to work my own hours, at my own pace, and communicate via email with the rest of the people I was working with. When I need their help, I send them an email. They’ll respond within 24 hours (and probably much sooner) unless it’s the weekend, and in the meantime I’d get on with some other work.

              Obviously this is a bit different if you’re doing something like a system administration role where you’re expected to monitor the status of a system during your assigned hours, but I’m talking about development roles.

              1. 5

                This unfortunately only holds in a constrained setting. If you are doing assigned tasks, then sure, you can work down your inbox. If you are in one of the more meta business roles (what are we doing, where are we going, what are our concerns?) then you need a face-to-face. One in-person meeting can set the stage for an entire year of tasks as you describe them, but it could take ages to manage over email what can only really be achieved in person.

                1. 5

                  Having a discussion to make a tricky but important decision - fairly common during software development in my experience - takes far longer over email (async) than via a meeting (synchronous).

                  1. 1

                    The Linux kernel development process seems to work a lot better than most commercial development processes. Perhaps it’s actually better for people to have a good long time to think between their messages rather than trying to think things through in depth ‘live’.

                    1. 4

                      The Linux kernel development process seems to work a lot better than most commercial development processes. Perhaps it’s actually better for people to have a good long time to think between their messages rather than trying to think things through in depth ‘live’.

                      The Linux kernel is not bound by the kind of market pressure that dictates velocity and decision-making in most commercial organizations.

                      1. 1

                        A huge part of the Linux kernel’s development is done by commercial organisations that experience market pressure. The Linux kernel changes at a rate almost unprecedented for software projects, the sheer volume of commits in each release is huge and it just keeps growing. So I don’t know about that really.

                        How many decisions are made on a daily basis that really couldn’t wait a day? There’s not really any less throughput on decisions, it’s just decision latency that’s affected. If you have enough other work to be doing it doesn’t affect your throughput for one task to be put on the back burner for a day. Latency only affects throughput when you have low concurrency, or if task switching is high overhead.

                        1. 2

                          How many decisions are made on a daily basis that really couldn’t wait a day?

                          I mean, have you worked in a high-efficacy, market-driven organization? Tons. It’s never the single decision, but rather that every decision exists in an extremely long chain of decisions, each dependent on the previous. The communication praxis of these organizations is making those decisions with imperfect information, getting agreement among stakeholders, and moving to the next synchronization point. Sometimes you do like dozens of these in a single meeting. If every step cost a day of latency things would break totally.

                          The total transparency and async processes of GitLab provide a great real-life example of what I mean. Almost every decision-making thread you see on the GitLab internal boards (or whatever they’re called) tend to span multiple days, weeks, sometimes months, with tons of input from tons of people who aren’t direct stakeholders. In a high-efficacy on-site organization these threads would probably be single half-hour meetings with a handful of people and be done. Is the GitLab process better, by some metrics? Probably, surely, yes. Is GitLab slower in the market because of the constraints imposed by that process? Without a doubt.

                          1. 1

                            Sure, you make the decisions faster, but do you actually make them better? Almost certainly not. You make them much worse because you don’t have any time to really sit down and THINK about the consequences.

                            1. 1

                              And very often, in these environments, faster is better than better.

                      2. 3

                        Better by what metric?

                        1. 2

                          Quality of code. The Linux kernel is a well-maintained codebase with little legacy code. There’s a strong culture of removing and replacing outdated code, and of replacing internal uses of legacy code with new code so that that legacy code can be removed. It’s also very well-documented.

                          The ability for many different organisations to all contribute in different ways that fit with their workflows. Because contributions are made by mailing patches to a mailing list, you don’t have to use specific proprietary tools or web-based interfaces to contribute. Any workflow that can result in someone or something running git format-patch and git send-email (which is any workflow that uses git at all) can result in sending patches to the LKML. You can use GitHub pull requests internally, you can use your own mailing list, or an internal GitLab instance, or sourcehut, or just plain git. You can have individual contributors within your organisation contribute patches individually, or you can have a few maintainers within your organisation bundle up all your work and send it, or a single person that does that. You don’t even have to use git, diff -uN works if all you want to do is write a single small patch.

                          Openness to contribution from drive-by contributors. The Linux project doesn’t require you to sign CLAs or provide evidence your employer has signed off on you contributing changes. All you need to do is agree to the contribution agreement, which basically just says ‘I have the right to contribute these changes’, and you indicate this by including a Signed-off-by: First Last <first.last@example.com> line in your commit messages, which you can do automatically with a flag to git commit. No complicated legalese, no contracts, no giving up your copyright to some company that could close it down at any moment. Just a flag to git commit.

                          Speed of changes. When a new security fix is required, it’s usually fixed and published as a new stable release within a couple of hours. What commercial products get that kind of maintenance? I’ve played online competitive games where there have been bugs that allow any player to crash the game server and all connected clients at any time that have taken weeks to be fixed. Losing your game? Just crash the server.

                          Access to developers. One of the great things about free software is that the people working on it are just people. Have an issue with something? Documentation unclear? Send an email. The email addresses of the maintainers of each part of the kernel are all listed in the MAINTAINERS file, and they’re pretty good at replying to email, even though they get an enormous amount of it. About a decade ago when I was basically a child I emailed Linus Torvalds a random question about the kernel and he emailed me back in quite some detail. I don’t even know who the lead developer of the NT kernel is, and even if I did I doubt he would respond to my emails. A company that I worked at loved to use PostgreSQL over commercial databases for the same reason: it’s a product where you can email the lead developer of the project and expect to get a detailed technical email back within 24 hours. Good luck doing that with Oracle unless you’re one of Oracle’s top 5 biggest customers.

                          Permanent record of design thought process. The Linux kernel mailing list archives are available on the web. If you’re wondering about the thought process that went into the development of a feature or a choice made during its development you can just go look at the mailing list archives. The responses to the patch series when posted, the various different versions of the patch series, etc. are all permanently archived. All the feedback it got, all the discussion around it. All archived in the same format on many different mirrors. This is invaluable. And it’s not just theoretically useful. It’s actually used in practice all the time. LWN articles often include quotes from design discussions that were had 10 or 15 years ago that provide valuable insight that would be really hard to discern just from the code or the documentation that exists today. Good luck doing this if you’ve used half a dozen different proprietary chat systems over the last decade and 95% of your design discussions are had in non-minuted face-to-face meetings.

                          1. 2

                            I appreciate you taking the time to lay that out so clearly. You’ve convinced me that async discussions are superior for OSS code, or even commercial code that’s not being developed in a competitive environment. I’m not sure that applies to commercial code being developed in a competitive environment, though.

                  2. 3

                    In terms of video conferencing, https://meet.jit.si/ is missing from the list. It compares to Whereby (formerly appear.in) in terms of tech stack since it also uses WebRTC, but it also has dial-in numbers and is completely open source.

                    1. 2

                      Fully agree with the statement about full remote team or no remote team. I once tried to manage a setup with an on-site team and a fully remote team. The fact that I as a manager was off-site made it really difficult to close the communications gap, which eventually led me to resign from my position as well.

                      Those comments on Hangouts surprise me. I’ve been using it for over a year now and I’m really happy with the quality. I can also recommend getting a proper mic. All of my peers loved it when I got a studio microphone.

                      1. 2

                        For video conferencing, I would say my biggest technological problem appears to be occasional lag spikes, where the connection gets spotty, and then I get a large burst of very fast talking or something. I’ve primarily seen this on WebEx, and then only when the person is dialing in from their computer.

                        However, this is by far the least annoying part of a video chat. The most annoying part is that, once you get above a handful of people on the call, you always get at least one oblivious person who leaves their mic unmuted all the time and breathes directly into it / eats directly into it / has people talking loudly at them / uses the laptop mic so you hear loud typing noises etc. etc. etc.

                        Unfortunately, they don’t hear any of this cacophony coming from their machine, because almost no setup gives you an inkling of what your background sounds like. A bite of sandwich on a call might sound utterly silent to them, but to everyone else on the call it sounds like a toilet being plunged with a leaf-blower running in the background.

                        Blue mics have an interesting solution to this. If you’re plugged into the headphone port on the mic, you can hear yourself with basically no latency. When you mute yourself, you stop hearing the background. When you unmute, you can hear exactly what people hear on the other end. Why doesn’t every headset do this?!

                        1. 1

                          I think this is built-in functionality in every modern OS. You can always route your mic audio to your headphones as well. It is not on by default because it would create a nasty feedback loop with speakers.

                          1. 3

                            That typically incurs a significant lag, in my experience. The headphone jack on my microphone is essentially zero lag. Talking when you can hear yourself with lag is incredibly difficult for most people.

                            1. 2

                              My Logitech USB headset does this in hardware as well. There is even a mixer specifically for how loud the folded-back audio is in your ear. I believe this is generally known as sidetone in telephony and radio.

                        2. 1

                          Really good read.

                          Boss just asked me to look into what we can do for a few people. Oh god.

                          1. 1

                            This was a really useful overview. It’s cool that they put in the effort of finding the best workflow for their team, instead of just going with what worked at their earlier job.

                            Anyway, there was this part on Zoom:

                            The latency is also not great (wasn’t lower latency the whole point of the fancy algorithm?).

                            They probably need some buffering in order to slow down the video. If you only have a tiny buffer then you run out of frames to present too soon.
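                             A toy model of a playout (jitter) buffer shows the trade-off being described here. The `playout` function and the arrival times are made up for illustration; real conferencing buffers are adaptive and far more sophisticated:

```python
# Toy jitter-buffer illustration: frames arrive at irregular times,
# and a playout buffer trades latency for smoothness. With a tiny
# buffer you underrun during a network stall, then frames arrive in
# a burst. Timings are made up for illustration.

def playout(arrival_times, buffer_frames, frame_interval=1.0):
    """Return the time each frame is played, given its arrival time.

    Playback starts once `buffer_frames` frames have arrived, then
    tries to play one frame per `frame_interval`; if the next frame
    hasn't arrived yet, playback stalls until it does (an underrun).
    """
    start = arrival_times[buffer_frames - 1]
    play, t = [], start
    for arr in arrival_times:
        t = max(t, arr)          # stall if the frame isn't here yet
        play.append(t)
        t += frame_interval
    return play

# Frames 0-2 arrive on time, then a stall, then a burst at t=6.
arrivals = [0, 1, 2, 6, 6, 6]
print(playout(arrivals, buffer_frames=1))  # stalls, then plays the burst
print(playout(arrivals, buffer_frames=4))  # smooth, but starts later
```

                             A deep buffer rides out the stall at the cost of higher end-to-end latency; a shallow buffer keeps latency low but produces exactly the stall-then-burst playback described above.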

                            1. 1

                              Slightly off topic, but has anybody used tailscale? How is it better than setting up your own cjdns or Yggdrasil network if you’re a techy?

                              1. 3

                                Yes, I’m running it alongside a wireguard VPN that I hand-configure for devices to talk in a LAN-over-WANs setup. (Spoke, with my home router being the hub.)

                                Tailscale is much easier to deploy, as it takes care of everything beyond “talk to x securely”, which is what wireguard brings. No managing “you have IP .5, router has .254, phone gets .6, etc.” Haven’t really made any use of Tailscale beyond “put everything on the same LAN so I can ssh to things” yet.

                                1. 1

                                  Thanks. I’ll try it