Threads for dsr

  1. 4

    If you want to have fun, this is probably the right approach. After a while spent managing it on behalf of other people, especially in a commercial context, it can be a relief to discover that most of the companies that provide SIP and IAX trunks will also provide virtual PBX services for roughly the same cost as adding a pair of virtual machines at a cloud company. The tradeoff is usually flexibility.

    1. 2

      If you’re doing it for fun, you might consider going even more old school. When I was a student, companies were starting to move to SIP for internal use and analogue PBXs were starting to be dirt cheap on eBay. A quick skim suggests that they still are. We picked up a PBX and enough phones to put one in every room of the flat we were renting, so we could call each other and forward external calls to the right person (mobile phones were still expensive then). The PBX itself had an RS-232 interface, so we wrote some code to talk to it and do things like sort our phone bill for outgoing calls. The PBX also had toys like a PA mode that let you broadcast to the speakerphone on every handset, which was great at parties.
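
      The RS-232 side was less exotic than it might sound. Here’s a minimal sketch of that kind of billing script, assuming a PBX that emits one comma-separated SMDR call record per line; the field layout, serial settings, and tariff are all invented, and a real model’s manual would give the actual format:

      ```python
      # Tally per-extension call costs from a PBX that prints SMDR records
      # over its RS-232 port. Requires pyserial; all formats are assumed.
      from collections import defaultdict

      import serial

      RATE_PER_MIN = 0.05  # purely illustrative tariff
      totals = defaultdict(float)

      with serial.Serial("/dev/ttyS0", baudrate=9600, timeout=60) as port:
          while True:
              line = port.readline().decode("ascii", errors="replace").strip()
              fields = line.split(",")
              if len(fields) < 3 or not fields[2].strip().isdigit():
                  continue  # blank read or not a call record
              # Assumed layout: extension, dialled number, duration in seconds
              ext, number, seconds = fields[:3]
              totals[ext] += int(seconds) / 60 * RATE_PER_MIN
              print(f"{ext} -> {number} ({seconds}s), running total {totals[ext]:.2f}")
      ```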

      You won’t learn anything useful doing this, but I’m not sure you will with SIP either since most office phone setups seem to be moving to proprietary services. Now that everyone needs a decent microphone and speaker for video calls and has a phone-shaped computer in their pocket, the endpoints are all general-purpose computers and you just need a whatever-you-use-internally to POTS bridge for interoperability, and that’s something all of the big video calling platforms provide (except Signal, but it isn’t really aiming at the corporate market).

    1. 1

      A system that slowly converges humans towards an efficient heterogeneous geographical distribution within given borders. The entire landscape is evaluated for resources and potential density. Those who are part of the system receive benefits for contributing to infrastructure and living in designated areas.

      1. 1

        How does efficient go with distribution?

        1. 1

          Cover more land, access more resources, reduce traffic, reduce far dependencies, reduce costs.

      1. 21

        I agree it is a scam—slightly strong language but hear me out.

        The issue with the protocol is especially important: if your protocol is open source and decentralized, then a “private beta” period makes no sense. That alone puts the lie to their advertising.

        If they advertised themselves honestly, they could make a case that non-decentralized ID services and limited access are a good thing. Signal, for example, has done just that: their protocol is open source and anybody could implement it, but they make no pretense of federating with other services or even allowing 3rd-party apps on their service. Right or wrong, they justify it with the ability to move quickly and fix things without breaking everybody else. Whether you buy that or not, at least they have been up front about why they are not distributed.

        Bluesky is a scam because their marketing uses those words to gain attention while, behind the scenes, their fundamental business and development model is a different gig.

        1. 11

          The issue with the protocol is especially important: if your protocol is open source and decentralized, then a “private beta” period makes no sense. That alone puts the lie to their advertising.

          Counterpoint: Jonathan Blow’s programming language is (AFAICT) open-source but currently in closed beta, and nobody is calling that a scam. I think Hare did the same thing. My point here is that a closed beta is about temporarily limiting distribution to trusted people, and is not mutually exclusive with being open in the long term.

          That said, I agree Bluesky is a scam.

          1. 6

            Jai is for sure not open-source - I doubt that it will ever be - as Jonathan is not willing to entertain community contributions.

            1. 16

              SQLite is open source, but doesn’t accept contributions.

              1. 5

                SQLite is in the public domain.

                What I meant by my comment is that Jai, even when the code is made public, will most likely not be released under a license that allows all the freedoms of the Open Source Definition.

                1. 2

                  What makes you say that?

                  1. 2

                    Watching the streams in which Jonathan works on the compiler. He has mentioned multiple times that most likely they’ll come up with their own licence, so I was mostly inferring from that.

                    I’m not sure how willing he’ll be to allow other entities - and based on the main target demographic of the language, which is game developers, they’ll probably be commercial ones - to repackage/redistribute the compiler.

              2. 13

                Open source does not mean open contribution. The two commonly go together, but quite literally all the name promises is open source.

                1. 13

                  There’s never a guarantee of open contribution – a project maintainer can drop your patch on the floor, loudly or quietly – so the real question is always whether you have the right to fork your own version.

                  Right now the question is moot, since there isn’t any source release at all.

                  1. 1

                    And, importantly, there aren’t any binaries released either. It’s not a case of “we will open-source this product eventually, we promise; until then, just use our closed-source work”. It’s just not ready yet.

                    1. 1

                      But there are binaries; they’re released to a restricted closed beta (last time he mentioned it, about 500 people).

          1. 2

            The weather report says sunny and 70F from 11am through 6pm with steady 9-10mph wind, so I’m going to go fly a kite.

            1. 4

              Followup: flew kites. Had fun. Very relaxing. Recommended.

            1. 5

              Each of these machines were less than $20,000. Amortize that over five years. That’s $333/month for all the hardware (minus routers etc) needed to run Basecamp Classic today. And this is still a large SaaS app that’s generating literally millions of dollars in revenue per year! The vast majority of SaaS businesses out there would require far less firepower to service their customers.

              holy misleading comparison batman

              is owning your infrastructure probably cheaper? sure

              is this an apples to apples comparison? nah

              the expensive part of any business is the people (not to mention the fact that they are handwaving away all their other costs)
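
              for the record, the arithmetic in the quote does check out, it’s the scope that’s the problem: it amortizes the hardware and nothing else

              ```python
              # the quote's own sum: one sub-$20k machine over five years.
              # note what's excluded: people, power, colo, bandwidth, "routers etc".
              cost_usd = 20_000
              months = 5 * 12
              print(f"${cost_usd / months:.0f}/month")  # -> $333/month
              ```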

              1. 6

                They talk a lot about pricing in the other posts linked from the body. dhh may be many things, but he’s not stupid. I’m pretty sure he knows how budgeting works.

                This specific article is about performance, and the one paragraph where it mentions pricing is not supposed to be a rigorous comparison—it’s not a comparison at all. It’s just the author saying “wowee, you can get a lot of computer for not much money”.

                1. 1

                  It’s the marginal cost (of adding more capacity) being compared, not the total cost. Both are important, but once the infrastructure is in, the marginal cost is usually more interesting.

                2. 3

                  I may not have all the aspects in my head, so please add any I missed. What I thought of:

                  1. More people needed for owned infra vs. cloud: in this case, to some extent, I’m paying the cloud instead of my people. I’d probably vote for my people.

                  2. If I manage my own infra, I’ll need people with ops skills. In the cloud, I’ll need people with ops skills AND cloud-specific skills as well (vendor lock-in?).

                  1. 2

                    Large and complex cloud deployments also need a lot of effort and constant oversight. Guess what Amazon is doing when your Kubernetes cluster on AWS is not working properly? Yep, nothing, that’s on you. The effort to maintain the hardware on which everything runs is a tiny fraction of the time spent maintaining the entire system. And debugging a cloud deployment may even be substantially harder than when you can call your in-house pals and see for yourself if the server room is on fire. The value proposition of the cloud only looks good to those with myopia or very specific needs.

                  1. 6

                    What kind of expertise would a company need to pull off a cloud exit? It seems to me that one of the “advantages” that cloud tries to sell is “managed”, i.e. no in-house sysadmin expertise needed (although AWS/Kubernetes/et al. have become so complex that DevOps is now a thing even for not-so-complex applications!)

                    1. 8

                      It depends on precise circumstances, but as a rough guide: let’s say you’re moving to a datacenter which will provide rackspace in cabinets or cages, reliable power, HVAC, cross-connects to a meet-me room and a hands-and-eyes service. You will need to:

                      • spec, buy or lease hardware (servers, switching, router/firewall at a minimum)
                      • install and cable the hardware
                      • configure switching, routing, and firewalling; come up with an IP plan (a sketch follows below) and a naming plan
                      • install operating systems
                      • figure out monitoring, logging, and alerting (for hardware and OS, as well as your application)
                      • maintain infrastructure services - DNS, NTP, local email, possibly DHCP.
                      • pick a deployment system

                      There are at least two people at my company who can do all of this (I’m one of them) and at least two more who can, together, handle the whole thing. You need a minimum of two people. If you want 24/7 operations, you need a minimum of five. Lots of things can be deferred until you can bring in specialists, lots of bits can be addressed by contractors, and almost everybody really wants a competent DBA as well.

                      The thing is, those 2-5 people plus DBA(s) can easily handle a hundred to perhaps a thousand machines, depending on how fast you need them in place and how complex an environment you need.
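
                      To make the IP-plan bullet concrete, here’s a minimal sketch using Python’s ipaddress module. The 10.20.0.0/16 and the function names are invented placeholders; a real plan depends entirely on your topology:

                      ```python
                      # Carve an assumed 10.20.0.0/16 into per-function /24s: one way
                      # to start an IP plan. Names and sizes are illustrative only.
                      import ipaddress

                      site = ipaddress.ip_network("10.20.0.0/16")
                      functions = ["oob-management", "production", "storage", "backups", "guest-lab"]

                      for name, subnet in zip(functions, site.subnets(new_prefix=24)):
                          gateway = next(subnet.hosts())  # convention: first host is the gateway
                          print(f"{name:15} {subnet}  gw {gateway}")
                      ```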

                      1. 3

                        It seems to me that one of the “advantages” that cloud tries to sell is “managed”

                        This really should be the advantage of the cloud, but no one has yet built the thing that customers want: a computer that they can run their workload on. I’m honestly shocked at how badly IBM is doing in this market. A modern cloud system should be a hybrid of mainframe and supercomputer ideas, and IBM has been building both for longer than any of the major cloud players have existed. Instead of a system where I can write my program, deploy it, and have it scale up and down transparently, I get to manage a fleet of containers on top of a set of VMs. If anything, the management overhead of current cloud offerings is higher than the overhead of managing leased hardware.

                      1. 5

                        Interested in what makes you not like tiling window managers on a 4K display! I started using a tiling window manager when I got a much larger screen, as it made laying out windows easier for me.

                        1. 3

                          I find that it is almost too large? My eyes end up jumping around, whereas “floating” allows me to keep everything together near the center of my view.

                          1. 3

                            I agree with this. I can’t read lines that never wrap, so if a window is 2K pixels wide, that’s way too much. There’s a reason that typographers prefer to wrap lines at 2-3 alphabets.

                            1. 2

                              Sounds like the issue is about the size of the display and its distance from your eyes, rather than the resolution?

                              1. 3

                                Think of it this way: most monitors are a little bigger than a sheet of paper, or maybe two sheets of paper. 4K means we can have big monitors, which are closer to the size of an actual desk. We still want to work on two sheets of paper near the center, but the rest of the desk is not wasted simply because our attention is not focused there. That’s where we put things that we expect we will need.

                                At the same time, if what you want is just two sheets of paper, you can use that monitor setup as well.

                          1. 4

                            Here’s one that is probably affecting several percent of you right now:

                            In https://pyfound.blogspot.com/2023/04/the-eus-proposed-cra-law-may-have.html the Python Foundation says that “a version of Python is downloaded over 300 million times per day” and “10 billion packages in an average month”.

                            This is, of course, ridiculous. It has to be the product of thoughtless automation. It also means that there are supply-chain attacks waiting to happen. And it won’t be just a Python problem: Node and NPM, Ruby and gems, whatever: all of them likely have a few tens of thousands of users causing stupid amounts of network traffic.

                            Go install a local repository. Ask it to check for updates every 12 or 24 hours, instead of mindlessly downloading on every request. Keep a few old versions around in case you suddenly need to revert; for extra points, keep a separate ‘stable’ repository where you only have the last version that passed all of your tests, and make that the way you deploy to production.
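
                            The “keep old versions, promote to stable” part can be a very small script. A minimal sketch, with invented paths and a flat directory of artifacts assumed; a real pip/npm/gems mirror would lean on that ecosystem’s own tooling:

                            ```python
                            # Prune the mirror to the newest few artifacts, and copy a tested
                            # one into a separate 'stable' tree that production deploys from.
                            import shutil
                            from pathlib import Path

                            MIRROR = Path("/srv/mirror/packages")  # invented paths
                            STABLE = Path("/srv/mirror/stable")
                            KEEP = 3  # old versions retained for quick rollback

                            artifacts = sorted(MIRROR.glob("*.tar.gz"), key=lambda p: p.stat().st_mtime)
                            for old in artifacts[:-KEEP]:
                                old.unlink()  # everything but the newest KEEP versions goes

                            def promote(artifact: Path) -> None:
                                """Call only after the artifact has passed all of your tests."""
                                STABLE.mkdir(parents=True, exist_ok=True)
                                shutil.copy2(artifact, STABLE / artifact.name)
                            ```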

                            1. 1

                              You are right of course, but the instructions for a local mirror are usually so convoluted, so hard to find, or so plainly complicated that no one bothers.

                              Every single company I’ve seen has been either “we already have it in place” or, for a new ecosystem, “we’ll fix it later”. If it were just as easy to get it started as to not use it… more people would do it.

                              1. 1

                                11 and a half years ago: https://stackoverflow.com/questions/7575627/can-you-host-a-private-repository-for-your-organization-to-use-with-npm (Yes, several methods given)

                                12 years ago: https://stackoverflow.com/questions/5677433/private-ruby-gem-server-with-authentication (Yes, use geminabox or artifactory)

                                14 years ago: https://stackoverflow.com/questions/77695/how-do-i-set-up-a-local-cpan-mirror (use CPAN::Mini)

                                10 years ago: https://stackoverflow.com/questions/15556147/pypi-is-slow-how-do-i-run-my-own-server (several options, devpi looks good to me)

                                Assuming you have enough disk space and bandwidth – and by definition, you do, somewhere – none of these appear to be more than a couple of hours to implement and one email message to dev-all telling them what to change.

                            1. 2

                              At this point, every modern web browser on every device can connect a voice call nearly directly (modulo finding a TURN or other NAT-traversal solution) to every other. The problem is directory services.

                              There are various solutions for that, but I believe they are all either fundamentally biased towards telco profit-making (hence undesirable to everyone else) or offer no good reason for telcos to interoperate (thus preventing a smooth transition).

                              1. 2

                                It’s not just directory services, it’s interoperability of identifiers. POTS remains the lowest common denominator for an identifier that you can post somewhere and guarantee that people can connect to you. If you’re a bigger company then you can provide a web calling thing that connects to your call center, but that’s hard for a small business.

                              1. 16

                                Things are never perfect, but even with some of the issues I ran into I’m very happy I switched. It’s hard to describe, but things feel more solid.

                                Holy crap. You have to restart whatever App Store clone is fancy this season in order to use it more than once, and one of the most widely-used password managers crashes (let me guess, the crash involves Gnome’s flavour of Wayland, GTK, or both?) and it feels more solid? Are you sure you weren’t using Windows Me with a weird WindowBlinds theme before!?

                                I made the switch the other way round (Linux -> macOS) two years ago. Did I already develop Apple Stockholm syndrome? Am I crazy? Is that kind of stuff normal?

                                Edit: I mean please don’t get me started on macOS Ventura. I’m not trying to scoff at Linux, I’m asking if we are doomed!

                                2+ years later I’m still SSHing into Linux boxes for a lot of development. Is this going to be my next ten years, choosing between a) using the latest breakthrough in silicon design as a glorified VT-220 strapped to an iPad or b) perpetually reliving 1999 desktop nightmares, except without Quake III Arena to make it all worth it?

                                1. 13

                                  Sometimes I begin to wonder if we neckbeards just never run into these problems because we became set in our ways 20 years ago and never changed. On my Ubuntu work machine some sort of graphical apt pops up from time to time (I couldn’t be bothered to investigate how to turn it off), but I run my updates regularly via the apt-get CLI. There’s no regularly crashing app besides Zoom, and I don’t hold that against any Linux distro.

                                  1. 5

                                    That’s kind of what I’m leaning towards, too. Gnome Software isn’t the first attempt to bolt a GUI on top of a package manager and/or an upstream software source; people have been trying to do that since the early 00s (possibly earlier, too, but I wasn’t running Linux then). At some point, after enough trashed installs, I just gave up on them.

                                  2. 5

                                    I’ve always avoided Gnome (and PulseAudio and software like that) like the plague; it’s been the Year of Linux on the Desktop for me for 20+ years now, and it’s generally been rock solid.

                                    At the moment I’m running Guix on an MSI gaming laptop from two years ago with an RTX 3080, and I love it. Running Steam, Lutris (Diablo 4 beta last weekend), Stable Diffusion, and no crashes. 50+ day uptimes, rebooting only because Guix kinda expects it.

                                    And of course it’s an ideal dev machine. No Docker shenanigans like on Windows and OS X.

                                    1. 3

                                      Your response captures my own feelings on reading this.

                                      They jumped from the frying pan into the fire, and they’re happy about the change of scenery. They point out that some extremities are on fire, but hey, it’s different.

                                      It’s very odd indeed, but it’s probably part of what life is like if all this stuff is just a mystery to you and you use the built-in tools without question.

                                      1. 3

                                        I haven’t had to restart a Linux system to fix the package manager in a couple of decades, across the 3 distros on which I regularly administer 15-50 systems (depending on the year). This includes systems updated daily or weekly and left running for over a year. Lately most systems get rebooted whenever a new kernel package comes along, but never any other time. Maybe the problem here is that you shouldn’t be using whatever fancy GUI prototype is in vogue this season, and should just use the default system CLI package manager.

                                        1. 1

                                          Why in the world does everyone think I’m talking about myself here and not about the original post!?

                                          Edit: AH! I think I get it. The “you” there is not the generic “you”: the link to that blog post was posted by the post’s author. I’m not using Gnome Software, I’m not even using a Linux desktop anymore. They are :-).

                                        2. 3

                                          You have to restart whatever App Store clone is fancy this season

                                          Why do you use the App Store at all? Which Linux distro are you talking about exactly? Does it not provide a CLI package manager like apt-get or yum or something?

                                          1. 3

                                            I don’t use the software app, but I might if it worked. Package names are often undiscoverable, and for whatever reason I forget whether it’s dnf list, dnf search, or some other command—the GUI has a search window—nice and discoverable.

                                            Beyond that, if it’s crap, why do they ship the damn thing? So many Linux users proudly explain that they know better than to stand near the spike pit. I just want software that doesn’t have the spike pit.

                                            1. 3

                                              Why do you use the App Store at all?

                                              I don’t! In my experience, the only App Store-like thing that ever came close to working on Linux was Synaptic!

                                            2. 2

                                              Especially since KeePassXC is one of the most robust applications for me, across 3 machines and 2 operating systems. I don’t have any problems of that sort using Kubuntu as a Linux daily driver. Then again, they don’t go all-in on Wayland, and it’s not Gnome’s Wayland. Even though KDE has its own issues.

                                              1. 1

                                                That’s kind of what I’m surprised at, too. I’ve used it everywhere – I used it on Linux, and I now use it on both macOS and Windows. It’s one of the applications I’ve never seen crash. I haven’t used it under a Wayland compositor, mind you, mostly because those tend to crash before I need to log in anywhere, hence my suspicion this is Gnome-related somehow…

                                                1. 2

                                                  Randomly looked at their issues again. And snap seems to be doing its job (TM).

                                                  1. 1

                                                    Oh, wow, okay. I’m sorry I blamed Gnome Shell or GTK for that – they caused me the most headaches way back but I should’ve obviously realised there are worse things out there.

                                                    I’m not even sure Snap is the worse thing here? I’ve heard – but the emphasis is on “heard”, I haven’t had to know in a while and I’m just ecstatic about it – that some KDE-related software can be hard to package due to the different lifecycles of KDE Frameworks, Apps, and Plasma. It might be a case of the folks doing the frameworks packaging getting stuck between a rock (non-KDE applications that nonetheless use kf5 & friends) and a hard place (KDE apps and Plasma).

                                                    KDE 3.2 nostalgia intensifies

                                              2. 1

                                                 I ran Fedora for 6 months and experienced this level of problems, so I switched to Mint, and it has been much better. I previously tried openSUSE Tumbleweed as well, didn’t like it, and concluded that running a Linux with extremely fresh packages is not for me; I want stability and “it just works”. Mint is stable and boring.

                                                1. 1

                                                  You have to restart whatever App Store clone is fancy this season

                                                   gnome-software is over 10 years old at this point, which is probably also why it has so many issues. It’s not the norm, no. The standard for most package-management GUIs has been fairly responsive, with batch installs and uninstalls, etc. (e.g. Synaptic for apt).

                                                   KPXC doesn’t touch GTK at all, and runs stably under Wayland and under Gnome, at least in my case (Fedora Silverblue with Flatpaks).

                                                  1. 13

                                                     It’s not the norm, no.

                                                    I have to disagree here.

                                                    I worked for Red Hat. I was an insider. This kind of PITA is completely 100% normal for RH OSes, but people who live in that world consider it normal and just part of life.

                                                    I recently wrote an article about the experience – as a Linux and Mac person – of using an Arm laptop under Windows:

                                                    https://www.theregister.com/2023/03/21/lenovo_thinkpad_x13s_the_stealth/

                                                    I commented, at length, on the horrors of updating Windows, and said that habitual Windows users wouldn’t notice this stuff.

                                                    Sure enough, one commenter goes “well it’s not like this on Intel Windows! It’s just you! Or it’s just Arm! It’s not like that!”

                                                    It is EXACTLY like that but if you don’t know anything else, it’s normal.

                                                    You say “GNOME software is over 10 years old” like that’s an excuse. It is not an excuse. It is the opposite of an excuse. At ten days old this sort of thing should not happen.

                                                     But because GNOME 3.x is a raging dumpster fire of an environment, lashed together in Javascript and built on a central design principle of “look how others do this and do it differently”, GNOME users have forgotten what a stable, solid, reliable desktop even feels like. They feel that something a decade old will naturally barely work any more, because the foundations have been ripped out and rebuilt half a dozen times since then, the UI guidelines replaced totally 3 times, the API changed twice a year as if that were normal.

                                                    It is not normal. This is not right. This is not OK.

                                                    Graphical desktops are simple, old, settled tech, designed in the 1970s, productised in the 1980s, evolved and polished to excellence by the 1990s. There is quite simply no legitimate excuse for this stuff not being perfect by now, implemented in something rock-solid, running lightning fast in native code, with any bugs discovered and fixed decades ago.

                                                     Cross-platform packaging was solved in the 1980s. Cross-platform native binaries were a thing a third of a century ago. “Oh but this is a new field and we are learning as we go” is not an excuse.

                                                    As Douglas Adams put it:

                                                    “Well, you’re obviously being totally naive of course,” said the girl, “When you’ve been in marketing as long as I have you’ll know that before any new product can be developed it has to be properly researched. We’ve got to find out what people want from fire, how they relate to it, what sort of image it has for them.”

                                                    The crowd were tense. They were expecting something wonderful from Ford.

                                                    “Stick it up your nose,” he said.

                                                    “Which is precisely the sort of thing we need to know,” insisted the girl, “Do people want fire that can be applied nasally?”

                                                     This is, in a word, bogus: so utterly ludicrous a response that anyone should be ashamed to offer it.

                                                     “It’s nearly a decade old so of course it doesn’t work” is risible.

                                                    The correct answer is “it is nearly a decade old, so now it is tiny, blisteringly fast, and has absolutely no known bugs”.

                                                    1. 6

                                                      Graphical desktops are simple, old, settled tech, designed in the 1970s, productised in the 1980s, evolved and polished to excellence by the 1990s

                                                       I’m sorry, but modern requirements have changed this. Some of the changes are so hard to retrofit into the old codebases that people started rewriting them. HighDPI, mixed DPI (fractional scaling), HDR support, screen readers, touch and its UI-change requirements, security (hello X11, admin popups…), direct rendering vs “throw some buttons on there”, screen recording. Sure, it’s no excuse for a buggy mess, but it’s not like you could just throw Windows 2000 (or similar) on a current system and call it a day. You’d have a hard time getting any of the modern requirements I mentioned integrated.

                                                      1. 4

                                                        I don’t really see how that invalidates any part of my comment, TBH.

                                                        Desktops are not unique to Linux. Apple macOS has a “desktop”. They call it the “Finder,” because in around 2000 the NeXTstep desktop was rewritten to resemble the classic MacOS desktop which was actually called the Finder.

                                                        But the NeXTstep desktop, which used to be called Workspace IIRC, has been around since 1989.

                                                         I am using it right now. I have two 27” monitors. One’s a built-in Retina display, which at 5120x2880 is quite HighDPI, and the other is an older Thunderbolt display, which at 2560x1440 is higher DPI than most of my other screens. Everything looks identical on both my displays, they are both smooth and crisp, and if I drag a window from one to the other, both halves of the window are the same size as I move it, even while it’s straddling the displays.

                                                        This is 34 year old code. Over a third of a century. 35 if you count the first NeXT public demo of version 0.8 in 1988.

                                                        Windows has a desktop, called Explorer. It is basically the same one that shipped on Windows 95. It’s 28. Again, Windows 10 and 11, both currently shipping and maintained, can both handle this with aplomb. Took ’em a while to catch up to macOS but they got there.

                                                         If GNOME can’t do this properly and well, if this means constant rewrites and functionality being dropped and then reimplemented, that means the GNOME team is doing software development wrong. KDE is a year older than GNOME, and I tried it on a HiDPI display this month and it worked fine.

                                                        1. 6

                                                          I don’t think it’s fair to include pre-OpenStep versions of NeXTSTEP, because the addition of the Foundation Kit was a pretty fundamental rewrite. Most of the NX GUI classes took raw C strings in a bunch of places. So most of this code is really only 28 years old.

                                                           To @proctrap’s point, there have been some fundamental changes. OpenStep had resolution independence through its PostScript roots, and adding screen-reader support was a fairly incremental change (just flagging some info that was already there), but CoreAnimation was a moderately large shift in rendering model and is essential for a modern GUI to use the GPU efficiently. OPENSTEP tried very hard to avoid redrawing. When you scrolled, it would copy pixels around and then redraw. It traded this against memory overhead. It used expose events to draw only the area that had been exposed, so nothing needed to keep copies of the hidden bits of windows. When you dragged a window, you got a bunch of events to draw the new bits (it actually asked for a bit more to be drawn than was exposed so that you didn’t get one event per pixel). With CoreAnimation’s layer model, each view can render to a texture and these live on the GPU. GPUs have a practically infinite amount of RAM in comparison to the requirements of a 2D UI (remember, OPENSTEP ran on machines with 8 MiB of RAM, including any buffering for display), so you avoid any redraw events for expose; you only need to redraw views whose contents have changed or which have been resized. For things with simple animation cycles (progress indicators, glowing buttons, whatever), the images are just cycled on the GPU by uploading different textures.

                                                          Text rendering is where this has the biggest impact. On OPENSTEP, each glyph was rasterised on the CPU directly every time it was drawn. On OS X (since around 10.3ish), each glyph in a font that’s used is rendered once to a cache on the GPU and composited there. This resulted in a massive drop in CPU consumption (it’s why you could smooth scroll on a 300 MHz Mac), which translated to lower power consumption on mobile (compositing on the GPU is very cheap, it’s designed to composite hundreds of millions of triangles, the thousands that you need for the GUI barely wake it up).

                                                           That said, Apple demonstrated that you can retrofit most of these to existing APIs without problems. A lot of software written for OpenStep can be built against Cocoa with some deprecation warnings but no changes. Updating it is usually fairly painless (the biggest problem is that the nib format changed and so UIs need redrawing; Xcode can’t import NeXT-era ones).

                                                          If GNUstep had gained the traction that GTK and Qt managed, the *NIX desktop would have been a much more pleasant place.

                                                          1. 1

                                                            I defer on the details here, inasmuch as I am confident you’ve forgotten more about NeXTstep and its kin than I ever knew in my life.

                                                             But as you say: old stuff still works. Yes, it’s been rewritten and extended substantially, but it still works, as you say better than ever, while every 6 months or so there are breaking changes in GNOME and KDE, as per the messages about KeePassXC upthread from here.

                                                            It is not OK that they still can’t get this stuff right.

                                                            I don’t know where to point the finger. Whenever I even try, big names spring out of the woodwork to deny everything and then disappear again.

                                                            I said on the Reg that WSL is a remote cousin of the NT POSIX personality. Some senior MS exec appears out of nowhere to post to say that, no, WSL is a side-offshoot of Android app support. They’re adamant and angry.

                                                            I request citations. (It’s my job.)

                                                             Suddenly an even more senior MS exec appears with tons of links that aren’t Googleable anywhere to show that WSL1 is the Android runtime with the Android stuff switched out.

                                                            What this really said to me: “we don’t have any engineers who understand the POSIX stuff enough to touch it any more, so we wrote a new one. But it wasn’t good enough, so now, we just use a VM.”

                                                             It is documented history that MS threatened to sue Red Hat, SUSE, Canonical and others over Linux desktops infringing MS patents on Win95. They did. MS invented Win95 from whole cloth. I watched, I ran the betas, I was there. It’s true.

                                                            So SUSE signed and the KDE juggernaut trundled along without substantial changes.

                                                            RH and Canonical said no, then a total rewrite of GNOME followed. Again, historical record. Canonical tried to get involved; GNOME told them to take a hike. Recorded history. Shuttleworth blogged about it. So GNOME did GNOME 3, with no Start menu, no system tray, no taskbar, and they’re still frantically trying to banish status icons over a decade later.

                                                            Canonical, banished, does Unity. There’s a plan: run it on phones and tablets. It’s a good plan. It’s a good desktop. I still use it.

                                                             I idly blog about this, someone sticks it on HN, and suddenly Miguel de Icaza pops up to deny everything. Some former head of desktops at Canonical no one’s ever heard of pops up to deny everything. No citations, no links, no evidence, and everyone accepts it because EVERYONE knows that MS <3 LINUX!

                                                            It’s Wheeler’s “We can solve any problem by introducing an extra level of indirection,” only now, we can solve any accusation of fundamental incompetence by introducing an extra level of lies, FUD and BS.

                                                            1. 2

                                                              It is not OK that they still can’t get this stuff right.

                                                              Completely agreed.

                                                              Suddenly even more senior MS exec appears with tons of links that isn’t Googleable anywhere to show that WSL1 is Android runtime with the Android stuff switched out.

                                                              The latest version of the Windows Kernel Internals book has more details on this. The short version is that the POSIX and OS/2 personalities, like the Win32 one, share a lot of code for things like loading PE/COFF binaries and interface with the kernel via very similar mechanisms. WSL1 used a hook that was originally added for Drawbridge called ‘picoprocesses’. The various personalities are all independent layers that provide different APIs to the same underlying functionality, but they’re also completely isolated. One of the reasons that the original NT POSIX personality was so useless was that there was no way of talking to the GUI and very limited IPC, so you couldn’t usefully run POSIX things on Windows unless you ran only POSIX things.

                                                              In contrast, picoprocesses provided a single hook that allowed you to create a(n almost) empty process and give it a custom system call table. This is closer to the FreeBSD ABI layer than the NT personality layer, but with the weird limitation that you can have only one. The goal for WSL wasn’t POSIX compatibility, it was Linux binary compatibility. This meant that it had to implement exactly the system call numbers of Linux and exactly the Linux flavour of the various APIs. This was quite a different motivation. The POSIX personality existed because the US government required POSIX support as a feature checkbox item, but no one was ever expected to use it. The support in WSL originally existed to allow Windows Phone to run Android apps and was shipped on the desktop because Linux (specifically, not POSIX, *BSD, or *NIX) had basically won as the server OS and Microsoft wanted people to deploy Linux things in Azure, and that’s an easier sell if they’re running Windows on the client. Unfortunately, 100% Linux compatibility is almost impossible for anything that isn’t Linux and so WSL set expectations too high and people complained when things didn’t work (especially Docker, which depends on some truly horrific things on Linux).

                                                              They’re surprisingly different in technology. The Win32 layer has more code in common with the POSIX personality than WSL does.

                                                              What this really said to me: “we don’t have any engineers who understand the POSIX stuff enough to touch it any more, so we wrote a new one. But it wasn’t good enough, so now, we just use a VM.”

                                                              Modifying the old POSIX code into a Linux ABI layer would have been very hard. Remember, this was a POSIX layer that still used PE/COFF binaries, used DLLs injected by the kernel for exposing a system-call interface, and so on. It also hadn’t been updated for recent versions of Windows and depended on a lot of things that had been refactored or removed.

                                                              The thing that made me sad was that they didn’t just embed a FreeBSD kernel in NT and use the FreeBSD Linux ABI layer. The license would have permitted it and they’d have benefitted from starting with something that was about as far along as WSL ever got and had other contributors.

                                                              RH and Canonical said no, then a total rewrite of GNOME followed. Again, historical record. Canonical tried to get involved; GNOME told them to take a hike. Recorded history. Shuttleworth blogged about it. So GNOME did GNOME 3, with no Start menu, no system tray, no taskbar, and they’re still frantically trying to banish status icons over a decade later.

                                                              I only vaguely paid attention to that drama, but from the perspective of someone trying to create a GNUstep-based DE at the time, it looked more like Mac-envy than MS-fear: GNOME 3 and Unity both seemed like people trying to copy OS X without understanding what it was that made OS X pleasurable to use and without any of the underlying technology necessary to be able to implement it.

                                                              I idly blog about this, someone sticks it on HN and suddenly Miguel de Icaza pops up to deny everything.

                                                              I was really surprised at the internal reactions when MdI joined Microsoft. The attitude inside the company was that he’s a great leader in the Linux desktop world and it’s fantastic that he’s now helping Microsoft make the best Linux environments and it shows how perception of Microsoft has changed. My recollection of his perception from the F/OSS desktop community (before I gave up, ran OS X, and stopped caring) was that he was the guy that never met a bad MS technology that he didn’t like and tried to force GNOME to copy everything MS did, no matter how much of a bad idea it was. The rumour was that he’d applied to MS and been rejected and so made it his mission to create his own MS-like ecosystem that he could work on.

                                                              EVERYONE knows that MS <3 LINUX!

                                                              Pragmatically, MS knows that Linux brings in huge amounts of money to Azure, and that Linux (Android) brings in a huge amount of money to the Office division. And MS (like any other trillion-dollar company) loves revenue. Unfortunately, in spite of being one of the largest contributors to open source, only a few people in the company actually understand open source. They think of open source as being an ecosystem of products rather than a source of disruptive technologies.

                                                              P.S. When are you going to write an article about CHERIoT for El Reg?

                                                      2. 4

                                                        ‘The correct answer is “it is nearly a decade old, so now it is tiny, blisteringly fast, and has absolutely no known bugs”.’

                                                        This. I’m sad to say that there are still some bugs in XFCE, but none that I encounter on a daily basis and generally fewer in each release. I haven’t understood why people think GNOME is a good idea since their 2.x releases.

                                                        I’ve been waiting for Wayland to mature and I’m still not really seeing signs of it.

                                                        Every Debian upgrade from stable to new stable is smoother than the last one, modulo specific breaking changes which are (a) usually well documented, (b) aren’t automatable because they require policy choices, and (c) don’t apply to new installs at all, which are also smoother and faster than they used to be.

                                                        1. 2

                                                          why people think GNOME is a good idea

                                                        I would actually recommend it for some people, since it looks pretty good (unlike XFCE), has some good defaults, and doesn’t come with the amount of options that KDE has. (And I haven’t had any breakage on LTS Ubuntu with Gnome desktops.) I prefer KDE, but I wish I had recommended Gnome to some people in my family. (I gave them KDE back then, as it more closely resembles the Windows 7 start menu.) But you don’t change the desktop of someone who is over 80 years old. Even if their KDE usage ends up spawning 4 virtual desktops, with 10 Firefox windows, 2 taskbars and 2 start menus. Apparently they like it that way.

                                                          1. 3

                                                            GNOME is pretty. Its graphics design is second-to-none in the Linux world, and it pretty much always has been, since the Red Hat Linux era.

                                                            It’s therefore even more of a shame that, to me, it’s an unusable nightmare of a desktop environment.

                                                            KDE, which is boldly redefining “clunky” and “overcomplicated”, is at least minimally usable, but it is, IMHO, fugly and it has been since KDE 2.0.0. And I wrote an article on how to download, compile and install KDE 2.0.0. Can’t remember for whom now; long time ago.

                                                          (When RH applied the RHL 9 Bluecurve theme to KDE, it was the prettiest I have ever seen KDE look, before or since.)

                                                            Xfce is plain, but it’s not ugly. You can theme it up the wazoo if you want. I don’t want. I leave it alone. But that pales into utter insignificance because it works.

                                                          2. 2

                                                            Thank you!

                                                            Sometimes I feel like it’s just me. I really do appreciate this feedback.

                                                          3. 1

                                                          It’s not the norm, no.

                                                            I have to disagree here. […] This kind of PITA is completely 100% normal for RH OSes […] It is not normal. […]

                                                            Confusing structure.

                                                          You wouldn’t use Synaptic, which I mentioned as an example of something more normal, on an RH OS.

                                                            The correct answer is “it is nearly a decade old, so now it is tiny, blisteringly fast, and has absolutely no known bugs”.

                                                            It clearly wouldn’t be the correct answer because that contains a lie?

                                                            1. 4

                                                              I do not think that you understood what I was saying here. I am making extensive use of irony and sarcasm in order to try to make a point.

                                                              Confusing structure.

                                                              I am saying that problems like those described are normal for RH products and people using the RH software ecosystem.

                                                              Then I continue to say that these things are not normal for the rest of the Linux world.

                                                              In other words, my point is that these things are normal for RH, and they are not normal for Linux as a whole.

                                                              In my direct personal experience as a former RH employee, a lot of RH people are not aware of the greater Linux world and that other distros and other communities are not the same, and that often, things are better in the wider Linux world.

                                                              I am sorry that this was not clear. It seemed clear to me when I wrote it.

                                                              It clearly wouldn’t be the correct answer because that contains a lie?

                                                              Again, you are missing the point here.

                                                              I am saying “the correct answer,” as in, this is how things should be.

                                                            In other words, I am saying that in a more normal, sane, healthy software ecosystem, the correct answer ought to be that after over a decade of biannual releases, which means over 20 major versions, something should have improved and be better than it ever was.

                                                              In a normal healthy project, after 12 years and 44 versions, a component should be completely debugged, totally stable, and then have had 5-10 years to do fine-tuning and performance optimisation.

                                                            (I will also note that 2 major releases per year over a decade = 20 major releases. For a healthy software project, you do not need to obfuscate this by, in this example, redefining the minor version as the major version at version 3.40, so that version 3.40 is now called version 40 and from then on everyone pretends that minor versions are major versions.)

                                                              (BTW “obfuscate” is a more polite way of saying “tell a lie about”.)

                                                              I am not saying “GNOME Software is written in native code, is bug free and performance optimised”.

                                                              I am saying “GNOME Software OUGHT TO BE native code, bug free and performance optimised by now.”

                                                              Is that clearer now?

                                                              1. 1

                                                                Then I continue to say that these things are not normal for the rest of the Linux world.

                                                                Which is what I already said, with an example from the rest of the Linux world, so I don’t understand why you say you disagree with me on that topic. Hence my confusion.


                                                                […]

                                                                https://lobste.rs/s/wbcgdt/switching_fedora_from_ubuntu#c_as8hxe

                                                                1. 1

                                                                  So, from your quoted reply, you are saying that:

                                                                  10 years of development against moving targets (app store trends, flavor-of-the-year GTK api, plugin based package management, abstraction based package management, back to plugin based package management)

                                                                  … justifies it hanging? That this is understandable and acceptable given the difficult environment?

                                                          4. 7

                                                            gnome-software is over 10 years old at this point, which is probably also why it has so many issues.

                                                          I have clearly developed Stockholm syndrome, because IMHO ten-year-old software should not have so many issues :-D. Software that’s been maintained for ten years usually gets better with time, not worse. This isn’t some random third-party util that’s been abandoned for six years; Gnome Software is one of the Core Apps.

                                                            1. 3

                                                              To elaborate further, 10 years of development against moving targets (app store trends, flavor-of-the-year GTK api, plugin based package management, abstraction based package management, back to plugin based package management)

                                                              Similarly, Servo easily hitting speed achievements that Firefox struggles to achieve.

                                                              1. 1

                                                                Right, I can see why it reads that way, but I didn’t mean that as a jab specifically at the code in Gnome Software. Its developers are trying to solve a very complicated problem and I am well aware of the fact that the tech churn in the Linux desktop space is half the reason why the year of Linux on the desktop is a meme.

                                                                I mean that, regardless of the reason why (and I’m certainly inclined to believe the churn is the reason), the fact that ten years of constant and apt maintenance are insufficient to make an otherwise barebones piece of software work is troubling. This is not a good base to build a desktop on.

                                                        1. 1

                                                          Here’s my proposal:

                                                          nushp - new UNIX shell project. It’s available as a .org. The only namespace collision appears to be the Northeastern University Student Health Plan, and while there are a bunch of CS people at Northeastern, they are unlikely to be confused.

                                                          Call the binary nush. Default it to the new language, version 1.0. If it is invoked by any other name ending in sh, set to bash-compatible mode. If it’s invoked by a name ending in one or more digits, treat it as a version number and go look for the right binary to exec. No python2/3 transitions, please.

                                                          Call the data format nushdata. Better version it as well.

                                                          When someone asks how to pronounce nush, tell them that if they like it, they can say ‘newsh’, and when they criticize it, they can say ‘nuhsh’.
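
                                                      A minimal sketch of the argv[0] dispatch, in Python purely for illustration; the /usr/libexec/nushp layout is invented:

                                                      ```python
                                                      # Decide behaviour from the name the shell was invoked as,
                                                      # per the proposal above.
                                                      import os
                                                      import re
                                                      import sys

                                                      name = os.path.basename(sys.argv[0])

                                                      if m := re.search(r"(\d+(?:\.\d+)*)$", name):
                                                          # Invoked as e.g. 'nush1.0': exec the matching versioned binary.
                                                          target = f"/usr/libexec/nushp/nush-{m.group(1)}"  # invented location
                                                          os.execv(target, [target] + sys.argv[1:])
                                                      elif name != "nush" and name.endswith("sh"):
                                                          bash_compatible = True   # invoked as sh, bash, ksh, ...: legacy mode
                                                      else:
                                                          bash_compatible = False  # plain 'nush': the new language, version 1.0
                                                      ```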

                                                          1. 1

                                                            Thanks for the suggestion! Too similar to http://www.nushell.sh/ though

                                                          1. 17

                                                            I was discussing this with a friend the other day, and as he said, “how did the fools get their money in the first place?”

                                                            1. 6

Having known a bunch of people who tended to fall into these kinds of pricey rabbit holes, my surprising conclusion is that they are not particularly rich. They are just very, very bad at making financial decisions. You can expect a lot of these people to have maxed-out credit cards, a remortgage, and even a few personal bankruptcies to their name.

In one particular case, the guy had undiagnosed ADHD and was constantly making impulse purchases of new Magic: The Gathering cards, despite being barely able to afford his rent.

I’m guessing audiophiles are the very same type of people.

                                                              1. 2

                                                                You can be an outstanding dentist but an atrocious engineer.

                                                                1. 1

                                                                  They inherited it from parents or grandparents who worked hard and were lucky, or did something vile and were lucky; occasionally all three.

                                                                1. -2

                                                                  da39a3ee5e6b4b0d3255bfef95601890afd80709

                                                                  1. 5

                                                                    I was expecting to read about how the cheap hardware with open source firmware can be set up with all the features of expensive mesh networks, but that’s not what it was.

                                                                    I don’t like expensive mesh networks because (a) expensive (b) tend to require special proprietary control systems (c) which often want logins and other privacy-violators. So I buy $40 wifi routers that are known to work well with DD-WRT/OpenWRT and set them up as follows:

                                                                    • all wifi radios set the same SSID

                                                                    • turn off 2.4GHz on the AP nearest the kitchen (microwave fun)

                                                                    • channels are set by hand for minimum overlap

                                                                    • NAT, firewalling, DHCP and DNS are turned off

                                                                    • Cat5e runs to the nearest switch port (three switches: office, den, living room, all interconnected)

                                                                    Five of these cover the house and the back yard nicely. No meshing. No outside-the-house dependencies except power.

                                                                    1. 3

                                                                      Interesting. I’m curious, do you know if Openwrt supports anything for handover protocol as you move from one client to the next?

                                                                      1. 1

Recent versions support 802.11r, 802.11k, and 802.11v, but not on all radios. Support is necessary on both ends. If you aren’t active while moving from one ‘best’ station’s area to another, none of them are needed.

                                                                      2. 2

                                                                        Which routers are you using? What do you recommend?

                                                                        1. 1

TP-Link Archer C7 with OpenWRT is great. Running OpenWRT in a VM on a home server, with dumb access points from MikroTik, is also fun and can easily cover multiple rooms or a whole house. AVM’s FritzBox line comes with a DSL/cable/LTE or fiber modem included; quick and stable, but expensive.

                                                                        2. 1

                                                                          That sounds pretty great. Do you have a wiki/post breaking all of that down? Or least have solid suggestions for cheap routers? Sounds very interesting.

                                                                          1. 2

                                                                            Most of my routers are TP-Link Archer C7, which are routinely on sale in the US for $45 each. If I see a sale on some new plausible router, my criteria are:

                                                                            • at least one gigabit ethernet port, preferably 4.
                                                                              • one for uplink to a switch, the others for local devices that I might want to position there
• 802.11ac and n on the 2.4 and 5 GHz bands
                                                                              • the most usable protocols as of early 2023 – machines that were new in 2010 onwards use n, machines new in 2015 onwards use ac. ax has been out for almost 4 years and is still uncommon except on the newest phones and laptops.
                                                                            • known good firmware from dd-wrt or openwrt in the most recent stable release

                                                                            It’s reasonable to get everything set up well on machines that don’t have open source firmware, even if they don’t support an AP mode, by carefully turning off all the things I wrote about before and avoiding the ‘WAN’ port.

I don’t trust any of these things as firewalls for outside connections; I use them strictly as access points.

                                                                        1. 43

                                                                          I still like Zulip after about 5 years of use, e.g. see https://oilshell.zulipchat.com . They added public streams last year, so you don’t have to log in to see everything. (Most of our streams pre-date that and require login)

                                                                          It’s also open source, though we’re using the hosted version: https://github.com/zulip

                                                                          Zulip seems to be A LOT lower latency than other solutions.

                                                                          When I use Slack or Discord, my keyboard feels mushy. My 3 GHz CPU is struggling to render even a single character in the browser. [1]

                                                                          Aside from speed, the big difference between Zulip and the others is that conversations have titles. Messages are grouped by topic.

                                                                          The history and titles are extremely useful for avoiding “groundhog day” conversations – I often link back to years old threads and am myself informed by them!

                                                                          (Although maybe this practice can make people “shy” about bringing up things, which isn’t the message I’d like to send. The search is pretty good though.)

                                                                          When I use Slack, it seems like a perpetually messy and forgetful present.

                                                                          I linked to a comic by Julia Evans here, which illustrates that feature a bit: https://www.oilshell.org/blog/2018/04/26.html

                                                                          [1] Incidentally, same with VSCode / VSCodium? I just tried writing a few blog posts with it, because of its Markdown preview plugin, and it’s ridiculously laggy? I can’t believe it has more than 50% market share. Memories are short. It also has the same issue of being controlled by Microsoft with non-optional telemetry.

                                                                          1. 9

                                                                            +1 on zulip.

category theory: https://categorytheory.zulipchat.com/
rust-lang: https://rust-lang.zulipchat.com/

                                                                            These are examples of communities that moved there and are way easier to follow than discord or slack.

                                                                            1. 9

Zulip is light years ahead of everything else in async org-wide communications. The way messages are organized makes it an extremely powerful tool for distributed teams and cross-team collaboration.

                                                                              The problems:

                                                                              • Clients are slow when you have 30k+ unread messages.
                                                                              • It’s not easy (possible?) to follow just a single topic within a stream.
                                                                              • It’s not federated.
                                                                              1. 12

                                                                                We used IRC and nobody except IT folks used it. We switched to XMPP and some of the devs used it as well. We switched to Zulip and everyone in the company uses it.

                                                                                We self-host. We take a snapshot every few hours and send it to the backup site, just in case. If Zulip were properly federate-able, we could just have two live servers all the time. That would be great.

                                                                                1. 6

                                                                                  It’s not federated.

                                                                                  Is this actually a problem? I don’t think most people want federation, but easier SSO and single client for multiple servers gets you most of what people want without the significant burdens of federation (scaling, policy, etc.).

                                                                                  1. 1

                                                                                    Sorry for a late reply.

It is definitely a problem. It makes it hard for two organizations to create shared streams. This comes up when, say, an organization that uses Zulip for internal communications contracts another company for software development and wants them to integrate into its communications. The contractors need accounts at the client’s instance. Moreover, if multiple clients do this, the people at the contracted company end up with multiple scattered accounts across their clients’ instances.

Creating a stream shared and replicated across the relevant instances would be way easier, probably more secure, and definitely more scalable than adding WAYF to the relevant SSOs. The development effort needed to make the web client connect to multiple instances would probably also be rather high, and it could not be done incrementally, unlike shared streams, which might have some features disabled (e.g. custom emojis) until a way forward is found for them.

But I am not well versed in the Zulip internals, so take this with a couple grains of salt.

EDIT: I figure you might be thinking of e.g. open source projects each using their own Zulip. That sucks, and it would be nice to have an SSO service for all of them. Or even have them somehow bound together in some hypothetical multi-server client. I would love that as well, but I am worried that it just wouldn’t scale (performance-wise) without some serious thought about the overall architecture. Unless you are thinking about the Pidgin-style multi-client approach solely at the client level.

                                                                                2. 7

                                                                                  This is a little off topic, but Sublime Text is a vastly more performant alternative to VSCode.

                                                                                  1. -4

                                                                                    Also off-topic: performant isn’t a word.

                                                                                  2. 3

I feel like topic-first organization of chats, which Zulip does, is the way to go.

                                                                                      1. 16

                                                                                        It still sends some telemetry even if you do all that

                                                                                        https://github.com/VSCodium/vscodium/blob/master/DOCS.md#disable-telemetry

                                                                                        That page is a “dark pattern” to make you think you can turn it off, when you can’t.


                                                                                        In addition, extensions also have their own telemetry, not covered by those settings. From the page you linked:

                                                                                        These extensions may be collecting their own usage data and are not controlled by the telemetry.telemetryLevel setting. Consult the specific extension’s documentation to learn about its telemetry reporting and whether it can be disabled.

                                                                                        1. 4

                                                                                          It still sends some telemetry even if you do all that

                                                                                          I’ve spent several minutes researching that, and, from the absence of clear evidence that telemetry is still being sent if disabled (which evidence should be easy to collect for an open codebase), I conclude that this is a misleading statement.

The way I understand it, VS Code is a “modern app”, which uses a boatload of online services. It makes network calls to update itself, update extensions, search the settings, and otherwise provide functionality to the user. Separately, it collects gobs of data with no purpose other than data collection.

                                                                                          Telemetry disables the second thing, but not the first thing. But the first thing is not telemetry!

                                                                                          • Does it make network calls? Yes.
                                                                                          • Can arbitrary network calls be used for tracking? Absolutely, but hopefully the amount of legal tracking allowable is reduced by GDPR.
                                                                                          • Should VS Code have a global “use online services” setting, or, better yet, a way to turn off node’s networking API altogether? Yes.
                                                                                          • Is any usage of Berkeley socket API called “telemetry”? No.
                                                                                          1. 3

It took me a while, but the source of my claim is VSCodium itself, and this blog post:

                                                                                            https://www.roboleary.net/tools/2022/04/20/vscode-telemetry.html

                                                                                            https://github.com/VSCodium/vscodium/blob/master/DOCS.md#disable-telemetry

                                                                                            Even though we do not pass the telemetry build flags (and go out of our way to cripple the baked-in telemetry), Microsoft will still track usage by default.

                                                                                            Also, in 2021, they apparently tried to deprecate the old setting and introduce a new one:

                                                                                            https://news.ycombinator.com/item?id=28812486

                                                                                            https://imgur.com/a/nxvH8cW

                                                                                            So basically it seems like it was the old trick of resetting the setting on updates, which was again very common in the Winamp, Flash, and JVM days – dark patterns.

                                                                                            However it looks like some people from within the VSCode team pushed back on this.

                                                                                            Having worked in big tech, this is very believable – there are definitely a lot of well intentioned people there, but they are fighting the forces of product management …


                                                                                            I skimmed the blog post and it seems ridiculously complicated, when it just doesn’t have to be.

                                                                                            So I guess I would say it’s POSSIBLE that they actually do respect the setting in ALL cases, but I personally doubt it.

I mean it wouldn’t even be a dealbreaker for me if I got a fast and friendly markdown editing experience! But it was very laggy (with VSCodium on Ubuntu).

                                                                                            1. 2

                                                                                              Yeah, “It still sends some telemetry even if you do all that” is exactly what VS Codium claim. My current belief is that’s false. Rather, it does other network requests, unrelated to telemetry.

                                                                                          2. 2

                                                                                            These extensions may be collecting their own usage data and are not controlled by the telemetry.telemetryLevel setting.

                                                                                            That is an … interesting … design choice.

                                                                                            1. 7

                                                                                              At the risk of belaboring the point, it’s a dark pattern.

                                                                                              This was all extremely common in the Winamp, Flash, and JVM days.

                                                                                              The thing that’s sad is that EVERYTHING is dark patterns now, so this isn’t recognized as one. People will actually point to the page and think Microsoft is being helpful. They probably don’t even know what the term “dark pattern” means.

                                                                                              If it were not a dark pattern, then the page would be one sentence, telling you where the checkbox is.

                                                                                              1. 6

                                                                                                They probably don’t even know what the term “dark pattern” means.

                                                                                                I’d say that most people haven’t been exposed to genuinely user-centric experiences in most areas of tech. In fact, I’d go so far as to say that most tech stacks in use today are actually designed to prevent the development of same.

                                                                                                1. 2

                                                                                                  The thing that feels new is how non-user-centric development tools are nowadays. And the possibility of that altering the baseline perception of what user-centric tech looks like.

Note: feels; dev tools probably weren’t overly user-centric in the past either, but they were a bit of a haven compared to other areas of tech that have overt contempt for users (social media, mobile games, etc.).

                                                                                              2. 4

                                                                                                That is an … interesting … design choice.

                                                                                                How would you do this differently? The same is true about any system with plugins, including, eg, Emacs and Vim: nothing prevents a plug-in from calling home, except for the goodwill of the author.

                                                                                                1. 3

                                                                                                  Kinda proves the point, tbh. To prevent a plugin from calling home, you have to actually try to design the plugin API to prevent it.

                                                                                                  1. 4

                                                                                                    I think the question stands: how would you do it differently? What API would allow plugins to run arbitrary code—often (validly) including making network requests to arbitrary servers—but prevent them from phoning home?

                                                                                                    1. 6

                                                                                                      Good question! First option is to not let them make arbitrary network requests, or require the user to whitelist them. How often does your editor plugin really need to make network requests? The editor can check for updates and download data files on install for you. Whitelisting Github Copilot or whatever doesn’t feel like too much of an imposition.
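A minimal sketch of what that could look like, with an entirely hypothetical plugin-host API (not any real editor’s): the host only constructs a network object for plugins the user has approved, and that object refuses non-whitelisted hosts:

```python
from urllib.parse import urlparse
from urllib.request import urlopen


class NetworkDenied(Exception):
    pass


class GatedNetwork:
    """Network capability the host hands only to user-approved plugins."""

    def __init__(self, allowed_hosts: set[str]):
        self.allowed_hosts = allowed_hosts

    def fetch(self, url: str) -> bytes:
        host = urlparse(url).hostname
        if host not in self.allowed_hosts:
            raise NetworkDenied(f"{host} is not whitelisted by the user")
        with urlopen(url) as resp:
            return resp.read()


# e.g. the user approves a Copilot-style plugin for a single API host;
# a plugin with no grant never receives a GatedNetwork at all.
net = GatedNetwork({"api.github.com"})
```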

                                                                                                      1. 4

                                                                                                        Capability security is a general approach. In particular, https://github.com/endojs/endo

                                                                                                        For more… https://github.com/dckc/awesome-ocap

                                                                                                      2. 3

                                                                                                        More fun: you have to design a plugin API that doesn’t allow phoning home but does allow using network services. This is basically impossible. You can define a plugin mechanism that has fine-grained permissions and a UI that comes with big red warnings when things want network permissions though and enforce policies in your store that they must report all tracking that they do.

                                                                                                      3. 1

                                                                                                        nothing prevents a plug-in from calling home, except for the goodwill of the author.

                                                                                                        Traditionally, this is prevented by repos and maintainers who patch the package if it’s found to be calling home without permission. And since the authors know this, they largely don’t add such functionality in the first place. Basically, this article: http://kmkeen.com/maintainers-matter/ (http only, not https).

                                                                                                        1. 1

We don’t necessarily need mandatory technical enforcement for this; it’s more about culture and expectations.

                                                                                                          I think that’s the state of the art in many ecosystems, for better or worse. I’d say:

                                                                                                          • The plugin interface should expose the settings object, so the plugin can respect it voluntarily. (Does it currently do that?)
                                                                                                          • The IDE vendor sets the expectation that plugins respect the setting
                                                                                                          • A plugin that doesn’t respect it can be dealt with in the same way that say malware is dealt with.

I don’t know anything about the VSCode ecosystem, but I imagine that there’s a way to deal with, say, plugins that start scraping everyone’s credit card numbers out of their e-mail accounts.

                                                                                                          Every ecosystem / app store- type thing has to deal with that. My understanding is that for iOS and Android app stores, the process is pretty manual. It’s a mix of technical enforcement, manual review, and documented culture/expectations.


                                                                                                          I’d also not rule out a strict sandbox that can’t make network requests. I haven’t written these types of plugins, but as others pointed out, I don’t really see why they would need to access the network. They could be passed the info they need, capability style, rather than searching for it all over your computer and network!
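To make the honour-system version concrete, a tiny sketch (all names hypothetical, not any real editor’s API): the host exposes one settings object, and a well-behaved plugin checks it before reporting anything:

```python
class HostSettings:
    """Hypothetical settings object the host exposes to every plugin."""

    def __init__(self, telemetry_enabled: bool):
        self.telemetry_enabled = telemetry_enabled


def report_usage(settings: HostSettings, event: str) -> None:
    # A well-behaved plugin consults the single host-wide setting;
    # enforcement is cultural (store policy), not technical.
    if not settings.telemetry_enabled:
        return
    print(f"would send: {event}")  # stand-in for the plugin's own endpoint


report_usage(HostSettings(telemetry_enabled=False), "file_opened")
```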

                                                                                                          1. 1
                                                                                                          2. 1

                                                                                                            Sure, but they don’t offer a “disable telemetry” setting.

What I’d do would be to sandbox plugins so they can’t do any network I/O, then have a permissions system.

                                                                                                            You’d still rely on an honour system to an extent; because plugin authors could disguise the purpose of their network operations. But you could at least still have a single configuration point that nominally controlled telemetry, and bad actors would be much easier to spot.

                                                                                                            1. 1

                                                                                                              There is a single configuration point which nominally controls the telemetry, and extensions should respect it. This is clearly documented for extension authors here: https://code.visualstudio.com/api/extension-guides/telemetry#custom-telemetry-setting.

                                                                                                  1. 17

                                                                                                    This seems a little weird to me compared to having both the 768p laptop screen and the UHD monitor plugged in, and using the small screen as the target for design work while allowing yourself the luxury of the big screen to get things done.

                                                                                                    But my first principle of keeping users happy is de gustibus non disputandum.

                                                                                                    1. 2

                                                                                                      de gustibus non disputandum

                                                                                                      For those who don’t know Latin, like myself, this translates to:

                                                                                                      In matters of taste, there can be no disputes

                                                                                                    1. 18

                                                                                                      I’ve been doing some back of the napkin math on my company’s cloud transition and containers are incredibly expensive. (An order of magnitude more expensive than virtual servers!)

                                                                                                      The joke of “Kubernetes was the Greek god of spending money on cloud services” is pretty accurate.

                                                                                                      On the other hand, increasing our headcount is more expensive than containers. We actually save money this way. And we’re unlikely to grow our headcount and business enough that switching to less expensive infrastructure would be cheaper in the long run.

                                                                                                      1. 12

                                                                                                        … I’m confused, how is “adopt containers in a ‘cloud’” an alternative to “hire staff” ?

                                                                                                        1. 7

                                                                                                          Depending on scale, you need to have skills and hours for:

                                                                                                          cloud containers:

                                                                                                          • containerization
                                                                                                          • orchestration
                                                                                                          • cloud networking (high level)
                                                                                                          • cloud security
                                                                                                          • access management

                                                                                                          physical hardware in a datacenter:

                                                                                                          • hardware build/buy, deploy, monitoring, maintenance
                                                                                                          • network setup, deploy, management, monitoring (low-level)
                                                                                                          • security
                                                                                                          • access management

                                                                                                          If you think that one of these requires skills and hours you don’t currently have, and you do for the other, then you need to hire people.

                                                                                                          1. 9

                                                                                                            Ah yes, the old “it’s the cloud or break ground on your own datacenter, there’s no in between” trope.

                                                                                                            1. 20

                                                                                                              That’s uncharitable. Everything I attributed to “physical hardware in a datacenter” applies equally to renting rackspace from an existing colo provider… which is what my employer does.

                                                                                                              You can also lease servers from many datacenters, pay them for deployment, and pay them for networking.

                                                                                                              1. 11

                                                                                                                It took me a while to figure out what parent is getting at but I think it’s a matter of walking a few miles in young people’s shoes. All this is happening in 2023, not 2003. Lots of people who are now in e.g. their late twenties started their careers at a time when deploying containers to the cloud was already the norm. They didn’t migrate from all that stuff in the second list to all that stuff in the first list, they learned all that stuff in the first list as the norm and maybe learned a little about the stuff in the second part in school. And lots of people who are past their twenties haven’t done all that stuff in the second list in like ten years. Hell, I could write pf and iptables rulesets without looking at the man pages once – now I’m dead without Google and I woke up to find nftables is a thing, like, years after it was merged.

                                                                                                                It’s not a dying art (infrastructure companies need staff, too!) but it’s a set of skills that software companies haven’t really dealt with in a while.

                                                                                                                1. 2

I’m actually more skilled in running servers than containers. My company is transitioning to the cloud and I’m getting the crash course on The New Way. Docker and Dockerfiles are currently the bane of my existence.

But I can’t ignore that containers allow a level of automation that’s difficult to achieve with virtual or physical servers. Monitoring is built in. Monit or systemd configs aren’t needed anymore; they’ve been replaced by various AWS services.

                                                                                                                  And frankly, we can push the creation of Docker images down the stack to experienced developers and keep operations headcount lower.

It’s more efficient to hire a developer like me who works part time on devops than to hire a developer and a devops person.

                                                                                                                  1. 1

I’m 100% not an infra guy so I’m probably way off, but my (possibly incorrect) expectation is that a company that’s running cloud-hosted services deployed in containers & co. at the moment would also deploy them in containers on non-cloud infrastructure. I mean, regardless of whether that’s a good idea in technical terms (which I suspect it is, but I have no idea), it’s probably the only viable one, since hardly anything can be built and run in another environment today. IMHO you’d need people doing devops either way. Tooling may be “just” a means to an end, but it’s inescapable and we’re stuck with the ones we have no matter what we run them on.

                                                                                                                    That’s probably one reason why gains like the ones the author of the article wrote about are currently accessible only to companies running large enough and diverse enough arrays of services, who probably need, if not super-specialised, at least dedicated staff to manage their cloud infrastructure. In that case, you’re ultimately shifting staff from one infrastructure team to another, so barring some initial investments (e.g. maybe you need to hire/contract a network infra expert, and do a lot of one-off work like buy and ship cabinets and the like), it’s mostly a matter of infrastructure operation costs.

                                                                                                                    Smaller shops, or at least shops with less diverse requirements and/or lighter infrastructure requirements that can be (mostly?) added to the developers’ plates aren’t quite in the same position. In their case, owning infrastructure (again) probably translates into having a full-sized, competent IT department again to keep the wheels spinning on the hardware that developers deploy their containers on. So they’d be hiring staff again and… yeah.

                                                                                                                2. 1

                                                                                                                  I mean, there are other options where you rent VMs or even physical servers, but those require additional skills as well that you have to hire for. If you’re alluding to a PaaS then you won’t need additional headcount, but you may well be spending more for your resources than you would in the cloud.

                                                                                                                  1. 3

                                                                                                                    I’m coming at this with quite a bit of grey in my beard, but it makes me profoundly uncomfortable to think that the folks who are responsible for all of the cloud bits that “dsr” outlines would be uncomfortable handling the physical pieces. I get that it’s a thing, but having started from the other side (low-level), the idea that people are orchestrating huge networks without having ever configured subnets on e.g. an L3 switch… that freaks me out.

                                                                                                                    1. 4

                                                                                                                      Fun, isn’t it? I don’t (usually) feel like I’ve been at this that long, but a lot of fundamentals that I’d have expected as table stakes have been entirely abstracted away or simplified so much that people starting today just aren’t going to need to know them. (Or, if they do, are going to need a big crash course…)

                                                                                                                      OTOH I spend a lot of my time realizing that there’s yet another new thing I need to learn to stay current…

                                                                                                                      1. 3

                                                                                                                        I feel attacked xD

More seriously, I love programming, but years of family and friends asking me to help with their network issues over the phone or by text completely killed my will to do this kind of configuration.

The exception being terraform; I was pleasantly surprised by how satisfying it was to be able to declare what you want and inspect the plan before executing it. But that’s still pretty high-level I guess…

                                                                                                                    2. 1

I think that even when colocating, you still need some extra level of expertise. There are definitely plenty of people who can get by with cloud hosting but would be overwhelmed by the issues that come with managing the hardware.

                                                                                                                      I think that if you have people in a team with that skillset, though, then it’s a different calculus. But it’s hard to overstate how little you have to think about the hardware with cloud setups. I mean you gotta decide on some specs but barely. And at least in theory it lets you ignore a level of the stack somewhat.

                                                                                                                      Most companies are filled with people who are merely alright at their jobs, and so when introducing a new set of problems you’re looking at pulling in new people and signing up for a new class of potential problems.

                                                                                                                  2. 2

You need slightly fewer people if you don’t have servers (virtual or otherwise) to monitor and maintain.

As annoying as I’m finding The Cloud, containers natively support automation in a way servers do not. Linux automation isn’t integrated; it’s bolted on after the fact.

                                                                                                                    It’s easy to mistake something you’re familiar with as being simpler than something you aren’t.

                                                                                                                1. 2

                                                                                                                  Multiple artists per track. I dislike having metadata like “(feat. So-and-So)” in the track name. I’m currently just deleting that info altogether, but I believe it’s possible to represent it in metadata by having multiple “Artist” tags. I’m just not sure how that shows up in music players. Do they use it at all? Will they show the track under the contributing artist’s file-browser entry as well?

                                                                                                                  What I do is delimit all artists with ; and keep their role (i.e. feature/performer/etc). So an artist would be A;B;C feat D;E.

                                                                                                                  Some music players support 1:many album:artist relationships. I’ve not had much luck with support on Linux music players; coincidentally last night I was looking for one to fork support into. On Android, Poweramp supports custom delimiters, so I have ;, feat., and a few others set as artist delimiters for a 1:many relationship.
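If it helps, Vorbis comments (FLAC/Ogg) allow a field to repeat, so a tagger like mutagen can store one ARTIST entry per contributor instead of a delimited string; a minimal sketch, assuming a FLAC file:

```python
# pip install mutagen
from mutagen.flac import FLAC

f = FLAC("track.flac")
# One ARTIST entry per contributor; players with 1:many artist
# support will file the track under each of them.
f["artist"] = ["A", "B", "D"]
f.save()
```

ID3 is messier: v2.4 technically allows multiple TPE1 values, but player support varies, which is presumably why delimiter conventions persist there.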

                                                                                                                  Likewise, I hadn’t heard of accuracy-oriented rippers like EAC and morituri/whipper, or about the practice of ripping whole albums into one file and using “CUE files” alongside them to denote track boundaries. Am I missing out? Do I need to start over again?

                                                                                                                  I used to care about this, but now I’d rather enjoy the music and not worry about a rare bit error that I probably won’t even notice.

                                                                                                                  1. 1

                                                                                                                    Oh cool; I didn’t even know delimiters within a field were an option! Thanks!

                                                                                                                    I used to care about this, but now I’d rather enjoy the music and not worry about a rare bit error that I probably won’t even notice.

                                                                                                                    Yeah, in general I agree with that sentiment, but I started wondering if that kind of error might be the cause of some very noticeable “blips” I hear occasionally in some tracks (could be from something else, of course)… just got me thinking about it.

                                                                                                                    1. 2

                                                                                                                      If you hear it consistently on playback of the same track, it’s a source error of some kind. If it’s not consistent at the same spot on the same track, it’s a playback error of some kind.

                                                                                                                      1. 1

                                                                                                                        Yeah I’m hopeful that it may just have been an artefact of the quick spot-checking I was doing by playing and skipping around a few sample tracks in VLC, rather than importing them into a more dedicated music player.

                                                                                                                  1. 7

                                                                                                                    It sounds like we have even forgotten about the continuous amnesia issue.

                                                                                                                    What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only the semblance of wisdom, for by telling them of many things without teaching them you will make them seem to know much while for the most part they know nothing. And as men filled not with wisdom but with the conceit of wisdom they will be a burden to their fellows. — Socrates, on the invention of writing ~2500 years ago

                                                                                                                    I was also very disappointed when the author didn’t share his thoughts on reusability… I think we (as a discipline) have learned and retained quite a lot about reusability…! So much that most new programmers are not even aware of the reuse anymore.

                                                                                                                    Long long (long) gone are the days of learning to program by first copying the code for a usable text editor from a magazine into the computer.

                                                                                                                    1. 5

                                                                                                                      In every era you can find people making the same complaints: young people have failed to learn what is already known, young people are disrespectful and dissolute, every year is worse, society is collapsing, and new media are making us into morons.

And yet, largely speaking, humanity’s collective abilities to produce food, heal injury and disease, communicate over long distances, preserve knowledge, and kill each other mostly increase rather than decrease.

                                                                                                                      1. 4

                                                                                                                        In every era you also find people responding to any longer term observation of decline that it has always been thus and that there is nothing to worry about.

                                                                                                                        I know this:

                                                                                                                        • coworkers seem less and less interested in building up skills, and maintainable code, and expect to ditch whatever they’re working on in 2-3yrs

                                                                                                                        • enormous amounts of lessons from the desktop era have been lost. zoomers don’t know any of this stuff and have to discover the hard way that replicating serious app UIs in browsers is real work

                                                                                                                        • compilation and tooling is still slow as balls, despite computers having gotten much, much faster.

                                                                                                                        • relying on an IDE to codegen imports and boilerplate will create a codebase where you can’t find anything and every file is huge

                                                                                                                        • people keep thinking they can write O(n^2) state transitions better and faster than they can write correct O(n) intent, and they keep being wrong.

                                                                                                                        1. 4

                                                                                                                          Addressing your first point: the tenure of an awful lot of software devs seems to be 2-3 years. Then they get a new job, for one or more of the following reasons:

                                                                                                                          • the company folded or had layoffs
                                                                                                                          • the company has a policy of not giving competitive raises
                                                                                                                          • companies think 2-3 year stints are normal
                                                                                                                          • that’s the amount of time needed to dig their way into trouble
                                                                                                                          • they have acquired enough resume points to find a better-paying position

                                                                                                                          All together, it seems to be a reasonable personal choice which is terrible for the world.

                                                                                                                          1. 1

                                                                                                                            Yes but for the most part it was ever thus. I think we genuinely haven’t figured out yet how best to propagate knowledge within our field.

                                                                                                                      1. 5

                                                                                                                        In the real world, the restaurant hangs up on you after 1-3 minutes of waiting, asking you to call back when you have all your prerequisites in place, because they understand queueing better than you do.

                                                                                                                        If you have these three people to schedule, break yourself into three parts (parallelization) and call all of them at the same time. They all indicate busyness, so you ask each one to call you back (asynchronously), as long as it isn’t more than 15 minutes from now. Then you sit around, possibly doing other things, until you get conversations with all three or your fifteen minute timer expires and you send the “sorry, maybe next time” message.
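That protocol maps almost directly onto async primitives; a toy sketch in Python’s asyncio (names and delays invented, seconds standing in for minutes):

```python
import asyncio
import random


async def wait_for_callback(name: str) -> str:
    # Stand-in for "ask them to call you back": each friend replies
    # after an unpredictable delay.
    await asyncio.sleep(random.uniform(1, 20))
    return f"{name} can do lunch"


async def schedule_lunch() -> None:
    # Parallelize: "call" all three at the same time.
    tasks = [asyncio.create_task(wait_for_callback(n))
             for n in ("Avi", "Barb", "Cindy")]
    # Then wait at most "15 minutes" for all the callbacks.
    done, pending = await asyncio.wait(tasks, timeout=15)
    if pending:
        for t in pending:
            t.cancel()
        print("sorry, maybe next time")
    else:
        for t in done:
            print(t.result())


asyncio.run(schedule_lunch())
```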

                                                                                                                        And then you schedule lunch for next week in advance, making a proper reservation at a restaurant. If the dining philosophers can all agree on a protocol, nobody has to starve.