Threads for teh_codez

  1. 6

    My issue is that 99.99% of the time I’m mindlessly using a tiny fraction of git - clone, branch, add, commit, push. So 99.99% of the time I don’t need to understand git, think about how it works, or remember the rest of its capabilities. If I only ever use the first page of the manual, why would I have a refresher read of the remaining 300 pages every six months on the off chance I need to do something out of the ordinary? There should be two tools: everyday-git, and chaos-git.

    1. 3

      Git was designed for a world most engineers don’t inhabit. One of the core goals is to allow distributed, peer-to-peer sharing of commits. This makes sense when you work on the Linux kernel. In that world, they routinely test WIP patches shared via a mailing list. But that’s not the world most of us live in. Usually, you just want to make some changes, have them reviewed, and merge them. Merging is a pain no matter what system you use, since you have to choose how to do the operation. That’s why it’s common for people to hold off on making big changes if they know another colleague is working on the same files.

      So Git makes possible what most of us don’t care about (distributed/peer-to-peer sharing of commits), and can’t help with what makes version control complicated (merging) because it’s just essential complexity.

      It would be useful if someone made a system where merging was as simple as possible - perhaps by tracking edits from the editor itself and chunking them logically (instead of “mine or theirs”, something like “keep these extra logs I added, but also take their changes that add retries”).

      It doesn’t help that the most popular OSS repository, GitHub, is so tied to this technology.

      1. 3

        It doesn’t help that the most popular OSS repository, GitHub, is so tied to this technology.

        GitLab is also tied to it - Hg support was specifically rejected IIRC: https://gitlab.com/gitlab-org/gitlab-foss/-/issues/31600

        Throughout the code of both GitLab, Gitaly, and all related components, we assume and very much require Git. Blaming a file operates on the assumption we use Git, merge requests use Git related operations for merging, rebasing, etc, and the list goes on. If we wanted to support Mercurial, we would effectively end up duplicating substantial portions of the codebase. Gitaly helps a bit, but it’s not some kind of silver bullet that magically allows us to also support Mercurial, SVN, CVS, etc. It’s not the intended goal, and never will (or at least should) be. Instead, the goal of Gitaly was to allow us to centralize Git operations, and to shard the repositories across multiple hosts, allowing us to eventually get rid of NFS.

        (This quote was in the context of why adding Mercurial support would be difficult and expensive.)

    1. 7

      This tempest in a teapot seems stirred up by the belief that new warning messages might as well be fatal errors … which … is quite a hot take.

      To sysadmins, a program’s output or its lack thereof is a vital part of its behavior; adding things to the output is anything but harmless, and often forces us to fix the program right away.

      “Forces us to fix the program right away” seems like a Works As Intended, from the perspective of a maintainer.

      1. 7

        This author has a long history of similarly-hot takes on the topic of software compatibility and change over time, and their position is consistently hard to distinguish from “other people are not allowed to change their software, but I am still allowed to change mine”.

        1. 1

          I find it pretty easy to distinguish, personally.

        2. 5

          In the specific case of things invoked from shell scripts, I don’t think it’s controversial. UNIX provides streams of text, so any change in the output format is a breaking change. If the tool provided structured output (e.g. JSON) then it would be less of an issue, because tools could easily discard unknown fields, but anything else will be connected to ad-hoc parsers.
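
          To make the “easily discard unknown fields” point concrete, here’s a minimal sketch (the tool output and field names are hypothetical): a consumer that reads only the keys it knows keeps working when a field is added, while an ad-hoc text parser is coupled to the whole line format.

            import json

            # Hypothetical v1 output of some tool:
            v1 = '{"matches": 3, "file": "log.txt"}'
            # The same tool after adding a field; a consumer that only reads
            # known keys never notices, while a line/column parser would break:
            v2 = '{"matches": 3, "file": "log.txt", "warnings": ["obsolete flag"]}'

            for line in (v1, v2):
                record = json.loads(line)
                print(record["matches"], record["file"])  # unknown fields are ignored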

          Sometimes making a breaking change is the right call. Supporting legacy behaviour can be a significant maintenance burden. That needs to be weighed against the cost to the downstream consumers of the break. In the case of two self-contained, trivial, shell script wrappers that are unlikely to ever need modification, the cost is immeasurably close to zero and so any non-zero breakage downstream should be enough of a cause to stop.

          1. 11

            Lumping stdout and stderr together as “the program’s output” without any semantic difference (“any change in the output format is a breaking change”) flies in the face of all my experience using UNIX.

            stdout, sure, I’ll accept that as a versioned API. It’s the result of the requested process; it shouldn’t be more than that.

            But stderr isn’t and hasn’t ever been. Errors are an unbounded and fractal set. Most programs pass stderr to their subprocesses. Even glibc prints messages on stderr!

            […] two self-contained, trivial, shell script wrappers […]

            Not important at all, but I thought GNU grep changed its behaviour based on argv[0]?

            1. 3

              Lumping stdout and stderr together as “the program’s output” without any semantic difference (“any change in the output format is a breaking change”) flies in the face of all my experience using UNIX.

              It’s common to treat ‘any output on stderr’ as an error. For example, ctest has a mode that treats output on stderr as a test failure, so any test using egrep in a pipeline processing its output will now fail.

              Not important at all, but I thought GNU grep changed its behaviour based on argv[0]?

              Nope, see the article. It used to, but that was changed years ago, and the wrapper scripts were added for backwards compatibility. The new change added the error message to the output from these scripts. It was more effort to add the deprecation notice than to do nothing and keep the wrappers in perpetuity, all because of a belief that POSIX should be proscriptive rather than prescriptive and descriptive.

              1. 5

                It’s common to treat ‘any output on stderr’ as an error.

                I’ve seen ‘any non-zero exit code’ treated as an error, but this is the first time I’ve run into this culture.

                For example, ctest has a mode that treats output on stderr as a test failure, so any test using egrep in a pipeline processing its output will now fail.

                A test framework seems like a very special case, one which I hope any developer or maintainer would be happy to hear broke. This example feels like it would be tagged as Works As Intended in the grep bugtracker.

                […] a belief that POSIX should be proscriptive rather than prescriptive and descriptive.

                I’m but a simple journeyman who long ago accepted that APIs change in arbitrary and usually undocumented ways.

                1. 5

                  It comes from sysadmin culture. It’s important to tune cron jobs so that they are silent in normal operation, and any output is an alert that needs attention and debugging.

                  When writing scripts, some programs can be uncooperative and fail to set their exit status reliably, so it’s good practice to also check stderr. This is usually only necessary for programs that are not designed to be scripted. It’s a nasty turn for tools that have been mainstays of scripting in unix for 45 years to suddenly become uncooperative.
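
                  A minimal sketch of that convention, with a hypothetical backup.sh standing in for the real job - the wrapper alerts on a non-zero exit or on any output at all, since either means something needs attention:

                    import subprocess, sys

                    # Run the job silently; anything other than "exit 0 and no output"
                    # is treated as a condition worth alerting on.
                    # ./backup.sh is a hypothetical cron job, not a real tool.
                    result = subprocess.run(["./backup.sh"], capture_output=True, text=True)

                    if result.returncode != 0 or result.stdout or result.stderr:
                        sys.stderr.write("job needs attention:\n")
                        sys.stderr.write(result.stdout + result.stderr)
                        sys.exit(1)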

            2. 3

              I agree, but this is yet another example of “there’s no such thing as plaintext”.

            3. 1

              “Fix” implies something was broken before.

              1. 1

                It seems to me that the maintainers believe they made a fix. You’re welcome to both your own opinion and choice of software.

                1. 1

                  Where do the maintainers indicate that they fixed something?

                  1. 1

                    The release notes?

                    1. 1

                      What part of the release notes? Can you please quote the section which says this change is supposed to fix something that was broken?

                      Because the only text I see that’s related to egrep and fgrep is:

                      The egrep and fgrep commands, which have been deprecated since release 2.5.3 (2007), now warn that they are obsolescent and should be replaced by grep -E and grep -F.

                      Adding a warning when using deprecated functionality is a perfectly valid change in a lot of contexts, but it’s not a fix. It’s not marked as a fix, the maintainers don’t say it’s a fix, it’s not a kind of change which is usually considered a fix, it’s not even something which can be interpreted to fall under any colloquial definition of the word “fix”.

            1. 7

              This is a great article! Looking at it from a security perspective, modern hardware is an absolute dumpster fire: whom do you have to trust for the damn thing to even boot correctly, let alone do anything useful?

              1. 8

                Looking at it from a security perspective, modern food is an absolute dumpster fire: whom do you have to trust for the damn stuff to not kill you, let alone provide any nutritional value?

                1. 3

                  Touché. I believe there’s some nuance here (the backlash when bad things happen in food safety is much stronger, and the supply chain is generally pretty well understood), but in essence this could probably be said about any modern $THING. But I reject the idea that it’s a necessary complexity, especially in computers, since we control the thing down to the atoms used to make up the transistors.

                  1. 2

                    But I reject the idea that it’s a necessary complexity, especially in computers, since we control the thing down to the atoms used to make up the transistors.

                    I personally would agree with you.

                    However, this becomes necessary complexity when we look at the progress people want in their lives. Inventors of tomorrow begin with today’s tech stack, not yesterday’s, which means we keep assuming that everything present today must be there. Just like the XKCD comic, we don’t ever ask why one tiny rectangular block is needed to hold everything up; we just start at the top and keep going. It works fine when things are fine, but you’re only adding more pieces to troubleshoot when things break.

                    The only way I see this ending is when customers realize building on the existing tech stack isn’t a quicker means to progress but an inherent liability, and start paying for simplicity. This isn’t free, though: the customer will be making tradeoffs that I just don’t see our current culture writ large wanting to make (I have to install dependencies and not just docker up?! What is this, 2005?).

                    Think of the current trend towards touch screens in cars that was discussed here a while back. It lets car manufacturers produce a common part for more cars (lowering costs) and it looks slicker (getting more people to want it), but it reduces the driver’s ability to interact with the controls without looking (decreasing attention to the road), is far more likely to have issues than a physical knob (which doesn’t need a few thousand lines of code to control my volume), and potentially opens up attack surface. So we can see this pattern of obscuring all the things you’re building on and assuming they work, from the chips up through entire systems.

                    1. 4

                      Many years ago, I was at a talk by Alan Kay, where he described progress in computing as a process of adding new abstraction layers on top and then collapsing the lower ones into thinner things to support the new tops. I always felt that he was overly (and uncharacteristically) optimistic, since I’ve seen a huge number of cases of people doing the first step of this and very few of the second.

                      1. 2

                        Part of the problem is if you collapse a lower layer, people come complaining because they were using it for something. Look at something like PGP vs. age. Age is neat, but the people who use PGP aren’t going to stop using it just because age exists, since it isn’t exactly the same as PGP, so it doesn’t do quite the same things. Better is different, and different is worse, so better is worse. :-)

                        1. 1

                          There’s also a good talk by Bryan Cantrill about what this does to a system’s debuggability, and it isn’t great…

                          Found it

                        2. 2

                          Inventors of tomorrow begin with today’s tech stack, not yesterday’s, which means we keep assuming that everything present today must be there.

                          It’s not an assumption, it’s a chicken/egg problem - you write for the platform where the users are, not the platform that’s good.

                          Suppose you’re writing a commandline application. Commandlines mostly run in terminal emulators, which are literally emulating a specific piece of 1970s hardware. It’s why the Helix Editor (a vim-like (or rather, Kakoune-like) editor written in Rust starting in ~2021, very modern) can’t detect ctrl-/, and currently requires you to bind to ctrl-7 as a workaround: ctrl-/ emits the same keycode as ctrl-7, so the editor can’t help but interpret ctrl-/ as ctrl-7.
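
                          You can see the collision with a few lines of Python (a sketch, assuming a classic VT-style terminal): in raw mode both keys arrive as the single byte 0x1f, so nothing downstream can tell them apart.

                            import os, sys, termios, tty

                            # Read raw bytes from stdin: Ctrl-/ and Ctrl-7 both arrive as 0x1f
                            # in a traditional terminal, so they are indistinguishable here.
                            fd = sys.stdin.fileno()
                            old = termios.tcgetattr(fd)
                            try:
                                tty.setraw(fd)
                                sys.stdout.write("press Ctrl-/ or Ctrl-7, q to quit\r\n")
                                while True:
                                    b = os.read(fd, 1)
                                    sys.stdout.write(f"got byte 0x{b[0]:02x}\r\n")
                                    if b == b"q":
                                        break
                            finally:
                                termios.tcsetattr(fd, termios.TCSADRAIN, old)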

                          Anyway, point is that the terminal emulator is garbage. Suppose you want to write a commandline program that’s not reliant on the terminal?

                          Well, who’s actually going to use the program? Arcan users? Arcan is awesome, but I don’t know if there are any actual real-world users, and if there are, they’re a niche within a niche.

                          But why does everyone use the terminal emulator in the first place? Well, this is sounding awfully like a page on the osdev wiki, but basically it’s because it’s “pragmatic” and serious distros “shouldn’t try to boil the ocean” and such. Or perhaps more cynically, it’s because nobody prioritizes it highly - the point of a platform isn’t to be good, it’s to be available for users to do things they care about.

                  1. 18

                    I feel like the comment about minifiers is a bit of a non-issue, as comments are generally stripped out of minified CSS bundles anyway.

                    1. 4

                      Which is often done unsafely, because comments can contain licensing info that needs to persist.

                      1. 1

                        I guess the ideal solution would be if it “simply” detects the relevant license, then shortens it to a specific license identifier (like $GPL3 or whatever)? It’d have to skip minifying uncommon licenses/modifications of licenses and repeat them verbatim, but that’s fine because uncommon licenses are uncommon.

                        1. 1

                          Some minifiers look for @license or /*! (or //! in our hypothetical case) as indicators to not remove those sorts of comments. Practically though, with Brotli compression (which should be enabled), the size cost of duplicate license info should be negligible.
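
                          Roughly, that convention looks like this - a sketch, not any particular minifier’s actual implementation; the regex and sample are illustrative:

                            import re

                            css = "/*! MIT license - keep me */ body{color:red} /* drop me */ p{margin:0}"

                            def strip_comments(css: str) -> str:
                                # Remove /* ... */ comments that neither open with /*! nor
                                # contain @license; everything else is left untouched.
                                return re.sub(r"/\*(?!!)(?:(?!@license).)*?\*/", "", css, flags=re.S)

                            print(strip_comments(css))
                            # -> /*! MIT license - keep me */ body{color:red}  p{margin:0}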

                          This is one of my biggest beefs with folks choosing SVGO over Scour: Scour will hold onto your license info by default rather than stripping it, while stripping could easily put you in violation of Creative Commons licenses (think folks using Font Awesome for SVG icons (don’t ever use actual font icons), which requires CC BY). SVGO calls this “editor data” for some strange reason and strips it by default, and the project is so deeply embedded in npm dependency chains that a lot of folks use it and never (or can’t) configure it.

                      2. 2

                        Why is CSS minified anyway? Wouldn’t compression already do a good job of reducing the transfer size of CSS in transit?

                        1. 1

                          True, gzipping is more effective, but in practice you still get a bit more out of doing both - though not dramatically more than just gzipping.

                          https://css-tricks.com/the-difference-between-minification-and-gzipping/
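
                          A self-contained way to sanity-check that: the “minification” below is just a crude comment-and-whitespace pass, not a real minifier, but it shows gzip closing most (not all) of the gap.

                            import gzip, re

                            # Repetitive CSS, as real stylesheets tend to be:
                            css = "/* demo */\nbody {\n    margin: 0;\n    color: #333;\n}\n" * 200
                            # Crude "minification": drop comments, collapse whitespace.
                            mini = re.sub(r"\s+", " ", re.sub(r"/\*.*?\*/", "", css, flags=re.S))

                            for name, text in [("original", css), ("minified", mini)]:
                                raw = text.encode()
                                print(f"{name}: {len(raw)} bytes raw, {len(gzip.compress(raw))} gzipped")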

                      1. 26

                        The important bit about composability and extensibility is that composability is extensibility at the next level up. Kakoune is a plug-in for the unix shell/terminal, and the terminal is your extensible world.

                        So, you don’t “prefer composability”, you rather select an extensible layer: is your editor extensible? Your terminal? Your browser? Your kernel?

                        And the problem I have here is that the shell/terminal combo is an engineering abomination. It gets the thing done, but 99.9% of the explanation for that is pure inertia, rather than a particularly good design. It makes no sense for my software to emulate hardware which was obsolete before I was born for basic programming tasks. Especially if parts of the emulation live in the kernel.

                        Curiously, this is an area which sees very little experimentation from the community: everyone is writing their terminal editor, or terminal emulator, or shell language, building on top of XX-century APIs.

                        Off the top of my head, I can name only two projects which think along the lines of “what if there’s something better than a terminal” and try to build platforms to build composable software on top: Emacs (which isn’t exactly XXI century either) and Arcan (for which I am still waiting for the day I can nix run nixpkgs#cat9 https://github.com/letoram/cat9).

                        1. 4

                          That’s an interesting take. Could you expand on the part about emulating obsolete hardware? What hardware is being emulated by the shell? My understanding of the shell/terminal is that it offers a simple and efficient interface for interacting with a computer system. It provides a text-based command line interface that allows users to execute commands, automate tasks, and manipulate data in a straightforward manner. The idea of it emulating hardware is foreign to me, so I’d like to learn more about that.

                          1. 18

                            To try and offer a different explanation / mental model:

                            Take the ASCII table (and by extension Unicode, since ASCII is a subset of it), https://www.asciitable.com/, the predominant encoding scheme for what is often, and dangerously, referred to as “just text”. It is actually about Information Interchange – the abbreviation expanded even says as much.

                            Notice how a fair portion of the first half of the bitpatterns do not, in fact, encode symbols to carry the written word, but some rather peculiar choices, like DEL – why would a text encoding scheme have processing instructions for stripping away its own data?!

                            The common use for these entries is as a variable length instruction set intended for a machine to plow through. That is what is nowadays ‘emulated’, but this exact machine hasn’t existed for a grand long time, if ever, as it is a compromise. If you would follow the instructions strictly and try to run a modern program (“text file”), the results would likely be very unpleasant unless it happens to be “just text”.

                            The table itself is far from complete enough to reflect the capabilities of the hardware (terminals) that were out there, yet people wanted their “just text” programs to do more. Some terminals only had a humble dot matrix printer as output, others glorious vector graphics CRTs. Do you see any instructions in the table for moving a turtle around a grid, drawing lines?

                            So through creative use of sequences of the less relevant bitpatterns from this table, you could invent such instructions. People did. A lot of them. A few got “Standardised” as ANSI, but far from all of them, and it hasn’t exactly been kept in sync with modern times. Nowadays you have (several) full display server protocols’ worth of them. There is clipboard, program execution, synchronisation, authentication, window management, scrolling, cursor control, history control and so on. Many of them use the ESC entry as a prefix, hence the colloquial “ESCape sequences”. All this inside humble “just text”. Here is a rough state machine just for >decoding< some of the instructions out there: https://vt100.net/emu/dec_ansi_parser
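
                            To make “instructions hidden inside just text” concrete, a tiny sketch; the ESC-prefixed sequences are standard ANSI/ECMA-48 codes that a VT-style emulator executes rather than displays:

                              import sys

                              # The same byte stream carries both symbols and instructions:
                              sys.stdout.write("plain text\n")
                              sys.stdout.write("\x1b[31mred text\x1b[0m\n")  # SGR: set/reset foreground colour
                              sys.stdout.write("\x1b[2J\x1b[H")              # ED + CUP: clear screen, home cursor
                              sys.stdout.write("printed after the clear\n")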

                            That’s just emulating the instruction part. Then we have another dimension of attributes for how these instructions should be interpreted, to mimic variability in hardware not captured by the instructions: not all keyboards carried luxuries like a delete key, some had funky states like “num lock” and “print screen”, and other bizarreness. Not all hardware could communicate at the same speed. See the manpage for ‘stty’ for how inputs should affect the generated code.

                            Then we have data stores with bizarre access permissions (environment variables) that are also used to pick and choose between overlays from an external capability database (that is TERMCAP/TERMINFO: why things behave differently when you set TERM=xterm vs. TERM=linux vs. TERM=doesnotexist). To this day, the often-used abstraction library (n)curses has an estimator that uses your capability description to infer an instruction set, combines that with the pending drawing operations, and bins against the baudrate of your virtual device to reduce the chance of partial display updates.

                            Then we have Interrupts and Interrupt Service Routines(!), often overlooked just like stty, as you don’t see them in STDIN/STDOUT - i.e. signals and signal handlers. SIGWINCH (Window Changed) tells you that the display magically changed properties. I am just going to leave this here: http://www.rkoucha.fr/tech_corner/sigwinch.html
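
                            A minimal sketch of that last one: the kernel delivers SIGWINCH when the (virtual) display changes its properties, and the program re-queries the geometry out of band - none of it arrives on STDIN.

                              import signal, shutil, time

                              def on_winch(signum, frame):
                                  # Re-query the new geometry; nothing about it came in via stdin.
                                  cols, rows = shutil.get_terminal_size()
                                  print(f"window changed: {cols}x{rows}")

                              signal.signal(signal.SIGWINCH, on_winch)
                              print("resize the terminal window; Ctrl-C to quit")
                              while True:
                                  time.sleep(1)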

                            This is a baseline for terminal emulation. We still haven’t gotten to the interactive shells, multiplexers or network translation (i.e. ssh).

                            TLDR; This is absolutely not simple. It is absolutely not efficient.

                            1. 6

                              Here’s a good article about it - https://www.warp.dev/blog/what-happens-when-you-open-a-terminal-and-enter-ls

                              All these features were originally provided by hardware terminals, which are big boxes with keyboards and screens, without computers:

                              • hitting Enter – i.e. you enter a line on a hardware input device, AND it’s buffered there while you type. When you hit enter, it’s sent to the actual computer !
                                • this also predates screens – teletypes were basically “interactive printers” as far as I understand. Like you got physical paper for everything you type !!!
                              • control codes like Ctrl-C and Ctrl-Z - send messages to the computer from the hardware terminal
                              • ANSI color (e.g. ls --color)
                              • ANSI sequences to go back and forth (left arrow, right arrow)
                              • clear the screen

                              Also my understanding is that curses was actually extracted from Vi. These kinds of apps also ran on hardware:

                              https://en.wikipedia.org/wiki/Curses_(programming_library)

                              You can think of it as isomorphic to the web:

                              • On the web you send HTML from the server to the browser, which formats and displays the page. Different browsers will format things slightly differently. You can choose your font.
                              • In mainframe computing, you send ANSI escape codes to the terminal device, which formats and displays the screen. Different terminals will format things slightly differently. You can choose your font.

                              There is a program/database called termcap that papers over the historical differences between all the terminals manufactured by different vendors.

                              A big difference is that hardware terminals weren’t programmable, and software terminals today (xterm) aren’t programmable either. That is, there’s no equivalent of JavaScript for the terminal.

                              Also web browsers started out as software, but something like a ChromeBook makes it a little more like hardware. The Chromebook is intended to be stateless and even non-programmable, with all your apps on the server. Just like a terminal is stateless.

                              1. 2

                                I’m not as well versed in this, but I can give some input:

                                See for example the wikipedia entry for terminal emulator or computer terminals

                                Most of this is the background for what you call a terminal in combination with a shell, which is also where the original color and symbol limits stem from. “Command symbols” are basically just an answer to the question of how you get your data together with a display command in one stream, when all the system can do is shove bytes around and display them as they come in. This comes from the origin of having many non-smart interfaces to one mainframe / system, with multiple users and a serial connection. For example, the VT100 is such an old piece of original hardware.

                                Then you build a ton of de facto standards which all new programs need to implement to be actually compatible with what real programs expect, and you end up with what we have (see also bash/sh) - for example, problems detecting the terminal capabilities.

                                1. 1

                                  This bit of hardware in particular:

                                  https://en.m.wikipedia.org/wiki/VT100

                                  The terminal emulator, the kernel, the shell, and the program running in the shell all pretend that there’s a “VT100 with more features”. That’s why we have the terminal emulator / shell split, and the emulator part at all.

                                  https://poor.dev/blog/terminal-anatomy/ explains how all that works specifically

                                  1. 1

                                    Helix supports the enhanced keyboard protocol, so at least there is some work happening to reform the terminal, if not replace it: https://sw.kovidgoyal.net/kitty/keyboard-protocol/
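
                                    For the curious, a sketch of the protocol’s “progressive enhancement” handshake, going by the linked spec (only terminals that implement it will respond, and the exact reply bytes may vary): push a keyboard mode with CSI > 1 u, and keys like ctrl-/ start arriving as distinct CSI … u sequences instead of ambiguous control bytes.

                                      import os, sys, termios, tty

                                      fd = sys.stdin.fileno()
                                      old = termios.tcgetattr(fd)
                                      try:
                                          tty.setraw(fd)
                                          sys.stdout.write("\x1b[>1u")    # push: disambiguate escape codes
                                          sys.stdout.flush()
                                          data = os.read(fd, 32)          # ctrl-/ should now arrive as \x1b[47;5u
                                          sys.stdout.write(f"pressed: {data!r}\r\n")
                                      finally:
                                          sys.stdout.write("\x1b[<u")     # pop the keyboard mode again
                                          sys.stdout.flush()
                                          termios.tcsetattr(fd, termios.TCSADRAIN, old)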

                                2. 4

                                  Had I not been so stubborn about separating durden/pipeworld/console (wm models), core-arcan (desktop server), arcan-tui (the ncurses replacement), lash (shared shell building base) and cat9 (PoC shell) as interchangeable layers, and instead shipped an integrated vertical build, you could have done so by now – with the usual nixpicking when it comes to input device and gpu access. I do not nix myself, though there are a handful on the discord that do (including, ehrm, fully statically linked, software rendering only, on m1 mac..) and could perhaps be persuaded ..

                                  In the context of the article and Kakoune’s integration ambition - one arcan dev/user uses zig/arcan-tui for that, see https://github.com/cipharius/kakoune-arcan . I do not think he has completed popups / windows as discrete objects yet though.

                                  1. 3

                                    Yeah, after writing this comment, I realized that Arcan’s state in nixland is better these days. I was able to install & run arcan. I couldn’t get cat9 from nixpkgs to work, but cloning the repo and following the “not for the faint of heart” instructions worked. However, when launching cat9 I noticed that I can’t type the ~ symbol, and the key repeat was super slow (like 1 key press/second), at which point I ran out of my tinkering time quantum.

                                    1. 1

                                      Key oscillation control is up to the WM side (durden needs to toggle it on / off contextually, console does not). It is set quite low so that a jumpy keyboard that performs its own repeat events and doesn’t tag them as such (several users had that problem) can still reach the runtime config.

                                      There is a recent patch for the keymap script for other WMs that allows offline configuration of it:

                                        arcan_db add_appl_kv console keyrepeat n   # n = 25Hz ticks between each rising/falling event edge
                                        arcan_db add_appl_kv console keydelay n    # n = 25Hz ticks before repetition is enabled
                                      
                                    2. 1

                                      I do not nix myself though there are a handful on the discord that do

                                      I always suspected that nixos-nix and *nix-nix was confusion waiting to happen, but I’ve never seen it in practice until now.

                                    3. 1

                                      Off the top of my head, I can name only two projects which think along the lines of “what if there’s something better than a terminal” and try to build platforms to build composable software on top

                                      Oils too, mentioned in the last post:

                                      https://www.oilshell.org/blog/2023/05/release-0.15.0.html#guis-and-the-headless-shell

                                      Linking back to the genesis in 2021:

                                      https://www.oilshell.org/blog/2021/06/hotos-shell-panel.html#oils-headless-mode-should-be-useful-for-ui-research

                                      The main reason it’s not more prominent is because I’m not “driving it” … but if you want it to happen, you should help us with it :)


                                      I will try to publish a demo – there is actually working code on both sides, and it was recently translated to C++ (by Melvin), so I’d say the last remaining barrier is removed. (In the past, you could say Oils isn’t “production quality” because it was in Python, which was indeed slow for our use case. But now it’s pure native code.)

                                      As mentioned in the post, Oils is different than Arcan as it doesn’t require boiling an ocean, only a large lake :) That is, ls --color, Cargo, Rust compiler, and a bazillion other utilities still work in this model.

                                      The shell itself should not require a terminal, but most tools use some kind of terminal support, so they still need terminals.

                                      Another comparison is the Warp terminal. The difference there is that they inject shell code into your bash and zsh, and then parse the terminal codes in order to get “blocks”. I think this probably works fine for most cases, but I’m not sure it’s correct in the presence of concurrency.

                                      In contrast, Oils does this with terminal FD passing over Unix domain sockets, in a simple and principled way.
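
                                      The underlying mechanism is ordinary SCM_RIGHTS descriptor passing over a Unix domain socket. A minimal self-contained sketch (Python 3.9+ for socket.send_fds; a socketpair and a pty stand in for the real GUI/shell split, and this is just the building block, not the actual FANOS wire format):

                                        import os, socket

                                        # GUI and shell ends of a Unix domain socket:
                                        gui, shell = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

                                        # The GUI owns the terminal (a pty here) and sends its child end
                                        # across the socket along with a message:
                                        master_fd, slave_fd = os.openpty()
                                        socket.send_fds(gui, [b"here is your tty"], [slave_fd])

                                        # The shell side receives the descriptor and can hand it to a child
                                        # process (cargo, rustc, ...) while talking to the GUI over the socket:
                                        msg, fds, flags, addr = socket.recv_fds(shell, 1024, maxfds=1)
                                        os.write(fds[0], b"output for the terminal\n")
                                        print(msg, "->", os.read(master_fd, 64))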


                                      So I’d say that Oils is the ONLY shell that can be easily divorced from the terminal. (It will also be the only POSIX-compatible shell that has structured JSON-like data types; Melvin is also working on translating this now, i.e. divorcing it from Python.)

                                      The slogan is that the shell UI should HAVE a terminal, but it shouldn’t BE a terminal.


                                      OK, the original demo screenshots are probably worth a thousand words - https://github.com/subhav/web_shell

                                      So it would be cool if someone (who knows how to program terminals and Unix domain sockets, or wants to learn) could help revive and test it against the fast native implementation of the protocol.

                                      The Go side is here - https://github.com/subhav/web_shell/blob/master/fanos.go

                                      Oils C++ side is here - https://github.com/oilshell/oil/blob/master/cpp/fanos_shared.c

                                      Python too: https://github.com/oilshell/oil/blob/master/client/py_fanos.py

                                      The idea is that the GUI creates a terminal and passes it through the shell to, say, Cargo or rustc. The shell itself does not use the terminal – it can send all its error messages and prompts and completion on a different channel, over the socket.

                                      This gets you the “blocks” feature of Warp terminal very easily and naturally, along with shell history / completion / error messages OUTSIDE the terminal, in a GUI.

                                      So in this way the shell is decoupled from the terminal, but the child processes can still use the terminal. Again, unlike in Arcan FE.

                                      1. 5

                                        Arcan covers all permutations of the concept.

                                        The same binary (afsrv_terminal) that sets up lash can be set to:

                                        1. be only a terminal emulator
                                        2. be only a lash runtime
                                        3. wrap single commands (e.g. a legacy shell), presenting a raw or vt-interpreted rendered view as well as its respective stdio, with arbitrary sets of other inherited descriptors for a shell to output through.
                                        4. the inherited descriptor set from 3 can include the socket channel that arcan-shmif unfolds over; bchunkhint events there come with the same dynamic descriptor-passing mechanism as in your links.
                                        5. spawn new instances of itself in any of the above configurations, which hand over to the outer WM or embed into itself.

                                        1, 3, and 5 had been demoed before in the dream/dawn/day and pipeworld (https://arcan-fe.com/2021/04/12/introducing-pipeworld/) presentations and were thus left out of the lash#cat9 one.

                                        1. 2

                                          OK, I read over the most recent blog post again

                                          https://arcan-fe.com/2022/10/15/whipping-up-a-new-shell-lashcat9/

                                          It does look pretty awesome … The “jobs” idea looks like the Warp “blocks”, which a FANOS-based GUI would also support naturally

                                          Personally I’m spending most of my time on the new shell language, now called YSH (And that naturally goes into self-hosted / multi-cloud distributed git-like data and polyglot computation)

                                          And I would like some kind of GUI for that, but don’t have time to build it myself (or the expertise, really)

                                          But Oils/YSH is now pure C++ and requires only a shell to compile, not even Make

                                          So it should be pretty trivial to drop into any environment

                                          1. 1

                                            Another thought I had is that the demos still look grid-based and terminal-based to me, i.e. more like tmux than a window manager

                                            It’s probably easy for people to gloss over the difference when presented like that

                                            I’m thinking more like something with GUI history like browser history, and then a GUI prompt like the address bar, GUI auto-complete, etc.

                                            I feel like that would open people up to the idea a bit more

                                            1. 1

                                              Partly a backpressure problem - there were already so many features to present that there was no real point in going even further (about 1/4th of readers drop for each ‘section’, with a tail of ‘reads it all’, judging by the video downloads).

                                              The non-uniform grid bits are also not developed enough to really show off, while border drawing now works as cell attributes rasterised outside the grid - there are fancier text rendering nuances that were present when the shell process rasterised the text, which had to be temporarily disabled when moving to server-side text rendering.

                                              The interchange format (TPACK) allows for varying line properties (shaped, RTL) and other annotations. WM deferred popups, scrollbars etc. also works but better shown off in an article explicitly about the APIs rather than an example shell.

                                        2. 1

                                          Off the top of my head, I can name only two projects which think along the lines of “what if there’s something better than a terminal” and try to build platforms to build composable software on top:

                                          What about ACME from plan9 (originally)?

                                          http://acme.cat-v.org/

                                          Personally, RSI rules out mouse (and heavy trackpad) use for me - but it should fit the description of doing something new vis-a-vis TTYs?

                                          Maybe worth mentioning Smalltalk (in general) - and Cuis/Pharo as open source IDEs/editing environments too. And possibly (but maybe not) Racket.

                                        1. 2

                                          I expected bad news, but R9 actually sounds really exciting, and might get me to fool around with plan 9 ecosystem stuff again for the first time in years.

                                          1. 1

                                            As far as I understand it, R9 is not a Plan 9 system…?

                                            1. 2

                                              r9

                                              Plan 9 in Rust

                                              R9 is a reimplementation of the plan9 kernel in Rust. It is not only inspired by but in many ways derived from the original Plan 9 source code.

                                              Sounds like Plan 9 to me.

                                              1. 1

                                                The linked website says:

                                                an OS strongly inspired by Plan 9

                                                1. 2

                                                  It’s not the original Plan 9 source code (so it’s technically not the Plan 9), but it’s a Plan 9. It sounds to me that the differences are basically just in flavour.

                                          1. 3

                                            I see where the author is coming from; it can be very discouraging to see people flock to shitty, broken, privacy-destroying “products” over “projects” whose main sin is being chronically under-funded. It’s easy to say, “well, people just don’t care about their privacy,” or whatever the VC violation du jour happens to be.

                                            But many people - millions of people - are willing to put up with a little bit of technical difficulty to avoid being spied on, being cheated, being lied to in the way that so many SaaS products cheat us, lie to us, and spy on us. It’s a tradeoff, and there is a correct decision.

                                            There are basically two options; I don’t know which one is true. Either:

                                            a. with sufficient funding, regulation, and luck, we can build “products” to connect people and provide services that aren’t beholden to VCs or posturing oligarchs, quickly and effectively enough that the tradeoff becomes easier to make, or

                                            b. we cannot, and capital remains utterly and unshakably in control of the Internet, and the rest of our daily lives, until civilization undergoes some kind of fundamental catastrophe.

                                            From the author’s bio:

                                            Twi works tirelessly to make sure that technology doesn’t enslave you.

                                            I appreciate that work. It’s vital. Unfortunately, I’m pretty sure we’re going to lose this war unless we can implement drastic, radical regulation against VC-backed tech companies on, frankly, completely unrealistic timelines. We’re already living the cyberpunk dystopia; get ready for the nightmare.

                                            1. 5

                                              I think the funding discussion needs to start with what libre projects are. For all his faults, I think this was best summarized by Drew DeVault: Open source means surrendering your monopoly over commercial exploitation. Or perhaps to rephrase, libre software is a communal anti-trust mechanism that functions by stripping the devs of all coercive power.

                                              This is a useful lens through which to view libre software, because plenty of projects are so large that they have a functional monopoly on their particular software stack, which provides some power despite the GPL. Also, “products” (which are UX-scoped) are usually of larger scope than “projects” (which are mechanism-scoped), almost by definition. A lot of libre projects are controversial almost exclusively because they’re Too Big To Fork.

                                              So I think there are two sides to the problem here: decreasing the scope (for the reasons above), and increasing the funding.

                                              Funding-wise, the problem is that 1) running these systems requires money, 2) whoever is providing the money has the power, and 3) users don’t seem to be providing the money.

                                              So, there are three common solutions to this: get the funding from developers (i.e. volunteer-run projects), get the funding from corporate sugar-daddies (either in the form of money or corporate developer contributions), or get the funding from average consumers.

                                              Volunteer-run projects are basically guaranteed to lose to corporations - most devs need a day job, so corporations will largely get their pick of the best devs (the best devs are essentially randomly distributed among the population, so recruiting from the much-smaller pool of only self-funded devs means statistically missing out on the best devs), and typically results in the “free as in free labor” meme.

                                              Corporate funding will, even with the best of intentions by the corporations in question, tend to result in software that’s more suited to the corporate use-cases than to average users’ - for instance, a home server might primarily need to be simple to set up and maintain by barely-trained users, whereas Google’s servers might primarily need to scale well across three continents. This has two effects: first, it increases the scope (which is bad per the above paragraphs) and saps the priorities of the project whenever hard decisions need to be made. Second, it gives coercive power to the money-holder, obviously.

                                              So the last option is getting the funding from the average consumer. Honestly, I think this is the only long-term viable solution, but it basically involves rebuilding the culture around voluntarism. As in, if everyone in the FOSS community provides e.g. a consistent $20/month and divvies it up between the projects they either use or plan to use, then that could provide the millions/billions in revenue to actually compete with proprietary ecosystems.

                                              …or it would provide that revenue, if everyone actually paid up. But right now something like 99% of Linuxers et al don’t donate to the libre projects they use. Why is that?

                                              Well for starters, businesses pour huge amounts of effort into turning interested parties into paying customers, whereas plenty of open-source projects literally don’t even have a “donate!” page (and even if they do have such a page, plenty of those are hard to find even when actively looking for them), let alone focusing on the payment UX.

                                              IMO, there needs to be a coordinated make-it-easy-to-pay project (or should I say “product”?), where e.g. every distro has an application preinstalled in the DE that makes it easy to 1) find what projects you most often use (and also what you want to use in future), 2) set up payment, and 3) divvy it up in accordance to what you want to support.

                                              BTW, I hate the term “donate” (and you’ll notice I don’t use it) because it displays and reinforces a mindset that it’s a generous and optional act, as opposed to a necessary part of “bringing about the year of the linux desktop” or “avoiding a cyberpunk dystopia” or such.

                                            1. 21

                                              I agree it is a scam—slightly strong language but hear me out.

                                              Especially the issue with the protocol is important: if your protocol is open source and decentralized, then a “private beta” period makes no sense. That alone puts the lie to their advertising.

                                              If they advertised themselves honestly, they could make a case that the non-decentralized ID services and limited access are a good thing. For example, Signal has done just that. Their protocol is open source and anybody could implement it, but they don’t make any pretense of federating with other services or even allowing 3rd party apps on their service. Right or wrong, they justify it with the ability to move quickly and fix things without breaking everybody else. Whether you buy that or not, at least they have been up front about why they are not distributed.

                                              Bluesky is a scam because their marketing uses words to gain attention while behind the scenes their fundamental business and development model is a different gig.

                                              1. 11

                                                Especially the issue with the protocol is important: if your protocol is open source and decentralized, then a “private beta” period makes no sense. That alone puts the lie to their advertising.

                                                Counterpoint: Jonathan Blow’s programming language is (AFAICT) open-source yet currently in closed beta, but nobody is calling that a scam. I think Hare did the same thing. My point here is that a closed beta is about temporarily controlling the distribution to only trusted people, and is not mutually exclusive with being open in the long term.

                                                That said, I agree Bluesky is a scam.

                                                1. 6

                                                  Jai is for sure not open-source - I doubt that it will ever be - as Jonathan is not willing to entertain community contributions.

                                                  1. 16

                                                    Sqlite is open source, but doesn’t accept contributions.

                                                    1. 5

                                                      sqlite is in the public domain.

                                                      What I meant by my comment is that even when Jai’s code is made public, it will most likely not be released under a license that allows all the freedoms of the Open Source Definition.

                                                      1. 2

                                                        What makes you say that?

                                                        1. 2

                                                          Watching the streams in which Jonathan works on the compiler. He mentioned multiple times that most likely they’ll come up with their own licence so I was mostly inferring from that.

                                                          I’m not sure how willing he’ll be to allow other entities - and based on the main target demographic of the language, which is game developers, they’ll probably be commercial ones - to repackage/redistribute the compiler.

                                                    2. 13

                                                      Open source does not mean open contribution. It does commonly imply it, but it’s quite literally in the name “open source”.

                                                      1. 13

                                                        There’s never a guarantee of open contribution – a project maintainer can drop your patch on the floor, loudly or quietly – so the real question is always if you have the right to fork your own version?

                                                        Right now the question is moot, since there isn’t any source release at all.

                                                        1. 1

                                                          And, importantly, there aren’t any binaries released either. It’s not a case of “we will open-source this product eventually, we promise; until then, just use our closed-source work”. It’s just not ready yet.

                                                          1. 1

                                                            But there are binaries; they’re released to a restricted (last time he mentioned it, it was about 500 people) closed beta.

                                                1. 8

                                                  I mostly agree, but I think in some cases, this advice oversimplifies. In particular, I think if one thing really just is another thing, it may be appropriate to emphasize that.

                                                  In source code and in our wiki, I have written “The Foo product just is a deployment of our monolith that doesn’t serve normal traffic, only…”

                                                  In that sentence, “just” adds emphasis, and could be omitted, but I think the sentence is better with that emphasis.

                                                  P.S. There’s probably a syntactic/semantic distinction you could draw between the different uses of “just”, but I don’t feel up to trying to draw it.

                                                  1. 6

                                                    P.S. There’s probably a syntactic/semantic distinction you could draw between the different uses of “just”, but I don’t feel up to trying to draw it.

                                                    At a quick glance, I think there’s a relatively clear distinction between just meaning simply or only and just meaning exactly or precisely. (There are also other meanings altogether. E.g., He just made it, where just means barely.) That said, I can imagine cases where it’s hard to decide exactly which use an author intends.

                                                    On a charitable take, the author means for us to (just?) remove the uses that mean simply or only and not others. If I’m reading your example correctly, I think you are using just in the sense of exactly. You’re safe. :)

                                                    1. 4

                                                      There is also a nuance where “just” means “this and no more” e.g. in the Django docs:

                                                      These profile models are not special in any way - they are just Django models that happen to have a one-to-one link with a user model.

                                                      In other words, there is nothing extra going on here. Or, you might have docs that say:

                                                      To enable this feature add just the following items to your config…

                                                      Meaning, “add the following and do not be tempted to do anything extra”.

                                                      This can sometimes contrast with:

                                                      To enable this feature just add the following items to your config…

                                                      Which implies “this isn’t going to be much work, it’s simple”. A native English speaker would understand the difference instinctively, although it’s a very small word order change.

                                                      1. 2

                                                        “Just” basically means “literally only”, so it’s useful in the context of well-defined jargon and precise speaking, but is generally bad when you’re trying to convey a concept in few words, because the latter requires simplification and using “just” contradicts that.

                                                    2. 2

                                                      In Zen and the Art of Motorcycle Maintenance, page 232, Pirsig described “just” as “a purely pejorative term, whose logical contribution to the sentence [is] nil,” when trying to analyze its usage and meaning. The history of “just” goes back to Latin, and its various meanings are concerned with justice and belief. In general, when somebody says “just”, they are saying “I believe that…”

                                                      In your example, I would interpret you to say something like “To this technical writer’s belief, the Foo product is a special deployment of our monolith which serves … instead of normal traffic.” And then, as Pirsig notes, the first clause is non-logical and only annotates the sentence as having some distinct epistemological foundation (based on the writer’s belief rather than e.g. evidence or cultural understanding); it can be dropped.

                                                      Personally, I try to avoid “just”. It has the nature of Iago to it, along with other words like “daresay”; it’s not just a weasel word, but a potent distorter of reality.

                                                      1. 6

                                                        Personally, I try to avoid “just”. It has the nature of Iago to it, along with other words like “daresay”; it’s not just a weasel word, but a potent distorter of reality.

                                                        I’m not sure if you meant this to be a joke, but notice that in the second part of your sentence you’ve used just in a meaning that is distinct from the “I believe that…” meaning that you claim is dominant. In your sentence, just means only as it often does.

                                                        More importantly, I think that Pirsig, you, and teh_codez oversimplify. Words can mean lots of distinct things, and I think we do a disservice to writers and readers if we insist on trying to reduce complex words to (just—i.e. only?) one base meaning.

                                                        1. 1

                                                          In this case I actually don’t think they mean distinct things at all. It’s more of a continuum. But, zooming out, focusing on the word “just”, or even a specific sense of the word “just”, is missing the point—belittling someone’s struggle is done using words, but it is not itself words. Blaming certain words for making the reader feel bad is like blaming certain chemical elements for making people get viruses.

                                                          1. 1

                                                            In this case I actually don’t think they mean distinct things at all. It’s more of a continuum.

                                                            I’m not sure what you mean by this case, but (to be very blunt) the adverb just means several distinct things. I’m not sure why several people in this thread want to insist that they know better than dictionaries what just means. It’s an odd hill to die on.

                                                            1. 1

                                                              I believe your ire may be misdirected. I apologise if I gave you the impression I was one of these people. Have a nice day.

                                                          2. 1

                                                            It was a joke, yes. Remember that “but” means the same thing as “and”, logically; the logical content is the same as if I were to phrase everything with the subjunctive mood.

                                                            English is not a good language and we should not suffer its historical warts.

                                                      1. 4

                                                        So if I’m reading this article correctly, the point of this is to remove more potential sources of nondeterminism from Nix. Have there been any demonstrated benefits so far, or is this still all theoretical/robustness/WIP?

                                                        1. 14

                                                          It’s mostly about running nix-built OpenGL/CUDA binaries on a foreign distribution (Ubuntu, Fedora, Debian…). You need a way to inject some sort of GPU driver into the Nix closure; without that, a nix-built OpenGL program won’t run on a foreign distribution.

                                                          NixGLHost is an alternative* approach to do this.


                                                          * Alternative to NixGL. NixGLHost is in a very very alpha stage.
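
                                                          To make that concrete, here’s roughly how the two approaches are invoked. This is only a sketch based on the projects’ READMEs, so names and flags may have drifted:

                                                              # nixGL wraps the nix-built binary in a launcher that injects a
                                                              # driver built from nixpkgs (--impure lets it inspect the host GPU):
                                                              nix run --impure github:nix-community/nixGL -- ./result/bin/my-gl-app

                                                              # nix-gl-host instead discovers the host distribution's own driver
                                                              # and exposes it to the nix-built binary:
                                                              nixglhost ./result/bin/my-gl-app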

                                                          1. 3

                                                            One of my gripes with nixgl is that I have to run all my nix applications via nixgl. If I run a non-nix binary with nixgl it usually doesn’t go well, so I can’t run my whole user session under nixgl and have it propagate to child processes. Is there anything (a NIX_LD_PRELOAD, for example) that could be set system-wide but would be ignored by non-nix binaries?

                                                            1. 2

                                                              To be honest, that’s not a use case I had in mind when exploring this problem space. I’d probably need more than 5 minutes to think about this properly, so take what I’m about to say with a grain of salt.


                                                              My gut instinct is that we probably don’t want to globally mix the GPU Nix closure with the host one. I guess an easy non-solution would be to sidestep the problem altogether by provisioning the Nix binaries through a Nix shell. In this Nix shell, you could safely discard the host library paths and inject the nix-specific GPU libraries directly through LD_LIBRARY_PATH (via nix-gl or nix-gl-host).

                                                              Now, thinking about it more, the use case you’re describing seems valid UX-wise. I’m not sure what the best way to tackle it would be. The main danger is getting your libraries mixed up. NIX_LD_PRELOAD could be a nice trick, but it’s kind of a shotgun approach: you end up preloading your shim into each and every Nix program, regardless of whether it depends on OpenGL.

                                                              As long as you don’t plan to use CUDA, I think the best approach would be injecting the GPU DSOs through libglvnd. Everything you need to point it at your EGL DSOs already exists in the form of the __EGL_VENDOR_LIBRARY_DIRS environment variable. There’s no handy equivalent for GLX, but I wrote a small patch you could re-use to do so.
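
                                                              For illustration, a minimal sketch of that libglvnd route (the vendor directory path varies by distribution):

                                                                  # Point libglvnd at the host's EGL vendor JSON files, then run the
                                                                  # nix-built EGL client; it will dlopen the host driver listed there.
                                                                  export __EGL_VENDOR_LIBRARY_DIRS=/usr/share/glvnd/egl_vendor.d
                                                                  ./result/bin/my-egl-app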


                                                              I’ll try to think more about that, cool use case, thanks for the feedback.

                                                        1. 14

                                                          Connections from CloudFlare’s reverse proxy are dropped. Do not help one private company expand its control over all internet traffic.

                                                          This is my favorite line of their docs and good on someone for doing it.

                                                          One feature I would really love to see, though, is setting HTTP headers outside the HTML document (like Netlify does), as sketched below. Some things can’t be expressed in the document at all, X-Frame-Options I think being one; it can simplify your build phase by making those features part of the hosting; files are smaller; and you get a head start on preload/prefetch/preconnect links, because the browser doesn’t have to finish downloading the document and parse its <head> first.
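
                                                          For reference, Netlify’s convention is a _headers file in the publish directory, something like this (the paths and header values here are illustrative):

                                                              # _headers: HTTP headers the CDN attaches to matching paths
                                                              /*
                                                                X-Frame-Options: DENY
                                                                Link: </css/main.css>; rel=preload; as=style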

                                                          1. 3

                                                            This is my favorite line of their docs and good on someone for doing it.

                                                            I haven’t heard this before. Why is Cloudflare’s reverse proxy bad?

                                                            1. 24

                                                              Cloudflare users are getting man-in-the-middled by Cloudflare, for technical reasons.¹ Because Cloudflare already handles ~25% of internet traffic,² they’re in a unique position to do cross-site tracking without any cookies or complex fingerprinting techniques. Of course, being able to do something does not mean that they are doing it. But history has proven that when companies and governments are able to do something, they do it.

                                                              Cloudflare also contributes to the reduction of privacy. They maintain a list of IPs of Tor exit nodes, force Tor users to solve a captcha for every page, and let site operators block the “country” Tor outright.


                                                              ¹. During a DDoS, Cloudflare puts captchas in front of your website to weed out bots while still letting legitimate users through.
                                                              ². This is my guesstimate

                                                              1. 8

                                                                Cloudflare runs a large part of the internet to “protect” sites from DDoS attacks – but they also host the very same DDoS-for-hire webshops, where you can ruin someone’s business for the price of a cup of coffee. There have been thousands of articles about this.

                                                                1. 5

                                                                  To add to that: Cloudflare has gone down before, and we saw a massive chunk of the net just fail because of a single point of failure. There’s also a ton of hCAPTCHA sudokus to solve, for free, for the privilege of seeing the site behind it, if you’re using Tor, a VPN service, or just live in a non-Western country. Then, as a “remedy”, they suggest you use their DNS and browser extension to ‘help’ with the situation, collecting even more data on users.

                                                              1. 7

                                                                Neat service, but I imagine that the ratio of sourcehut users who don’t already have their own web hosting is a lot lower than it is for github.

                                                                1. 7

                                                                  That doesn’t mean it’s useless or that no one is using it…

                                                                  I have a few projects on sr.ht but also use sourcehut pages for hosting some things even though I’m totally capable of self-hosting. It’s just really convenient and can be automated nicely with builds.sr.ht.
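
                                                                  In case it’s useful: the publish step is just a tarball upload, so the builds.sr.ht task can be tiny. A sketch, with the domain, directory, and token as placeholders and the endpoint as documented for pages.sr.ht:

                                                                      # pack the rendered site and upload it to pages.sr.ht
                                                                      tar -C public -cvz . > site.tar.gz
                                                                      curl --oauth2-bearer "$OAUTH2_TOKEN" \
                                                                           -Fcontent=@site.tar.gz \
                                                                           https://pages.sr.ht/publish/example.srht.site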

                                                                  1. 2

                                                                    By the same argument you’d say not many people would use GitHub Pages either, yet they do. It’s nice to have a site next to a project.

                                                                  2. 6

                                                                    True but IMO irrelevant - you don’t cater exclusively to the userbase you already have, you cater to the userbase you want to have. Otherwise you can’t grow except, essentially, by accident.

                                                                    1. 2

                                                                      Yeah, I’m not saying it as a criticism, just an observation.

                                                                    2. 4

                                                                      I’m a paying sourcehut user. I have a small handful of projects that are still on gitlab because sourcehut lacked this feature. I suppose I’ll set about seeing if those can move to sourcehut in the near future.

                                                                      I do also have my own web hosting, but replicating this kind of behavior (pushing a commit just updates the site) there would add enough admin overhead that I currently leave those few projects on gitlab. I’d rather have them on sourcehut.

                                                                      1. 1

                                                                        Eh, it’s always convenient to host project pages close to the repo. I use Netlify for my personal site, but various little project pages get on Codeberg Pages.

                                                                      1. 6

                                                                        the USB-enthusiast’s laptop of choice

                                                                        1. 4

                                                                          Isn’t everyone a USB enthusiast these days

                                                                          1. 4

                                                                            Headphone enthusiasts seem to prefer 3.5mm jacks over USB.

                                                                            1. 4

                                                                              USB audio devices basically have the “sound card” (digital-to-analog or analog-to-digital converter, i.e. DAC/ADC) built into them. If you plug your headphones into a 3.5mm jack, the signal is analog, so it comes from the sound hardware on the motherboard. Motherboard makers can often (but not always) afford to put more work into the audio hardware than headphone makers can, especially at the cheap end.

                                                                              1. 1

                                                                                Except no laptop or motherboard ever has a good output.

                                                                                They’re noisy due to poor isolation (irrespective of the chips involved having excellent specs), anemic (can’t power my HD600) or both.

                                                                                Topping DX3 Pro+ is flat, provides plenty of power and has excellent measurements.

                                                                          1. 1

                                                                            “‘Only an issue for 3.3 or earlier’ but I had the problem with 3.16”

                                                                            Must be a typo there: either they meant “later”, or the typo is in the version numbers.

                                                                            1. 13

                                                                              16 > 3
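
                                                                                (Version components compare numerically, component by component, so 3.16 is later than 3.3. A quick check with Python’s third-party packaging library:)

                                                                                    >>> from packaging.version import Version
                                                                                    >>> Version("3.16") > Version("3.3")
                                                                                    True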

                                                                              1. 1

                                                                                Maybe it was a typo, and they meant 3.33?

                                                                            1. 1

                                                                              It sounds very modular, so hopefully it’s easy to swap the keyboard out - putting a gamepad module into that thing would be pretty cool.

                                                                              1. 43

                                                                                I still like Zulip after about 5 years of use, e.g. see https://oilshell.zulipchat.com . They added public streams last year, so you don’t have to log in to see everything. (Most of our streams pre-date that and require login)

                                                                                It’s also open source, though we’re using the hosted version: https://github.com/zulip

                                                                                Zulip seems to be A LOT lower latency than other solutions.

                                                                                When I use Slack or Discord, my keyboard feels mushy. My 3 GHz CPU is struggling to render even a single character in the browser. [1]

                                                                                Aside from speed, the big difference between Zulip and the others is that conversations have titles. Messages are grouped by topic.

                                                                                The history and titles are extremely useful for avoiding “groundhog day” conversations – I often link back to years old threads and am myself informed by them!

                                                                                (Although maybe this practice can make people “shy” about bringing up things, which isn’t the message I’d like to send. The search is pretty good though.)

                                                                                When I use Slack, it seems like a perpetually messy and forgetful present.

                                                                                I linked to a comic by Julia Evans here, which illustrates that feature a bit: https://www.oilshell.org/blog/2018/04/26.html

                                                                                [1] Incidentally, same with VSCode / VSCodium? I just tried writing a few blog posts with it, because of its Markdown preview plugin, and it’s ridiculously laggy? I can’t believe it has more than 50% market share. Memories are short. It also has the same issue of being controlled by Microsoft with non-optional telemetry.

                                                                                1. 9

                                                                                  +1 on zulip.

                                                                                  category theory: https://categorytheory.zulipchat.com/
                                                                                  rust-lang: https://rust-lang.zulipchat.com/

                                                                                  These are examples of communities that moved there and are way easier to follow than discord or slack.

                                                                                  1. 9

                                                                                    Zulip is light years ahead of everything else in async org-wide communications. The way messages are organized makes it an extremely powerful tool for distributed teams and cross-team collaboration.

                                                                                    The problems:

                                                                                    • Clients are slow when you have 30k+ unread messages.
                                                                                    • It’s not easy (possible?) to follow just a single topic within a stream.
                                                                                    • It’s not federated.
                                                                                    1. 12

                                                                                      We used IRC and nobody except IT folks used it. We switched to XMPP and some of the devs used it as well. We switched to Zulip and everyone in the company uses it.

                                                                                      We self-host. We take a snapshot every few hours and send it to the backup site, just in case. If Zulip were properly federate-able, we could just have two live servers all the time. That would be great.

                                                                                      1. 6

                                                                                        It’s not federated.

                                                                                        Is this actually a problem? I don’t think most people want federation, but easier SSO and a single client for multiple servers get you most of what people want without the significant burdens of federation (scaling, policy, etc.).

                                                                                        1. 1

                                                                                          Sorry for a late reply.

                                                                                          It is definitely a problem. It makes it hard for two organizations to create shared streams. This comes up when, e.g., an organization using Zulip for internal communications contracts another company for, say, software development and wants them to integrate into its communications. The contractor’s people need accounts at the client company; and if multiple clients do this, the people working at the contracted company end up with multiple scattered accounts across clients’ instances.

                                                                                          Creating a stream shared and replicated across the relevant instances would be way easier, probably more secure, and definitely more scalable than adding a WAYF step to the relevant SSOs. The development effort needed to make the web client connect to multiple instances would probably also be rather high, and it would not be possible to perform incrementally. Shared streams, by contrast, could ship with some features disabled (e.g. custom emojis) until a way forward is found for them.

                                                                                          But I am not well versed in the Zulip internals, so take this with a couple grains of salt.

                                                                                          EDIT: I figure you might be thinking of e.g. open source projects each using their own Zulip. That sucks and it would be nice to have an SSO service for all of them. Or even have them somehow bound together in some hypothetical multi-server client. I would love that as well, but I am worried that it just wouldn’t scale (performance-wise) without some serious thought about the overall architecture. Unless you are thinking about the Pidgin-style multi-client approach solely at the client level.

                                                                                      2. 7

                                                                                        This is a little off topic, but Sublime Text is a vastly more performant alternative to VSCode.

                                                                                        1. -4

                                                                                          Also off-topic: performant isn’t a word.

                                                                                        2. 3

                                                                                          I feel like topic-first organization of chats, which Zulip does, is the way to go.

                                                                                            1. 16

                                                                                              It still sends some telemetry even if you do all that

                                                                                              https://github.com/VSCodium/vscodium/blob/master/DOCS.md#disable-telemetry

                                                                                              That page is a “dark pattern” to make you think you can turn it off, when you can’t.


                                                                                              In addition, extensions also have their own telemetry, not covered by those settings. From the page you linked:

                                                                                              These extensions may be collecting their own usage data and are not controlled by the telemetry.telemetryLevel setting. Consult the specific extension’s documentation to learn about its telemetry reporting and whether it can be disabled.

                                                                                              1. 4

                                                                                                It still sends some telemetry even if you do all that

                                                                                                I’ve spent several minutes researching that, and, from the absence of clear evidence that telemetry is still being sent if disabled (which evidence should be easy to collect for an open codebase), I conclude that this is a misleading statement.

                                                                                                The way I understand it, VS Code is a “modern app”, which uses a boatload of online services. It does network calls to update itself, update extensions, search in the settings and otherwise provide functionality to the user. Separately, it collects gobs of data without any other purpose except data collection.

                                                                                                Telemetry disables the second thing, but not the first thing. But the first thing is not telemetry!

                                                                                                • Does it make network calls? Yes.
                                                                                                • Can arbitrary network calls be used for tracking? Absolutely, but hopefully the amount of legal tracking allowable is reduced by GDPR.
                                                                                                • Should VS Code have a global “use online services” setting, or, better yet, a way to turn off node’s networking API altogether? Yes.
                                                                                                • Is any usage of Berkeley socket API called “telemetry”? No.
                                                                                                1. 3

                                                                                                  It took me a while, but the source of my claim is VSCodium itself, and this blog post:

                                                                                                  https://www.roboleary.net/tools/2022/04/20/vscode-telemetry.html

                                                                                                  https://github.com/VSCodium/vscodium/blob/master/DOCS.md#disable-telemetry

                                                                                                  Even though we do not pass the telemetry build flags (and go out of our way to cripple the baked-in telemetry), Microsoft will still track usage by default.

                                                                                                  Also, in 2021, they apparently tried to deprecate the old setting and introduce a new one:

                                                                                                  https://news.ycombinator.com/item?id=28812486

                                                                                                  https://imgur.com/a/nxvH8cW

                                                                                                  So basically it seems like it was the old trick of resetting the setting on updates, which was again very common in the Winamp, Flash, and JVM days – dark patterns.

                                                                                                  However it looks like some people from within the VSCode team pushed back on this.

                                                                                                  Having worked in big tech, this is very believable – there are definitely a lot of well intentioned people there, but they are fighting the forces of product management …


                                                                                                  I skimmed the blog post and it seems ridiculously complicated, when it just doesn’t have to be.

                                                                                                  So I guess I would say it’s POSSIBLE that they actually do respect the setting in ALL cases, but I personally doubt it.

                                                                                                  I mean it wouldn’t even be a dealbreaker for me if I got a fast and friendly markdown editing experience! But it was very laggy (with VSCodium on Ubuntu.)

                                                                                                  1. 2

                                                                                                    Yeah, “It still sends some telemetry even if you do all that” is exactly what VS Codium claim. My current belief is that’s false. Rather, it does other network requests, unrelated to telemetry.

                                                                                                2. 2

                                                                                                  These extensions may be collecting their own usage data and are not controlled by the telemetry.telemetryLevel setting.

                                                                                                  That is an … interesting … design choice.

                                                                                                  1. 7

                                                                                                    At the risk of belaboring the point, it’s a dark pattern.

                                                                                                    This was all extremely common in the Winamp, Flash, and JVM days.

                                                                                                    The thing that’s sad is that EVERYTHING is dark patterns now, so this isn’t recognized as one. People will actually point to the page and think Microsoft is being helpful. They probably don’t even know what the term “dark pattern” means.

                                                                                                    If it were not a dark pattern, then the page would be one sentence, telling you where the checkbox is.

                                                                                                    1. 6

                                                                                                      They probably don’t even know what the term “dark pattern” means.

                                                                                                      I’d say that most people haven’t been exposed to genuinely user-centric experiences in most areas of tech. In fact, I’d go so far as to say that most tech stacks in use today are actually designed to prevent the development of same.

                                                                                                      1. 2

                                                                                                        The thing that feels new is how non-user-centric development tools are nowadays. And the possibility of that altering the baseline perception of what user-centric tech looks like.

                                                                                                        Note: feels; it’s probably not been overly-user-centric in the past, but they were a bit of a haven compared to other areas of tech that have overt contempt for users (social media, mobile games, etc).

                                                                                                    2. 4

                                                                                                      That is an … interesting … design choice.

                                                                                                      How would you do this differently? The same is true of any system with plugins, including, e.g., Emacs and Vim: nothing prevents a plug-in from calling home, except for the goodwill of the author.

                                                                                                      1. 3

                                                                                                        Kinda proves the point, tbh. To prevent a plugin from calling home, you have to actually try to design the plugin API to prevent it.

                                                                                                        1. 4

                                                                                                          I think the question stands: how would you do it differently? What API would allow plugins to run arbitrary code—often (validly) including making network requests to arbitrary servers—but prevent them from phoning home?

                                                                                                          1. 6

                                                                                                            Good question! First option is to not let them make arbitrary network requests, or require the user to whitelist them. How often does your editor plugin really need to make network requests? The editor can check for updates and download data files on install for you. Whitelisting Github Copilot or whatever doesn’t feel like too much of an imposition.

                                                                                                            1. 4

                                                                                                              Capability security is a general approach. In particular, https://github.com/endojs/endo

                                                                                                              For more… https://github.com/dckc/awesome-ocap
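
                                                                                                              A toy sketch of the capability idea in Python. Every name here is made up rather than any real editor’s API; the point is just that plugins receive powers as explicit arguments instead of ambient authority, so a plugin that was never granted a network capability cannot phone home:

                                                                                                                  from dataclasses import dataclass
                                                                                                                  from typing import Callable, Optional

                                                                                                                  @dataclass
                                                                                                                  class Capabilities:
                                                                                                                      read_file: Callable[[str], str]          # granted to every plugin
                                                                                                                      fetch: Optional[Callable[[str], bytes]]  # only if the user whitelists it

                                                                                                                  def load_plugin(activate: Callable[[Capabilities], None],
                                                                                                                                  allow_network: bool) -> None:
                                                                                                                      caps = Capabilities(
                                                                                                                          read_file=lambda path: open(path, encoding="utf-8").read(),
                                                                                                                          fetch=(lambda url: b"stub response") if allow_network else None,
                                                                                                                      )
                                                                                                                      activate(caps)

                                                                                                                  def nosy_plugin(caps: Capabilities) -> None:
                                                                                                                      if caps.fetch is None:
                                                                                                                          return  # no network capability: phoning home is impossible
                                                                                                                      caps.fetch("https://telemetry.example.com/track")

                                                                                                                  load_plugin(nosy_plugin, allow_network=False)  # the tracking call never runs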

                                                                                                            2. 3

                                                                                                              More fun: you have to design a plugin API that doesn’t allow phoning home but does allow using network services. This is basically impossible. You can define a plugin mechanism that has fine-grained permissions and a UI that comes with big red warnings when things want network permissions though and enforce policies in your store that they must report all tracking that they do.

                                                                                                            3. 1

                                                                                                              nothing prevents a plug-in from calling home, except for the goodwill of the author.

                                                                                                              Traditionally, this is prevented by repos and maintainers who patch the package if it’s found to be calling home without permission. And since the authors know this, they largely don’t add such functionality in the first place. Basically, this article: http://kmkeen.com/maintainers-matter/ (http only, not https).

                                                                                                              1. 1

                                                                                                                We don’t necessarily need mandatory technical enforcement for this, it’s more about culture and expectations.

                                                                                                                I think that’s the state of the art in many ecosystems, for better or worse. I’d say:

                                                                                                                • The plugin interface should expose the settings object, so the plugin can respect it voluntarily. (Does it currently do that?)
                                                                                                                • The IDE vendor sets the expectation that plugins respect the setting
                                                                                                                • A plugin that doesn’t respect it can be dealt with in the same way that say malware is dealt with.

                                                                                                                I don’t know anything about the VSCode ecosystem, but I imagine that there’s a way to deal with say plugins that start scraping everyone’s credit card numbers out of their e-mail accounts.

                                                                                                                Every ecosystem / app store- type thing has to deal with that. My understanding is that for iOS and Android app stores, the process is pretty manual. It’s a mix of technical enforcement, manual review, and documented culture/expectations.


                                                                                                                I’d also not rule out a strict sandbox that can’t make network requests. I haven’t written these types of plugins, but as others pointed out, I don’t really see why they would need to access the network. They could be passed the info they need, capability style, rather than searching for it all over your computer and network!

                                                                                                                1. 1
                                                                                                                2. 1

                                                                                                                  Sure, but they don’t offer a “disable telemetry” setting.

                                                                                                                  What I’d do, would be to sandbox plugins so they can’t do any network I/O, then have a permissions system.

                                                                                                                  You’d still rely on an honour system to an extent; because plugin authors could disguise the purpose of their network operations. But you could at least still have a single configuration point that nominally controlled telemetry, and bad actors would be much easier to spot.

                                                                                                                  1. 1

                                                                                                                    There is a single configuration point which nominally controls the telemetry, and extensions should respect it. This is clearly documented for extension authors here: https://code.visualstudio.com/api/extension-guides/telemetry#custom-telemetry-setting.
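
                                                                                                                    For reference, that documented switch is a one-liner in settings.json; per the same page, it is this value (surfaced to extensions via the vscode.env.isTelemetryEnabled API) that extensions are expected to consult voluntarily:

                                                                                                                        // settings.json
                                                                                                                        {
                                                                                                                          "telemetry.telemetryLevel": "off"
                                                                                                                        }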

                                                                                                        1. 2

                                                                                                          The future is impossible to predict, but perhaps the next level is something like computing as in the novel Permutation City - many providers vying for business on a generic MIPS exchange, with computations paused, their state captured, then seamlessly restarted from the same state on a computer somewhere else. It’s a neat thought although this level of interoperability is anathema to cloud provider profits. Competition is expensive; monopoly/oligopoly with vendor lock-in is where the real money is. Beyond those political/economic concerns I’d say data transfer bandwidth is the main technical hurdle to this vision becoming reality.

                                                                                                          1. 5

                                                                                                            Competition is expensive; monopoly/oligopoly with vendor lock-in is where the real money is.

                                                                                                            That’s the most succinct criticism of capitalism I’ve ever read.

                                                                                                            1. 2

                                                                                                              Interestingly I got it from a very pro-capitalist man, Peter Thiel. This is a prominent thesis in his book Zero to One: the purpose of a firm is to achieve monopoly.

                                                                                                              1. 1

                                                                                                                This is a very interesting subject in economics. Are you familiar with the paper by Ronald Coase on why firms exist (“The Nature of the Firm”)? It is pretty short, readable, and changed the whole field. And the whole monopoly thing appears as empire building in governance and information economics. If you like the subject, you will like the texts :)

                                                                                                            2. 2

                                                                                                              The problem is data gravity. With a bit of leg-work, moving container load around multiple clouds is already possible. Kubernetes allows multi-cloud deployments with some effort, for example. But if you rely on DBs or data warehouses in a particular region/cloud, the compute cost savings need to be massive to justify hopping over the egress cost wall.

                                                                                                              There’s also latency requirements for online services. If you need to be in US-central, you’re constrained by the ambient costs of that area - land prices, wages, electricity costs. You can’t just pack up and move to Taiwan.

                                                                                                            1. 3

                                                                                                              I don’t know; I’ve run out of easy stuff on my todo list.

                                                                                                              1. 3

                                                                                                                I’m sorry you got all those angry messages. It’s not right. Every time I hear a FOSS developer write about something like this, it breaks my heart. This is not what FOSS should’ve been.

                                                                                                                I think a major facilitating element of this behaviour is the fact that FOSS software is productized and professionalised, to the point where it presents itself as viable corporate software, rather than as a hobby project that some corporate projects mooch off.

                                                                                                                I mean, if I go to the KDE homepage, it says “Powerful, multi-platform, and for everyone”, and there’s a “Products” page. So of course people treat it like all other software from faceless corporations: because KDE really, really tries to be like that. Gnome, too, this isn’t a quip at KDE.

                                                                                                                Personally I think a perfectly appropriate way to handle angry users would be a button that:

                                                                                                                1. Bans them from participating in the bug tracker and adds them to a nope list that all developers can use for their email and blog spam filters
                                                                                                                2. Reports their account to Github or whatever, as I’m pretty sure it violates some code of conduct (I know KDE doesn’t use Github but anyway)
                                                                                                                3. Selects a random string from a very festerous fortune file, uses it as the punchline for an automatically generated “Asshole of the Year” diploma, and sends it by mail, along with a “Congratulations on your ban!” cover letter explaining – tersely, but obscenely – why they’ve been banned.

                                                                                                                That, of course, wouldn’t fly in a world that uses the phrase “software supply chain” unironically, but who knows. Still, I think #1 and #2 would be a good start.

                                                                                                                Edit: it’s not that I think this would keep all entitled people away. It’s just that the sanitized corporate look that desktop Linux projects cling to for whatever reason tends to attract entitled complaints because, well, that’s what sanitized corporate things do. There are lots of entitled people complaining about things that are obviously DIY hacker projects, too, but realistically, most people look at those and just assume they’re something for hippies or cosplayers or whatever.

                                                                                                                But if you want to play the branding game, you gotta play all of it. It’s exhausting and unfair but you can’t say “we build a secure suite of programs for both professional and home use, a free alternative to spooky Windows and macOS” but only deal with the tiny subset of people who are interested in all that but also aren’t entitled little whiny pricks.

                                                                                                                1. 4

                                                                                                                  It’s just that the sanitized corporate look that desktop Linux projects cling to for whatever reason

                                                                                                                  I guess the reason is that people (i.e. users) tend to quickly dismiss anything that doesn’t look like that as unusable and unreliable, so anyone who hopes to get any users (who may become contributors) has to play the branding game.

                                                                                                                  1. 1

                                                                                                                    I’m a little skeptical about how that translates to user-facing projects like KDE. There’s a pretty narrow niche of people with the skills to contribute to a project like KDE (I mean even building and hacking on it is the kind of stuff that second-year students would need some guidance with) and they are not exactly the kind of people who would dismiss something because it doesn’t look like someone rubber stamped it in Cupertino.

                                                                                                                    I definitely agree it’s a factor though. The FOSS community has a history of market share envy, so anything that looks like it might drive people away is seen like a huge problem, even though sometimes it’s really a feature.

                                                                                                                  2. 2

                                                                                                                    One way¹ by which a FOSS project could try to avoid looking ‘productized and professionalised’ might be to use unfashionable, text-focussed Web design, like that of, e.g., QPDF² or ExifTool and many academics, e.g., Knuth or Derek Dreyer.

                                                                                                                    ¹ either as an alternative to being ‘festerous’ and ‘obscene’ or, if one wants, in addition to that
                                                                                                                    ² before its documentation was moved to ReadTheDocs

                                                                                                                    1. 2

                                                                                                                      I think a major facillitating element of this behaviour is the fact that FOSS software is productized and professionalised, to the point where it presents itself as viable corporate software, rather than a hobby project that some corporate projects mooch off.

                                                                                                                      It’s twofold:

                                                                                                                      1. Everyone expects FOSS to be a viable alternative to proprietary software - this is literally the FSF’s goal, in fact.
                                                                                                                      2. FOSS doesn’t have an effective funding mechanism. More specifically, I’d bet that 99% of users pay less than $1/year for any given piece of software they use. This isn’t exactly an accident - the one thing that FOSS can offer better than proprietary software is less monetization. So if any distro starts pushing users to pay money, then users start looking for the door.

                                                                                                                      The end result is FOSS being the worst of both worlds - being expected to deliver proprietary polish, on a volunteer budget. It’s a structural problem, and is IMO the #1 problem in the FOSS world.

                                                                                                                      1. 3
                                                                                                                        1. Everyone expects FOSS to be a viable alternative to proprietary software - this is literally the FSF’s goal, in fact.

                                                                                                                        Yes, but the way this is usually pictured – both because users are entitled and because lots of folks at the FOSS end of things want to project that image – is that FOSS projects give you exactly what proprietary software gives you, and doesn’t demand the eye-watering fees, either.

                                                                                                                        But other than blog posts by embittered developers, which are largely hidden from the public eye (for statistically significant definitions of “public”), the FOSS world is conspicuously silent about how that free lunch is paid for.

                                                                                                                        Proprietary software sites go all revolutionary this, best-in-its-class that, here’s how much it costs, also here’s a tiny fine print. Corporate-ish-branded free software, like KDE and Gnome, also goes revolutionary this, best-in-its-class that, for everyone, by everyone, but then the tiny fine print that says “testing is best-effort”, “unless you maintain it yourself your favourite feature may be deprecated long before the current Windows version is EOL-ed”, “customer support response times are technically infinite because there’s no customer support” is swept under the rug.

                                                                                                                        These are very much things that have to be said. You can’t not tell people about them, loudly, in the same big bold letters you use to say “Get things done with ease, comfort, and control”, and then wonder why they don’t know them.

                                                                                                                        Don’t get me wrong, like my message above said, I think people have no right to be assholes to FOSS developers, but that’s just common sense. But this is also a corner into which the FOSS community is painting itself.

                                                                                                                        1. 2

                                                                                                                          I would not say this is the problem. FOSS’s greatest contribution to the world, beyond the actual utility of the programs, is just that: profits are not necessary to motivate humans to create amazing and useful things. It does work. They can’t tell us it’s impossible. This flies in the face of the fundamental ideas behind capitalism, which is that it is necessary for workers to be alternately cajoled, abused, and dictated to by capitalists to make anything, with the capitalists taking a massive cut of what’s produced in return for their cajole/abuse/dictate services. The odd blog post by some FOSS maintainer cracking under pressure happens because the rest of society still operates under this barbaric form of labor organization - specifically the abuse aspect, because not working at a capitalist firm is punished by the threat of homelessness with its attendant horrors.

                                                                                                                          1. 1

                                                                                                                            The odd blog post by some FOSS maintainer cracking under pressure happens because the rest of society still operates under this barbaric form of labor organization

                                                                                                                            Yes, feel free to disregard my comment if your country has a communist revolution tomorrow. Until then, however, we live in a capitalist society and the best way to actually earn a living wage as a coder is to get a job working on proprietary software.

                                                                                                                      1. 1

                                                                                                                        The issue of state resetting when swapping between apps is what eventually made me change my last two phones as well. Hopefully now that modern phones contain a few gigs of RAM that’ll be a thing of the past, but the rate of software memory bloat continues to surprise, so who knows.

                                                                                                                        1. 7

                                                                                                                          Hopefully now that modern phones contain a few gigs of ram that’ll be a thing of the past, but rate of software memory bloat continues to surprise so who knows.

                                                                                                                          Once most phones contain a few gigs of ram, devs will start thinking “who cares if I waste a few hundred megabytes? RAM is so ubiquitous nowadays”. This isn’t a problem of technology, it’s a problem of values.

                                                                                                                          In fact, I’d argue that fast RAM increases make the problems worse - imagine if literally half the market today still had the same amount of RAM as phones had in 2012; nobody would even think of being liberal with memory, and your old 2012-phone wouldn’t run out.

                                                                                                                          I’ve been thinking for a while now that the best possible thing for right-to-repair and the environment, would be for Moore’s law (and specifically, all exponential scaling terms for computing) to visibly sputter out.

                                                                                                                          1. 4

                                                                                                                            Yeah, I’m pretty stoked about the fact that new processors/systems aren’t substantially faster than their predecessors. In prior decades the iterative improvements were blatantly noticeable; today they are just inching forward (subjectively speaking). The best thing about this is the staying power of “older” hardware (as I type this from a ThinkPad X230 with an i5-3320M processor), so supposedly-old stuff is still usable and worth keeping around. The habit of frequently replacing your perfectly-working hardware is less worthwhile.