1. 14

    Personally I found Gemini too cliquey and exclusive for my tastes. The people and views I found there were very same-y, too much for me. Social aside, the protocol isn’t technically all that interesting. Overall I found more fun to be had in Yggdrasil, NNCP, and Usenet. Usenet in particular reminds me of what a free-for-all the Net can be, in both a good and bad way, but I’m the dive bar sort so that’s fine with me.

    There’s also a bit of a Gemini Evangelism Strike Force that I’m starting to see in the tech spaces I frequent, which I’m not a fan of, but so far it’s tolerable.

    1. 14

      To me, it’s pretty simple: Gemini’s protocol and markup make an active effort to prevent me from writing about things I find interesting, like functional programming (that often requires inline mathematical notation) or music (that requires notation and sounds). It also prevents me from creating a good reader experience with bi-directional footnotes, good ToC, at least some typography etc.—things I can do on the WWW.

      So I use the WWW, not Gemini.

      1.  

        any particular usenet newsgroups worth reading still? I have not been on usenet for 15 years or so

        1. 5

          Yeah there are a few, I think Usenet is going through a small revival. Some groups I enjoy:

          • comp.sys.raspberry-pi
          • rec.woodworking
          • comp.lang.forth (they talk about Forth, but the discussion can be quite heated at times)
          • alt.fan.cecil-adams (general talk about news/politics)

          There’s also the infossystems groups comp.infosystems.gopher and comp.infosystems.gemini. You could also try rec.bicycles.tech if you’re into bicycles and willing to put up with a bunch of older people who are insulting each other incessantly while talking about bike tech.

          1.  

            comp.infosystems.gemini 💙

            But slightly more seriously: I can only find interesting and active Usenet groups on technical topics, while things like music genres have groups which were active and fun 25 years ago, but are dead now.

            1.  

              I think the few remaining users of usenet (most of it is just spam and binaries now) are ultra-grognards, so not surprised. comp.sys.apple2 is fairly active, but as you say, everyone abandoned the non-technical categories years ago.

            2.  

              Related: how does one read newsgroups now? It looks like major ISPs don’t provide NNTP anymore, and the proliferation of hijacking it to distribute binaries has forced access to be gated by third parties. Since Usenet, when it’s working well, is a better organized series of mailing lists, it seems very strange to pay a monthly fee to read.

              1.  

                I registered for a free account at https://www.eternal-september.org

                No binaries (duh) but otherwise seems to have a full feed otherwise.

                1.  

                  Amazing. NNTP is one of the rare things I actually miss from the 90s.

            3.  

              The people and views I found there were very same-y, too much for me.

              This is very hurtful. 💔

              1. 7

                Lol shrug I don’t know what to say. I’ve never been the clique type. I avoided tightly-knit social groups throughout my schooling days and I’m thankful now as an adult that there’s less social pressure to be in one of those.

                1.  

                  I have a hard time getting along with a lot of people on Gemini (although I’ve found a handful of gems); being seen as samey (I post a lot there) really stings. I’m my own person.

            1. 18

              I think Gemini is defined by what it isn’t rather than what it is (very punk rock), and that’ll be its ultimate downfall. Instead of trying to build something, they worry little nice touches will be the end and become the web again.

              For a good example of what I mean, this is what I think of when I think of Gemini now - bullying people out of purity. (Or RFC LARP.)

              1. 22

                You’ve discovered that Drew Default is a dick. We all knew this. He’s been here. We know.

                One could argue it’s a public service to provide a client that Drew Default will eventually blacklist. The users will be spared whatever his latest fucking insane windmill charge is. Think of the users.

                1. 6

                  Or as Jawbreaker would have it:

                  “You’re not punk, and I’m telling everyone”

                  Save your breath, I never was one

                  Much like any clique, it’s got its purity tests. That’s a good example you’ve got there.

                  1.  

                    It’s a curious perversion of open source development—who has the right to control or limit what other people do in their own software, especially adding features they find useful? One could argue that it’s daft, impractical or morally offensive, and they might be right. You can’t stop people publishing code, which leaves you with a rather primitive set tools—benevolent dictatorship, consensus and coercion—to keep a minimalist project on track. “Nice little touches” were never on the table, and favicons were only one in a litany of attempted extensions. Without continuous backpressure the markup definitely would have reached a level of complexity that browsing from a terminal would be undesirable. Most of the backpressure was achieved by solderpunk writing gently, but ultimately leaning heavily on their authority as the author of the spec to prevent unwanted excursions. Later they reduced their involvement so the availability of tools to keep things on track diminished.

                    That’s what Gemini is, for better or worse. Purity and ensuring purity is in its DNA. When I look at your link, I see that more as a reflection on the project than the person who wrote it. Even if others wouldn’t have been quite so blunt (or didn’t have the leverage to counter code with code) many were definitely thinking it. There’s a shock factor seeing that kind of thing on a GitHub issue—it would be a crazy demand in most software ecosystems, but in context I didn’t find it surprising.

                    1. 5

                      Without continuous backpressure the markup definitely would have reached a level of complexity that browsing from a terminal would be undesirable.

                      Browsing from a terminal isn’t one of the stated goals of the project though. I’m not sure if purity politics were essential for Gemini at the outset, but I do think that users of Gemini are interested in that kind of purity politics because of what Gemini and its purity represents for them.

                      1.  

                        Browsing from a terminal isn’t one of the stated goals of the project though

                        Good point, I misremembered that this was more explicit. However it is a goal of gemtext that it should be usable without any processing of formatting, which amounts to much the same thing.

                        the text/gemini format has been designed so that simple clients can ignore the more advanced features and still remain very usable.

                        It is strictly optional for clients to do anything special at all with headings [/list items/blockquotes]

                        (gemtext format)

                        1.  

                          Good point, I misremembered that this was more explicit. However it is a goal of gemtext that it should be usable without any processing of formatting, which amounts to much the same thing.

                          This exact split is reflected at large in the community and in practice ended with the minimalist group winning. For a lot of folks, they saw the goals of Gemini as a way to create a pure part of the Net where they can use their favorite tools (terminal, text editors, etc) to surf around. I contrast this with Usenet which has a much more laissez-faire attitude on what you can post in an article and how a newsreader should format an article.

                  1. 17

                    I use FreeBSD (if I’m going to use Unix, I might as well use one with good taste), but:

                    • UFS2 is absolutely not a good filesystem. It’s very fragile relative to ext4, which itself isn’t great. ZFS is excellent, but the problem is for small systems (i.e. VMs), it can be quite heavyweight. It’d be nice to have a better filesystem for the smaller scale stuff, or have ZFS fit under a gig of RAM.
                    • I think still advertising jails as if they’re a contender in 2022 is misleading. They completely missed the boat with tooling, let alone containerization trends.
                    • My problem with Bhyve is guest support, but that’s why I run ESXi.
                    1. 6

                      I am similarly biased towards FreeBSD (if I’m going to use an implementation of bad ideas from the ‘70s, at least I’d like a clean and consistent implementation of those bad ideas) and wanted to amplify this point

                      I think still advertising jails as if they’re a contender in 2022 is misleading. They completely missed the boat with tooling, let alone containerization trends.

                      Jails are a superior mechanism for doing shared-kernel virtualisation to the mixture of seccomp-bpf, cgroups, and namespaces that can be assembled on Linux to look like jails. Lots of FreeBSD-related articles like to make that point and they are completely missing the value of the OCI ecosystem. Containers are a mix of three things:

                      • A reproduceable build system with managed dependencies and caching of intermediate steps. FreeBSD has some of this in the form of poudriere, but it’s very specialised.
                      • A distribution and deployment format for self-contained units.
                      • An isolation mechanism.

                      Of these, the isolation mechanism is the least important. Even on Linux, there’s a trend to just using KVM to run a separate kernel for the container and using FUSE-over-VirtIO to mount filesystems from the outside. The overhead of an extra cut-down Linux kernel is pretty small in comparison to the size of a large application.

                      The value in OCI containers is almost entirely in the distribution and deployment model. FreeBSD doesn’t yet have anything here. containerd works on FreeBSD (and with the ZFS snapshotter, works well) but runj is still very immature.

                      My problem with Bhyve is guest support, but that’s why I run ESXi.

                      I’m not sure what this means. Bhyve exposes the same VirtIO devices as KVM.

                      Bhyve may or may not be better than KVM but the separation of concerns is weaker. There’s a lot of exciting stuff (e.g. Kata Containers) that’s being built on top of KVM. Windows now provides a set of APIs to Hyper-V that are a direct equivalents to the KVM ioctls, which means that it’s easy to build systems that are portable between KVM and Hyper-V. There’s no equivalent for bhyve.

                      UFS2 is absolutely not a good filesystem. It’s very fragile relative to ext4, which itself isn’t great. ZFS is excellent, but the problem is for small systems (i.e. VMs), it can be quite heavyweight. It’d be nice to have a better filesystem for the smaller scale stuff, or have ZFS fit under a gig of RAM.

                      I haven’t used UFS2 for over a decade but I’ve run ZFS on systems with 1GiB of RAM with no problem. The rule of thumb is 1GiB of RAM per 1TiB of disk. Most of my VMs have a lot less than 1 TiB of disk. You need to clamp the ARC down a bit, but the ARC is less important if the disks are fast (and they often are in VMs).

                      1.  

                        I’m not sure what this means. Bhyve exposes the same VirtIO devices as KVM.

                        VMware has drivers for weirder guest OSes (including older versions of mainstream stuff… you know, NT), KVM doesn’t. That and I’ve had very bad experience with KVM virtio, but that doesn’t reflect on Bhyve

                        I haven’t used UFS2 for over a decade but I’ve run ZFS on systems with 1GiB of RAM with no problem. The rule of thumb is 1GiB of RAM per 1TiB of disk. Most of my VMs have a lot less than 1 TiB of disk. You need to clamp the ARC down a bit, but the ARC is less important if the disks are fast (and they often are in VMs).

                        This is probably my own paranoia fed by misinfo (or just plain outdated info) about ZFS resource usage.

                        1. 4

                          ZFS doesn’t really need that much RAM. The ARC is supposed to yield to your programs’ demand. But if you’re not comfortable with how much it occupies you can just set arc_max to a small size.

                          1. 4

                            The ZFS recommendations seem to be based around the idea that you want the best performance out of ZFS, running large NFS/iSCSI/SMB hosts with many users. I think FreeNAS also set the bar high just so that users trying to use minimal hardware would not have a reason to complain if it didn’t work very well for them.

                            However, in practice, I rarely need top performance out of ZFS, so even with 512MB of RAM I can use it comfortably for a small VM with just a few services. Granted this was a few years ago, so maybe 1GB is needed nowadays.

                          2. 2

                            small systems (i.e. VMs)

                            NFS from host?

                            1. 1

                              I use ESXi as my host, so probably not.

                              1.  

                                Shared, from another guest, then?

                            2.  

                              I wouldn’t say UFS is great, but I really think Ext4 is worse. I think at least part of the bad reputation is also coming from the fact that UFS isn’t as much info version numbers. UFS-implementaions change.

                              But to not make this FreeBSD vs Linux, see XFS vs Ext4 where there’s a similar situation. Every time Ext4 gets an edge, like metadata speed XFS ends up being improved surpassing Ext4 again.

                              Similar things can be said about UFS, at least for the remark of it being “fragile”.

                              But I’d like to hear it if you have anything to bank that claim.

                              That said I would have agreed with you there about a decade ago.

                            1. 4

                              I wonder if we won’t be seeing more of this. I feel like the whole systemd debacle but perhaps more importantly the philosophical, financial and organizational changes that caused it speak to a real problem in Linux-space where a small number of large-ish companies are determining the future of the platform based on their best interests that may not actually intersect with those of the community.

                              Like, systemd actually DOES seem to bring some value to the desktop, but to the server? The value it brings is much less clear, and the harm it causes by violating decades old interface contracts is non trivial.

                              1. 15

                                Like, systemd actually DOES seem to bring some value to the desktop, but to the server? The value it brings is much less clear, and the harm it causes by violating decades old interface contracts is non trivial.

                                The decades old interfaces were a crufty hack put together by grad students. All the commercial unix variants ditched it long ago. Yes, BSD’s user space is stable, but it’s also awful. It was awful in the 1990’s when I started in on Linux and BSD, and it’s awful today.

                                I only use Linux on servers these days, and I love systemd. If I never have to write another SysV init script again, I will be a happier person. Add things like eBPF? And finally we’re getting btrfs on root with SuSE and getting some of the nice stuff that Solaris had (snapshot before upgrade for trivial rollback). It’s not traditional BSD, and it’s better.

                                1. 5

                                  systemd is the main thing I’m missing on FreeBSD. BSD rc is an improvement over System V init, but not by much.

                                  1. 3

                                    Glad you’re happy with systemd.

                                    The issue here is not the change, but an unwillingness to actually COMMUNICATE that change so sysadmins managing production systems in the field have to find out the hard way that their expectations have not been met.

                                    This is most emphatically NOT the way to develop a very complex software stack.

                                    1. 5

                                      What type of communication, and from whom, would be good enough?

                                      Does this count? https://github.com/systemd/systemd/blob/main/NEWS

                                      1.  

                                        I’m looking forward to the communication SysV init has. Oh wait, it doesn’t and every distribution was just doing its own thing?

                                        1.  

                                          Honestly? No.

                                          As a UNIX administrator I expect man pages to document the interfaces necessary to operate the system.

                                          1.  

                                            What’s lacking in the systemd man pages?

                                            https://man7.org/linux/man-pages/man1/systemd.1.html

                                        2. 4

                                          Can you provide examples of what you feel was not communicated? I’m not sure I understand the complaint.

                                          1.  

                                            When I want to change some detail around DNS resolution, I go to modify /etc/resolv.conf, but that’s not actually the correct mechanism anymore, but I can’t find what the right mechanism IS in the man pages or anywhere else I know to look.

                                            1.  

                                              And in which man page do you spot that /etc/resolv.conf was the right place to look, apart from the man page for resolv.conf?

                                              Discovering /etc/resolv.conf is not the most intuitive thing in the world either.

                                              1.  

                                                There are two important issues in good UI design (okay, more, but two that are relevant here):

                                                • Discoverability
                                                • Consistency

                                                *NIX systems are typically terrible at the first of these. FreeBSD isn’t actually too bad in the first order here because you can add nameservers by running bsdconfig and going to the network settings part. This isn’t great though because it doesn’t tell you what it’s editing and so the only thing that you learn is that you can edit the settings via that UI, not what the underlying service is. RedHat has a similar tool whose name I’ve forgotten. I don’t know what the Debian / Ubuntu equivalent is but I assume there is one.

                                                I learned about resolv.conf on Linux around 2000. For consistency, I’d expect to go and edit it today. Trying this on a FreeBSD and Ubuntu system, I learn quite similar things: it’s not the right thing to do anymore. Both then do well on discoverability: On Ubuntu, it tells me that the file was created by systemd-resolved, on FreeBSD is tells me that it was created by resolvconf. In both cases, I can go to the relevant man page and find out what the new thing is. Whether I prefer systemd-resolved or resolvconf is largely a matter of personal preference (I do enjoy the fact that there’s now a file, complete with man page, on FreeBSD called resolvconf.conf, because what problem isn’t made better by adding an extra layer of indirection?).

                                                1.  

                                                  Can we please stop this back and forth? I realize I was being un-necessarily incendiary by using the word ‘debacle’ and if you contribute to the systemd project and I hurt your feeling I sincerely apologize.

                                          2. 3

                                            If I never have to write another SysV init script again, I will be a happier person

                                            Were you writing these yourself before? Serious question: why?

                                            1. 13

                                              Because I wrote daemons and they had to have init scripts.

                                            2.  

                                              Comparing Linux style SysV init to systemd I get your point, but that’s not how rc.d typically works on a modern BSD.

                                              Take Consul on OpenBSD for example.

                                              It mainly contains:

                                              #!/bin/ksh
                                              daemon="/usr/bin/consul agent"
                                              daemon_flags="-config-dir /etc/consul.d"
                                              daemon_user="_consul"
                                              
                                              . /etc/rc.d/rc.subr
                                              
                                              rc_bg=YES
                                              rc_stop_signal=INT
                                              
                                              rc_cmd $1
                                              
                                              

                                              Then I can add consul_flags="whatever" in /etc/rc.local which describes all my services, what runs and how it is configured. I can add comments, etc.

                                              The equivalent in systemd, after removing everything not needed:

                                              [Unit]
                                              Requires=network-online.target
                                              After=network-online.target
                                              
                                              [Service]
                                              ExecStart=/usr/local/sbin/consul agent $OPTIONS -config-dir=/etc/consul.d
                                              ExecReload=/bin/kill -HUP $MAINPID
                                              KillSignal=SIGINT
                                              
                                              [Install]
                                              WantedBy=multi-user.target
                                              

                                              And harder to just change some flags and get an overview about how the system is configured.

                                              FreeBSD also has daemon which is a simple tool that takes care of everything you might need for running a daemon. Logging, automatic restarts, managing subproceses, chaining user, forking into background, creating a pid file.

                                              I don’t disagree that systemd is better than what many distributions had before. It’s actually a reason of why I got interested in the BSDs. Some systems, like Gentoo or Arch Linux in the past did a better job as well. Especially Gentoo’s OpenRC (also used by some others) comes pretty close to a perfect init in my opinion.

                                              Sorry, I don’t wanna talk about systemd or init systems. There is enough about that elsewhere, but the notion that things on the BSDs are as bad as they were on SysV init based Linux distributions is simply wrong.

                                              About eBPF. dtrace is there and has great integration from Python to Postgres. And btrfs feels like a never ending story and I think even on Linux it’s largely obsoleted by ZFS which just like dtrace has been used in production in very large setups for many years.

                                              See, that’s the thing with Linux though. Things tend to be considered obsolete when they finally manage to stabilize. See Pulseaudio and Primusrun. I don’t think that’s necessarily bad. In fact it’s good that bad stuff gets replaced, but often it’s not about the better option but the newer one it seems and RedHat certainly has interest in pushing their products.

                                              And then you have to hire huge DevOps/SRE-Teams just to keep things compatible with whatever is still supported. Of course that leads to the idea that you need to be able to pull things off Dockerhub and don’t actually manage the system, but outsource things to EKS or GKE.

                                              And I say that as someone whose main source income is consulting helping companies with DevOps related issues, Docker, Kubernetes, etc. It’s a mess, which is why companies throw large sums of money at people like us blowing out the fires.

                                              Whatever it is, being the “cool new thing” managers read in their magazines and “Google uses it” will always win. There’s no shortage of work in the industry if you jump on what’s hot. ;)

                                              1.  

                                                And harder to just change some flags and get an overview about how the system is configured.

                                                “systemctl edit consul” and listing .override files should be the equivalent in systemd world.

                                          1. 0

                                            On macOS, a popular terminal is iterm2.

                                            Why would you list a paid application as a popular choice in an article for newbies? Terminal.app is fine and the obvious starting point for most users.

                                            1. 30

                                              iTerm2 is free and OSS

                                              1. 3

                                                To be honest, I always thought it was paid too. I think there is something that sounds similar that is a commercial app.

                                                1. 2

                                                  It’s funny, I knew a handful of talented Linux kernel devs at an old internship, who all swore by Mobaxterm, a monstrosity of a Windows SSH client (it ships an embedded X server with some form of dwm, wild) that is anything but FLOSS. Oddly enough, I had only seen it otherwise in a non-CS robotics class where it was the batteries-included alternative to PuTTY.

                                                  I totally agree with your sentiment about starting programmers with FLOSS software, even if iterm2 is indeed FLOSS, haha. Many people won’t know what’s out there if the paid/proprietary option is the first they see.

                                                  1. 3

                                                    Oddly enough, I had only seen it otherwise in a non-CS robotics class where it was the batteries-included alternative to PuTTY.

                                                    I feel like a 30 year old boomer for preferring PuTTY over most other WIndows SSH clients. (Well, the other grognard SSH client is Tera Term…)

                                                1. 3

                                                  Can someone offer a rundown of why this is interesting and whether the coreboot config that lets it boot also lets FOSS things boot?

                                                  I’d have expected Windows 11 to boot with Coreboot just fine. The rub would seem to be getting other things to boot with the same config.

                                                  1. 15

                                                    To me the most interesting part of this link is the author’s bio:

                                                    MSFT Director of OS Security.

                                                    1. 5

                                                      Among other things, David is the person who has been driving the effort to have CPU vendors adopt Pluton as their hardware root of trust. I think this project speaks a lot to his motivation: he wants a trustworthy boot chain, from the hardware up to whatever OS you choose to run. That’s why he’s interested in CoreBoot and it’s why he wants everyone to ship Pluton: they both provide parts of this story. This is why all of the ‘Microsoft is trying to take control of all of the things!!!l1111eleventyone’ articles about Pluton make me a bit sad.

                                                      1. 4

                                                        Once bitten, twice shy. Microsoft’s never apologized for their 90s attitude and business approach, nor have they demonstrably changed since then. They were found to be monopolists in both the USA and EU, and did the bare minimum to appease the courts and regulators. The original secure-boot proposals were along the lines of the illegal Wintel trust between Microsoft and Intel, and aimed to exclude other vendors from multiple end-user markets.

                                                        1. 2

                                                          CPU vendors adopt Pluton as their hardware root of trust

                                                          Do you perhaps know what is the difference between Pluton and good old regular fTPM (firmware TPM implemented in the secure enclave of the CPU)? Most of the material about Pluton I found sadly doesn’t go into technical details.

                                                          (Btw nice to see someone from Microsoft Research here, I’m continuously amazed by the work you all do!)

                                                          1. 4

                                                            There are three ways of implementing a TPM:

                                                            • A fTPM is implemented firmware in a separate security level (e.g. TrustZone on Arm, SMM on Intel). This is a problem because it is often vulnerable to side channels and sometimes to things like power-glitching attacks that let untrusted code run in the secure mode.
                                                            • A separate chip. These are often just plain unreliable but even with a good implementation they are problematic because there’s no secure communication path between them and the CPU. A physical attacker can record the measurements that the CPU sends to the TPM and then fake them and request that the TPM signs things on behalf of malicious hardware / software.
                                                            • An on-chip (or on-package) separate core with isolated memory. There are a bunch of corner cases that can make this kind of thing difficult to get right (is the untrusted core able to attack it by adjusting power? What about timing-based attacks from an attacker on the untrusted core with a cycle-accurate view of time?)

                                                            Pluton is basically a good implementation of the last form. It is hardened against various kinds of attacks and it’s had people with physical access attacking the version shipped in the Xbox One for several years without success (most of them probably weren’t willing to pop the top off the SoC and directly probe individual components, but if an attacker is willing to spend that much on a targeted attack on me then I’m probably screwed anyway). It provides the key-management, signing, encryption, and random-number generation functionality that’s necessary to implement the TPM spec (it could also be used to implement other interfaces to the same underlying functionality).

                                                            I don’t really know why we don’t publish more docs on Pluton. My guess is that it’s because people are expected to communicate with it via some higher-level interface (such as the TPM spec or the APIs in Azure Sphere) and so they shouldn’t be exposed to the implementation details. The docs I’ve read are all marked Microsoft-super-duper-secret (or whatever the official term for this is) but they didn’t contain anything that looked like it actually was commercially sensitive: Pluton provides the set of features that I’d hope for (though, sadly, not always get) from an off-the-shelf hardware root of trust. I believe Google’s Titan core provides a very similar set of features (though I’ve not seen any detailed docs about it, so that’s largely conjecture). The exact hardware mitigations that it deploys are probably sensitive because they might help an attacker (even knowing that a particular category of attacks definitely won’t work can significantly reduce work factor).

                                                        2. 1

                                                          Wow. I completely missed that. Yep… that wins.

                                                        3. 4

                                                          I’d have expected Windows 11 to boot with Coreboot just fine.

Honestly, based on what I’ve seen, I wouldn’t expect anything other than Linux to boot with Coreboot. It seems very much that the people who’d explicitly use Coreboot only care about that.

                                                          1. 2

Maybe the “11” part deserves more emphasis than I’m giving it, but I’ve never found it challenging to boot Windows. If you can make your thing look more-or-less like BIOS and handle x86 instructions, Windows gonna boot. So while we’re in violent agreement that those who go to the trouble to put Coreboot on a thing likely only care that Linux boots on the thing, “Windows boots on PC” feels very “dog bites man” even with Coreboot in place. I guess it changes a bit if Windows is still willing to do that handful of things it’ll only do with measured boots…

                                                            And I guess that’s the kernel of my question: is a Coreboot config that leaves Windows comfortable doing those things it only wants to do after it likes all the measurements also able to boot Linux?

                                                        1. 2

What’s interesting about the UI? Simply that it’s vector-based?

                                                          1. 13

I also find the tabbed windows interesting – each tab can be a different app.

                                                            1. 5

                                                              Like Haiku!

                                                              1. 2

                                                                Fluxbox also offers that.

                                                            2. 2

If we’re raising questions: what’s with “boots in seconds”? Don’t all OSes do that?

                                                              Edit: not that this doesn’t look interesting, it’s just that that particular boast caught my eye.

                                                              1. 17

There’s a great FreeBSD wiki page that Colin Percival (of tarsnap fame) has been maintaining on improving FreeBSD boot time. In particular, it tells you where the time goes.

A lot of the delays come from things that are added to support new hardware or simply from the size of the code. For example, loading the kernel takes 260ms, which is a significant fraction of the 700ms that Essence takes. Apple does (did?) a trick here where it did a small amount of defragmentation of the filesystem to ensure that the kernel and everything needed for boot were contiguous on the disk and so could be streamed quickly. You can also address it by making the kernel more modular and loading components on demand (e.g. with kernel modules), but that then adds latency later.

Some of the big delays (>1s) came from sleep loops that wait for things to stabilise. If you’re primarily working on your OS in a VM, or on decent hardware, then you don’t need these delays, but when you start deploying on cheap commodity hardware you discover that a lot of devices take longer to initialise than you’d expect. A bunch of these delays were added back in the old ISA days and so may well be much longer than necessary now. Some of them are still necessary for big SCSI systems (a big hardware RAID array may take tens of seconds to become available to the OS).

                                                                Once the kernel has loaded, there’s the init system. This is something that launchd, SMF, and systemd are fairly good at. In general, you want something that can build a dynamic dependency graph and launch things as their dependencies are fulfilled but you also need to avoid thundering herds (if you launch all of the services at once then you’ll often suffer more from contention than you’ll gain from parallelism).
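
The dependency-graph launch described here can be sketched in a few lines. This is a toy illustration, not any real init system: the service names and dependency graph are invented, and `start` just prints instead of spawning a process. The bounded worker pool is what keeps all the ready services from launching at once (the thundering-herd problem).

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# Hypothetical service graph: each service maps to the services that
# must be running before it can start.
deps = {
    "network": set(),
    "logging": set(),
    "database": {"network", "logging"},
    "webserver": {"database"},
}

def start(name):
    # A real init system would fork/exec the service here.
    print(f"starting {name}")
    return name

started = []
ts = TopologicalSorter(deps)
ts.prepare()
# Cap concurrency so we don't launch every ready service at once.
with ThreadPoolExecutor(max_workers=2) as pool:
    while ts.is_active():
        for name in pool.map(start, list(ts.get_ready())):
            started.append(name)
            ts.done(name)
```

`TopologicalSorter` hands back batches of services whose dependencies are all satisfied, so independent services (here, `network` and `logging`) can start in parallel while dependent ones wait.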

                                                                On top of that, on *NIX platforms, there’s then the windowing system and DE. Launching X.org is fairly quick these days but things like KDE and GNOME also bundle a load of OS-like functionality. They have their own event framework and process launchers (I think systemd might be subsuming some of this on Linux?) and so have the same problem of starting all of the running programs.

The last bit is something that macOS does very well because they cheat. The window server owns the buffer that contains every rendered window and persists this across reboot. When you log back in, it displays all of your apps’ windows in the same positions that they were, with the same contents. It then starts loading them in the background, sorted by the order in which you try to run them. Your foreground app will be started first and so the system typically has at least a few seconds of looking at that before you focus on anything else and so it can hide the latency there.

All of that said, for a desktop OS, the thing I care about the most is not boot time, it’s reboot time. How long does it take between shutting down and being back in the exact same state in all of my apps that I was in before the reboot? If I need a security update in the kernel or a library that’s linked by everything, then I want to store all state (including window positions and my position within all open documents), apply the update, shut down, restart, reload all of the state, and continue working. Somewhat related: if the system crashes, how long does it take me to resume from my previous state? Most modern macOS apps are constantly saving restore points to disk and so if my Mac crashes then it typically takes under a minute to get back to where I was before the reboot. This means I don’t mind installing security updates and I’m much more tolerant of crashes than on any other system (which isn’t a great incentive for Apple’s CoreOS team).

                                                                1. 1

And Essence basically skips all of that cruft? Again, not to put the project down, but all that for a few seconds, once a week, doesn’t seem like much.

I don’t think I reboot my Linux boxes more often, and even my work Windows machine sometimes reminds me that I must reboot once a week because of company policy.

Maybe if I had an old slow laptop it would matter to me more. Or if I was doing something with low-power devices (but then I would probably be using something more specialised, if that was important).

Again: impressive feat and good work, and I hope they make something out of it in the long run. But doesn’t Google also work on Fuchsia, and Apple on macOS? They probably have a much better chance of becoming the new desktop leaders. I don’t know, this seems nice, but I think their biggest benefit is in what the authors will learn from the project and apply elsewhere.

                                                                  1. 2

And Essence basically skips all of that cruft? Again, not to put the project down, but all that for a few seconds, once a week, doesn’t seem like much.

                                                                    It probably benefits from both being small (which it gets for free by being new) and from not having been tested much on the kind of awkward hardware that requires annoying spin loops. Whether they can maintain this is somewhat open but it’s almost certainly easier to design a system for new hardware that boots faster than it is to design a system for early ’90s hardware, refactor it periodically for 30 years, and have it booting quickly.

But doesn’t Google also work on Fuchsia, and Apple on macOS? They probably have a much better chance of becoming the new desktop leaders. I don’t know, this seems nice, but I think their biggest benefit is in what the authors will learn from the project and apply elsewhere.

                                                                    I haven’t paid attention to what Fuchsia does for userspace frameworks (other than to notice that Flutter exists). Apple spent a lot of effort on making this kind of thing fast but most of it isn’t really to do with the kernel. Sudden Termination came from iOS but is now part of macOS. At the OS level, apps enter a state where they have no unsaved state and the kernel will kill them (equivalent of kill -9) whenever it wants to free up memory. The WindowServer keeps their window state around so that they can be restored in the background. This mechanism was originally so iOS could kill background apps instead of swapping but it turns out to be generally useful. The OS parts are fairly simple, extending Cocoa so that it’s easy to write apps that respect this rule was a lot more difficult work.

                                                                2. 5

                                                                  In the demo video, it booted in 0.7s, which, to me, is impressive. Starting applications and everything is very snappy too. The wording of the claim doesn’t do it justice though, I agree with that.

                                                                  1. 3

                                                                    Ideally you should almost never have to reboot an OS, so boot time doesn’t interest me nearly as much as good power management (sleep/wake).

                                                                    1. 3

                                                                      how many people live in this ideal world where you never have to reboot the OS?

                                                                      1. 6

                                                                        IBM mainframe operators.

                                                                        1. 4

It’s not never, but I basically only reboot my Macs and i(Pad)OS devices for OS updates, which is a handful of times per year. The update itself takes long enough that the reboot-time part of it is irrelevant - I go do something else while the update is running.

                                                                          1. 3

I think it’s really only Windows that gets rebooted. I used to run Linux and OpenBSD without reboots for years sometimes, and like you I only reboot macOS when I accidentally run out of laptop battery or do an OS update.

                                                                          2. 3

                                                                            I dunno; how many people own Apple devices? I pretty much only reboot my Macs for OS updates, or the rare times I have to install a driver. My iOS devices only reboot for updates or if I accidentally let the battery run all the way down.

                                                                            I didn’t think this was a controversial statement, honestly. Haven’t Windows and Linux figured out power management by now too?

                                                                            1. 1

                                                                              pretty much only reboot my Macs for OS updates, or the rare times I have to install a driver

That’s not “never” - or are macOS updates really so few and far between?

                                                                            2. 1

I feel like this is one of those things where people are still hung up on the days of slow HDDs and older versions of Windows bloated with all kinds of startup software.

                                                                              1. 1

                                                                                It depends a bit on the use case. For client devices, I agree, boot time doesn’t matter nearly as much as resume speed and application relaunch speed. For cloud deployments, it can matter a lot. If you’re spinning up a VM instance for each CI job, for example, then anything over a second or two starts to be noticeable in your total CI latency.

                                                                            3. 2

                                                                              If it boots fast, does sleep matter?

                                                                              1. 6

                                                                                It does unless you can perfectly save state each time you boot. And boot in less than a second.

                                                                                1. 4

                                                                                  Only if it can somehow store the entire working state, including unsaved changes, and restore it on boot. Since that usually involves relaunching a bunch of apps, it takes significantly longer than a simple boot-to-login-screen.

                                                                                  This isn’t theoretical. Don’t you have any devices that sleep/wake reliably and quickly? It’s profoundly better than having to shut down and reboot.

                                                                                  1. 2

                                                                                    Only if it can somehow store the entire working state, including unsaved changes, and restore it on boot

That’s another interesting piece of the design space. I’ve seen research prototypes on Linux and FreeBSD (I think the Linux version maybe got merged?) that extend the core-dump functionality to provide a complete dump of memory and associated kernel state (open file descriptors). Equivalent mechanisms have been around in hypervisors for ages because they’re required for suspend / resume and migration. They’re much easier in a hypervisor because the interfaces for guests have a lot less state: a block device has in-flight transactions, a network device has in-flight packets, and all other state (e.g. TCP/IP protocol state, file offsets) is stored in the guest. For POSIXy systems, various things are increasingly difficult:

                                                                                    • Filesystem things are moderately easy. You need to store the inode and offset. If another process modifies the file while you’re suspended then it’s not really different than another process modifying it while you’re running. Filesystem locks are a bit more interesting - if a process holds a filesystem lock and is suspended to disk, what should happen? You probably don’t want to leave the file locked until the process is reloaded, because it might not be. On the other hand, it will probably break in exciting ways if it silently drops the lock across suspend / resume. This isn’t a problem if you’re suspending / resuming all processes at the same time.
                                                                                    • Network connections are basically impossible, which makes them easy: you just drop all connections and restore listening / bound sockets. Most programs already know how to handle the network going away intermittently.
                                                                                    • Local IPC can be interesting. If I create a pipe and fork, then one of the children is frozen to disk, what should happen? If both are frozen and restored together, ideally they’d both get the pipe back in the same state, which means that I need to persist a UUID or similar for each IPC connection so that restoring groups of processes (e.g. a window server and an application) can work.

                                                                                    If you have this mechanism and you have fast reboot, then you don’t necessarily need OS sleep states. If you also have a sudden termination mechanism then you can use this as fallback for apps that aren’t in sudden-termination state.
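
The filesystem piece is the easy one to demo. Here is a minimal, hypothetical sketch of checkpointing just that slice of process state (the path and current offset of each open file) and restoring it after a simulated restart; real checkpoint/restore systems have to handle all the other kernel state described above as well, and would record inodes rather than trusting paths.

```python
import json, os, tempfile

def checkpoint_files(files):
    """Record the path and current offset of each open file object."""
    return [{"path": f.name, "offset": f.tell()} for f in files]

def restore_files(records):
    """Reopen each file and seek back to its saved offset."""
    restored = []
    for rec in records:
        f = open(rec["path"], "rb")
        f.seek(rec["offset"])
        restored.append(f)
    return restored

# Demo: read part of a file, checkpoint, "restart", resume reading.
path = os.path.join(tempfile.mkdtemp(), "doc.txt")
with open(path, "wb") as out:
    out.write(b"hello world")

f = open(path, "rb")
f.read(6)                                 # consume "hello "
saved = json.dumps(checkpoint_files([f])) # serialised checkpoint
f.close()                                 # the process "exits" here

(f2,) = restore_files(json.loads(saved))
resumed = f2.read()                       # picks up at the saved offset
f2.close()
```

Note this sketch also illustrates the caveat above: if another process truncates or rewrites the file while the checkpoint is on disk, the restored offset is silently wrong, which is exactly the same hazard as another process modifying the file while you’re running.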

                                                                                    Of course, it depends a bit on why people are rebooting. Most of the time I reboot, it’s to install a security update. This is more likely to be in a userspace library than the kernel. As a user, the thing I care most about is how quickly I can restart my applications and have them resume in the same state. If the kernel / windowing system can restart in <1s that’s fine, but if my apps lose all of their state across a restart then it’s annoying. Apple has done a phenomenal amount of work over the last decade to make losing state across app restarts unusual (including work in the frameworks to make it unusual for third-party apps).

                                                                                    1. 1

All my devices can sleep/wake fine, but I almost never use it. My common apps all auto-start on boot, and with my SSDs I boot in a second or two (about the same as coming out of sleep, honestly; in both cases the slowest part is typing my password).

                                                                                      1. 1

On my current laptop, it wakes up instantly when I open the lid, I enter the password, and the state is exactly as I left it. (And it probably writes fewer gigabytes to disk than hibernation does, too.)

                                                                            1. 16

I wonder who at System76 was responsible for evaluating all the possible directions they could invest in, and decided the desktop environment is the biggest deficiency of System76.

                                                                              1. 11

It’s also great marketing. I’ve heard “System76” way more since they released Pop_OS. So while people may not be buying machines for the OS, it seems that as a pretty popular distro it keeps the name in their heads, and they may be more likely to buy a System76 machine on their next upgrade.

                                                                                1. 1

Well, I’d buy a machine, but they’re not selling anything with EU keyboard layouts or power cords.

                                                                                2. 5

                                                                                  I know a few people who run Pop_OS, and none of them run it on a System76 machine, but they all choose Pop over Ubuntu for its Gnome hacks.

Gnome itself isn’t particularly friendly to hacks — the extension system is really half-baked (though it’s perhaps one of the only uses of the SpiderMonkey JS engine outside Firefox, which is pretty cool!). KDE Plasma has quite a lot of features, but it doesn’t really focus on usability the way it could.

                                                                                  There’s a lot of room for disruption in the DE segment of the desktop Linux market. This is a small segment of an even smaller market, but it exists, and most people buying System76 machines are part of it.

                                                                                  Honestly, I think that if something more friendly than Gnome and KDE came along and was well-supported, it could really be a big deal. “Year of the Linux desktop” is a meme, but it’s something we’ve been flirting with for decades now and the main holdups are compatibility and usability. Compatibility isn’t a big deal if most of what we do on computers is web-based. If we can tame usability, there’s surely a fighting chance. It just needs the financial support of a company like System76 to be able to keep going.

                                                                                  1. 7

                                                                                    There’s a lot of room for disruption in the DE segment of the desktop Linux market. This is a small segment of an even smaller market, but it exists, and most people buying System76 machines are part of it.

                                                                                    It’s very difficult to do anything meaningful here. Consistency is one of the biggest features of a good DE. This was something that Apple was very good at before they went a bit crazy around 10.7 and they’re still better than most. To give a couple of trivial examples, every application on my Mac has the buttons the same way around in dialog boxes and uses verbs as labels. Every app that has a preferences panel can open it with command-, and has it in the same place in the menus. Neither of these is the case on Windows or any *NIX DE that I’ve used. Whether the Mac way is better or worse than any other system doesn’t really matter, the important thing is that when I’ve learned how to perform an operation on the Mac I can do the same thing on every Mac app.

                                                                                    In contrast, *NIX applications mostly use one of two widget sets (though there is a long tail of other ones) each of which has subtly different behaviour for things like text navigation shortcut keys. Ones designed for a particular DE use the HIGs from that DE (or, at least, try to) and the KDE and GNOME ones say different things. Even something simple like having a consistent ‘open file’ dialog is very hard in this environment.

                                                                                    Any new DE has a choice of either following the KDE or GNOME HIGs and not being significantly different, or having no major applications that follow the rules of the DE. You can tweak things like the window manager or application launcher but anything core to the behaviour of the environment is incredibly hard to do.

                                                                                    1. 4

                                                                                      There’s a lot of room for disruption in the DE segment of the desktop Linux market.

                                                                                      Ok, so now we have :

                                                                                      • kitchen sink / do everything : KDE

                                                                                      • MacOS like : Gnome

                                                                                      • MacOS lookalike : Elementary

                                                                                      • Old Windows : Gnome 2 forks (eg MATE)

                                                                                      • lightweight environments : XFCE / LXDE

                                                                                      • tiling : i3, sway etc etc (super niche).

• something new from scratch but not entirely different: Enlightenment

So what exactly can be disrupted here when there are so many options? What is the disruptive angle?

                                                                                      1. 15

                                                                                        I think you’re replying to @br, not to me, but your post makes me quite sad. All of the DEs that you list are basically variations on the 1984 Macintosh UI model. You have siloed applications, each of which owns one or more windows. Each window is owned by precisely one application and provides a sharp boundary between different UIs.

                                                                                        The space of UI models beyond these constraints is huge.

                                                                                        1. 5

                                                                                          I think any divergence would be interesting, but it’s also punished by users - every time Gnome tries to diverge from Windows 98 (Gnome 3 is obvious, but this has happened long before - see spatial Nautilus), everyone screams at them.

                                                                                        2. 3

I would hesitate to call elementary or Gnome Mac-like. They take some elements more than others, sure, but a lot of critical UI elements from the Mac OS look are missing, and they admit they’re doing their own thing, which a casual poke would reveal.

                                                                                          I’d also argue KDE is more the Windows lookalike, considering how historically they slavishly copied whatever trends MS was doing at the time. (I’d say Gnome 2 draws more from both.)

                                                                                          1. 3

                                                                                            I’d also argue KDE is more the Windows lookalike, considering how historically they slavishly copied whatever trends MS was doing at the time

I would have argued that at one point. I’d have argued it loudly around 2001, which is the last time that I really lived with it for longer than six months.

                                                                                            Having just spent a few days giving KDE an honest try for the first time in a while, though, I no longer think so.

I’d characterize KDE as an attempt to copy all the trends, for all time, in Windows + Mac + UNIX, add a few innovations and an all-encompassing settings manager, and let each user choose their own specific mix of those.

                                                                                            My current KDE setup after playing with it for a few days is like an unholy mix of Mac OS X Snow Leopard and i3, with a weird earthy colorscheme that might remind you of Windows XP’s olive scheme if it were a little more brown and less green.

                                                                                            But all the options are here, from slavish mac adherence to slavish win3.1 adherence to slavish CDE adherence to pure Windows Vista. They’ve really left nothing out.

                                                                                            1. 1

                                                                                              But all the options are here, from slavish mac adherence to slavish win3.1 adherence to slavish CDE adherence to pure Windows Vista. They’ve really left nothing out.

                                                                                              I stopped using KDE when 4.x came out (because it was basically tech preview and not usable), but before that I was a big fan of the 3.x series. They always had settings for everything. Good to hear they kept that around.

                                                                                          2. 2

                                                                                            GNOME really isn’t macOS like, either by accident or design.

                                                                                          3. 3

I am no longer buying this consistency thing and how the Mac is superior. So many things we do all day are web apps, which all look and function completely differently. I use Gmail, Slack, GitHub Enterprise, Office, what-have-you daily at work and they are all just browser tabs. None looks like the other and it is totally fine. The only real local apps I use are my IDE, which is written in Java and also looks nothing like the Mac, a terminal, and a browser.

                                                                                            1. 7

                                                                                              Just because it’s what we’re forced to accept today doesn’t mean the current state we’re in is desirable. If you know what we’ve lost, you’d miss it too.

                                                                                              1. 2

                                                                                                I am saying that the time of native apps is over and it is not coming back. Webapps and webapps disguised as desktop applications a la Electron are going to dominate the future. Even traditionally desktop heavy things like IDEs are moving into the cloud and the browser. It may be unfortunate, but it is a reality. So even if the Mac was superior in its design the importance of that is fading quickly.

                                                                                                1. 2

                                                                                                  “The time of native apps is over .. webapps … the future”

                                                                                                  Non-rhetorical question: Why is that, though?

                                                                                                  1. 4

                                                                                                    Write once, deploy everywhere.

                                                                                                    Google has done the hard work of implementing a JS platform for almost every computing platform in existence. By targeting that platform, you reach more users for less developer-hours.

                                                                                                    1. 3

                                                                                                        The web is the easiest and best-understood application deployment platform there is. Want to upgrade all users? F5 and you are done. Best of all: it is cross-platform.

                                                                                                    2. 1

                                                                                                      I mean, if you really care about such things, the Mac has plenty of native applications and the users there still fight for such things. But you’re right that most don’t on most platforms, even the Mac.

                                                                                                  2. 2

                                                                                                    And that’s why the Linux desktop I use most (outside of work) is… ChromeOS.

                                                                                                      Now, I primarily use it for entertainment like video streaming. But with just an SSH client, I can access my “for fun” development machine too.

                                                                                                  3. 3

                                                                                                    Any new DE has a choice of either following the KDE or GNOME HIGs and not being significantly different, or having no major applications that follow the rules of the DE. You can tweak things like the window manager or application launcher but anything core to the behaviour of the environment is incredibly hard to do.

                                                                                                    Honestly, I’d say Windows is more easily extensible. I could write a shell extension and immediately reap its benefit in all applications - I couldn’t say the same for other DEs without probably having to patch the source, and that’ll be a pain.

                                                                                                    1. 1

                                                                                                      GNOME HIG also keeps changing, which creates more fragmentation.

                                                                                                        20 years ago, they did express a desire for unification: https://lwn.net/Articles/8210/

                                                                                                  4. 1

                                                                                                    It certainly is a differentiator.

                                                                                                  1. 34

                                                                                                    a poor intern that had a difficult to describe kind of flabbergasted expression on his face once the call connected.

                                                                                                    … is it really that surprising?

                                                                                                    I mean this project is interesting and all that, but it seems kind of unusual to use it for social interaction with coworkers. It seems even more distancing than just not having video. Is it really necessary to distract your coworkers with things like these?

                                                                                                    1. 21

                                                                                                      Agreed. This kind of pushes the bounds of “professional” behavior well beyond what I’d expect people to accept. I’m 100% in favor of people being able to express themselves at work, when it’s not a distraction to actually doing work. However these avatars fall so deeply into the depths of the uncanny valley that they can be incredibly painfully distracting to look at, and I find myself unable to actually pay attention to the content. I’d immediately ask a coworker who used one to turn it off… no webcam at all would be vastly preferable.

                                                                                                      1. 10

                                                                                                        At work, I never turn on my webcam. On my desktop, I don’t even have one plugged in. No one cares.

                                                                                                      2. 16

                                                                                                        I admit that in hindsight that was a mistake. However a lot of the reason I use it sparingly is because I hate how I look and would much rather have the ability to present myself in a way that is not the body I was cursed into when I was born into this plane.

                                                                                                        1. 1

                                                                                                          I think this is fine as long as it is opt-in. The UI would advise everyone involved that there is a ridiculous distraction and only show it to those who are OK with it.

                                                                                                        2. 14

                                                                                                          “Anime is real?!” –Intern

                                                                                                          Honestly I’m personally glad that people are putting in the social capital to make this acceptable. Morphological freedom should be a human right; this is a small step towards that.

                                                                                                        1. 19

                                                                                                          Maybe it’s too simple? This comment was part of the “My first impressions of web3” discussion we are having in parallel:

                                                                                                          A protocol moves much more slowly than a platform. After 30+ years, email is still unencrypted; meanwhile WhatsApp went from unencrypted to full e2ee in a year. People are still trying to standardize sharing a video reliably over IRC; meanwhile, Slack lets you create custom reaction emoji based on your face.

                                                                                                          1. 19

                                                                                                            I don’t want to discount Moxie’s otherwise entirely correct (IMHO) observation here but it’s worth remembering that, like everything else in engineering, this is also a trade-off.

                                                                                                            IRC is an open standard, whereas WhatsApp is both a (closed, I think?) standard and a single application. Getting all implementations of an open standard on the same page is indeed difficult and carries a lot of inertia. Getting the only implementation of a closed standard on the same page is trivial. However, the technical capability to do so immediately also carries the associated risk that it’s not necessarily the page everyone wants to be on. That’s one of the reasons why, long after WhatsApp is as dead as Yahoo! Messenger, people will still be using IRC. This shouldn’t be taken to mean that IRC is better than either WhatsApp or Slack – just that, for all its simplicity, there are in fact many good reasons why it outlasted many more capable and well-funded platforms, not all of them purely technical and thus not all of them outperformable by means of technical capability.

                                                                                                            It’s also worth pointing out that, while Slack lets you create custom reaction emoji based on your face, the standard way to share video reliably over both IRC and Slack is posting a Youtube link. More generally, it has been my experience that, for any given task, a trivial application of a protocol that’s better suited for that task will usually outperform any non-core extensions of a less suitable protocol.

                                                                                                            1. 6

                                                                                                              It’s also worth pointing out that, while Slack lets you create custom reaction emoji based on your face, the standard way to share video reliably over both IRC and Slack is posting a Youtube link. More generally, it has been my experience that, for any given task, a trivial application of a protocol that’s better suited for that task will usually outperform any non-core extensions of a less suitable protocol.

                                                                                                              I mean, I’m not uploading an entire 10 minute video, but likely something short from my camera. To do so from Slack, I press the button and pick it out of my camera roll. From IRC, I have to upload it somewhere, copy and paste the link, and paste that…

                                                                                                              1. 1

                                                                                                                Fair point, that’s not the kind of video I was thinking of. You’re right!

                                                                                                                1. 1

                                                                                                                  The simplest way for me to share pics among my friends on IRC is to upload the images to our Discord, then share the image from there. Sad, but true.

                                                                                                                  A big thing hobbling IRC is the lack of decent mobile support without paid services (IRCloud) or using a bouncer.

                                                                                                              2. 10

                                                                                                                meanwhile, Slack lets you create custom reaction emoji based on your face.

                                                                                                                This is exactly why email survived all the IMs du jour that have come and gone. A clear vision of what matters, what the core functionality is, and which problem it sets out to solve. All of which Slack lacks.

                                                                                                                1. 13

                                                                                                                  This is exactly why email survived all the IMs du jour that have come and gone.

                                                                                                                  Does this really matter, though? I’ve probably used 10 messaging platforms in the last 25 years, some concurrently for different purposes. They each served their purpose. The transition was, in some cases, a little rocky, but only for a short time. I don’t really think my life would have been meaningfully improved by having used only a single messaging system during that time period.

                                                                                                                  1. 4

                                                                                                                    I’ve probably used 10 messaging platforms in the last 25 years, some concurrently for different purposes

                                                                                                                    Mostly because all the disruptors bootstrapped their network effect by integration with XMPP and IRC and then dropped it when they had enough market share?

                                                                                                                    The flipside of Moxie’s (excellent) observation is that the slow-moving protocols are easy to support - so that is where the network effects live. If the world is split between centralised networks X and Y (which will refuse to interoperate with each other) and you can get the core functionality with the (more basic) protocol, then there is value in the core protocol (you can speak to both X and Y).

                                                                                                                    1. 7

                                                                                                                      There are several components to a messaging system:

                                                                                                                      • A client, that the end-user interacts with.
                                                                                                                      • A server (optionally) that the client communicates with.
                                                                                                                      • A protocol that the client and server use to communicate (or that clients use to communicate with each other in a P2P system).
                                                                                                                      • A service, which runs the server and (often) provides a default client.

                                                                                                                      In my experience, the overwhelming majority of users conflate all four of these. Even for email, they see something like Hotmail or GMail as a combination of all of these things and the fact that Hotmail and GMail can communicate with each other is an extra and they have no idea that it works because email is a standard protocol. The fact that WhatsApp doesn’t let you communicate with any other chat service doesn’t surprise them because they see the app as the whole system. The fact that there’s a protocol, talking to a server, operated by a service provider, just isn’t part of their mental model of the system.

                                                                                                                      I read a paper a couple of years ago that was looking at what users thought end-to-end encryption meant. It asked users to draw diagrams of how they thought things like email and WhatsApp worked and it pretty much coincided with my prior belief: there’s an app and there’s some magic, and that’s as far as most users go in their mental models.

                                                                                                                      1. 3

                                                                                                                        …then there is value in the core protocol (you can speak to both X and Y)

                                                                                                                        But if those two apps won’t talk to one another, why would they speak the shared protocol? Plus, most people don’t choose protocols, they choose applications, and most people don’t choose based on “core functionality”, they choose based on differentiated features. I’m not unsympathetic here, I kicked and screamed about switching from Hipchat to Slack (until Hipchat ceased to exist, of course), but I watched people demand the features that Slack offered (threaded conversations, primarily). They didn’t care about being able to choose their clients, or federation, or being at the mercy of a single company. They cared about the day-to-day user experience.

                                                                                                                    2. 11

                                                                                                                      eh, I think it’s potentially more that email is so deeply embedded into society that it’s difficult to remove, rather than about any defining characteristics of the protocol itself :p

                                                                                                                      1. 10

                                                                                                                        I’m still not sure you can call it “survived” when the lowest common denominator for email is still:

                                                                                                                        • no DMARC
                                                                                                                        • no SPF
                                                                                                                        • MS blackholes mail at random (status code OK, mail gone; support can “upgrade” your IP so that you at least get a “denied”, and then you’re told to subscribe to some trust service for email delivery)
                                                                                                                        • Google doesn’t like you sometimes
                                                                                                                        • German Telekom doesn’t trust new IPs and wants an imprint before whitelisting, no SPF
                                                                                                                        • there’s some random blacklist out there that has been marking IPv4s as “dynamic” since 2002; give them money to change that (no imprint)
                                                                                                                        • if you want push notifications, the content goes through your mobile OS provider in plaintext
                                                                                                                        • if you want SMTP/IMAP there is no second-factor login at all, so you’re probably using the same credentials that everything else uses

                                                                                                                        So everyone goes to AWS for a mail sling or some other service, because the anti-spam/trust model is inherently broken and centralized. Yes, email is still alive and I wouldn’t want to exchange it for some proprietary messenger, but it’s increasingly hard to self-host it even remotely, or to let any company host it for you that isn’t one of the big 5 (one bad customer in the IP range and you’re out) - unless you don’t actually care whether your emails can be received or sent.
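
                                                                                                                        For context, the DNS side of that baseline is only a couple of TXT records; a minimal sketch for a hypothetical example.com (the IP address and report mailbox are placeholders) - the hard part is the reputation game, not the records themselves:

                                                                                                                        ```
                                                                                                                        ; SPF: only this IP may send mail for example.com
                                                                                                                        example.com.        IN TXT "v=spf1 ip4:203.0.113.25 -all"
                                                                                                                        ; DMARC: quarantine failures, send aggregate reports
                                                                                                                        _dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
                                                                                                                        ```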

                                                                                                                      2. 10

                                                                                                                        It is. To properly use IRC, you don’t just need IRC, you need a load of ad-hoc things on top. For example, IRC has no authentication at all. Most IRC servers run a bot called NickServ. It runs as an operator and if you want to have a persistent identity then you register it by sending this bot the username and a password. The bot then kicks anyone off the service if they try to use your username but don’t first notify the bot with your password. This is nice in some ways because it means that a client that is incredibly simple can still support these things by having the user send the messages directly. This is not very user friendly. There’s no command vocabulary and no generalised discovery mechanism, there’s just an ad-hoc set of extensions that you either support individually or you punt to the user. This also means that you’re limited to plain text for the core protocol.

                                                                                                                        Back in the ’90s, Internet Explorer came with a thing called MS Comic Chat, which provided a UI that looked like a comic on top of IRC. Every other user saw some line noise in your messages because it just put the state of the graphics inline in the message. If IRC were slightly less simple then this could have been embedded as out-of-band data and users of other IRC clients could have silently ignored it or chosen to support it.

                                                                                                                        I’m still a bit sad that SILC never caught on. SILC is basically modernised IRC. It had quite a nice permissively licensed client library implementation and I wrote a SILC bot back when it looked like it might be a thing that took over from IRC (2005ish?). It was basically dead by around 2015 though. It had quite a nice identity model: usernames were not unique but public keys were and clients could tell you if the David that you’re talking to today was the same as the one you were talking to yesterday (or even tell you if the person decided to change nicknames but kept their identity).

                                                                                                                        1. 5

                                                                                                                          authentication

                                                                                                                          SASL is a thing now. And clients can stay simple by letting the user enter IRC commands directly.

                                                                                                                          NickServ is bad mainly because it is an in-band communication mechanism; that has security implications. I have seen people send e.g. ‘msg nickserv identify hunter2’ to a public channel (mistakenly omitting the leading ‘/’).
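
                                                                                                                          For what it’s worth, the SASL PLAIN mechanism boils down to one base64-encoded blob, so even a bare-bones client can do it; a minimal sketch of just the payload construction (the account name and password here are made up, and the surrounding CAP/AUTHENTICATE commands are shown only as comments, not a real socket session):

```python
import base64

# Hypothetical credentials, for illustration only.
account, password = "alice", "hunter2"

# SASL PLAIN payload: authzid NUL authcid NUL password, base64-encoded.
payload = base64.b64encode(f"\0{account}\0{password}".encode()).decode()

# A client would then send, roughly:
#   CAP REQ :sasl
#   AUTHENTICATE PLAIN
#   AUTHENTICATE <payload>
#   CAP END
print(payload)
```

                                                                                                                          Because the whole exchange is plain text plus one base64 line, the “let the user type commands directly” escape hatch still works.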

                                                                                                                          1. 3

                                                                                                                            For example, IRC has no authentication at all.

                                                                                                                            There’s no command vocabulary and no generalised discovery mechanism

                                                                                                                            There is. https://ircv3.net/specs/extensions/capability-negotiation https://ircv3.net/specs/extensions/sasl-3.1

                                                                                                                            Just about every client and almost every network (the only notable exceptions being OFTC/Undernet/EFnet/IRCnet) support both.

                                                                                                                            If IRC were slightly less simple then this could have been embedded as out-of-band data and users of other IRC clients could have silently ignored it or chosen to support it.

                                                                                                                            Today it could be done cleanly on many networks, thanks to https://ircv3.net/specs/extensions/message-tags
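
                                                                                                                            To illustrate the out-of-band idea: message tags ride in front of the line, and clients that don’t recognise a tag can simply drop it. A simplified parser sketch (the tag names are invented, and a real parser also has to unescape tag values per the spec):

```python
def parse_tags(line: str):
    """Split an IRCv3 line into (tags, rest); simplified: no value unescaping."""
    if not line.startswith("@"):
        return {}, line
    raw, rest = line[1:].split(" ", 1)
    tags = {}
    for item in raw.split(";"):
        key, _, value = item.partition("=")
        tags[key] = value
    return tags, rest

# A Comic Chat of today could stash its graphics state in a tag that
# other clients silently ignore:
tags, rest = parse_tags("@msgid=abc123;+example/avatar=grin :nick!u@h PRIVMSG #chan :hi")
```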

                                                                                                                        1. 4

                                                                                                                          This is a nice counter to some of the more panicked readings I’ve seen - thanks for posting.

                                                                                                                          Until we know more, I’m inclined to agree with the author’s general position: Pluton at the moment looks like a different type of TPM, and TPMs haven’t killed free software yet despite similar concerns having been raised about them in the past.

                                                                                                                          Honestly, I’m more concerned that Pluton will take off, but be proprietary and locked down so that nobody who doesn’t run Windows will see the benefits. That would be a shame.

                                                                                                                          Just to dilute the optimistic tone: I’m also looking forward to seeing the remote update mechanism get compromised for some really exciting, persistent attacks. Taking bets on whether researchers publish an attack before a nation state gets caught exploiting in the wild.

                                                                                                                          1. 3

                                                                                                                            I remember back when Pluton was called Palladium…

                                                                                                                            1. 3

                                                                                                                              Currently it’s not even really “a different type”, the current production firmware offers the exact same TPM 2.0 interface as the other stuff. Unless the transport protocol is different (HI GOOGLE), it should Just Work™ for your SSH needs or whatever.

                                                                                                                              compromised

                                                                                                                              Heh, yeah, one fun worry is that with AMD, Intel and Qualcomm all using Microsoft’s firmware, one compromise would affect all three vendors, so buying devices from different vendors is no longer meaningful.

                                                                                                                            1. 18

                                                                                                                              Web3 was and will forever be an attempt by grifters to impart the idea that their grift is somehow the logical conclusion of the current state of the web. They’ve hijacked the version number.

                                                                                                                              Web3 isn’t a thing.

                                                                                                                              1. 6

                                                                                                                                Web3 isn’t a thing.

                                                                                                                                It’s a forced meme - I don’t see organic growth of people using it and spreading that way, but rather money pumped in by people with a stake to gain if it grows. (Which is also a lot of other things, but “Web3” especially.)

                                                                                                                                1. 2

                                                                                                                                  Web3 isn’t a thing.

                                                                                                                                  Neither, I think it’s worth pointing out, were “web 1.0” or “web 2.0”.

                                                                                                                                  This entire discourse ascribes a kind of coherence and concrete reality to what, at the time we were getting bombarded with web 2.0 messaging, was self-evidently a wave of hype. Of course this, too, is part of a hype cycle, so none of this is exactly surprising, but the way people writing about it just accede to the framing is a weird thing to experience.

                                                                                                                                  1. 1

                                                                                                                                    Valid point! I’d still take the previous wave of hype (it was at least more subtle in its exploitation) over the current one. You at least knew whom the major proponents were sourcing their information from, for the most part.

                                                                                                                                1. 15

                                                                                                                                  The emerging centralization is exactly what happened on “traditional” web, and is human nature. The web IS decentralized, its just that nobody wants the burden of constant upkeep and baseline of knowledge/skills necessary to host their own stuff. Its 0% surprising to see that in the crypto world because its even more complicated and has an added feature that mistakes can live forever and be technically irreversible. Thinking about people who have gotten wallets stolen or scammed or irretrievable for other reasons - or how the adidas smart contract meant to limit purchases was not well-designed-enough and there is zero recourse BY DESIGN.

                                                                                                                                  There’s this belief in the crypto community that these are all things that can be ironed out and that there is a technical crypto-based solution to all of this, but it seems to me like they are banging their heads against a wall trying to recreate something that exists and functionally works a lot better for 99.9999% of cases with something new with tons of baggage and unsolved problems, for a benefit to 0.0001% of cases. There’s a huge UX gap, and it’s going to get bridged (if at all) by a centralized company - this whole thing feels like a play for usurpation rather than changing the game.

                                                                                                                                  Knowing that there are major problems with crypto “to be solved”, it’s frustrating to see the hype at this level. It’s 100% in gold rush / prospecting mode, and so many people are going to lose big even if it emerges as a long-term technology.

                                                                                                                                  My own perspective on crypto/blockchain hasn’t changed much in the last 5 years - it’s a solution looking for a problem. Lately it’s being touted as “the solution to everything”, and I don’t see that. Even more problematic is that there’s an emerging phenomenon where if you don’t play, you get scammed - lots of noise about people minting NFTs on OpenSea for content they don’t own, and there’s no recourse for creators outside the system.

                                                                                                                                  Is it technically neat? Yes. Time will tell if it pans out anywhere near the hype, and there is a lot of reason to believe it won’t.

                                                                                                                                  1. 10

                                                                                                                                    a solution looking for a problem

                                                                                                                                    Ain’t that the truth. I think there’s also an assumption that “decentralized” is automatically good, as if fiat currency must be bad because SVN was bad. When the central currency authority is a democracy of citizens, decentralization throws away “one person, one vote” with regard to what money is, in favor of “one node owned, one vote”. That should only be a benefit in places where people didn’t get to vote in the first place.

                                                                                                                                    1. 4

                                                                                                                                      When the central currency authority is a democracy of citizens, decentralization throws away “one person, one vote” with regard to what money is, in favor of “one node owned, one vote”.

                                                                                                                                      Can’t agree more. One of the fundamental supposed advantages of cryptocurrency is that unlike centralized banking, it’s democratic, right? You have to convince 51% of the network to adopt your change in order to make it, instead of the bank unilaterally changing the rules. But the problem with this in practice is that the ability to do this is imbalanced and heavily favors people who are able to run nodes and especially people who know how to code.

                                                                                                                                      The exact same power structures exist, except instead of banks - which are at least somewhat accountable to the government and therefore citizens - being in charge, crypto people are creating a future where they are the ones in charge, with no real accountability or oversight because the ability to code is a prerequisite for active participation in the system. I find this idea terrifying. The US Congress (feel free to mentally change this to your country’s legislature or whatever) is dysfunctional, but I would still rather they were in charge of the entire financial system than the comments section of the Orange Site.

                                                                                                                                    2. 5

                                                                                                                                      Knowing that there’s major problems with crypto “to be solved” it’s frustrating to see the hype at this level.

                                                                                                                                      personally the most frustrating/exhausting aspect of the current crypto hype cycle is the unbridled optimism and lack of foresight i see from users. these systems are supposed to be trustless and magical, except “they’re not fleshed out yet, but we’ll fix it in the future by adding trusted third parties to the system in a way that somehow doesn’t undermine its value proposition”.

                                                                                                                                      imagine if someone advertised DNS V2 today as completely trustless but glossed over the fact that root servers are still going to exist.

                                                                                                                                      1. 1

                                                                                                                                        Even more problematic is that there’s an emerging phenomenon where if you don’t play, you get scammed - lots of noise about people minting NFTs on OpenSea for content they don’t own, and there’s no recourse for creators outside the system.

                                                                                                                                        From the sidelines, I’ve heard this practice described as scamming or even “theft” a couple of times, and I’m really not seeing it. It’s as if I looked at this submission and thought “wow, this is a great post by Moxie. I wish I owned the lobste.rs submission; maybe I could buy it off faitswulff?” Suppose faitswulff agreed and for $50 I became the official submitter. Has any of Moxie, me, or faitswulff been ripped off? Not at all. It’s totally bonkers, but I’m hard pressed to see the scamming element.

                                                                                                                                        1. 7

                                                                                                                                          Except that NFTs are always advertised as real, indisputable, true, verified-on-the-blockchain, no-pesky-government-can-ever-confiscate-it ownership. If you go far enough down the rabbit hole you end up in the cliché that the only thing you “own” is a receipt that says “you own this receipt”, but that’s not how people actually talk about or advertise the amazing incredible unbelievable future of “web3” and NFTs and such.

                                                                                                                                          1. 1

                                                                                                                                            Leaving aside many artists’ emotional view of copyright, which does tend to veer mostly towards the “theft” part of the spectrum, it’s a fact that being seen as involved in NFTs is a reputational black mark. Some minter creating NFTs without the artist’s consent can direct a ton of ire onto the artist, simply because many, many people view NFTs as money-grabbing scams and hate that their favorite artists and projects are “selling out”.

                                                                                                                                          2. 0

                                                                                                                                            a solution looking for a problem

                                                                                                                                            Have to disagree. This new era of finance brings a lot of possibilities not available in the old bureaucratic system with all its guardians. Not talking about dog coins and outright scams, but the legit projects.

                                                                                                                                            There’s so much composability and innovation happening. One system plugs into the other, and suddenly you have an ecosystem of financial products miles ahead of what’s available to the average pleb “in the normal world”.

                                                                                                                                            Imagine if we only had closed-source software from old giants like IBM, Oracle, Microsoft and Apple instead of the wild-west of creativity and innovation that is open-source. Same idea.

                                                                                                                                            1. 15

                                                                                                                                              “An ecosystem of financial products” that burn copious amounts of energy, are susceptible to rug-pulls, have no actual technical innovation behind any of them, have their users go through shady leaps and bounds to realize their immaterial wealth, were centralized from the start, and only grow in value if there’s enough committed exchange of actual currencies.

                                                                                                                                              Ponzi scheme enthusiasts/grifters can keep spouting this “everything is great” rhetoric, it is not true and never will be. You are part of the problem.

                                                                                                                                              1. 3

                                                                                                                                                What possibilities? There’s a lot of rhetoric and hopes here, but I don’t see much substance.

                                                                                                                                                1. 1

                                                                                                                                                  There’s a bunch depending on how deep you wanna go. But starting from the top, you can:

                                                                                                                                                  • Borrow dollars against your portfolio and have a line of credit not available to you in legacy finance

                                                                                                                                                  • Deposit dollars and earn a pretty good interest rate not available to you in legacy finance (https://app.anchorprotocol.com/)

                                                                                                                                                  • Buy stock derivatives, enabling people in for example Thailand to invest in American stocks like Apple or Tesla (https://mirrorprotocol.app/)

                                                                                                                                                  • Pay online or in real life with your digital assets using something like Chai, Alice, Kash etc. Chai is used by millions of unsuspecting users in South Korea, with thousands of merchants connected: https://techcrunch.com/2020/12/09/seoul-based-payment-tech-startup-chai-gets-60-million-from-hanhwa-softbank-ventures-asia/

                                                                                                                                                  https://www.alice.co/ https://www.kash.io/

                                                                                                                                                  Alice and Kash go even further than Chai, I think. They are basically neobanks.

                                                                                                                                                  Then, there are some more advanced features which might not be for everyone, but you can also:

                                                                                                                                                  • Help secure the ledger/network by staking assets with a validator node, and get rewarded a portion of the transaction fees. It’s as if Visa or Chase Bank partnered with you and gave you a small portion of the revenue.

                                                                                                                                                  • Provide liquidity to various projects and get paid for doing so, for example on exchanges. A little bit of passive income with assets that might otherwise only be speculative.

                                                                                                                                                  Personally I think the composability of a lot of projects is pretty cool. It’s like Lego or functional programming: the output of one project plugs into the input of another, and you can build some pretty sophisticated products/services using the basic building blocks.

                                                                                                                                                  1. 2

                                                                                                                                                    I can’t tell if you’re just really, really enthusiastic about cryptocurrencies to a naive degree or just trying to advance an agenda. But referring to fiat currency as “legacy finance” and linking cryptocurrencies to the idea of functional programming seem to me like cheap attempts at gaining more mindshare among the folks here. Either that, or you’re living in a bubble which is sure to burst at some point.

                                                                                                                                                    In any case, maybe you should relax and stop commenting so much here trying to win people over that don’t want to be won over.

                                                                                                                                                    1. 1

                                                                                                                                                      Yeah ofc I’m just here to corrupt your virtuous souls

                                                                                                                                                      But referring to fiat currency as “legacy finance”

                                                                                                                                                      I didn’t. There’s plenty of fiat (stable coins) in crypto as well. I was talking about old banks & their old tech stacks.

                                                                                                                                                      linking cryptocurrencies to the idea of functional programming seems to me as cheap attempts at gaining more mindshare among the folks here

                                                                                                                                                      I’m just writing about aspects that interest me, one of which is FP. There are even a few projects written in Haskell.

                                                                                                                                                      stop commenting so much here trying to win people over that don’t want to be won over.

                                                                                                                                                      A.k.a. you want to maintain the echo chamber? There must be room for other views as well.

                                                                                                                                                      bubble which is sure to burst at some point

                                                                                                                                                      There are booms and busts everywhere. Same in the stock market & tech startups. The long game matters more.

                                                                                                                                                2. 2

                                                                                                                                                  Mandrean, I think it is great that you showed up in the discussion since you’ve done a very polite job of representing a naive crypto-influenced hopeful.

                                                                                                                                                  A few things I’d like to point out:

                                                                                                                                                  • Proof of Stake is not decentralized and in fact cannot be decentralized. It is a permissioned system where the only way to get voting rights is an out-of-band transaction (buying tokens). There’s also a convincing argument that you can force these platforms to recentralize if they become decentralized, via coordinated bribery offers (but only if you have an oracle).
                                                                                                                                                  • The oracle problem is the elephant in the room and addressing that requires social networking technology not verifiable runtimes (blockchains).
                                                                                                                                                  • Proof of Work, and probably also Chia’s Proof of Space+Time, are things that work but /just barely/. With Chia we may have an energy-friendly settlement layer to build a p2p digital society on top of, but they decided to premine as much as will be mined over the next 20 years.
                                                                                                                                                  • The fundamental coordination problem is not byzantine fault tolerance but rather the tragedy of the commons (sybil resistance is implied by resistance to TotC). Proof of Space+Time is still a resource-wasting race, so it’s not an improvement over bitcoin until we have the social layer to regulate the competition.
                                                                                                                                                  • NFTs can eventually become deeds to content but only once we have a legitimate platform, which people believe the combination of twitter and ethereum to be. However the legitimate platform (for the time being) is called government.

                                                                                                                                                  Basically, what someone said about crypto-world programmers not needing to do their jobs to get paid is not only correct, I also think it works the other way around: if you want to get paid, you need to run a scam. The problems are hard to solve, require large investment to make a noticeable change in the state of tooling, and finally there is no way our current economic system will value charitable work. Sorry, I have to run to lecture now; I’d be willing to go deeper into these points.

                                                                                                                                                  1. 1

                                                                                                                                                    Proof of Stake is not decentralized and in fact cannot be decentralized.

                                                                                                                                                    Disagree; PoS can be decentralized not only in terms of geographic location & jurisdiction, but by voting power as well (and maintained that way).

                                                                                                                                                    For example, see study from Leeds University:

                                                                                                                                                    Our results based on simulated paths of the dynamics of nodes’ coins at stake suggest that decentralization of PoS blockchains can be largely maintained with moderate constant or dynamically adjusted coin inflation while decreasing inflation yields a large loss in active staking nodes over time when coin prices are static. Target node participation rates are not only fairer in terms of coin distribution but also yield higher value-weighted returns for participants.

                                                                                                                                                    The oracle problem is the elephant in the room

                                                                                                                                                    There are already decentralized oracle networks, for example Chainlink and Witnet https://chain.link/use-cases https://witnet.io/

                                                                                                                                                    sybil resistance / ToTC

                                                                                                                                                    It’s an interesting space to follow. Witnet’s whitepaper proposes algorithmic reputation: https://witnet.io/witnet-whitepaper.pdf

                                                                                                                                                    I’m sure researchers will come up with other interesting approaches in the coming years as well.

                                                                                                                                                    if you want to get paid you need to do a scam; the problems are hard to solve

                                                                                                                                                    Same applies to being an engineer at Netflix, Spotify, Tesla etc. as well then. Also scams!

                                                                                                                                                    1. 1

                                                                                                                                                      Disagree; …

                                                                                                                                                      Not a matter of opinion if mathematics is to be trusted. Democracy suffers from the same unsustainability problem (via totc).

                                                                                                                                                      chainlink … witnet..

                                                                                                                                                      I am aware of them, they don’t solve the incentive problem. Chainlink seems more bad-faith than witnet but neither works.

                                                                                                                                                      Netflix, Spotify, Tesla, …

                                                                                                                                                      Yes, this is a very good point. Our economic system systematically undermines non-scams, so it’s no surprise that you can name many big companies; this is in fact what prompted me to comment in the first place. People like to jump on a “solution” whenever there is a hard, scary problem and some con man with a rationalization. But the problem of “pollution” needs to be managed whether or not we have a proper solution to it in the abstract, and what we are actually seeing is that the problem is being ignored or made worse by people following the advice of those who claim to have a solution. COVID is being run on the same dynamic.

                                                                                                                                              1. 4

                                                                                                                                                My Linux desktop is horribly boring (it’s stock Gnome 41 on Fedora with a different wallpaper, yawn) and I’m not even booted into it, but slightly more interesting:

                                                                                                                                                • Developing a Windows application, since I have a Windows-Linux dualboot now
                                                                                                                                                • My MacBook’s desktop (with some stuff hidden for privacy reasons, plus I’m not showing you the work workspace - but that’s almost the same, except I have Calendar.app there and Slack instead of Telegram, and the windows are shuffled around)
                                                                                                                                                1. 2

                                                                                                                                                  I can’t stop thinking about this comment by abstract_type on the blog:

                                                                                                                                                  Tests are entirely useless for correctness guarantees; at best, tests are a description of how some happy path works. Correctness is provided by construction (of a type) and composition (of smaller programs), and the easiest way to enable this is by a sound static type system which acts as a lightweight proof (as opposed to actual theorem provers like Coq, Agda etc).

                                                                                                                                                  I feel like this is itself a happy path view of types and type safety: if you just use the right types, everything else falls into place! But my experience with even minimal business software (not side projects) has shown that they can and do contain logic that defies “simply creating a type”. Business logic will always require runtime verification, which is the domain of tests.

                                                                                                                                                  The norm in Javascript-land is to write reams of “type” validation tests: was I passed a string? Do I throw if I’m passed a non-string? Do I return a number? Etc etc etc. It’s exhausting, and from this perspective I can see how a static type system sounds like a panacea. But even given that a static type system would catch those errors up front, having one wouldn’t protect from business logic errors such as appending a user to a list of users instead of prepending them, or incorrectly formatting a string, or performing the wrong query.
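                                                                                                                                                  A quick sketch of the distinction (hypothetical `prependUser`/`appendUser` helpers, TypeScript for illustration): the compiler rejects passing a number where a `User` is expected, but both orderings have identical types, so only a test catches the wrong one.

                                                                                                                                                  ```typescript
                                                                                                                                                  interface User { name: string }

                                                                                                                                                  // Both versions have the same type (User, User[]) => User[];
                                                                                                                                                  // the compiler cannot tell the correct ordering from the wrong one.
                                                                                                                                                  const prependUser = (u: User, us: User[]): User[] => [u, ...us];
                                                                                                                                                  const appendUser = (u: User, us: User[]): User[] => [...us, u];

                                                                                                                                                  // prependUser(42, []) already fails at compile time - no test needed.
                                                                                                                                                  // The business rule "newest user comes first" still needs a runtime test:
                                                                                                                                                  const result = prependUser({ name: "new" }, [{ name: "old" }]);
                                                                                                                                                  console.assert(result[0].name === "new", "newest user should come first");
                                                                                                                                                  ```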

                                                                                                                                                  I feel like Rich Hickey has talked about this at length but I can’t remember which talk it is. (Maybe Maybe Not?)

                                                                                                                                                  1. 1

                                                                                                                                                    It depends how powerful your type system is and how much you use that power. At the limit, a powerful enough type system (dependent types) used to the max can encode everything a unit test can. The happy place is usually some middle ground of course, but if your type system’s only abilities are to describe “is a number” or “is a string” I would say you’re too far down that spectrum for the types to really be useful yet.

                                                                                                                                                    1. 1

                                                                                                                                                      I know almost nothing about dependent types, so can they solve the issues I described? Can you create a type that encodes a correctly formatted string? Can you create a type that encodes the right query without duplicating the query?

                                                                                                                                                      1. 1

                                                                                                                                                        With dependent types, you could write the query as a type and have the implementation be inferred from that type, for example. The types are Turing-complete and can do anything code can do. That’s not to say you always want to do this, of course. I think certain kinds of string formatting (probably the kinds you are thinking of) are not the right place to use types to enforce, personally, but of course we’re not talking about anything concrete here, and different people have a different feeling about how far they want their types to go.
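                                                                                                                                                        Even without full dependent types, TypeScript’s template literal types can encode a string format in a type - a sketch with a hypothetical `DateString` type, just to give the flavor of the “correctly formatted string” question above:

                                                                                                                                                        ```typescript
                                                                                                                                                        // Only strings shaped like "N-N-N" inhabit this type;
                                                                                                                                                        // malformed strings are rejected at compile time, no test needed.
                                                                                                                                                        type DateString = `${number}-${number}-${number}`;

                                                                                                                                                        const release: DateString = "2021-12-25";   // accepted by the compiler
                                                                                                                                                        // const bad: DateString = "tomorrow";      // compile error: not assignable

                                                                                                                                                        console.assert(release === "2021-12-25");
                                                                                                                                                        ```

                                                                                                                                                        Whether the date is semantically valid (month 1–12, etc.) is still beyond this type, which is exactly where tests - or a real dependently typed language - come back in.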

                                                                                                                                                    2. 1

                                                                                                                                                      I think the simplest way to put it is that tests cover what your type system can’t.

                                                                                                                                                      1. 1

                                                                                                                                                        That’s a great, pithy way to put it, thanks. Explains why there are so many ridiculous tests in Javascript-land.

                                                                                                                                                    1. 1

                                                                                                                                                      I feel CMake (and abandoning “simple” makefiles) would have gotten you most of the way there for ease of cross-compiling (which, IMHO, is a false economy, but I digress) and building, while keeping the C side of the portability equation. Of course, it wouldn’t be as much fun, nor give you some of the safety properties.

                                                                                                                                                      1. 1

                                                                                                                                                        I use CMake in the MozJPEG project I maintain, and I’m struggling to make it find and link zlib and libpng properly for macOS (which is supposed to use dynamic zlib regardless of where it gets libpng from). I get absurd errors like “zlib not found (found version 1.2)”.

                                                                                                                                                        Apart from that, I agree – it is probably the most sensible solution for C currently.
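                                                                                                                                                        For what it’s worth, with reasonably recent CMake the imported targets from the stock FindZLIB/FindPNG modules are usually the least painful route. A sketch only (the mozjpeg target name is a placeholder, and this doesn’t by itself settle the dynamic-vs-static zlib question):

```cmake
# FindZLIB and FindPNG ship with CMake and define imported targets
# that carry the right include directories and link flags.
find_package(ZLIB REQUIRED)
find_package(PNG REQUIRED)

# PNG::PNG already depends on ZLIB::ZLIB, but linking both explicitly
# makes the intent clear.
target_link_libraries(mozjpeg PRIVATE PNG::PNG ZLIB::ZLIB)
```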

                                                                                                                                                        1. 1

                                                                                                                                                          I agree. Debugging is by far the worst part of CMake, even worse than autotools. config.log is mostly easy to reason with, unless you hit an M4 landmine.

                                                                                                                                                      1. 2

                                                                                                                                                        This caused me to wonder: is there a virtual filesystem like /proc on Windows?

                                                                                                                                                        1. 6

                                                                                                                                                            I’m not sure if this is a kernel feature or just a clever shell trick, but PowerShell lets you browse the Registry like a file system (cd HKLM:\).

                                                                                                                                                            Windows Explorer is infinitely pluggable, with Shell Extensions. The Control Panel used to be (still is?) part of the Explorer shell, too.

                                                                                                                                                          1. 5

                                                                                                                                                            That’s actually a PowerShell provider. Another useful one is Env:\.

                                                                                                                                                            1. 3
                                                                                                                                                              1. 2

                                                                                                                                                                  Neat, but it only works in the CLI; a real VFS, in my eyes, also allows Explorer to access it without any additions.

                                                                                                                                                              2. 6

                                                                                                                                                                As others have pointed out, you can get things that expose that in Windows, but that’s not how Windows thinks of things.

                                                                                                                                                                Instead, in Windows, everything is an object. Objects, not files, are the fundamental building block—and like files in Plan 9, it’s objects all the way down. Files? Objects. Directories? Objects. Windows? Objects. Processes? Objects. Objects? Objects. In this way, I actually find the abstraction pleasing and complete, and its peer technologies, like COM and WSH, make Windows development very pleasant (at least to me), but files it ain’t.

                                                                                                                                                                  You can use a tool called WinObj to view the hierarchy, and tools like the ones linked downthread attempt to let you browse the object manager as if it were a VFS, but be aware that doing so is like compressing a 3D object into a 2D space. You’re losing an awful lot of information.

                                                                                                                                                                1. 6

                                                                                                                                                                    I don’t see the appeal of files either. Files are unstructured byte streams, which means communicating through them requires serialization and parsing.

                                                                                                                                                                2. 1

                                                                                                                                                                    Windows has native NT paths that can give access to various internals. They use prefixes like \\?\ and \\.\, or \??\ in the NT object namespace. I don’t really know enough about Windows to tell you more, but WinObj is good for exploring these.

                                                                                                                                                                  1. 1

                                                                                                                                                                      By default? I don’t think so. Can there be? Yes: https://www.crossmeta.io/fuse-for-windows/

                                                                                                                                                                  1. 1

                                                                                                                                                                      I’m also interested in this idea of resilient computing. A lot of vintage machines are still up and running; the vintage computing and arcade restoration communities have been keeping them going for decades (and have pretty good intuitions about who built more or less robust technology). I think a more interesting and unexplored domain is designing for continuous operation.

                                                                                                                                                                      The best example I’ve come up with is something like the Voyager spacecraft, which has been in continuous operation for the last 44 years.

                                                                                                                                                                    As a strawman proposal imagine a computer with the following specs:

                                                                                                                                                                      • 100 MHz CPU
                                                                                                                                                                      • 1 GB of RAM
                                                                                                                                                                      • 250 GB of storage

                                                                                                                                                                    If we target 50 years of continuous operation we’ll exceed the operating lifetimes of the RAM and CPU silicon (but honestly we’ll probably have power supply failures long before that). Anyway I think this is a very Long Now Foundation like question, but in the case of computing it’s hard to even get our design specs out to the 100 year mark.

                                                                                                                                                                    1. 4

                                                                                                                                                                      If we target 50 years of continuous operation we’ll exceed the operating lifetimes of the RAM and CPU silicon (but honestly we’ll probably have power supply failures long before that). Anyway I think this is a very Long Now Foundation like question, but in the case of computing it’s hard to even get our design specs out to the 100 year mark.

                                                                                                                                                                        I think we’re (as technologists) just really insecure about how ephemeral our field is, but most things aren’t permanent. What’s wrong with being fleeting?

                                                                                                                                                                        In Japan, they solved the ship of Theseus: they just tear down an old shrine and build a new one in its place. I don’t see what’s wrong with things changing and evolving over time; that’s just nature, and something that lasts a long time is an aberration.

                                                                                                                                                                      1. 2

                                                                                                                                                                          I don’t think there’s anything wrong with technology being ephemeral! But I think that’s the status quo, and so thinking about the alternative is interesting.

                                                                                                                                                                        I think it’s interesting to imagine what a computer designed to run for 100 years would look like, to consider what parts would fail first and what tools we have to work around those failures.

                                                                                                                                                                        1. 1

                                                                                                                                                                            In Japan, they solved the ship of Theseus: they just tear down an old shrine and build a new one in its place. I don’t see what’s wrong with things changing and evolving over time; that’s just nature, and something that lasts a long time is an aberration.

                                                                                                                                                                          Usually I see the refrain that “bridges last for decades so why doesn’t software”, but that belies the reality that bridges are one of the few things that humans build that need to last that long due to the sheer capital cost in building them. Even then, bridges (like anything else) need maintenance. Everything else we build, from bicycles to combustion engines to single-family homes, changes as humans and human society does.

                                                                                                                                                                          1. 1

                                                                                                                                                                            but that belies the reality that bridges are one of the few things that humans build that need to last that long due to the sheer capital cost in building them.

                                                                                                                                                                              This is an interesting point that I would have agreed with before reading some public documents on a few relatively simple IT projects recently. And I’ve seen similar public documents on some road construction contracts, including small bridges (but ones that have high weight limits for lumber movement). Software projects aren’t as cheap as we think they are, unfortunately. :(

                                                                                                                                                                            The up front costs on these relatively simple software projects makes the bridges look cheap. And the software projects don’t last a decade before they’re replaced or overhauled.

                                                                                                                                                                      1. 7

                                                                                                                                                                        The title of this article had me do a double-take: C and C++ development on Windows is great. No sanity is needed.

                                                                                                                                                                        But that’s not what the article is about. What the article is about is that the C runtime shim that ships with Visual Studio defaults to using the ANSI API calls without supporting UTF-8, goes on to identify this as “almost certainly political, originally motivated by vendor lock-in” (which it’s transparently not), and then talks how Windows makes it impossible to port Unix programs without doing something special.

                                                                                                                                                                        I half empathize. I’d empathize more if (as the author notes) you couldn’t just use MinGW for ports, which has the benefit that you can just use GCC the whole way down and not deal with VC++ differences, but I get that, when porting very small console programs from Unix, this can be annoying. But when it comes to VC++, the accusations of incompetence and whatnot are just odd to me. Microsoft robustly caters to backwards compatibility. This is why the app binaries I wrote for Windows 95 still run on my 2018 laptop. There are heavy trade-offs with that approach which in general have been endlessly debated, one of which is definitely how encodings work, but they’re trade-offs. (Just like how Windows won’t allow you to delete or move open files by default, which on the one hand often necessitates rebooting on upgrades, and on the other hand avoids entire classes of security issues that the Unix approach has.)

                                                                                                                                                                          But on the proprietary interface discussion that comes up multiple times in this article? Windows supports file system transactions, supports opting in to a file being accessed by multiple processes rather than advisory opt-out, has different ideas on what’s a valid filename than *nix, supports multiple data streams per file, has an entirely different permission model based around ACLs, etc., and that’s to say nothing of how the Windows Console is a fundamentally different beast than a terminal. Of course those need APIs different from the C runtime, and it’s entirely reasonable that you might need to look at them if you’re targeting Windows.

                                                                                                                                                                        1. 4

                                                                                                                                                                          Windows won’t allow you to delete or move open files by default

                                                                                                                                                                          Windows lets the file opener specify whether it supports concurrent delete or move via FILE_SHARE_DELETE, which is badly named and badly understood.

                                                                                                                                                                            I think the bigger issue, which comes back to the spirit of this article, is what to do when a program doesn’t use a Windows API that can specify this behavior: when I last looked, the C runtime library didn’t let programs specify this (_SH_DENYNO is for read and write only). So there are a lot of people who think Windows doesn’t allow deletes or moves of opened files, because they’re running on an abstraction layer that doesn’t allow it.

                                                                                                                                                                          1. 4

                                                                                                                                                                            Yeah, the entire thing leaves a sour taste in the mouth; portability shouldn’t have to mean “it’s just a different variant on Unix”.

                                                                                                                                                                            Hell, I actually prefer developing on Windows with the caveat that you aren’t trying to develop Unix applications on Windows. Of course you’d have a bad time. (Though I do wish the narrow Win32 APIs supported UTF-8 as a system codepage… I think Windows 10 finally fixed this.)

                                                                                                                                                                            1. 1

                                                                                                                                                                              (Though I do wish the narrow Win32 APIs supported UTF-8 as a system codepage… I think Windows 10 finally fixed this.)

                                                                                                                                                                              Yeah, they do; that’s mentioned in the article. I agree that probably ought to have been done earlier, but the sheer level to which normalized UTF-16 is baked into Win32 means it’s usually less mental gymnastics for me to just convert to and from at the API boundary and use the wide APIs.

                                                                                                                                                                              1. 1

                                                                                                                                                                                  I opted for using the UTF-8 codepage so I don’t have to think about converting, especially with all the places the application I inherited touches the Win32 APIs. If the API boundary were contained in one unit, and converting a MultiByte application to UTF-16 weren’t so painful, I might have decided on a different path.
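                                                                                                                                                                                  For anyone else going this route: on Windows 10 1903 and later, the opt-in lives in the application manifest rather than in code. A sketch of the relevant fragment (the assemblyIdentity element and the rest of the manifest are elided):

```xml
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <application>
    <windowsSettings xmlns="http://schemas.microsoft.com/SMI/2019/WindowsSettings">
      <activeCodePage>UTF-8</activeCodePage>
    </windowsSettings>
  </application>
</assembly>
```

                                                                                                                                                                                  With this in place, the narrow (-A) Win32 entry points interpret strings as UTF-8 process-wide.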

                                                                                                                                                                                I did file a Wine bug however.

                                                                                                                                                                          1. 20

                                                                                                                                                                            Thermonuclear take: Why use a “TUI”? You’re crudely imitating a real GUI with the crippling limitations of a vt220, when you’re in an environment that can almost certainly handle a real GUI.

                                                                                                                                                                            1. 17

                                                                                                                                                                              The biggest reasons for me:

                                                                                                                                                                              • Low resource usage
                                                                                                                                                                              • I can run it on a different machine and SSH to it (e.g. IRC bouncer)

                                                                                                                                                                              (And for a combination of those two: I can run it on a low-powered machine like my home raspberry pi server…)

                                                                                                                                                                              1. 7

                                                                                                                                                                                I’ve found that the richest TUIs are often very CPU heavy. Redrawing TUIs seems much more expensive than updating GUIs. It’s not very surprising since they’re not really meant for selective updates unlike current graphical technologies.

                                                                                                                                                                              2. 5

                                                                                                                                                                                Terminals are an excellent example of a mixed bag. There’s a lot about terminals that is not great, perhaps most importantly the use of in-band signalling for control sequences. That said, they’re also a testament to what we can achieve when we avoid constantly reinventing everything all the time.

                                                                                                                                                                                There are absolutely limitations in the medium, but the limitations aren’t crippling or nobody would be getting anything done with terminal-based software. This is clearly just not true; people use a lot of terminal-based software to great effect all the time. Unlike most GUI frameworks, one even has a reasonable chance of building a piece of software that works the same way on lots of different platforms and over low-bandwidth or high-latency remote links.

                                                                                                                                                                                1. 9

                                                                                                                                                                                  How are modern terminals not a case of reinventing? They’ve taken the old-school VT100 with its escape sequences and bolted on colors (several times), bold/italic/wide characters, mouse support, and so on. All of this in parallel with the development of GUIs, and mostly while running on top of an actual GUI.

                                                                                                                                                                                  I’m not denying there’s a benefit to having richer I/O in a CLI process where you’re in a terminal anyway, but a lot of the fad for TUI apps (Spotify? Really?) seems to me like hairshirt computing and retro fetishization.

                                                                                                                                                                                  If you’d asked 1986 me, sitting at my VT220 on a serial line to a VAX, whether I’d rather have a color terminal with a mouse or a Mac/Linux/Windows GUI desktop, I’d have thought you were crazy for even offering the first one.

                                                                                                                                                                                  1. 5

                                                                                                                                                                                    How are modern terminals not a case of reinventing? They’ve taken the old-school VT100 with its escape sequences and bolted on colors (several times), bold/italic/wide characters, mouse support, and so on. All of this in parallel with the development of GUIs, and mostly while running on top of an actual GUI.

                                                                                                                                                                                    I would not consider it reinventing because in many cases, at least when done well, you can still use these modern applications on an actual VT220. Obviously that hardware doesn’t provide mouse input, and is a monochrome-only device; but the control sequences for each successive new wave of colour support have generally been crafted to be correctly ignored by earlier or less capable terminals and emulators. Again, it’s not perfect, but it’s nonetheless an impressive display of stable designs: backwards compatibility and long-term incremental improvement with tangible results for users.

                                                                                                                                                                                    I’m not denying there’s a benefit to having richer I/O in a CLI process where you’re in a terminal anyway, but a lot of the fad for TUI apps (Spotify? Really?) seems to me like hairshirt computing and retro fetishization.

                                                                                                                                                                                    I’m not sure what you mean by “hairshirt” but it certainly sounds like intentionally loaded, pejorative language. I have been using the desktop Spotify application for a while, and it uses a lot of resources to be impressively sluggish and unreliable. I expect a terminal-based client would probably feel snappy and meet my needs. Certainly Weechat does a lot better for me than the graphical Slack or Element clients do.

                                                                                                                                                                                    I’m not going to make you use any of this software, but I would suggest that even if it is only a “fad”, who cares? If it makes people happy, and it hurts nobody, then people should probably just do it. Both graphical bitmap displays and character-cell terminals have been around for a long time; they both have pros and cons, and I don’t expect one size will ever fit all users or applications.

                                                                                                                                                                                2. 4

                                                                                                                                                                                  That’s a very good question, honestly.

                                                                                                                                                                                  However, I haven’t seen any kind of graphical application (one that uses the full feature set a visual display affords) that is still completely usable from the keyboard alone. Except Emacs, which is a very nice example, but I intentionally wanted to avoid any kind of text editor in this discussion.

                                                                                                                                                                                  After all, if I ever stumble upon some sort of UI framework offering full keyboard operation in a REPL-style manner plus shortcuts, and showing various graphical data types (interactive tables, charts, data frames, scrollable windows, etc.), I’ll definitely test it thoroughly, as long as it can be shipped onto customers’ desktops (so yeah, Arcan is a suggestion, but it doesn’t really fit the current model of application deployment).

                                                                                                                                                                                  1. 6

                                                                                                                                                                                    Most GUI toolkits can be operated by keyboard? Windows was designed to be usable without a mouse, for instance.

                                                                                                                                                                                    I do note that GUI vs. CLI (and other things like mouse/keyboard dependency) isn’t a dichotomy. See: CLIM.

                                                                                                                                                                                    1. 1

                                                                                                                                                                                      A couple of examples off the top of my head (though I’m not trying to make the case that all GUI apps can be driven this way, and there are tons of terrible GUI apps out there) that do offer full keyboard operation:

                                                                                                                                                                                      • IntelliJ IDEs
                                                                                                                                                                                      • Pan newsreader
                                                                                                                                                                                      • Nautilus file browser
                                                                                                                                                                                      • Evince PDF reader
                                                                                                                                                                                      • KeePassX

                                                                                                                                                                                      Those are just some apps I regularly use with no mouse usage at all.

                                                                                                                                                                                      1. 1

                                                                                                                                                                                        Most well-implemented Mac apps can be used keyboard-only, thanks to Apple’s accessibility features and lesser-known keyboard shortcuts like those for activating the menu bar.

                                                                                                                                                                                      2. 3

                                                                                                                                                                                        I think it’s for the same reason people write web GUI even if native GUI is generally superior.

                                                                                                                                                                                        1. 2

                                                                                                                                                                                          If I really mess up my Linux computer and I can’t get my window manager / X11 / Wayland to run, I can still get stuff done in TUIs while I attempt to fix it.

                                                                                                                                                                                          Also, while others point out low resource usage, I’ll specifically mention lack of GPU acceleration as a situation where I’d rather use a TUI. For example, software support for the GPU on my MNT Reform is spotty, which means some GUIs are painfully slow (e.g. Firefox crawls because WebRender doesn’t support the GPU), but there’s no noticeable difference in my terminal emulator.

                                                                                                                                                                                          1. 1

                                                                                                                                                                                            I currently do all my work sshed into my desktop in the office (combination of wfh and work’s security policies which mean I can’t access the code on my laptop). TUIs are great for that.

                                                                                                                                                                                            1. 1

                                                                                                                                                                              Because TUI might be esoteric enough to avoid the attention that might lead it into the same CADT that X11 got?

                                                                                                                                                                                              1. 4

                                                                                                                                                                                                Unix retrofetishists love TUI stuff, so no.

                                                                                                                                                                                Besides, ncurses and the VT aren’t much better than X API-wise, anyway.
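
                                                                                                                                                                                To give a feel for what the raw VT “API” looks like: it’s essentially string-building with escape sequences. A minimal sketch (assuming a VT100/ANSI-compatible terminal; the helper names here are illustrative, not from any library):

```python
# Minimal sketch of raw VT100/ANSI escape sequences -- the low-level
# protocol a terminal application speaks if it bypasses curses.
CSI = "\x1b["  # Control Sequence Introducer

def move_cursor(row, col):
    """Position the cursor (1-indexed) via CUP (Cursor Position)."""
    return f"{CSI}{row};{col}H"

def set_color(fg):
    """Select a foreground colour via SGR (Select Graphic Rendition)."""
    return f"{CSI}{fg}m"

CLEAR = f"{CSI}2J"   # ED: erase entire display
RESET = f"{CSI}0m"   # SGR 0: reset all attributes

# Paint "hello" in red (SGR 31) at row 5, column 10:
frame = CLEAR + move_cursor(5, 10) + set_color(31) + "hello" + RESET
```

That is roughly the level of abstraction on offer: in-band control bytes with no event model, which is part of why ncurses exists at all.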

                                                                                                                                                                                                1. 1

                                                                                                                                                                                                  Guess that’s true. Still want to shake my stick at them until they get off the grass.