
  1. 11

    So this is not a dig at them, but what are they trying to achieve? Mozilla overall has recently had issues with both user retention and funding. I’m not sure I understand why they’re pushing for an entirely new thing (and I assume acquiring K-9 cost them some money) rather than improving the core product situation.

    Guesses: a) those projects are so separate in funding that it’s not an issue at all, or b) they’re thinking of an enterprise client with a paid version?

    1. 9

      These things are indeed separate in funding. Thunderbird is under a whole different entity than, say, Firefox

      1. 2

        Aren’t they both funded by the Mozilla Foundation? How are they separate?

          1. 7

            @caleb wrote:

            Aren’t they both funded by the Mozilla Foundation? How are they separate?

            Your link’s first sentence:

            As of today, the Thunderbird project will be operating from a new wholly owned subsidiary of the Mozilla Foundation […]

            I’m confused…

            1. 2

              Seems pretty clear by the usage of the word “subsidiary”

              Subsidiaries are separate, distinct legal entities for the purposes of taxation, regulation and liability. For this reason, they differ from divisions, which are businesses fully integrated within the main company, and not legally or otherwise distinct from it.[8] In other words, a subsidiary can sue and be sued separately from its parent and its obligations will not normally be the obligations of its parent.

              The parent and the subsidiary do not necessarily have to operate in the same locations or operate the same businesses. Not only is it possible that they could conceivably be competitors in the marketplace, but such arrangements happen frequently at the end of a hostile takeover or voluntary merger. Also, because a parent company and a subsidiary are separate entities, it is entirely possible for one of them to be involved in legal proceedings, bankruptcy, tax delinquency, indictment or under investigation while the other is not.

      2. 5

        They’re going to need to work on a lot of things, including a lot of stability improvements as well as better/more standard support for policies and autoconfig/SSO, for Thunderbird to really be useful in the enterprise.

        Frankly, Thunderbird is the only real desktop app I know of that competes with Outlook, and it’s kind of terrible… there really is a market here, and I don’t think that working on an Android client is what they need.

        1. 2

          GNOME Evolution works better than Thunderbird in an enterprise. For Thunderbird, IIUC, you need a paid add-on to be able to connect to Office 365 Outlook mailboxes (in the past there was an EWS plugin that worked with on-prem Exchange, but it doesn’t seem to work with O365), whereas Evolution supports OAuth out of the box.

          1. 4

            Thunderbird supports IMAP/SMTP OAuth2 out of the box, which O365 offers if your org has it enabled. What it lacks (and where Evolution has the advantage) is Exchange support.

            If your org has IMAP/SMTP/ActiveSync enabled, then you can even do calendaring and global address completion using TbSync, which I rely on heavily for CalDAV/CardDAV support anyway (though I hear Thunderbird is looking to make these two an OOB experience as well).

        2. 3

          I can’t say for certain, but I think maybe they’re looking to provide a similar desktop experience on mobile. I use Firefox and Thunderbird for work, and it is a curious thing that Thunderbird never got any kind of Android version. Firefox has already released its base browser and Focus as Android applications, so it would be cool to see Thunderbird exist in the (F)OSS Android ecosystem.

          I have been a K-9 user for a number of years, but I do think its UI could use a bit of an update. I have been using it since Android 5.0, and it has basically had the same interface since the initial Material release. This could be an exciting time for K-9 to get a new coat of paint. I will love K-9 Mail even if this doesn’t pan out well.

          1. 4

            K-9 mail is almost perfect the way it currently is on Android (at least when it comes to connecting to personal mailboxes). I can’t speak about how well it’d work in an enterprise because I keep work stuff off my phone on purpose.

            1. 4

              The biggest functional shortcoming with K-9 is no support for OAuth2 logins, such as for Gmail and Office 365. You can currently use K-9 Mail with an app-specific password in Gmail, but Google will be taking that ability away soon. I also have some minor issues with notifications; my home IMAP server supports IDLE, but I still often see notifications being significantly delayed.

              In terms of interface, there was a Material cleanup a while ago, and the settings got less complicated and cluttered, so it’s very usable and reasonably presentable. But it does look increasingly out of date (though that’s admittedly both subjective and an endless treadmill).

              1. 2

                oauth2 was merged a few days ago https://github.com/thundernest/k-9/pull/6082

                1. 1

                  Ah, yeah, I saw elsewhere that it’s the only priority for the next release.

        1. 3

          Under the WSL and Wine section, this page mentions that Microsoft did some kind of unholy hack with binfmt_misc to transparently invoke PE executables from the WSL command line and run them in Windows. I don’t know too much about binfmt_misc, but I’m really curious how this works, and whether it’s possible to make it something more standard. I can’t imagine turning that off in WSL for anything — it’s one of my most-used features (and I have no idea how I’d get around it.)

          Can anyone shed some light on how the PE .exe support in WSL works, please? I’d love to read more about it. I’m not really sure how to RE it myself.

          Two things off the top of my head that I really rely on this compatibility for: xdg-open-wsl (which implements xdg-open through Windows’ file associations), and clip.exe, which essentially does what xclip does (that is, copy and paste on the Windows clipboard).

          1. 3

            That’s something I’m wondering about too. The entry in /proc/sys/fs/binfmt_misc that seems to be used for Windows doesn’t have any configurable parameters like magic. If I try to define one that has a more specific MZ magic, e.g. 4d5a71467044, then it doesn’t respect that. So it’s probably a bug in the WSL kernel.
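            For reference, a normal binfmt_misc registration is a single colon-separated line written to the register file. This is a sketch run as root; the handler name and interpreter path are made up, and WSL sets up its own “WSLInterop” entry internally rather than exposing one like this:

```shell
# Format: :name:type:offset:magic:mask:interpreter:flags
# Type M = match by magic bytes; PE executables start with "MZ".
# The interpreter path below is hypothetical.
echo ':pe-demo:M::MZ::/usr/local/bin/pe-runner:' \
    > /proc/sys/fs/binfmt_misc/register

# Inspect the resulting entry, then remove it:
cat /proc/sys/fs/binfmt_misc/pe-demo
echo -1 > /proc/sys/fs/binfmt_misc/pe-demo
```

            What seems odd about WSL’s entry is that it doesn’t honour the usual magic/mask fields at all.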

          1. 3

            The RPi 400 was still in stock recently — it’s a slightly-overclocked RPi 4 built into a keyboard. I bought one from SparkFun in April.

            1. 4

              The 400 is worth it IMHO, relative to the plain RPi 4. The whole keyboard acts as a passive heatsink. It outperforms an RPi 4 with fancy cooling solutions, and it’s cheaper than those RPi 4s once you add the cost of the cooling solutions.

              1. 3

                I hadn’t heard this before and had been looking into cooling cases for an RPi 4. So I had a quick search and found this article; really interesting that yes, the RPi 400 stays cooler passively than an RPi 4 in an active Argon case. Awesome!

                https://tutorial.cytron.io/2020/11/02/raspberry-pi-400-thermal-performance/

                1. 1

                  Apart from the looks / space required, that is actually a good argument for using it as a “homeserver”.

                2. 3

                  Plus, where else are you going to find a keyboard with a raspberry key?

                  1. 1

                    Here’s one that doesn’t have a pi built-in: https://www.raspberrypi.com/products/raspberry-pi-keyboard-and-hub/

                3. 1

                  Microcenter has those in stock locally here right now, too.

                1. 1

                  Wow, that’s a throwback! Red Hat Linux 5.1 was the first Linux distro I used!

                  1. 3

                    Obviously Linux itself isn’t going to drop BIOS support, so how much stuff would this let RH get rid of?

                    I’m asking because everyone seems to agree that UEFI has a better programming model and API, but I can’t see how removing BIOS support enables further improvements to the UEFI code paths.

                    1. 4

                      The idea isn’t that removing BIOS support directly improves UEFI, or anything like that. But using BIOS enforces a lot of ancient, weird conventions that require hacks or otherwise complex code to maintain feature parity with UEFI. A lot of these hacks and such are mature, but they’re still extra codepaths to be maintained.

                      One example that comes to mind: if BIOS support is deprecated, also deprecating MBR Extended Partition support will affect very, very few additional users.

                      As another example, you don’t really need to support any bootloaders (not counting efistub) if you only support installation on UEFI platforms.

                      It also makes things like one-button recovery/reset much easier to do the way Apple and Microsoft have made possible on their own systems the past couple of years.

                      1. 1

                        I have mixed feelings about this, because I only recently retired my last non-UEFI machine and it was still for sale 2-3 years ago. But the answer is probably quite a lot. The old BIOS model meant that you had to support MBR-based partition tables and bootloaders. This was less of a problem for *BSD than Linux, because Linux used MBR partitions directly, whereas the BSDs used a single MBR partition and then stuck a BSD partition table inside it. If you want to do something like UFS boot environments then you need at least two root partitions (A/B: one in use, one for installing updates), one partition for user data, and one swap partition. You might also want a separate boot partition. With *BSD, all of these could go in a single MBR partition; with Linux you’d be hitting the limit of four MBR primary partitions (and past it if you also want the separate boot partition). With UEFI and Linux, this is completely fine.

                        With the legacy BIOS boot process, the first-stage boot loader has to fit in a single disk sector. This then has to find the second-stage loader somewhere else. You can use an XT or PS/2 keyboard in that mode because it’s exposed directly by the BIOS but you don’t have the space for a USB stack, so you can’t use a USB input device during the first stage (which may be when you need to select the physical disk to boot from). Some BIOSes support PS/2 emulation by running a USB stack in the BIOS but often doing this prevents the guest OS from seeing the USB HID and so prevents you from using additional features of the keyboard. With UEFI, the firmware runs a USB stack and exposes services for using it to the bootloader.
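                        The single-sector constraint is just arithmetic on the classic MBR layout (a sketch of the conventional numbers):

```python
# Classic MBR layout: first-stage loader code, the partition
# table, and the boot signature all share one 512-byte sector.
SECTOR_SIZE = 512
BOOT_CODE = 446           # bytes left for first-stage loader code
PARTITION_TABLE = 4 * 16  # four 16-byte primary partition entries
SIGNATURE = 2             # the 0x55AA boot signature

assert BOOT_CODE + PARTITION_TABLE + SIGNATURE == SECTOR_SIZE
print(BOOT_CODE)  # 446
```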

                        All of the workarounds for the old BIOS-style systems increase the complexity of the testing matrix. If you can start depending on UEFI functionality for non-optional features of the system, that’s probably a big win.

                      1. 8

                        A good side effect of using a password manager in the browser is that it won’t be fooled by this. The user may of course override it by pasting in their password regardless – it is therefore necessary to train users to be extremely suspicious whenever the username and password aren’t autofilled/detected by the password manager.
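                        The reason the manager isn’t fooled can be sketched in a few lines: saved credentials are keyed by exact origin, so a lookalike domain simply gets no match (the origins and credentials here are hypothetical):

```python
# Toy model of origin-bound autofill: lookup is by exact origin,
# with no fuzzy matching on visually similar hostnames.
saved = {"https://accounts.example.com": ("alice", "s3cret")}

def autofill(origin):
    # A real manager also normalizes scheme, port, registrable domain, etc.
    return saved.get(origin)

print(autofill("https://accounts.example.com"))  # ('alice', 's3cret')
print(autofill("https://accounts.examp1e.com"))  # None (lookalike: no fill)
```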

                        1. 2

                          I’ve noticed a number of legitimate (Shopify?) e-commerce websites that prompt the user to enter their PayPal credentials directly into elements on the merchant’s website. It’s crazy that they’re encouraging this kind of user behavior.

                          1. 3

                            Or there’s Plaid, which has you enter the credentials for your bank and then the 2FA code into whatever app or website you are connecting.

                            1. 1

                              I’ve noticed a number of legitimate (Shopify?) e-commerce websites that prompt the user to enter their PayPal credentials directly into elements on the merchant’s website. It’s crazy that they’re encouraging this kind of user behavior.

                              Crazy or not crazy, it depends on how willing you are to even entertain the idea of the current web as something sane.

                          1. 1

                            How does one use a parallel port device nowadays?

                            1. 1

                              The short answer is you either use an older motherboard with a built-in parallel port, or you use an add-in card that adds a port via PCIe. As far as I understand, the PCIe cards are “real” parallel ports, but USB to LPT adapters don’t work properly for anything that’s not a printer.

                            1. 2

                              This is pretty cool! I love to see articles about RK3399 support, because the Pinebook Pro uses the same SoC — it’s basically a RockPro64 shoved into a laptop chassis.

                              1. 3

                                Interestingly, someone who posted to the HN thread managed to compile and run the game. I hope they connected with the author of the post.

                                1. 3

                                  Yeah, I had the tools handy, so I posted a quick comment over there. It looks like he got it working after reading my comment! I had previously resurrected my old Flash toolchain because the community for a game I made was asking me to help them with their modding efforts, so it wasn’t much effort for me to get it working.

                                  1. 2

                                    It doesn’t solve the core issue of the proprietary technology you were using being discontinued, but it’s also totally possible to just rig up older versions of compilers in an emulator or an older machine. This kind of thing is a huge part of why I keep old Macs lying around.

                                  1. 1

                                    Someone commented on TFA:

                                    Any reason A/B updates approach was chosen over OStree?

                                     I don’t know enough about OSTree, but I thought it was complementary to A/B updates. That is, you can have both, and the benefits don’t really overlap all that much (besides having an immutable rootfs).

                                     Can anyone weigh in on this? I feel like either I or the commenter I’m quoting is missing something serious.

                                    1. 4

                                      OSTree does A/B updates but on a single filesystem, rather than on a pair of partitions. The upsides include on-disk deduplication of the two versions: identical files between the current and rollback deployments are hardlinks to one another. But it means less isolation between the two deployments, and the need to mount the rootfs read-write (/usr is a read-only bind mount but the filesystem as a whole is writable).
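                                       The hardlink trick can be illustrated with plain os.link (a toy sketch, not OSTree’s actual content-addressed store):

```python
# Two "deployments" sharing one inode: the identical file costs
# disk space only once, and st_nlink counts both names.
import os, tempfile

tmp = tempfile.mkdtemp()
a = os.path.join(tmp, "deploy-a.bin")
b = os.path.join(tmp, "deploy-b.bin")

with open(a, "w") as f:
    f.write("identical content in both deployments\n")

os.link(a, b)  # hardlink: b and a are the same inode

print(os.stat(a).st_nlink)                     # 2
print(os.stat(a).st_ino == os.stat(b).st_ino)  # True
```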

                                    1. 3

                                      That doesn’t explain why they still use it.

                                       When Apple moved from Classic Mac OS (i.e. Mac OS 9) to Mac OS X, they kept HFS+, which uses a colon as its path separator. On traditional Mac OS the separator was never really exposed to the user, so it didn’t matter what character it was.

                                      In Darwin HFS+ still uses : as the separator, but it’s translated to / everywhere. The only way you’d know this is if you try to create a file or directory with : in the name.

                                      1. 1

                                        If you write AppleScripts for Classic Mac OS, you’ll see colon path separators. In fact modern AppleScript still uses colons as path separators, with modifiers such as […] as POSIX path to convert between slashes and colons.

                                        1. 1

                                          That kind of forward slash usage was no doubt why IBM insisted on keeping it in DOS 2.0. Changing the slash semantics had a clear potential for destroying data, especially when running batch files written for DOS 1.1. Something like ‘COPY FOO + BAR /A’ has rather different semantics when /A is a switch vs. when /A is a file or directory in the disk’s root directory.

                                          1. 1

                                            Does a file system ever deal with path separators? Isn’t that part of the “DOS” layer?

                                            1. 1

                                              In Linux, there’s a VFS which does path parsing so it can talk to the file system in terms of a single object.

                                              In Windows, the file system owns the mount point and just gets a string, so it’s responsible for path parsing and maintaining a tree of objects.

                                              (I don’t know about OS 9.)

                                              The meta-point to me is that if you start building an operating system with one file system, there’s no point in having anything like a VFS. Something like that makes sense when you have many file systems and want to maximize code reuse, so each file system can stay focused on disk formats rather than reinventing data structures. Unfortunately, that’s an evolutionary process that occurs as the number of file systems grows.
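                                              The division of labour described above can be sketched like this (everything here is a toy stand-in, with a nested dict playing the file system):

```python
# Toy VFS: generic code parses the path and walks it one component
# at a time; the "filesystem" only ever resolves a single name.
tree = {"usr": {"bin": {"python3": "ELF..."}},
        "etc": {"hosts": "127.0.0.1 localhost"}}

def fs_lookup(directory, name):
    # What a per-filesystem driver implements: one-name resolution.
    return directory[name]

def vfs_resolve(path):
    # What the VFS layer does: split the path, walk it component by component.
    node = tree
    for part in path.strip("/").split("/"):
        node = fs_lookup(node, part)
    return node

print(vfs_resolve("/etc/hosts"))  # 127.0.0.1 localhost
```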

                                          1. 5

                                             I’d prefer a modern A/UX to Win95 really - has anyone made a Mac Classic Platinum look-alike on a Unix core?

                                            1. 2

                                              I, too, would be here for this. TBH, I have a great nostalgia for Classic Mac OS; ok, sure, rebooting all the damn time was a drag, but being able to copy the System folder to my RAM disk and then reboot literally in seconds …

                                              Well. Surely the world of computing has progressed, but I can’t help feel like something has been lost.

                                              1. 2

                                                 Booting from a RAM disk was gone, I think, as early as the first PPC, which is the 6100 from around 1994. But I agree that there were a lot of really neat features that were sort of lost to time.

                                                 One thing that was super cool was MacsBug. You copied a file into the System Folder and rebooted, and then if you just pressed a button on the computer or a modified three-finger salute, it would drop into a debugger with a disassembler, register values, peeking/poking, jumping to different parts of memory, and more.

                                                I don’t think it’s even possible to do anything like that anymore (on any modern operating system) without a separate debugger host attached to a special kernel in some sort of debug mode, sometimes even with special hardware.

                                                1. 1

                                                  ISTR being able to boot from a RAM disk on my 8600, but that might just be brain worms.

                                            1. 18

                                              ctrl+L 🤫

                                              1. 4

                                                why is this the most up-voted comment?

                                                1. 6

                                                  I assume lots of people didn’t know that most terminals use readline and thus obey the typical Emacs keyboard shortcuts.

                                                  1. 8

                                                    Terminals don’t use readline, shells do. The terminal is just a box that sends and receives characters (including key combinations such as control+L — that’s just another ASCII character).

                                                    This probably sounds pedantic, but it’s an important distinction because not all shells do use readline, which can make things feel or behave rather differently.

                                                     For example, fish most certainly does not use readline (on any terminal), and that’s part of what makes it so different from ordinary shells. Neither does zsh, although it does a great job of emulating readline (and bash as a whole, frankly).
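                                                     As an aside, Ctrl+L reaching the shell as “just another character” is easy to check: Ctrl plus a letter sends that letter’s code with the high bits stripped, so Ctrl+L is ASCII form feed (0x0C):

```python
# Ctrl+<letter> maps to ord(letter) & 0x1F; Ctrl+L is form feed.
FF = ord("L") & 0x1F
print(FF)                 # 12
print(FF == ord("\x0c"))  # True
```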

                                                    1. 4

                                                      Admittedly, I was really unsure if it’s the shell or the terminal, when I sent this comment late last night. Thank you for providing the additional details. I agree it’s an important one :)

                                                    2. 4

                                                      Alright, but your comment completely misses the point of the linked project? Did you actually think people were looking for a faster way to clear their screen after seeing this? lol

                                                      1. 1

                                                        Yeah, I believe there were some who didn’t know about the short cut (but also people who upvoted because they agreed with the dismissive subtext and think that CTRL+L is better :))

                                                        1. 2

                                                           The idea of the latter is just insane to me, but I think you have to be right. It is not better: the goal of this project was obviously not speed but entertainment, and Ctrl+L (you don’t need a capital L, btw) definitely loses by that criterion.

                                                1. 1

                                                   Password managers built into the browser also help here - the remote browser won’t have the passwords saved, which hopefully will raise an alarm or stop the attack (if the saved password is too complex to remember).

                                                  1. 1

                                                    And the local browser won’t be able to fill the login screen anyway, since the form will be rasterized. As far as the browser is concerned, it’s a static image or something.

                                                    This is still ridiculously devious, though, and probably very effective. Most people still don’t even use password managers.

                                                    1. 1

                                                      At $WORK, they’ve been pushing to eliminate passwords so much that it now seems incredibly retro when I see one and on any work system it’s a big red flag if something wants me to enter a password: it should happen only once, when I first use a device. The authorisation flow looks something like this:

                                                      1. I get presented with a list of known accounts.
                                                      2. I select the one that I want.
                                                      3. The server sends a random number to my browser.
                                                      4. My browser requests TPM access to sign it with my private key.
                                                      5. The system requires biometric ID to authorise the signing.
                                                      6. My browser sends the signed response to the server.
                                                      7. The server verifies the signature with the public key it has on file for me and either accepts it or decides that this is a resource needing MFA. In the second case:
                                                        1. I get a notification on my phone and the browser provides me a two-digit number.
                                                        2. I enter the number in my phone, tap the fingerprint reader, and hit approve.

                                                      This kind of attack would first fail in step 1: I wouldn’t see my accounts listed. If I entered my username, then at step 3, the remote browser wouldn’t have the keys and so it would fall back to password auth. This is now something sufficiently unusual that I’d start to get really nervous. Step 7 doesn’t really help with this kind of attack except by giving me a bit more thinking time.

                                                  1. 4

                                                    The biggest challenge was storage. Hetzner charges around €50/month for a 1 TB volume (others have comparable pricing).

                                                    They also offer dedicated servers with 8TB of storage and 64GB of RAM for around €45 a month. I know because I’ve been looking into a similar setup for hosting my plex library. I’m still trying to decide whether I want to rent a dedicated server, or bite the bullet and build a proper homelab.

                                                    1. 4

                                                      Why not use a Hetzner storage box ( https://www.hetzner.com/storage/storage-box ) or even a storage share ( https://www.hetzner.com/storage/storage-share )? 5 TB for €11/month, for example, and traffic between that and a Hetzner cloud VM (or any of the real servers there) is free (and fast enough for stuff like this). I don’t think there is a real need for block (or blob) storage; mounting WebDAV, Samba, or SSHFS works perfectly fine in my experience.

                                                      1. 1

                                                        I actually wasn’t aware of that (apparently I didn’t do enough research). In my particular case though, I want to use the machine to host a few VMs as well. €11/month for 5TB (€12 after VAT for me) is pretty good though. If storage boxes supported iSCSI, I’d definitely consider it, but for now I’m leaning toward the homelab option as I don’t strictly need anything public facing.

                                                        1. 2

                                                          Yeah, if they offered block storage via iSCSI, that’d be something else. We can always dream :-)

                                                          Personally, I have a Hetzner AMD box (can’t remember which exactly) with two SSDs (RAIDed) that host my containers and VMs, and a storage box mounted via CIFS for anything largish that doesn’t need to be accessed fast, as well as backups of my VM snapshots and other data. That has worked out quite well so far, especially for the price.

                                                          1. 1

                                                            That’s a nice solution, and I’ll look into it. A smaller amount of RAIDed NVMe attached directly to the server for VMs, plus a reasonably sized storage box mounted via CIFS, would actually be better for my use case than 8 TB of slow HDDs, which now looks like a poor compromise between the two needs.

                                                            I’m curious now about the rest of your setup. What host OS / hypervisor / container-orchestrator are you using for containers/VMs?

                                                            1. 1

                                                              Yes, I thought the same when considering the spinning disk version of this server, and the storage box solution also has the advantage that I don’t have to worry about raiding the hard-disks to get some redundancy, effectively cutting storage space in half. I trust the Hetzner storage box will do a much better job at keeping my data relatively safe than I could. I was a bit concerned about access performance at first, but for what I do with it (which is arguably nothing very demanding), it has been no problem at all.

                                                              It’s been a while since I set up that box; it just chugs along without many problems, with just regular updates. This is the second iteration of my setup: for the first one I used Proxmox, which worked quite well, but this time I didn’t even bother and just installed k3s on Debian stable. That runs most of my services. I also have KVM to host a Mac OS X VM which I use to build Mac binaries, and another special-purpose Linux VM if I remember right. It’s a bit more effort to set up than Proxmox, but overall more flexible, and nowadays it’s just considerably easier to do containers in Kubernetes than LXC.

                                                              I’ve got my two SSDs split into half each: a RAID 1 btrfs for the system and boot (so if I lose one disk the system will still boot – plus there might be a tiny bit better read performance) and for data I want to keep a bit safer (in addition to backups), and the other half is just ‘normal’ fast storage attached to some of the VMs/containers. So effectively I’m getting ~750 GB of fast storage space, 250 of which are RAID-1’ed.

                                                              Ah, and as I’ve written in another comment, I also use gocryptfs to encrypt part of the storage box filesystem that is mounted via CIFS (so, basically, gocryptfs inside a Samba share). At first this was just an experiment, because I thought the performance would surely be atrocious, but it turned out not to be too bad; every few months it hangs itself, but a forced remount usually fixes that.

                                                              Wireguard for accessing the box and the VMs.

                                                        2. 1

                                                          Someone on Twitter (was that you?) also suggested that, that looks great. I think I’ll switch to a storage box, rclone supports SFTP after all.

                                                          1. 1

                                                            Or you could just go with CIFS and don’t use rclone at all. That works quite well for my Hetzner root box. I haven’t used it with cloud VMs myself yet, but I don’t think it’d work any different. I’ve encrypted my storagebox using gocryptfs (which probably adds a bit of latency), but even without that I don’t see any advantage of using SFTP over CIFS from a Hetzner VM. Filesystem listings and random access should be quite a bit faster compared to the rclone solution, but that’s just me guessing, maybe rclone is exceptionally good with caching.

                                                            But I’m probably just over-optimizing, esp. if it’s just for Plex. As long as you make sure to have the Plex metadata locally on your VM disk, the worst that will happen is that a movie starts a second or two later, probably.

                                                            EDIT: or you could just filesystem-mount via sshfs or webdav if you don’t like CIFS? I haven’t tried that myself, so it might not work with storagebox, but I’d imagine any (more or less) ‘native’ way to mount a filesystem would always be better than rclone, if available?
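                                                            For anyone comparing the options, the mounts might look roughly like this (hostnames, usernames, and paths are placeholders; check Hetzner’s docs for the exact options):

```shell
# CIFS mount of a storage box (illustrative; run as root)
mount -t cifs //uXXXXX.your-storagebox.de/backup /mnt/box \
    -o username=uXXXXX,seal

# or via sshfs, with no kernel CIFS support needed
sshfs uXXXXX@uXXXXX.your-storagebox.de:/ /mnt/box
```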

                                                        3. 3

                                                          Homelabs are definitely cool! If you have a public IPv4 this is probably the cheapest option.

                                                          The dedicated server makes sense when you have media that needs to be re-encoded, as you’ll have access to the hardware encoder.

                                                          1. 1

                                                            8TB storage and 64GB of RAM for around €45 a month

                                                            Really? Is there a catch? That’s a pretty ridiculously low price even just for 8 TB block storage alone without any compute!

                                                            1. 3
                                                              1. Spinning rust

                                                              2. Limited selection of data centers (Germany and Finland)

                                                              3. You’re literally just renting two hard disks, with no bells and whistles, so you don’t get any of the redundancy or reliability that cloud block storage (e.g. AWS EBS) would provide.

                                                              By default it’s two 4TB mechanical disks configured as software RAID1, but you should be able to install your own OS and treat it as 8TB of RAID0 block storage if you want.
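                                                              Roughly what that reconfiguration might look like from Hetzner’s rescue system (device names are assumptions, check with lsblk first, and note this wipes both disks):

                                                              ```shell
                                                              # Stop the preinstalled RAID1 array and clear its metadata.
                                                              mdadm --stop /dev/md0
                                                              mdadm --zero-superblock /dev/sda /dev/sdb

                                                              # Recreate as RAID0: ~8 TB of capacity, zero redundancy.
                                                              mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
                                                              mkfs.ext4 /dev/md0
                                                              ```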

                                                              https://www.hetzner.com/dedicated-rootserver/ex42/

                                                              That works fine for a server storing torrented pirated media linux ISOs, but it’s not ideal for everything.

                                                          1. 2

                                                            This seems a bit silly to ask, but if I don’t care about data integrity is there a way to make Windows/NTFS work similarly, waiting to flush caches until they’re full (ideally with a timeout…)?

                                                              1. 1

                                                                Thanks!

                                                            1. 3

                                                              Can git be implemented theoretically for System {6, 7}? I suppose stuff like sha256ing will be difficult for the hardware?

                                                              1. 7

                                                                I don’t see why not - libgit2 itself is written in C89 (and I’m the guy that’s pedantic about it by building it with VC++6 for my own perverse projects), but it’s a matter of OS support stuff.

                                                                What’ll be an annoyance is implementing the classic Mac OS stuff - the path syntax, the concept of resource forks, and \r newlines. Oh, and a GUI, because the classic Mac OS has no command line.
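                                                                 The hashing itself shouldn’t be the hard part, even on a 68k: a git object is just a short header plus the content, fed through SHA-1 (not SHA-256, by default). A sketch of what `git hash-object` computes:

                                                                 ```shell
                                                                 # A blob is hashed as "blob <size>\0<content>".
                                                                 # This reproduces `echo 'hello' | git hash-object --stdin`:
                                                                 printf 'blob 6\0hello\n' | sha1sum
                                                                 # -> ce013625030ba8dba906f756967f9e9ca394464a  -
                                                                 ```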

                                                                1. 2

                                                                  If classic MacOS doesn’t have a command line, what kind of interface does it have for having programs interact with each other in general? Or was that simply not a thing?

                                                                  1. 13

                                                                    From our friend ugh.pdf:

                                                                    Pipes are not the be-all and end-all of program communication. Our favorite Unix-loving book had this to say about the Macintosh, which doesn’t have pipes:

                                                                    The Macintosh model, on the other hand, is the exact opposite. The system doesn’t deal with character streams. Data files are extremely high level, usually assuming that they are specific to an application. When was the last time you piped the output of one program to another on a Mac? (Good luck even finding the pipe symbol.) Programs are monolithic, the better to completely understand what you are doing. You don’t take MacFoo and MacBar and hook them together.

                                                                    —From Life with Unix, by Libes and Ressler

                                                                    Yeah, those poor Mac users. They’ve got it so rough. Because they can’t pipe streams of bytes around how are they ever going to paste artwork from their drawing program into their latest memo and have text flow around it? How are they going to transfer a spreadsheet into their memo? And how could such users expect changes to be tracked automatically? They certainly shouldn’t expect to be able to electronically mail this patched-together memo across the country and have it seamlessly read and edited at the other end, and then returned to them unscathed. We can’t imagine how they’ve been transparently using all these programs together for the last 10 years and having them all work, all without pipes.

                                                                    1. 2

                                                                      I’m not even talking at that level! I just meant like “open a subprocess to run this image processing thing, then immediately quit”. Just bog standard glue stuff. Windows has BAT scripts and they are helpful after all.

                                                                      I do understand how lots of common user stuff will just work anyways, though, just seems that without “call another program somehow” your choices for code sharing are like “just give people the source to integrate into their own app” (with all the problems that can come with that, especially pre-internet), or “write a file, ask the user to manually open this other program and do a thing, then come back”

                                                                    2. 8

                                                                      AppleEvents are your friend. You could send and receive events from your app. This is how many classic MacOS Web Servers implemented CGI programs back in the day. The Web App would just be another “desktop” app running on the Mac exchanging AppleEvents with the Web Server.

                                                                      A long time ago in the early days of Mac OS X, I got a freelance gig to write a little Apache gizmo that would allow users to keep using AppleEvent-based CGI programs. Many universities had invested a long time developing courseware and online exam systems that relied on that technology. I had a wonderful time, it was so easy to craft a little unixy-CGI that would pick the info from the request, dispatch an AppleEvent and marshal the result back to Apache. I miss those days.

                                                                      1. 2

                                                                        AppleScript.

                                                                        Command lines aren’t useful for making interactive programs interact anyway.

                                                                        1. 1

                                                                          gitk / git-gui perhaps.

                                                                        2. 2

                                                                          Oh, and a GUI, because the classic Mac OS has no command line.

                                                                          You could use MPW to get around this without building a GUI. There was a gcc port (among other things) that took that route.

                                                                          1. 1

                                                                            MPW existed, but the fact it wasn’t common even among us developers (Think/CW were more popular IIRC) means you couldn’t rely on it as a crutch. You live and die by the GUI (and quality of it) in the classic OS.

                                                                            1. 2

                                                                              I did almost all of my actual programming in CW back then. And the rest was Think C or Think Pascal. But MPW was still there. The source control we used for most of our projects absolutely required MPW. (The name of the tool was Apple Projector, and this is a decent discussion of it in context).

                                                                              It was not the only tool we used that required MPW, but it was the one I touched most frequently.

                                                                              1. 1

                                                                                It’s slightly wild to see Linux servers and classic Mac OS dev in the same article, but 1998 is the right timeframe…

                                                                          2. 1

                                                                            How many Unixisms does libgit2 take for granted?

                                                                            1. 3

                                                                              Surprisingly, not many other than what’s in a typical C library implementation. The Windows API backend handles most of the scenarios. I also know someone is working on an AmigaOS backend.

                                                                              1. 1

                                                                                AmigaOS backend

                                                                                Hopefully, classic AmigaOS (as in not 4+)?

                                                                                1. 2

                                                                                  AmigaOS 4 already has a port AFAIK.

                                                                          3. 1

                                                                            How far into the transition from SHA-1 to SHA-256 is the Git ecosystem?

                                                                            1. 1

                                                                              MacRelix includes git. I don’t think it’s very complete unless things have changed lately.

                                                                          1. 8

                                                                            Tadpole SparcBook, ha. Those things sold for around $10,000+ when they were new, and that was over 20 years ago! Especially with inflation, laptops have gotten cheaper.

                                                                            The article also mentions the Pinebook Pro as an example of a good laptop, and that’s pretty funny to me. I have one, it works very well and I love it… but you can’t even put it to sleep if you have an M.2 SSD. It’s a bit unpolished, to say the least.

                                                                            1. 5

                                                                              Even bus width can be impacted; you’d need a lot of slots to match the bandwidth possible with soldered-down LPDDR. I think it’s a fair compromise.

                                                                              I’m sorry, but what? Do you have any source to back that up?

                                                                              Physics does get involved with high speed memory, but I’m pretty sure the pros and cons of soldered RAM vs memory slots are not related to bandwidth.

                                                                              1. 11

                                                                                It definitely is on Apple’s M1 Pro/Max machines. They have a completely non-standard memory layout with DRAM chips on-package instead of socketed SO-DIMMs, and 256/512-bit bus interfaces that would take 4/8 conventional channels to match. Mounting the BGA packages directly on the SoC package also means the signal lines to the DRAM are significantly shorter, which allows signalling frequencies significantly higher than the same chips could reach if they were spread out over the larger area needed to route to 8 individual modular slots.
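                                                                                The channel arithmetic checks out, assuming LPDDR5-6400 (what the M1 Max reportedly uses) and conventional 64-bit-wide DIMM channels:

                                                                                ```shell
                                                                                # A standard (SO-)DIMM channel is 64 bits wide:
                                                                                echo $((512 / 64))        # channels needed to match a 512-bit bus -> 8

                                                                                # Peak bandwidth of a 512-bit bus at 6400 MT/s, in MB/s:
                                                                                echo $((512 / 8 * 6400))  # -> 409600, i.e. ~400 GB/s
                                                                                ```

                                                                                That ~400 GB/s figure matches Apple’s advertised number for the M1 Max, which is a good sanity check on the bus-width claim.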

                                                                              1. 16

                                                                                I wonder who at System76 was responsible for evaluating all possible directions they could invest in, and decided the desktop environment is the biggest deficiency of System76

                                                                                1. 11

                                                                                   It’s also great marketing. I’ve heard “System76” way more since they released Pop_OS. So while people may not be buying machines for the OS, a pretty popular distro keeps the name in their heads, and they may be more likely to buy a System76 machine on the next upgrade.

                                                                                  1. 1

                                                                                     Well, I’d buy a machine, but they’re not selling anything with EU keyboard layouts or power cords.

                                                                                  2. 5

                                                                                    I know a few people who run Pop_OS, and none of them run it on a System76 machine, but they all choose Pop over Ubuntu for its Gnome hacks.

                                                                                     Gnome itself isn’t particularly friendly to hacks — the extension system is really half baked (though it’s perhaps one of the only uses of the SpiderMonkey JS engine outside Firefox, which is pretty cool!). KDE Plasma has quite a lot of features, but it doesn’t really focus on usability the way it could.

                                                                                    There’s a lot of room for disruption in the DE segment of the desktop Linux market. This is a small segment of an even smaller market, but it exists, and most people buying System76 machines are part of it.

                                                                                    Honestly, I think that if something more friendly than Gnome and KDE came along and was well-supported, it could really be a big deal. “Year of the Linux desktop” is a meme, but it’s something we’ve been flirting with for decades now and the main holdups are compatibility and usability. Compatibility isn’t a big deal if most of what we do on computers is web-based. If we can tame usability, there’s surely a fighting chance. It just needs the financial support of a company like System76 to be able to keep going.

                                                                                    1. 7

                                                                                      There’s a lot of room for disruption in the DE segment of the desktop Linux market. This is a small segment of an even smaller market, but it exists, and most people buying System76 machines are part of it.

                                                                                      It’s very difficult to do anything meaningful here. Consistency is one of the biggest features of a good DE. This was something that Apple was very good at before they went a bit crazy around 10.7 and they’re still better than most. To give a couple of trivial examples, every application on my Mac has the buttons the same way around in dialog boxes and uses verbs as labels. Every app that has a preferences panel can open it with command-, and has it in the same place in the menus. Neither of these is the case on Windows or any *NIX DE that I’ve used. Whether the Mac way is better or worse than any other system doesn’t really matter, the important thing is that when I’ve learned how to perform an operation on the Mac I can do the same thing on every Mac app.

                                                                                      In contrast, *NIX applications mostly use one of two widget sets (though there is a long tail of other ones) each of which has subtly different behaviour for things like text navigation shortcut keys. Ones designed for a particular DE use the HIGs from that DE (or, at least, try to) and the KDE and GNOME ones say different things. Even something simple like having a consistent ‘open file’ dialog is very hard in this environment.

                                                                                      Any new DE has a choice of either following the KDE or GNOME HIGs and not being significantly different, or having no major applications that follow the rules of the DE. You can tweak things like the window manager or application launcher but anything core to the behaviour of the environment is incredibly hard to do.

                                                                                      1. 4

                                                                                        There’s a lot of room for disruption in the DE segment of the desktop Linux market.

                                                                                        Ok, so now we have :

                                                                                        • kitchen sink / do everything : KDE

                                                                                        • MacOS like : Gnome

                                                                                        • MacOS lookalike : Elementary

                                                                                        • Old Windows : Gnome 2 forks (eg MATE)

                                                                                        • lightweight environments : XFCE / LXDE

                                                                                        • tiling : i3, sway etc etc (super niche).

                                                                                         • something new from scratch but not entirely different : Enlightenment

                                                                                        So what exactly can be disrupted here when there are so many options ? What is the disruptive angle ?

                                                                                        1. 15

                                                                                          I think you’re replying to @br, not to me, but your post makes me quite sad. All of the DEs that you list are basically variations on the 1984 Macintosh UI model. You have siloed applications, each of which owns one or more windows. Each window is owned by precisely one application and provides a sharp boundary between different UIs.

                                                                                          The space of UI models beyond these constraints is huge.

                                                                                          1. 5

                                                                                            I think any divergence would be interesting, but it’s also punished by users - every time Gnome tries to diverge from Windows 98 (Gnome 3 is obvious, but this has happened long before - see spatial Nautilus), everyone screams at them.

                                                                                          2. 3

                                                                                             I would hesitate to call elementary or Gnome Mac-like. They take some elements more than others, sure, but a lot of critical UI elements from Mac OS are missing, and they admit they’re doing their own thing, which a casual poke would reveal.

                                                                                            I’d also argue KDE is more the Windows lookalike, considering how historically they slavishly copied whatever trends MS was doing at the time. (I’d say Gnome 2 draws more from both.)

                                                                                            1. 3

                                                                                              I’d also argue KDE is more the Windows lookalike, considering how historically they slavishly copied whatever trends MS was doing at the time

                                                                                               I would have argued that at one point. I’d have argued it loudly around 2001, which is the last time that I really lived with it for longer than 6 months.

                                                                                              Having just spent a few days giving KDE an honest try for the first time in a while, though, I no longer think so.

                                                                                               I’d characterize KDE as an attempt to copy all the trends for all time from Windows + Mac + UNIX, add a few innovations and an all-encompassing settings manager, and let each user choose their own specific mix of those.

                                                                                              My current KDE setup after playing with it for a few days is like an unholy mix of Mac OS X Snow Leopard and i3, with a weird earthy colorscheme that might remind you of Windows XP’s olive scheme if it were a little more brown and less green.

                                                                                              But all the options are here, from slavish mac adherence to slavish win3.1 adherence to slavish CDE adherence to pure Windows Vista. They’ve really left nothing out.

                                                                                              1. 1

                                                                                                But all the options are here, from slavish mac adherence to slavish win3.1 adherence to slavish CDE adherence to pure Windows Vista. They’ve really left nothing out.

                                                                                                I stopped using KDE when 4.x came out (because it was basically tech preview and not usable), but before that I was a big fan of the 3.x series. They always had settings for everything. Good to hear they kept that around.

                                                                                            2. 2

                                                                                              GNOME really isn’t macOS like, either by accident or design.

                                                                                            3. 3

                                                                                               I am no longer buying this consistency thing and how the Mac is superior. So many things we do all day are web apps, which all look and function completely differently. I use Gmail, Slack, GitHub Enterprise, Office, what-have-you daily at work, and they are all just browser tabs. None looks like the other, and it is totally fine. The only real local apps I use are my IDE, which is written in Java and also looks nothing like the Mac, a terminal, and a browser.

                                                                                              1. 7

                                                                                                Just because it’s what we’re forced to accept today doesn’t mean the current state we’re in is desirable. If you know what we’ve lost, you’d miss it too.

                                                                                                1. 2

                                                                                                  I am saying that the time of native apps is over and it is not coming back. Webapps and webapps disguised as desktop applications a la Electron are going to dominate the future. Even traditionally desktop heavy things like IDEs are moving into the cloud and the browser. It may be unfortunate, but it is a reality. So even if the Mac was superior in its design the importance of that is fading quickly.

                                                                                                  1. 2

                                                                                                    “The time of native apps is over .. webapps … the future”

                                                                                                    Non-rhetorical question: Why is that, though?

                                                                                                    1. 4

                                                                                                      Write once, deploy everywhere.

                                                                                                      Google has done the hard work of implementing a JS platform for almost every computing platform in existence. By targeting that platform, you reach more users for less developer-hours.

                                                                                                      1. 3

                                                                                                         The web is the easiest and best understood application deployment platform there is. Want to upgrade all users? F5 and you are done. Best of all: it is cross platform.

                                                                                                      2. 1

                                                                                                        I mean, if you really care about such things, the Mac has plenty of native applications and the users there still fight for such things. But you’re right that most don’t on most platforms, even the Mac.

                                                                                                    2. 2

                                                                                                      And that’s why the Linux desktop I use most (outside of work) is… ChromeOS.

                                                                                                      Now, I primarily use it for entertainment like video streaming. But with just a SSH client, I can access my “for fun” development machine too.

                                                                                                    3. 3

                                                                                                      Any new DE has a choice of either following the KDE or GNOME HIGs and not being significantly different, or having no major applications that follow the rules of the DE. You can tweak things like the window manager or application launcher but anything core to the behaviour of the environment is incredibly hard to do.

                                                                                                      Honestly, I’d say Windows is more easily extensible. I could write a shell extension and immediately reap its benefit in all applications - I couldn’t say the same for other DEs without probably having to patch the source, and that’ll be a pain.

                                                                                                      1. 1

                                                                                                        GNOME HIG also keeps changing, which creates more fragmentation.

                                                                                                        20 years ago, they did express a desire of unification: https://lwn.net/Articles/8210/

                                                                                                    4. 1

                                                                                                      It certainly is a differentiator.