1. 28

  2. 9

    I’m seriously considering getting one of those M1 Macs eventually (currently using a Dell laptop running Linux). However, one of my big worries is Big Sur. Not only does macOS no longer look “pretty” or “serious” in my eyes; the increasingly locked-down and iOS-ified nature of macOS isn’t very attractive either.

    As usual with Apple, their hardware seems amazing, but their entire software philosophy is keeping me away. This used to only really apply to iPhones and iPads, but is clearly increasingly applying to Macs as well.

    1. 2

      But why is it appealing to buy hardware where you can’t even replace your RAM or SSD to upgrade or fix it? Where you have to use dongles for everything that isn’t charging, and where you have to discard the whole motherboard when, for example, the RAM or SSD breaks, while the company claims to be so environmentally friendly? And don’t get me started on trying to provide local Time Machine backups for Apple users without the cloud. The only real reason I can see for Apple going all-in on security is that you can suddenly sell DRM on SSDs and buttons in the name of hardware verification.

      Sure, if you want this processing power and battery life today, you’ll have to buy these machines. Otherwise you could wait until the competition has something to offer, by which point even more software will be capable of running on these chips (and Apple will have shipped the next version of their hardware, probably making the old version obsolete).

      1. 4

        I’m not convinced that the competition will keep up with Apple’s CPU design. If they were capable of that, surely there would have been Android phones with CPUs which could compete with iPhone CPUs on both raw performance and performance per watt, but to my knowledge, Apple is generally years ahead of the Android market on the CPU design front.

        Granted, this is largely based on a feeling I’ve gotten from following the tech world for many years, not based on hard numbers. But I looked up some benchmarks just now for the purpose of this comment, and it seems like my impression is correct. If you have some source which shows that the Android market is usually on par with contemporary iPhones in the CPU performance department though, I’m all ears.

        (I kind of ignored your other points because I largely agree with them by the way. We could go back and forth regarding my own personal reasons why it might or might not be a worthwhile trade-off, but I don’t think that would be a very interesting discussion.)

        1. 1

          surely there would have been Android phones with CPUs which could compete with iPhone CPUs

          I don’t think you can compare Android and Apple phones to laptops or non-mobile processors. Most Android phones are plenty fast while not costing as much as the Apple versions. I personally won’t buy a phone over 400€, and I can get roughly top-model hardware for 300€ from Chinese vendors (or get really repairable smartphones from Fairphone). All of them are more than enough for some browsing and messaging. The camera is already top-notch for me (9T Pro). So I don’t see why you would require even more CPU power and battery life. (I can go pretty much a week without charging my phone if I don’t go on reddit.)

          For laptops and desktop systems I think it’s something entirely different, because you obviously want more power and long lifetimes from your portable developer/gaming hardware. Here, people have shown that they’ll buy Windows hardware priced at or above Apple laptops, so it’s not a niche. The same goes for server processors.

          As far as I know there are two additional things: Intel is still struggling to reach the same process size for their chips as the Apple ones, which are manufactured elsewhere, and AMD is currently catching up pretty fast. On top of that, ARM isn’t something only Apple can use, and ARM servers are getting more popular. So all of this sounds to me like ARM vendors/Intel/AMD will catch up. One thing where I can imagine Apple having a big lead is their SoC base, which makes it easy to augment the CPU with multiple ASICs such as video-encoding or machine-learning co-processors. But ultimately Qualcomm has been doing SoCs for smartphones for ages (for different reasons, like LTE + GPU) while providing the base for Apple. So I’d guess that in time we will see more SoC systems and matching drivers (no problem on Windows/Apple) which take advantage of CPU and co-processors sitting on the same die. [If Nvidia could buy ARM, this could get massive in terms of CUDA/GPU and ARM CPU on the same chip.]

          1. 1

            SoC base that allows to easily optimize the CPU with multiple ASICS like video encoding or machine learning co-processors

            Which just means we will need more support for chips that can do more: basically moving from “a CPU + interrupt controller and some basic GPU” to CPU + GPU + … in one package, and maybe some kind of unified interface like Apple has, to let your users just use an accelerator chip if one exists. But looking at the state of video acceleration, I’m not sure if we’re moving forwards or backwards on Linux.

      2. 2

        I would probably still recommend doing it (for the record, I rely on Parallels to run Windows and on a few tools that use kexts, so neither Big Sur nor M1 is a viable path for me so far). However, I kind of unconsciously started to rely on bash-/Ruby-scriptable things more over time, and I increasingly try to run all kinds of automation on my home Ubuntu server (with xrdp for occasional things that require a GUI) or my secondary ThinkPad running Manjaro.

        The UI stability and the battery life are still awesome on macOS, and the latter just got better (and yes, you still need to fiddle with packages from the AUR to get a relatively old ThinkPad T430 to work well in 2021, and yes, a package upgrade can still make your GUI flicker until you reboot, which was one of the main reasons I switched from Arch to macOS in the first place). Also, if you are ready to pay, apps that run on macOS get more polish.

        However, these days I don’t want to do anything on macOS that cannot be done on Ubuntu unless I really need to (e.g. I do most of my diagramming in draw.io unless I really need OmniGraffle, which obviously blows draw.io out of the water). Also, many apps are doing “move fast and break things” stuff, e.g. Evernote dropping AppleScript automation, which further reduces any desire to invest in lots of macOS-specific workflows/automations.

      3. 5

        As a point of comparison… a sudo[1] upgrade on Arch Linux needs to transfer the package metadata + 1.1MB for the package itself. It takes probably less than a minute to upgrade, with zero downtime.

        Similar numbers for all other Linux distros of course. Sometimes I miss Linux.

        [1] https://archlinux.org/packages/core/x86_64/sudo/

        1. [Comment removed by author]

          1. 11

            Then I upgraded the disk to an SSD and installed Arch Linux. I’ve never looked back and that machine has never been running as smooth as now. Upgrades are quick.

            So you replaced a piece of hardware known to be slow with one known to be fast, installed a different OS, and assume it’s the OS that makes it faster? Ok sure, that makes perfect sense.

            1. [Comment removed by author]

              1. 3

                And yet you apparently didn’t actually ever compare them directly on the same hardware, so your claims are still just anecdotal apples and oranges.

                1. [Comment removed by author]

                  1. 3

                    I also have years of experience with macOS and Arch Linux; to wit, I add my personal Gladwell to the pile and say that I’ve never had any slowness problems with macOS except when I expected it to be slow (running a render, generating WireGuard vanity keys, crunching a stupid test suite that hammered Postgres into the ground).

                    1. 2

                      This is comparing one OS with another OS, not “apples and oranges”.

                      You’re comparing a macOS system running on a mechanical hard drive, with a Linux system running on a solid state drive.

                      You are pointing out what you perceive to be the errors of my ways, without contributing any useful information or data yourself.

                      My experience isn’t relevant to the fault in your comparison. You’re comparing a system with a hardware and software change, and then trying to advocate that it was the software change that made all the difference, which is absurd.

                      But if it makes you feel all warm and fuzzy, I’ve used Macs since the early-mid 90s. I endured Windows PCs for about 3 years as a student and then about 5 years when I had government jobs. I’ve managed Linux servers for a bit over a decade.

                      Some of them were slow, some were fast. Some had mechanical drives, some had solid state drives.

                      1. 1

                        A useful idea is to not purchase hardware from hostile vendors, even if one intends to discard the vendor’s software and run their own preferred software.

            2. 3

              I get that this person is peeved by Big Sur’s usage of a sealed System folder, but I don’t think users are losing out because of it. I develop on a Mac with Big Sur every day, as does my team, and nobody in our org has had any issues with this. I really think this is a non-issue.

              1. 2

                Seems like a pretty bad implementation. The nice thing about using a Merkle tree is that you can validate signatures securely with only a subset of the signed data.
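As a toy illustration of that property (a hypothetical sketch in Python, not Apple’s or any filesystem’s actual scheme): verifying one leaf against a Merkle root needs only that leaf plus a logarithmic number of sibling hashes, not the whole data set.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(leaves):
    """Build all levels of a Merkle tree; levels[0] is the leaf level."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:           # duplicate the last node on odd-sized levels
            lvl = lvl + [lvl[-1]]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def audit_path(levels, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    path = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        sibling = index ^ 1        # the other node in the pair
        path.append((lvl[sibling], sibling < index))
        index //= 2
    return path

def verify(leaf, path, root):
    """Recompute the root from a single leaf plus its audit path."""
    node = h(leaf)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

blocks = [b"block-%d" % i for i in range(8)]
levels = merkle_levels(blocks)
root = levels[-1][0]
path = audit_path(levels, 5)
assert verify(blocks[5], path, root)        # 1 leaf + 3 hashes, not 8 blocks
assert not verify(b"tampered", path, root)  # any change breaks the proof
```

A scheme that hashes the whole volume linearly, by contrast, forces you to re-read everything to check anything.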

                1. 2

                  Seems like they heard of Merkle trees but never understood the idea behind them. The scheme they are using is the same as if one ran shasum over the whole volume.

                  1. 16

                    There are a lot of smug posts in this thread, so I’m just going to reply to you:

                    It’s exactly the same characteristics that dm-verity (which is what Android now uses for the base system image) has on Linux: there is a hash in a single block, and that is the root of the Merkle tree. Small updates in this kind of scheme are very difficult to do right, because a power failure at the wrong time can leave your system in an inconsistent state that will appear as tampering.

                    Almost as importantly, the root hash needs to be checked during boot to ensure that no one has tampered with your disk. If you allow arbitrary updates then you need to do some local signing, and getting the security of that right is incredibly hard. To avoid that, you’d need to provide a block-level transactional update mechanism that would apply snapshots on top of a block device in an existing known state (somewhat akin to zfs send / receive). That’s not impossible, but it still has all of the problems involved in serialising the updates correctly, so you’d want block-level journalling for updates in addition to this.

                    None of this is trivial engineering work. The Linux equivalent of this, dm-verity, has been in the kernel for years and remains read-only.

                    1. 4

                      To avoid that, you’d need to provide a block-level transactional update mechanism that would apply snapshots on top of a block device in an existing known state (somewhat akin to zfs send / receive).

                      You would think a tech company with a $1T market cap would be able to pull this off but apparently not!

                      1. 1

                        They seem to have pulled it off, but not as well as they probably will.

                      2. 2

                        The reason for my smug comment comes from this article: https://eclecticlight.co/2021/01/09/boot-disk-layout-on-intel-and-m1-macs-high-sierra-to-big-sur/. The layout for Big Sur suggests that what is sealed is a snapshot of a volume in a volume group; thus, it’s not the whole disk that is protected, but just one part. Also, the signature and the root of the Merkle tree are stored in the volume’s metadata, not at the root of the disk. My reasoning is therefore the following: if we have a snapshot, which is read-only, what prevents us from making a clone of said snapshot, applying several changes, verifying that the applied changes match the advertised new root hash, making a snapshot of that clone, and then doing an atomic update to the new sealed volume? Note that I’m not talking about arbitrary updates, but updates that could be signed by Apple (signing the changes + the new Merkle root).
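That clone-apply-verify-swap flow can be sketched abstractly in Python (every name here is hypothetical; this is not APFS’s actual on-disk format, and a toy Merkle root stands in for the volume seal):

```python
import hashlib

def merkle_root(blocks):
    """Toy Merkle root over a list of blocks (stand-in for the volume seal)."""
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:         # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def apply_update(sealed_blocks, changes, advertised_root):
    """Clone the sealed snapshot, apply the vendor-signed (index, data)
    deltas, verify the new root hash, then adopt the clone as the new
    sealed snapshot. Rejecting a mismatch keeps the old seal intact."""
    clone = list(sealed_blocks)            # writable clone of the snapshot
    for index, new_block in changes:
        clone[index] = new_block
    if merkle_root(clone) != advertised_root:
        raise ValueError("root hash mismatch: refusing to reseal")
    return tuple(clone)                    # new read-only sealed snapshot

old = (b"kernel-v1", b"libs-v1", b"bins-v1", b"docs-v1")
changes = [(0, b"kernel-v2")]
new_root = merkle_root([b"kernel-v2", b"libs-v1", b"bins-v1", b"docs-v1"])
sealed = apply_update(old, changes, new_root)
assert sealed[0] == b"kernel-v2"
```

The hard parts the grandparent raises, serialising updates and surviving a power failure mid-swap, live in the “adopt the clone” step, which a real filesystem would have to make atomic.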

                        1. 1

                          So, a sort of blue-green System folder?

                          Start spreading that idea around and I’m sure some Linux people will make it happen.

                  2. 1

                    Is Big Sur using dual, alternating system partitions like Android or Chrome OS? If it isn’t, doing so would speed up the update process, though at the expense of disk space.