You may be wondering what new features are coming, but we’ll have to keep that a secret until release time (stuff isn’t even integrated yet, you’re not going to get a sneak peek even if you install early).
This does make me regret selling my M1 Mini. Apple pissed me off enough with the whole CSAM/on-device scanning thing (along with a range of other things) that I got rid of it, at some substantial loss, figuring that the Linux projects on it wouldn’t come to anything useful (at the time, they were still working out basic functionality). Asahi has come a long way since then.
They’ve done a really, really good job with hardware reverse engineering, and have quite exceeded my expectations for what they’d accomplish - I’m glad to see it!
I’m curious as to why you regret selling the hardware. A company proposes awful spyware for its hardware, to which you respond by selling its product, but now you regret that you can’t test out software that the same company is not supporting or encouraging in the slightest? What is it about Asahi that makes you want an Apple device? I know Apple makes good hardware, but I don’t think the hardware is THAT good that some independently contributed software is a game changer for it. Especially when many other vendors make pretty great hardware that supports the software.
I like ARM chips, and I utterly love what Apple has done with the SoC design - huge caches, and insane gobs of memory bandwidth. The performance out of it backs the theory I’ve run with for a while that a modern CPU is simply memory limited - they spend a lot of their time waiting on data. Apple’s designs prove this out. They have staggeringly huge L1 cache for the latency, and untouchable memory bandwidth, with the performance one expects of that. On very, very little power.
Various people have observed that I like broken computers, and I really can’t argue. I enjoy somewhat challenging configurations to run - Asahi on Apple hardware tickles all those things, but with hardware that’s actually fast. I’ve used a variety of ARM SBCs as desktops for years now - Pis, ODroids, I had a Jetson Nano for a while, etc. They’re all broken in unique and novel ways, which is interesting, and a fun challenge to work around. I just didn’t expect Asahi to get this usable, this quickly.
I sold the M1 Mini and an LG 5K monitor at about a $1000 loss over what I’d paid for them (over a very short timescale - I normally buy computers and run them to EOL in whatever form that takes). I didn’t have any other systems that could drive the 5K, so it made sense to sell both and put a lower resolution monitor in the spot, but the M1/LG was still the nicest computer I’ve ever used in terms of performance, responsiveness, etc. And it would, in Rosetta emulation, run Kerbal Space Program, which was about the one graphically demanding thing I used to do with computers.
In any case, getting rid of it drove some re-evaluation of my needs/desires/security stances, and now I run Qubes on just about everything, on x86. Painful as it is to say, relative to my previous configurations, this all qualifies as “Just works.”
What did you replace it with? The basic Mac Mini M2 is really cheap now at ~$600. I run NixOS on a decade-old NUC, and it is tempting as a replacement. A good fanless x86 is easily 50% more and not nearly as energy efficient as the Mini.
However, right now one needs to keep macOS to do firmware updates. And AFAIK there is limited video output support on the M2.
An ASRock 4x4 Box 5000U system, with a 5800U and 64GB RAM. It dual boots QubesOS and Ubuntu, though almost never runs Ubuntu, and the integrated graphics aren’t particularly good (the theory had been that I could do a touch of light gaming in Ubuntu, but reality is that it doesn’t run anything graphically intensive very well). It was not a well thought out purchase, all things considered, for my needs. Though, at the time, I was busily re-evaluating my needs and assumed I needed more CPU and RAM than I really do.
I’m debating stuffing an ODroid H3+ in place of it. It’s a quad core Atom x86 small board computer of the “gutless wonder” variety, and it runs Qubes quite well, just with less performance than the AMD box. However, as I’ve found out in the migration to QubesOS, the amount of computer I actually need for my various uses is a lot less than I’ve historically needed, and a lot less than I assumed I would need when I switched this stuff out. It turns out, the only very intensive things I do anymore are some bulk offline video transcoding, and I have a pile of BOINC compute rigs I can use for that sort of task.
I pretty much run Thunderbird, a few Firefox windows, sometimes Spotify or PlexAmp (which, nicely, run on x86 - that was a problem in my ARM era), a few communications clients, and some terminals. If I have dev projects, I spin up a new AppVM or standalone VM to run them in, and… otherwise, my computers just spend more and more time shut down.
the same company is not supporting or encouraging in the slightest

That’s not quite true — Apple went out of their way to make the boot process secure for third-party OSes as well, according to one of the Asahi devs. So I don’t see them doing any worse than basically any other hardware company - those just use hardware components that already have a reverse-engineered driver, or in certain cases one contributed by the producing company, e.g. Intel. Besides not having much incentive for opening up the hardware, companies often legally can’t do it, due to patents they don’t own, only license.
And as for the laptop space, I would argue that the M-series is that good — no other device currently on the market gets even into the same ballpark on the performance-efficiency curve, to the point that I can barely use my non-M laptop as a mobile device.
This is the attitude that I don’t understand. To pick four big chip-makers, I would rank Apple below nVidia, Intel, and AMD; the latter three all have taken explicit steps to ensure that GNU/Linux cleanly boots with basic drivers on their development and production boards. Apple is the odd one out.
On my M1 Mac laptop I can work a full 10-hour work day, compiling multiple times, without once plugging in my machine. The compiles are significantly faster than on other machines, and the laptop barely gets above room temperature.
This is significantly better, hardware-wise, than any other machine I could buy or build on the market right now. It’s not even a competition. Apple’s hardware is so far ahead of the competition right now that it looks like everyone else is asleep at the wheel.
If you can run your favorite OS on it and get the same benefits why wouldn’t you?
My most recent laptop purchase was around $50 USD. My main workstation is a refurbished model which cost around $150. My Android phone cost $200. For most of my life, Apple products have been firmly out of budget.
That is certainly a fair consideration. But for those who can afford it there are many reasons the price is worth it.
They had to, as servers predominantly run Linux and they are in the business of selling hardware components. Apple is not, so frankly I don’t even get the comparison.
Also, look back a bit earlier and their track records are far from stellar, with plenty of painstaking reverse engineering done by the Linux community. Also, Nvidia is not regarded as a good Linux citizen by most people - remember Linus’s middle finger? Only recently have there been changes so their video cards can actually be utilized by the kernel through the modern display buffer APIs, instead of having to install a binary blob inside the X server, shipped with their proprietary driver.
The performance and fanless arguments are icing on the cake and completely moot to me. No previous CPU was ever hindering the effectiveness of computing. While I’m glad a bunch of hardware fanatics are impressed by it, the actual end result of what the premium, locked-down price tag provides is little better in real workload execution. Furthermore, they’re not sharing their advancements with the greater computing world; they’re hoarding them for themselves. All of which is a big whoop-dee-doo if you’re not entranced by Apple’s first-world computing speeds mindset.
Does Nvidia, Intel or AMD actually “share their advancements with greater computing world”?
AMD definitely does; their biggest contribution is amd64. Even nVidia contributes a fair amount of high-level research back to the community in the form of GPU Gems.

If you count GPU Gems, I don’t see how Apple research is not at least equally valuable.
AMD definitely does; their biggest contribution is amd64

Which AMD covered in a huge pile of patents. They have cross-licensing agreements with Intel and Via (Centaur) that let them have access to the patents, but anyone else who tries to implement x86-64 will hear from AMD’s lawyers. Hardly sharing with the world. The patents have mostly expired on the core bits of x86-64 now, but AMD protects their new ISA extensions.
Didn’t they back off from this at least, though? It almost seemed like a semi-rogue VP pushed the idea, which was subsequently nixed, likely by Tim Cook himself. The emails leaked during that debacle point to a section of the org going off on its own.
They did. Eventually. After a year of no communications on the matter beyond “Well, we haven’t shipped it yet.”
I didn’t pay attention to the leaked emails, but it seemed an odd hill to die on for Apple after their “What’s on your phone is your business, not ours, and we’re not going to help the feds with what’s on your device” stance for so long. They went rather out of their way to help make the hardware hard to attack even with full physical access.
Their internal politics are their problem, but when the concept is released as a “This is a fully formed thing we are doing, deal with it,” complete with the cover fire about getting it done over the “screeching voices of the minority” (doing the standard “If you’re not with us, you’re obviously a pedophile!” implications), I had problems with it. Followed by the FAQ and follow on documents that read as though they were the result of a team running on about 3 days of no sleep, frantically trying to respond to the very valid objections raised. And then… crickets for a year.
I actually removed all Apple hardware from my life in response. I had a 2015 MBP that got deprecated, and a 2020 SE that I stopped using in favor of a flip phone (AT&T Flip IV). I ran that for about a year, and discovered, the hard way, that a modern flip phone just is a pile of trash that can’t keep up with modern use. It wouldn’t handle more than a few hundred text messages (total) on the device before crawling, so I had to constantly prune text threads and hope that I didn’t get a lot of volume quickly. The keypad started double and triple pressing after a year, which makes T9 texting very, very difficult. And for reasons I couldn’t work out nor troubleshoot, it stopped bothering to alert me of incoming messages, calls, etc. I’d open it, and it would proceed to chime about the messages that had come in over the past hour or two, and, oh yeah, a missed phone call. But it wouldn’t actually notify me of those when they happened. Kind of a problem for a single function device.
Around iOS 16 coming out, I decided that Lockdown Mode was, in fact, what I was looking for in a device, so I switched back to my iOS device, enabled Lockdown, stripped the hell out of everything on it (I have very few apps installed, and nothing on my home screen except a bottom row of Phone, Messages, Element (Matrix client), and Camera), and just use it for personal communications and not much else. The MBP is long since obsolete and gets kept around as an “Oh, I have this one weird legacy thing that needs MacOS…” device, and I’ve no plans to go back to them - even though, as noted earlier, I think the M-series chips are the most exciting bits of hardware to come out of the computing world in the last decade or two, and the M-series MacBook Pros are literally everything I want in a laptop. Slab-sided powerhouses with actual ports. I just no longer trust Apple the way I did in the past - they demonstrated that they were willing to burn all their accumulated privacy capital in a glorious bonfire, over an awful idea they never even ended up implementing. Weird as hell. I’ve no idea what I’m going to do when this phone is out of OS support. Landline, maybe.
Very curious as to what these are 👀
My bet is that speaker support is one of the features
Brilliant
I don’t want to be that guy but does the Nix package manager work on Asahi, since it supports aarch64?
Yes! I’ve been running full NixOS on my M1 Air with the help of the Asahi kernels and Mesa builds from the Nix packages/modules at https://github.com/tpwrules/nixos-apple-silicon
oooh, i’ve been looking for something like this! I don’t suppose I could also run it all on zfs root, for the win…
Just make sure the Asahi kernel version is compatible with the ZFS module, set boot.supportedFileSystems when building the installer, and things should be all set. I think that was the only issue I ran into when I tried that specific setup a few months ago.

Yes, there should be no reason it won’t work on either ALARM or Fedora Asahi Remix, because we support running Nix on standalone distributions on aarch64 all the same as NixOS.
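For reference, a minimal sketch of what that might look like in the installer’s NixOS configuration. The option names are real NixOS options, but the values here are hypothetical placeholders, and this assumes the nixos-apple-silicon modules linked above are imported elsewhere:

```nix
{ config, pkgs, ... }:
{
  # Build ZFS support into the installer image so it can create and mount pools.
  boot.supportedFileSystems = [ "zfs" ];

  # ZFS refuses to import pools without a stable host id; any 8 hex digits.
  networking.hostId = "8425e349";
}
```

The kernel-compatibility caveat above still applies: the ZFS module only builds against kernel series it supports, so check the Asahi kernel version rather than assuming the latest works.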
The only exception I can think of is maybe some SELinux incompatibilities on Fedora? I believe I’ve read about having to do some weird hacks, but I don’t know the details myself.
Try it and see?

I run Nix on my personal and work Macs and it…basically works?
Like any mix of Nix and <OS that isn’t NixOS>, you have to be really mindful of what’s in your path, find out which shebangs and hard-coded commands are lying around in scripts, be okay with pre-compiled blobs that (say) Node or Python libraries might pull in automatically never quite working, etc.
Modulo all of that, you can at least do a quick ‘nix run’ invocation to pick up a package, build from flakes, etc.
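To make the flake part concrete: a tiny, hypothetical flake that ‘nix run’ could consume — nothing platform-specific beyond the aarch64-linux system string, and it assumes flakes are enabled on your Nix install:

```nix
{
  description = "Minimal flake sketch for `nix run`";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.aarch64-linux;
    in {
      # `nix run .` (i.e. `nix run .#default`) builds and runs GNU hello.
      packages.aarch64-linux.default = pkgs.hello;
    };
}
```

For the quick-package-pickup case, ‘nix run nixpkgs#hello’ works the same way without any local flake at all.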
Generally, though, I find myself delegating more and more to a NixOS VM running on the same host. Nix-on-Docker is another option that actually does some stuff more cleanly than native nix-darwin, if you’re okay with containerizing all the things.
Running it on Asahi in particular was my concern. It means you and I both can avoid the NixOS VM step and just use Nix on Linux, which in my experience is fine (I’m sorry it’s treated you so poorly!).
Oh, sorry if I gave the impression that Nix-on-Linux was a bad experience. I chose/continue to choose to use it because it solves a whole bunch of other pain points, like needing to write f’ing awful makefiles (or even worse, use autotools…shudder) to build across multiple target platforms.

I’m just spoiled by how well things work in full-blown NixOS, I guess. :)