I bought one last week and have used it for 7 days now. I was in an initial hype phase as well, but I am more critical now and doubting whether I should return it.
Performance of native apps is as great as everyone claims, but I think it is a bit overhyped; recent AMD APUs come close in multi-core performance. Of course, the fact that the Air works with passive cooling is a nice bonus.
Rosetta works great with ordinary (ahead-of-time compiled) x86_64 applications, but performance is abysmal with JIT-ing runtimes like the JVM. E.g., JetBrains currently do not have native versions of their IDEs (they are JVM-based, though I think they also use some non-Java code), and under Rosetta the IDEs are barely usable due to slowness. If you rely on JetBrains IDEs, wait until they ship an Apple Silicon version.
Also, performance of anything that relies on SIMD instructions (AVX, AVX2) is terrible under Rosetta. So, if you are doing data science or machine learning with heavier loads, you may want to wait. Some libraries can be compiled natively of course, but the problem is that there is no functioning Fortran compiler supported on Apple Silicon (outside an experimental gcc branch) and many packages in that ecosystem rely on having a Fortran compiler.
Another issue with Rosetta vs. native in development is that it is very easy to get environments where native and x86_64 binaries/libraries are mixed (e.g., when doing x86_64 development, CMake will build ARM64 objects unless you set CMAKE_OSX_ARCHITECTURES=x86_64), and things do not build.

Then Big Sur on Apple Silicon is also somewhat beta. Every time I wake up my Mac, after a couple of minutes it switches to sleep again 1-3 times (shutting off the external screen as well). When working longer, this issue disappears, but it’s annoying nonetheless.
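As a side note on telling the two worlds apart when binaries get mixed like this: below is a minimal Python sketch for checking whether a given interpreter is running natively or under Rosetta. It assumes macOS's sysctl.proc_translated key behaves as documented (1 when translated, 0 when native, and an error where the key does not exist), so treat it as illustrative.

```python
import platform
import subprocess

def rosetta_status():
    """Report the machine architecture and whether this process
    appears to be running under Rosetta 2 translation."""
    arch = platform.machine()  # 'arm64' natively, 'x86_64' under Rosetta or on Intel
    try:
        out = subprocess.run(
            ["sysctl", "-n", "sysctl.proc_translated"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        translated = out == "1"
    except (subprocess.CalledProcessError, FileNotFoundError):
        # The key does not exist on Intel Macs (or non-macOS systems).
        translated = False
    return arch, translated

if __name__ == "__main__":
    arch, translated = rosetta_status()
    print(f"machine: {arch}, running under Rosetta: {translated}")
```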
If you haven’t ordered one, it’s best to wait a while until all the issues are ironed out. There is currently a lot of (justified) hype around Apple Silicon, but that doesn’t mean the ecosystem is ready yet, unless all you do is web browsing, e-mailing, and the occasional app from the App Store.
Aside from this, I think there are some ethical (sorry for the lack of a better term) issues with newer Apple models. For example, Apple excluding their own services from third-party firewalls/VPNs, no extensibility (reducing the lifespan of hardware), and their slow march to a more and more closed system.
Edit: returned and ordered a ThinkPad.
If you need a MacBook now, for whatever reason, buying one with an Arm chip does sound like the most future-proof option. The Intel ones will be the “old” ones soon, and will then be second-rate. That’s what happened with the PowerPC transition as well.
If only there were M1 Macs with 32 GB RAM, I would have bought one, as I was in need of a machine. Because of that, I bought a 32 GB 13” MacBook Pro instead. I will wait for the ARM machines to be polished before my next upgrade.
From what I read, you get way more bang for your RAM in Apple processors. It’s all integrated on the same chip so they can do a lot of black magic fuckery there.
For native applications I am pretty sure that this works well. However, as an Erlang/Elixir developer I use third-party GCed languages and DBs that can use more RAM anyway. That said, the fact that it is possible to run native iOS and iPadOS apps could save some RAM on Slack and Spotify for sure.
What I mean is, they probably swap to NAND or something, which could very likely be similar performance-wise to RAM you’d find on an x64 laptop (since they have a proprietary connection there instead of NVMe/M.2/SATA). Plus I imagine the “RAM” on the SoC is as fast as an x64 CPU cache. So essentially you’d have “infinite” RAM, with 16 GB of it being stupid fast.
This is just me speculating btw, I might be totally wrong.
Edit: https://daringfireball.net/2020/11/the_m1_macs CTRL+F “swap”
Just wondering if you had any take on this, idk if I’m off base here
Lots of valuable insights here and I’m interested in discussing.
Sure, but the thing is that the AMD 4800U, their high-end laptop chip, runs at 45W pretty much sustained, whereas the M1 caps out at 15W. This is a very significant battery life and heat/sustained non-throttled performance difference. Also these chips don’t have GPUs or the plethora of hardware acceleration for video/media/cryptography/neural/etc. that the M1 has.
Yeah, I didn’t test anything Java. You might be right. You also mention Fortran though and I’m not sure how that matters in 2020?
This isn’t as big of a problem as it might seem based on my experience. You pass the right build flags and you’re done. It’ll vanish in time as the ecosystem adapts.
Big Sur has been more stable for me on Apple Silicon than on Intel. 🤷
I strongly disagree with this. I mean, the M1 MacBook Air is beating the 16” MacBook Pro in Final Cut Pro rendering times. Xcode compilation times are twice as fast across the board. This is not at all a machine just for browsing and emailing. I think that’s flat-out wrong. It’s got performance for developers and creatives that beats machines twice as expensive and billed as made for those types of professionals.
Totally with you on this. Don’t forget also Apple’s apparent lobbying against a bill to punish forced labor in China.
There’s really rather a lot of software written in Fortran. If you’re doing certain kinds of mathematics or engineering work, it’s likely some of the best (or, even, only) code readily available for certain work. I’m not sure it will be going away over the lifetime of one of these ARM-based notebooks.
There will be gfortran for Apple Silicon. I compiled the gcc11 branch with support and it works, but possibly still has serious bugs. I read somewhere that the problem is that gcc 11 will be released in December, so Apple Silicon support will miss that deadline and will have to wait until the next major release.
Isn’t Numpy even written in FORTRAN? That means almost all science or computational anything done with Python relies on it.
No, Numpy is written in C with Python wrappers. It can call out to a Fortran BLAS/LAPACK implementation but that doesn’t necessarily need to be Fortran, although the popular ones are. SciPy does have a decent amount of Fortran code.
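As an aside, numpy can report which BLAS/LAPACK it was built against, which is where any Fortran-compiled code would enter the picture. A minimal check using the stock numpy API:

```python
import numpy as np

# Show which BLAS/LAPACK numpy was built against (OpenBLAS, MKL,
# Accelerate, ...); this is where Fortran-built code, if any, comes in.
np.show_config()

# A small LAPACK-backed operation as a smoke test.
a = np.random.rand(512, 512)
eigvals = np.linalg.eigvals(a)
print("numpy", np.__version__, "- computed", eigvals.shape[0], "eigenvalues")
```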
Wow, who knew.
Almost anyone who does any sort of scientific or engineering [in the structural/aero/whatever sense] computing! Almost all the ‘modern’ scientific computing environments (e.g. in Python) are just wrappers around long-extant C and Fortran libraries. We are among the ones that get a bit upset when people treat ‘tech’ as synonymous with internet services and ignore (or are ignorant of) the other 90% of the iceberg. But that’s not meant as a personal attack; by this point it’s a bit like sailors complaining about the sea.
Julia is exciting as it offers the potential to change things in this regard, but there is an absolute Himalaya’s worth of existing scientific computing code that is still building the modern physical world that it would have to replace.
I agree.
I am not sure what you mean. Modern Intel/AMD CPUs have AES instructions. AMD GPUs (including those in APUs) have acceleration for H.264/H.265 encoding/decoding, and AFAIR also VP9. Neural depends a bit on what is expected, but you could accelerate neural network training if AMD actually bothered to support Navi GPUs and made ROCm less buggy.
That said, for machine learning you’ll want to get a discrete NVIDIA GPU with Tensor cores anyway. It blows anything else that is purchasable out of the water.
A lot of the data science and machine learning infrastructure relies on Fortran directly or indirectly, such as numpy.
Sorry, I didn’t mean that it is not fit for development. I meant that if you are doing development (unless it’s constrained to Xcode and Apple Frameworks), it is better to wait until the dust settles in the ecosystem. I think for most developers that would be when a substantial portion of Homebrew formulae can be built and they have pre-compiled bottles for them.
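For anyone who wants to track that themselves, the Homebrew formulae API can be polled for pre-built arm64 bottles. The sketch below assumes the JSON layout served by formulae.brew.sh around the time of the M1 launch (a bottle → stable → files mapping keyed by platform tags such as arm64_big_sur), so treat it as illustrative rather than authoritative:

```python
import json
import urllib.request

# Assumed endpoint and JSON shape for the Homebrew formulae API
# (https://formulae.brew.sh/api/formula/<name>.json); adjust if the
# layout has changed.
API = "https://formulae.brew.sh/api/formula/{}.json"

def has_arm64_bottle(formula: str) -> bool:
    """Return True if the formula advertises a pre-built Apple Silicon bottle."""
    with urllib.request.urlopen(API.format(formula)) as resp:
        data = json.load(resp)
    files = data.get("bottle", {}).get("stable", {}).get("files", {})
    return any(tag.startswith("arm64_") for tag in files)

if __name__ == "__main__":
    for name in ["python@3.9", "cmake", "gcc"]:
        print(name, "arm64 bottle:", has_arm64_bottle(name))
```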
My instinct here goes in the opposite direction. If we know Apple Silicon has tons of untapped potential, we should be getting more developers jumping on that wagon, especially while the Homebrew etc. toolchains aren’t ready yet, so that there’s acceleration towards readying all the toolchains quickly! That’s the only way we’ll get anywhere.
Well, I need my machine for work, so these issues just distract. If I am going to spend a significant chunk of time on this, I’d rather spend it on an open ecosystem than do free work for Apple ;).
Like all modern laptop chips, you can set the thermal envelope for your AMD 4800U in the firmware of your design. The 4800U is designed to target 15W by default - 45W is the max boost, foot to the floor & damn the horses power draw. Also, the 4800U has a GPU…an 8 core Vega design IIRC.
Apple is doing exactly the same with their chips - the accounts I’ve read suggest that the power cost required to extract more performance out of them is steep & since the performance is completely acceptable at 15W Apple limits the clocks to match that power draw.
The M1 is faster than the 4800U at 15W of course, but the 4800U is a Zen2 based CPU - I’d imagine that the Zen3 based laptop APUs from AMD will be out very soon & I would expect those to be performance competitive with Apple’s silicon. (I’d expect to see those officially launched at CES in January in fact, but we’ll have to wait and see when you can actually buy a device off the shelf.)
That made me chuckle. Good choice!
You say that you returned and ordered a ThinkPad, how has that decision turned out? Which ThinkPad did you purchase? How is the experience comparatively?
I bought a Thinkpad T14 AMD. So far, the experience is pretty good.
Pros:
I like the keyboard much more than that of the MacBook (butterfly or post-butterfly scissor switches).
It’s nice to have many more ports than 2 or 4 USB-C ports plus a stereo jack. I can go places without carrying a bunch of adapters.
I like the TrackPoint; it’s nice for keeping your fingers on the home row and doing some quick pointing between typing.
Even though it’s not aluminum, I do like the build.
On Windows, battery life is great, somewhere around 10-12 hours in light use. I didn’t test/optimize Linux extensively, but it seems to be ~8 hours in light use.
Performance is good. Single-core performance is of course worse than the M1’s, but having 8 high-performance cores plus hyperthreading compensates a lot, especially for development.
Even though it has fans, they are not very loud, even when running at full speed.
The GPU is powerful enough for lightweight gaming. E.g., I played some New Super Lucky’s Tale with our daughter and it works without a hitch.
Cons:
The speakers are definitely worse than those of any modern MacBook.
Suspend/resume continues to have issues on Linux:
Sometimes the screen does not wake up, especially after plugging or unplugging a DisplayPort alt-mode USB-C cable. Usually moving the TrackPoint fixes this.
Every few resumes, the TrackPad and the left TrackPoint button stop working. It seems (I didn’t investigate further) that libinput believes a button is constantly held, because it is no longer possible to click windows to activate them. So far, I have only been able to reset this state by switching off the machine (sometimes rebooting does not bring back the TrackPoint).
So far, no problems at all with suspend/resume on Windows.
The 1080p screen works best with 125 or 150% scaling (100% is fairly small). Enabling fractional scaling in GNOME 3 works. However, many X11/XWayland applications react badly to fractional scaling, becoming very blurry, even on a 200%-scaled external screen. In this department, too, there are no problems on Windows; fractional scaling works fine there.
The fingerprint scanner works in Linux, but it produces many more false negatives than on Windows.
tl;dr: a great experience on Windows, acceptable on Linux if you are willing to reboot every few resumes and can put up with the issues around fractional scaling.
I have decided to run Windows 10 on it for now and use WSL with Nix + home-manager. (I always have my Ryzen NixOS workstation for heavy lifting.)
Background: I have used Linux since 1994 and macOS from 2007 until 2020; on the Windows side I had only used Windows 3.1 and, briefly, NT 4.0 and Windows 2000.
Sleep seems to be broken on the latest macOS versions: every third time I close the lid of my 2019 Mac, I open it later only to see that it has restarted because of an error.
Maybe wipe your disk and try a clean reinstall?
Why does software specifically need to support Apple Silicon, not just Aarch64, to run natively? The instruction set is 9 years old, runs in every mobile device, and has been the second most important instruction set to support for the last 5 years or so: Everyone should expect support for this silicon from every programming language worth using by now.
I’m very surprised that Fortran and Go somehow don’t support it already, and even that general software, no matter if it’s compiled through Homebrew, has issues being compiled on ARM. Such microscopic problems should evaporate pretty quickly once exposed, assuming all APIs stay the same.
What I’m unsurprised about is that JIT runtimes and other software with heavy assembly optimization are more or less lacking ARM/NEON optimizations, because that takes human labor. Also relevant to future proofing, I would like to see a Dav1d benchmark. It should be one of the better optimized code bases by now.
There is an aarch64 port of gfortran; it’s used for e.g. Raspbian. However, there isn’t yet a stable aarch64 port for Darwin, as there are substantial ABI differences compared with Linux. See the tracking issue.
For most software x86 vs ARM doesn’t even matter and it is just a recompile. Most software doesn’t know or even care what architecture it runs on.
Where it does matter, and gets a lot more complicated, is with software that interacts with the CPU or the OS at a much lower level: compilers, code generators, JITs, and highly optimized code that uses assembly.
Brew mostly has problems because of build-system issues.
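To make the “just a recompile” point concrete, here is a trivial sketch (plain Python, nothing Apple-specific assumed): the interpreter itself reports its architecture, and compiled extensions carry a platform tag, which is what actually needs rebuilding.

```python
import platform
import sysconfig

# Pure-Python code runs unchanged on either architecture; it is the
# compiled extensions (wheels) that carry a platform tag and need to
# be rebuilt for arm64.
print("interpreter architecture:", platform.machine())
print("platform tag used for compiled extensions:", sysconfig.get_platform())
```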
This is me being cynical, but I expect Apple to start extending Aarch64 with custom instructions any day now. Have to wonder how ARM feels about that.
They’ve been shipping their own ARM chips for a decade, so if that’s going to happen soon, it would likely be happening already. (Is it?)
That’s a good point. I thought it was to a small extent at least, but I can’t find details on such if they exist, so I might be wrong.
AFAIK they are already doing that. Apple can probably do whatever they want on their own platform. There is no ARM police.
There is. It’s called “Arm Ltd.” In order to add custom instructions (or design your own chip), you need an “architectural license.” Otherwise you must use the CPU core IP as-is (though you can of course add custom peripherals). Apple is one of the few companies with an architectural license.
Which isn’t too surprising since ARM was originally founded back in 1990 as a joint venture between Apple, Acorn and VLSI.
I am not sure if their license allows them to use the aarch64 name for such an “extended architecture”. Also, I do not think that they are interested in such extensions to the architecture, as I think they could easily push them into the “standard” and then benefit from all the existing features of the ARM community. They do not need Embrace, Extend, Extinguish, as they are one of the big shareholders of ARM Holdings.
They don’t use the aarch64 name though.
Google for “A13 AMX” - which is their CPU instruction set extension for matrix operations.
Googling that mostly turns up French tanks, so I couldn’t check whether this is a coprocessor or an extension to the main CPU, but I believe you may be right.
It is not documented very well - mostly reverse engineered …
Seems not bad and for sure not reverse engineered.
From the “cons” section:
Go won’t officially support Apple Silicon binary compilation until February 2021. This is pretty slow especially compared to Rust. Apple’s been giving out dev kits since June.
(Emphasis in original).
I don’t believe the dev kits were free. They required an Apple dev membership and cost $500 (possibly defrayed by a rebate on new hardware when it became available), and there wasn’t an infinite number of them.
I assume the main reason for this is the Go release cycle. It basically has a release every six months, with three months of code freeze before that. Therefore, when the DTKs were shipped, the code freeze for the release in August had already happened. The next release is the upcoming one in February. The point (.x) releases are made just for fixing “critical issues”.
This probably also means that most of the hard work is done and the upcoming beta of Go 1.16 will support Apple Silicon.
Most of the work has been done. You can grab tip and run that rather successfully right now.
Surely Apple and Google could agree on a bunch of dev kits so that Apple Silicon could launch with support for one of the world’s most important programming languages?
Agreed. I know that even the Nix foundation got one. I assume it is more a matter of putting it somewhere in the release schedule. The other issue is that you couldn’t really set up CI infrastructure until the M1 Macs were released.
I remember Go’s design targeting particular scale problems seen at Google, notably the need for fast compiles. To what degree are Go’s priorities still set by Google? If that’s significant, what is their business interest in compiling to ARM?
I got an M1 mini w/ 16 GB RAM. It’s probably my fave desktop I’ve ever owned, and I got it for audio stuff (Logic is fine for now). That said, a week in, I started up a dev box on GCE just to have zero headaches when it came to ‘vanilla’ dev.
On the portable front, I imagine the MacBook Pros will become “impressive” instead of heavier Touch Bar Airs in a few iterations.
I’m finally excited by something Apple have done for the first time in years, but I’m gonna wait and see how locked down these machines (on Big Sur) are before I buy one.
Or $1340 for Europeans.
This is largely because the 20% VAT is included in the price and because the EU mandates twice the mandatory warranty of the US on all purchased electronics. So no, the price isn’t really that different.
Thanks for the reply! So Americans actually pay $1100 for what they call a $1000 product.
Still a difference of $240.
(BTW, this is not meant as negative criticism of your review – I actually like it a lot)
Not in all states. When I was in Oregon (not sure if this is still true), they didn’t have sales tax.
Still true. No state wide sales tax in Oregon.
Or $2430 for Brazilians :) (I’m actually crying)
You mean 835 EUR right?
No. Apple’s listed prices are more expensive in Europe as discussed above, due to higher VAT.
On top of that, over here (Europe) the advertised price almost always includes those taxes; unlike in the US where they are added at the time of purchase.
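As a rough illustration with the numbers used in this thread (a $1340 tax-inclusive European price with 20% VAT, versus a roughly $1000 US list price plus local sales tax), where the tax rates are only illustrative:

```python
# Rough comparison of a VAT-inclusive European price with a
# pre-tax US list price. The rates are illustrative; actual VAT
# and sales tax vary by country and state (Oregon charges none).
eu_price_incl_vat = 1340.0   # USD-equivalent, 20% VAT included
us_list_price = 999.0        # advertised pre-tax price (~$1,000 in the thread)
us_sales_tax = 0.10          # ~10%; 0.0 in states like Oregon

eu_price_ex_vat = eu_price_incl_vat / 1.20
us_price_at_register = us_list_price * (1 + us_sales_tax)

print(f"EU price excluding VAT:   ${eu_price_ex_vat:8.2f}")
print(f"US price at the register: ${us_price_at_register:8.2f}")
print(f"Ex-tax gap vs. US list:   ${eu_price_ex_vat - us_list_price:8.2f}")
```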
The reason I posted this is that I think these price comparisons between different currencies have no meaning. Why post a dollar amount for Europeans who can only buy in euros? If you want to compare prices, compare to something like the Big Mac index or a cost-of-living index.
Ah. That’s a pretty good point to make, and I completely agree. But I don’t think that’s clear from your original comment.
For an accurate comparison, I think you’d have to compare the price to your chosen index across various US states as well.
And then there are countries in Europe that are not part of the eurozone yet and still have their own currencies, and that doesn’t make the situation any better.
I have a 2013 MBP and it’s definitely due for an upgrade. However, I’m going to wait until the next MBA; I hear the M1X chip is bonkers.
I think the M1X is going to be intended for high-performance computers like the iMac and the 16” MacBook Pro. You’ll most likely be waiting for the M2.