1. 12
  1. 30

    I wouldn’t expect an objective viewpoint on anything Apple-related from PC World. Tom’s Hardware takes a more skeptical look and finds some sleight of hand, like Intel cherry-picking which Apple systems to use in various benchmarks.

    Apple’s current M1 systems are low-end, replacing the bottom of their Mac lineup. It’s going to get really interesting when the performance-oriented systems like the 16” MacBook Pro (and maybe Mac Pro?) come out later this year.

    1. 3

      I look at the current generation of M1 Macs as basically the M1 beta: necessary because devs need to be able to test on Apple Silicon, but not really aimed at being anyone’s main machine. As with all new Apple products, the first iteration or two is going to be a bit of a letdown, but the numbers we’re already seeing on such low-end machines are pretty amazing nonetheless. The next five years are going to be very interesting.

      1. 3

        I haven’t used one yet, but they seem like good main machines for typical consumers who don’t run a lot of CPU-intensive or RAM-hungry apps. Apple’s apps are native, Office is native, Chrome is native, and other stuff is getting ported at a good rate, because these days recompiling for a new architecture is easy. (’Twasn’t always so. We old timers have tales of the 68k-PPC transition that’d curl your hair.)

        The developer beta was the “transition kit” mini that was available to developers last summer. Those were pretty cheap and not hard to get.

        Me, I’m pulling the trigger the second the rumored 16” MacBook Pro comes out.

        1. 3

          “because these days recompiling for a new architecture is easy”

          To be fair, these ARM chips are barely different from the x86 chips they replace from a software perspective. They’re 64-bit and little-endian, with all the same integer sizes. Both are superscalar, out-of-order designs with a ton of general-purpose registers, so optimizers make broadly similar decisions. char has the same signedness, you can use all the same compilers, and both chips even support unaligned memory access. Recompiling for a drastically different architecture is still hard today; it’s just that aarch64 is almost a drop-in replacement for x64, so broken C code (code that relies on undefined or implementation-defined behavior) which worked on x64 will work on aarch64 too. The only difficulty might be the lack of SSE/AVX, but anyone in their right mind has a pure C fallback for that stuff anyway.

          I’m not disagreeing with you, and I’m sure you know this already. I’m just providing some context for why this particular architecture transition is easier than many others.
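
          To make that concrete, here’s the sort of technically-broken-but-common C I have in mind (a toy sketch, nothing from the article): an unaligned pointer cast and a check that assumes plain char is signed. Both are undefined or implementation-defined behavior, but both assumptions happen to hold on x86-64 and on Apple’s arm64 ABI, so a plain recompile is usually all it takes.

          ```c
          #include <stdio.h>
          #include <stdint.h>
          #include <string.h>

          int main(void) {
              /* Unaligned read via a pointer cast: strictly undefined behavior in C,
                 but both x86-64 and Apple's arm64 handle unaligned loads in hardware,
                 so this "works" on either after a recompile. */
              unsigned char buf[16] = {0};
              buf[3] = 0x2a;
              uint32_t v = *(uint32_t *)(buf + 3);   /* misaligned by 3 bytes */
              printf("misaligned load: 0x%08x\n", v);

              /* Implementation-defined: whether plain char is signed. It is signed on
                 x86-64 and on Apple's arm64 ABI (though not on AArch64 Linux), so code
                 that assumes (char)0xFF < 0 keeps working here too. */
              char c = (char)0xFF;
              printf("plain char is %s\n", c < 0 ? "signed" : "unsigned");

              /* The portable way to do the unaligned load, for comparison: */
              uint32_t w;
              memcpy(&w, buf + 3, sizeof w);
              printf("memcpy load:     0x%08x\n", w);
              return 0;
          }
          ```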

          1. 1

            They also both support unaligned loads and stores, though x86 supports atomics that span a cache line boundary, whereas AArch64 doesn’t (I don’t know if the M1 does).
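
            To illustrate the difference (a contrived sketch, not from the thread): with the GCC/Clang __atomic builtins you can force a 64-bit atomic increment to straddle a 64-byte line. It’s outside what C guarantees on either platform, but x86 services it as a very slow “split lock” operation, while AArch64 typically raises an alignment fault.

            ```c
            #include <stdio.h>
            #include <stdint.h>

            /* A cache-line-aligned buffer; place a 64-bit counter so that it starts
               4 bytes before the 64-byte boundary and therefore spans two lines. */
            static _Alignas(64) unsigned char buf[128];

            int main(void) {
                uint64_t *counter = (uint64_t *)(buf + 60);   /* straddles buf + 64 */

                /* Atomic read-modify-write on a misaligned address. On x86-64 this
                   becomes a LOCK ADD that the core grinds through as a split-line
                   access; on AArch64 the same misaligned atomic is architecturally
                   unsupported and will usually fault (SIGBUS). */
                __atomic_fetch_add(counter, 1, __ATOMIC_SEQ_CST);

                printf("counter = %llu\n", (unsigned long long)*counter);
                return 0;
            }
            ```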

    2. 19

      I’m pretty disappointed in this article. It’s a very interesting premise, and I’d love to see someone go through Intel’s claims thoroughly and see how relevant they are, how cherry-picked they are, etc. This article isn’t that.

      For example, the benchmark says Intel crushes the M1 in some specific AI workloads. I would expect the article to analyze the result with respect to:

      1. Is it accurate to say that Intel is better than the M1 in AI workloads across the board, or is Topaz Labs just an outlier?
      2. Do the Topaz Labs AI solutions perform so much better on Intel because Intel is just that much better at general-purpose compute, or is it essentially “cheating”, where the AI tasks are hardware-accelerated on Intel but not on the M1?
      3. If the results are so much better due to hardware acceleration, is that unique to Intel, or does the M1 (or the Apple computers which ship with an M1) have the same kind of capabilities? Could the Topaz Labs tools be updated to take advantage of the M1’s AI acceleration capabilities if it isn’t already?
      4. If the Topaz Labs tools were optimized to take advantage of the M1 just as they have been optimized for Intel, which chip would be faster? (You couldn’t answer this definitively, of course, but you could make some very good educated guesses by putting the time into researching, benchmarking, and comparing against other AI tools.)
      5. How is power usage? Does the Intel CPU achieve 6x the score with comparable power usage or not?

      As it stands, the article has none of that. They just say… that they saw the same thing when they ran the exact same AI benchmark, which is exactly what we would expect. Nothing new is being contributed here.

      The same goes for all of these points. Nothing of value is being said, nothing is being critically analyzed. It’s arguably somewhat valuable if you’re looking for advice on which laptop to buy right now before any software is adapted or optimized for the new chip, but it does absolutely nothing to answer any of the interesting questions the benchmark results raise in terms of the actual performance of the chips themselves.

      1. 2

        Yup. Basically just doing Intel a big favor by re-broadcasting their claims.

        1. 2

          Agreed. The only thing that’s relevant is performance per watt, where I’m willing to consider two points of view:

          • Comparing the currently available Apple ARM CPU (5nm process) with a currently available Intel (or AMD) CPU (14nm/10nm/7nm processes).
          • Comparing the currently available Apple ARM CPU with an extrapolation of an Intel (or AMD) CPU at the same process node as Apple’s ARM (5nm). (To figure out how much of Apple’s advantages come from the process.)
        2. 10

          I refer you to The Only M1 Benchmark That Matters - how long does it take to compile Emacs! :-) (Spoiler: the M1/clang does well.)

          1. 4

            But how long does it take to compile Vim and save the kids in Uganda?

          2. 3

            Personal anecdata: Minecraft on a 16 GB M1 MacBook Pro under Rosetta 2 runs more smoothly (barely noticeable heat, zero fans, hours of battery life) than it does on a 32 GB i9 MacBook Pro (roasting heat, full fans, maybe an hour or two of battery), with the exact same Java, settings, etc.

            1. 2

              “Core i7 Crushes M1 in AI”

              What is “Topaz Labs’ Gigapixel AI and Denoise AI”? This is a strange thing to highlight instead of Apple’s own AI offerings (Accelerate/CreateML/CoreML). Apple has a bunch of undocumented ARM ISA extensions that enable accelerated matrix multiplications (https://gist.github.com/dougallj/7a75a3be1ec69ca550e7c36dc75e0d6f) – is it clear that Topaz is using these?
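
              For what it’s worth, the sanctioned way onto that hardware is Accelerate rather than the undocumented instructions themselves; my understanding (not something Apple spells out) is that Accelerate’s BLAS routines get routed to that matrix hardware on the M1. A minimal sketch of the kind of call involved, with made-up sizes:

              ```c
              /* Build on macOS with: clang matmul.c -framework Accelerate -o matmul */
              #include <stdio.h>
              #include <Accelerate/Accelerate.h>

              int main(void) {
                  enum { N = 512 };
                  static float a[N * N], b[N * N], c[N * N];

                  /* Fill A and B with something non-trivial. */
                  for (int i = 0; i < N * N; i++) {
                      a[i] = (float)(i % 7);
                      b[i] = (float)(i % 13);
                  }

                  /* C = 1.0 * A * B + 0.0 * C, row-major. On Apple Silicon this BLAS
                     call is widely reported (though not documented) to run on the
                     matrix coprocessor rather than the plain NEON units. */
                  cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                              N, N, N,
                              1.0f, a, N, b, N,
                              0.0f, c, N);

                  printf("c[0] = %f\n", c[0]);
                  return 0;
              }
              ```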