1. 30
  1.  

  2. 28

    2019 Intel i9 16” MacBook Pro

    Apple heavily reinforced Intel’s “contextless tier name” marketing trick. It was always just “i9”, which sounds impressive, doesn’t it? The actual chip name is i9-9880H. That’s Coffee Lake, a mild refresh of a mild refresh of Skylake, still on the 14nm process that had been around since 2014.

    1. 19

      That is important info.

      That said, they kind of deserved this with their abstruse product names. Up till the Pentium 4 or so, I could easily follow which the new model was. But a newer i5 being faster than an old i9? I hate that. The important info is then buried in a cryptic number.

      1. 16

        But the i9-9880H was launched in Q2 2019. It’s not like Apple is putting old chips in their laptops; the i9-9880H was about the best mobile 45W chip Intel offered at the time.

        It’s just both the best Intel had to offer in 2019 and a refresh of Skylake on their 14nm process.

        There’s a reason Apple and AMD are both surpassing Intel at the moment.

        1. 3

          Note that it’s an i9-9880H inside a MacBook, which has a cooling system that heavily prioritises quietness and slimness over performance. This is advantageous for the M1, since Intel chips are heavily dependent on thermals to reach and maintain boost clocks.

      2. 17

        So you’re saying a CPU with a tweaked five-year-old architecture optimized for single-core speed, on a six-year-old process node, in a chassis notorious for poor thermal design, thermally throttles worse on an all-core task than a CPU optimized for efficiency on a brand-new process node?

        Yes, it’s a good low power CPU, but these blog posts pointing out that new technology is better than half-a-decade-old technology get tiresome.

        1. 2

          Yes, this is an indictment of Intel more than anything else. The M1 is still an impressive chip though.

        2. 9

          1m30.6s on a 32-core Threadripper. Not a fair comparison by any metric, but pretty amusing.

          1. 9

            With that type of speed, you can just compile the kernel at boot… https://bellard.org/tcc/tccboot.html

            1. 5

              Of course it’s Fabrice Bellard. This kind of wacky shenanigans is exactly what I’ve come to expect from him. His focus seems relatively narrow (C, Linux), but his work is still breathtaking.

              UPDATE: He’s the author of qemu. Of course he is.

              1. 4

                Don’t forget ffmpeg, oh and the 4G mobile base station implementation.

                If I could stay a tenth as focused & productive I would probably feel I was doing well.

                1. 2

                  And, most recently, QuickJS. So many impressive projects from one person.

          2. 5

            I’m not an expert, but I’m pretty sure there’s no reason a cross-compiler should run slower than a native compiler… so I don’t think the original author needs to apologize for the not-exactly-apples-to-apples comparison.

            1. 14

              Theoretically, a cross-compiler has to either skip constant propagation at compile time or emulate the target architecture’s rounding rules, for example. A native compiler can (assuming the rules match the compiled language’s rules) just evaluate constant expressions using native code and be done with it.

              (If you’re compiling from and to 64-bit, two’s-complement, no-exception-on-overflow architectures, it should theoretically just work, but it’s not a guarantee.)

              I’m aware of several compilers that do not do constant calculation at compile time, specifically to avoid this issue and keep the same code regardless of target architecture.

              There are various other potential sources of slowdown, but those are the main ones (and I can’t imagine any that would actually have significant impacts when compiling normal code).
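              As a minimal sketch of what “emulating the target’s rules” means for integers (the function name and 64-bit inputs are illustrative, not from any real compiler): folding a 32-bit addition for the target requires wrapping the result to the target’s 32-bit two’s-complement representation, rather than keeping the host’s wider native result.

              ```c
              #include <stdint.h>
              #include <stdio.h>

              /* Illustrative only: fold a target int32 addition at compile time.
                 Folding `a + b` in the host's wider arithmetic would give
                 4000000000; the target's 32-bit result wraps around instead. */
              static int32_t fold_add_i32(int64_t a, int64_t b) {
                  uint32_t w = (uint32_t)(a + b); /* truncate to 32 bits */
                  /* reinterpret as signed two's complement, portably */
                  return (w >= 0x80000000u)
                      ? (int32_t)(w - 0x80000000u) - 0x7FFFFFFF - 1
                      : (int32_t)w;
              }

              int main(void) {
                  printf("%d\n", fold_add_i32(2000000000, 2000000000));
                  return 0;
              }
              ```

              On a 64-bit host this prints -294967296, the value a native 32-bit add on the target would produce.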

              1. 3

                Sure, but across 64-bit x86 and ARM constant folding should be pretty much identical (except for any x87 80-bit floating point code).

                1. 3

                  Right, hence my “theoretically”. ;)

              2. 2

                A cross-compiler should not be slower, but a cross-buildsystem often can be. For example:

                • If you are building any tools that are used in your build process and installed, you may need to build them twice in the cross-compile case and only once in the normal case.
                • Some build steps may need to run in emulation (this is rare for build systems that natively support cross-building, but common when cross-building is retrofitted).

                From what I recall of the Linux build system, neither of these apply: it does build some tools that are used in the build, but it doesn’t build versions of these for the target system, only for the host. There’s a slight apples-to-oranges thing here, because those tools will be compiled for AArch64 in one configuration and x86-64 in the other, but they are such a tiny fraction of the total build time that it probably doesn’t make a difference.
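                As a concrete sketch of that split (ARCH, CROSS_COMPILE, and HOSTCC are real kbuild variables; the specific target triple here is just an example), the kernel build keeps the two compilers separate, so the host-side helpers are built once with HOSTCC regardless of the target:

                ```shell
                # Host-side helpers (e.g. scripts/kconfig/conf) are built with HOSTCC
                # and run on the build machine; only kernel objects use the cross CC.
                make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- HOSTCC=gcc defconfig
                make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- HOSTCC=gcc -j"$(nproc)" Image
                ```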