1. 13

  2. 3

    Apple Silicon definitely interests me and I’m really eyeing getting a 13” MBP with an A14 chip, but… I can’t. I need to be able to run a lot of amd64 VMs, and that’s not going to work in any useful way for a long time, if ever.

    1. 1

      Unless we port half of the universe to ARM64.

    2. 4

      Yes.

      1. 5

        But… but… what about Betteridge?

      2. 1

        I admittedly don’t give a shit about R, but this is a very interesting part to me:

        However, the Apple silicon platform uses a different application binary interface (ABI) which GFortran does not support, yet.

        Does this mean that the ABI for core Apple libs is different? That seems expected if you’re switching to a whole new arch. Or do they mean that something like the calling convention is different? I’m super interested in the differences here.

        1. 1

          I have no expertise on the platform, but I did find in some Apple docs a reference to the C++ ABI now matching that of iOS: https://developer.apple.com/documentation/xcode/writing_arm64_code_for_apple_platforms#//apple_ref/doc/uid/TP40009020-SW1 (which itself makes reference to developer.arm.com, so changing ABI is likely not a decision made by Apple alone).

          1. 9

            Most of those look pretty much like the 64-bit Arm PCS. I presume that Apple is using the same ABI for AArch64 macOS as iOS. The main way I’m aware of that this differs from the official one is in the handling of variadic arguments. Apple’s variadic ABI is based on an older version of the Arm one, where all variadic arguments were passed on the stack. This is exactly the right thing to do for two reasons:

            • Most variadic functions are thin wrappers around a version that takes a va_list, so anything other than passing them on the stack requires the caller to put them into registers and then the callee to spill them to the stack. This is much easier if the caller just sticks them on the stack in the first place (see the sketch after this list).
            • If all variadic arguments are contiguous on the stack, the generated code for va_arg is simpler. So much simpler that, in more complex implementations, va_start is often compiled to something that writes all of the arguments that are in registers into the stack.
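
            Here’s a minimal sketch of that first point in plain C (the log_message name is made up for illustration): the variadic entry point does nothing but package its arguments into a va_list and forward them.

                #include <stdarg.h>
                #include <stdio.h>

                /* The real work happens in the va_list version. */
                static int log_message_v(const char *fmt, va_list ap)
                {
                    return vfprintf(stderr, fmt, ap);
                }

                /* The variadic function is a thin wrapper. If the caller
                 * already put the variadic arguments on the stack, va_start
                 * only has to record their address; otherwise this wrapper
                 * must first spill the register-passed arguments itself. */
                int log_message(const char *fmt, ...)
                {
                    va_list ap;
                    va_start(ap, fmt);
                    int n = log_message_v(fmt, ap);
                    va_end(ap);
                    return n;
                }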

            As an added bonus, if you have CHERI, MTE, or ASan, you can trivially catch callees going past the last argument. This is exactly how variadics worked on the PDP-11 and i386, because all arguments were passed on the stack. In K&R C, you didn’t actually have variadics as a language feature, you just took the address of the last formal argument and kept walking up the stack.
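
            For the curious, that K&R-era idiom looked roughly like the sketch below (old-style definition, no stdarg.h). It only ever worked because every argument lived contiguously on the stack; it is undefined behaviour, and it will not work on a modern register-based ABI.

                /* Pre-stdarg variadics: walk the stack past the last named
                 * argument. Valid only on all-stack ABIs like the PDP-11
                 * and i386; shown for illustration, do not use. */
                int sum(count)
                    int count;
                {
                    int total = 0;
                    int *arg = &count + 1;  /* first "extra" argument slot */
                    while (count-- > 0)
                        total += *arg++;
                    return total;
                }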

            The downside is that your variadic and non-variadic calling conventions are now different if your non-variadic convention passes any arguments in registers. That shouldn’t matter, because it’s undefined behaviour in C to call a function with the wrong calling convention. It did matter in practice because (when AArch64 was released, at least, possibly fixed now) some high-impact bits of software (the Perl and Python interpreters, at least) used a table of something like int(*)(int, ...) function pointers and didn’t bother casting them to the correct type before invoking them. They worked because on most mainstream architectures the variadic and non-variadic conventions happened to be the same for functions that take up to four integer-or-pointer arguments.
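
            A minimal sketch of that broken-but-lucky pattern (the add function and handler_t name are made up for illustration):

                /* A table of "generic" variadic function pointers holding
                 * functions that are not actually variadic. Calling through
                 * the wrong type is undefined behaviour in C. */
                typedef int (*handler_t)(int, ...);

                static int add(int a, int b) { return a + b; }

                int call_through_table(void)
                {
                    handler_t table[] = { (handler_t)add };
                    /* Appears to work where the variadic and non-variadic
                     * conventions coincide; on Apple's AArch64 ABI, the 2 is
                     * passed on the stack while add reads its second
                     * parameter from a register. */
                    return table[0](1, 2);
                }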

            I am still sad that Arm made the (commercially correct) decision not to force people to fix their horrible code for AArch64.

            I believe that the new Apple chips also support Arm’s pointer signing extension and so there are a bunch of features in the ABI related to that, which probably aren’t in GCC yet.

            1. 1

              It did matter in practice because (when AArch64 was released, at least, possibly fixed now) some high-impact bits of software (the Perl and Python interpreters, at least) used a table of something like int(*)(int, …) function pointers and didn’t bother casting them to the correct type before invoking them.

              I think you just explained for me why Apple’s ObjC recently started demanding explicit casts of IMP (something like id(*)(id, SEL, …), which I’m aware you already know but readers may not).
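
              Concretely, the cast looks something like this sketch using the ObjC runtime’s C API (the shape/area names and signature are hypothetical):

                  #include <objc/runtime.h>

                  /* The method's real C signature: returns double, takes
                   * only the implicit self and _cmd. */
                  typedef double (*AreaIMP)(id, SEL);

                  double call_area(id shape, SEL areaSel)
                  {
                      IMP imp = class_getMethodImplementation(object_getClass(shape), areaSel);
                      /* Cast the generic IMP to the method's actual type
                       * before invoking it, as Apple now requires. */
                      return ((AreaIMP)imp)(shape, areaSel);
                  }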

              1. 1

                I don’t think that should be a new thing. Back in the PowerPC days, there were a bunch of corner cases (particularly around things that involved floating-point arguments) where that cast was important. On 32-bit x86, if you called a function using the IMP type signature but it returned a float or double, then it would leave the x87 floating-point stack in an unbalanced state and lead to a difficult-to-debug crash later on.

                On Apple AArch64, however, you’re right that it’s a much bigger impact: all arguments other than self and _cmd will be corrupted if you call a method using the IMP signature.

                One of the breaking changes I’d like to make to Objective-C is adding a custom calling convention to IMP, so that C functions that you want to use as IMPs have to be declared with __attribute__((objc_method)) or similar. It would take a few years of that being a compiler warning before code is migrated, but once it’s done you have the freedom to make the Objective-C calling convention diverge from the C ones.
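
                As a sketch, that proposal might look like the following; __attribute__((objc_method)) is hypothetical and implemented by no compiler today, so the macro below expands to nothing to keep the example compilable:

                    #include <objc/runtime.h>

                    /* Hypothetical: would expand to
                     * __attribute__((objc_method)) under the proposed
                     * scheme; a no-op today. */
                    #define OBJC_METHOD

                    OBJC_METHOD
                    static id identity_imp(id self, SEL _cmd)
                    {
                        return self;
                    }

                    void install(Class cls, SEL sel)
                    {
                        /* Under the proposal, passing a function without
                         * OBJC_METHOD here would warn, and later error,
                         * freeing the ObjC convention to diverge from C's. */
                        class_addMethod(cls, sel, (IMP)identity_imp, "@@:");
                    }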