1. 2

    With the introduction of macOS on ARM, I was curious how well Windows applications have been updated to support Windows on ARM, which was first released a little over two years ago. Since then, well, it doesn’t look like much progress has been made. Of the popular applications I was able to come up with, only one, VLC, has support for ARM64. I think a large part of this difficulty is the fact that virtually no development platforms support ARM64. Many Windows apps are built with WPF, which doesn’t support ARM64. Even on the Microsoft side, only a handful of applications are compiled natively for ARM64.

    I hope that by calling out the lack of support for ARM64 I can help push the platform forward and encourage more applications to release ARM64 versions!

    1. 8

      Microsoft doesn’t have fat binaries. That makes a huge difference.

      On macOS I press “Build” in Xcode and ship it. Assuming the code was portable, that’s all I need to do. Users don’t even need to know what CPU they have, and apps will continue to work—natively—even when the user copies them to a machine with a different CPU.

      For Windows, I need to offer a separate download, and ask users to choose the version for the CPU they have, and then deal with support tickets for “what is CPU and why your exe is broken?” Or maybe build my own multi-arch installer that an ARM machine can run under emulation, but it can still detect and install for the non-emulated CPU. I don’t have time for either of these, so I don’t ship executables for ARM Windows, even though I could build them.
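      For what it’s worth, the detection half of such an installer is the easy part: Windows 10 exposes `IsWow64Process2`, which reports the native machine even when the installer itself is running under emulation. A minimal sketch in Python follows; the `IMAGE_FILE_MACHINE_*` values are from the Windows SDK, and the name mapping is my own:

```python
import sys

# IMAGE_FILE_MACHINE_* constants from the Windows SDK headers.
MACHINE_NAMES = {
    0x014C: "x86",
    0x8664: "x64",
    0xAA64: "arm64",
}

def machine_name(code):
    """Map an IMAGE_FILE_MACHINE_* value to a human-readable name."""
    return MACHINE_NAMES.get(code, "unknown")

def native_machine():
    """Return the *native* machine, even if this process runs emulated.

    Requires Windows 10 1511 or later; will fail elsewhere.
    """
    import ctypes
    kernel32 = ctypes.windll.kernel32
    process_machine = ctypes.c_ushort()
    native = ctypes.c_ushort()
    ok = kernel32.IsWow64Process2(
        kernel32.GetCurrentProcess(),
        ctypes.byref(process_machine),
        ctypes.byref(native),
    )
    if not ok:
        raise ctypes.WinError()
    return machine_name(native.value)

if __name__ == "__main__" and sys.platform == "win32":
    print(native_machine())
```

      The point of asking the kernel directly is that a process running under emulation typically sees the emulated architecture in its own environment, so naive checks pick the wrong installer payload.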

      1. 4

        On macOS I press “Build” in Xcode and ship it.

        I don’t mean this to be too snarky, but do you test it on both systems?

        I already have a multi-arch installer, and my code compiles for ARM64 fine, but I wouldn’t want to update the installer to point to a binary that I’ve never executed. The lack of supported virtualization options is noteworthy here. Right now my only real option is to spend $500-$1000 on an ARM Windows machine for this specific purpose.

        Without a Mac devkit it’s hard to be sure, but I’d swear I saw a demo where Xcode can just launch the x64 version under Rosetta, so it becomes possible to test both on one machine. Unfortunately developers need new hardware because there’s no reverse-Rosetta for running ARM code on x64, so porting will still take time.

        1. 3

          Unfortunately developers need new hardware because there’s no reverse-Rosetta for running ARM code on x64

          I’m not so sure that we really need reverse-Rosetta. The iOS simulator runs x86_64 binaries and is really accurate (except performance-wise). The Apple ecosystem already has extensive experience supporting both ARM and x86_64 binaries, and most Macs should be ARM in a few years anyway. And there is already the ARM Mac mini thingy for developers.

          1. 1

            do you test it on both systems?

            I haven’t, actually. In the case of the PPC and x86->x64 switches I just bought the new machine and tested only there. I already knew my code worked on the old architecture, so testing on both didn’t seem critical. In Apple’s case these are transitions rather than additions of another platform.

          2. 3

            Microsoft doesn’t have fat binaries

            I don’t know if anyone is actually shipping things like this, but it is possible to do this on Windows by building the application as a DLL and then using a tiny .NET assembly that queries the current architecture and then loads and P/Invokes the correct version of the DLL. I saw a proof-of-concept for this a very long time ago, but I don’t think there’s tooling for it.
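            Lacking ready-made tooling, the shim itself is only a few lines in any language. As a rough stand-in for that tiny .NET loader, here is a Python sketch; the DLL names are hypothetical:

```python
import platform

# Hypothetical per-architecture builds of the same native library.
DLL_BY_MACHINE = {
    "AMD64": "myplugin-x64.dll",
    "ARM64": "myplugin-arm64.dll",
    "x86": "myplugin-x86.dll",
}

def select_dll(machine=None):
    """Pick the library build matching the current (or given) machine."""
    machine = machine or platform.machine()
    try:
        return DLL_BY_MACHINE[machine]
    except KeyError:
        raise RuntimeError("no build for architecture %r" % machine)

# On Windows the chosen DLL would then be loaded with, e.g.:
#   import ctypes
#   lib = ctypes.CDLL(select_dll())
```

            As the earlier comment about multi-arch installers notes, a process running under emulation reports the emulated CPU, so a production shim should ask the OS for the native machine rather than trusting its own environment.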

            I’m not really convinced by how Apple does fat binaries. It might be a space saving if the linker could deduplicate data segments, but (last I checked) ld64 didn’t, and so you really end up with two binaries within a single file. The NeXT approach was a lot more elegant. Files specific to each OS / architecture (NeXT supported application bundles that ran on OpenStep for Windows or OpenStep for Solaris as well as OPENSTEP) were in a separate directory within the bundle, along with directories for common files. You could put these on a file server and have apps and frameworks that worked on every client that mounted the share, or you could install them locally and trivially strip out the versions that you didn’t need by just deleting their directories.

            The ditto tool on macOS was inherited from NeXT; it supported thinning fat bundles and was extended to support thinning fat binaries when Apple started shipping them. That’s a bit awkward for intrusion-detection tools, because it requires modifying the binary, and so tooling needs to know to check signatures within the binary, whereas the NeXT approach just deleted files.

            Now that no one runs applications from a file share, the main benefit from fat binaries is during an upgrade. When you buy a new Mac, there’s a migration tool that will copy everything from the old machine to the new one, including applications. With a decent app store or repo infrastructure, such a tool would be able to just pull down the new versions. Honestly, I’d much rather that they just extended the metadata in application and library bundles to include a download location and hash of the versions for other architectures. Then you could go and grab them when you migrated to a different system but not waste bandwidth and disk space on versions that you don’t need.

            1. 2

              Now that no one runs applications from a file share, the main benefit from fat binaries is during an upgrade. When you buy a new Mac, there’s a migration tool that will copy everything from the old machine to the new one, including applications. With a decent app store or repo infrastructure, such a tool would be able to just pull down the new versions. Honestly, I’d much rather that they just extended the metadata in application and library bundles to include a download location and hash of the versions for other architectures.

              Obviously this was way back before code signatures became very load-bearing on OS X… during the Intel transition I used to have a script that would spin over an app bundle and use lipo to create “thin” binaries so I could have enough room on my little SSD for all the things I used. I also pruned unnecessary localization files.

              I forget what size that SSD was, but the difference was significant enough that learning lipo and scripting it out was worth my time.
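              For the curious, a script in that spirit can be sketched like this. It is a minimal reconstruction of the idea, not the original script: it walks a bundle and invokes `lipo -thin` on every file, relying on `lipo` to fail harmlessly on non-fat files. It defaults to a dry run, since thinning signed binaries breaks their signatures:

```python
import os
import subprocess

def thin_command(binary_path, arch):
    """Build the lipo invocation that strips a fat binary to one slice."""
    return ["lipo", binary_path, "-thin", arch, "-output", binary_path]

def thin_bundle(bundle, arch="x86_64", dry_run=True):
    """Walk an .app bundle and attempt to thin every file in it.

    Returns the list of commands that were (or would be) run.
    """
    commands = []
    for root, _dirs, files in os.walk(bundle):
        for name in files:
            path = os.path.join(root, name)
            cmd = thin_command(path, arch)
            commands.append(cmd)
            if not dry_run:
                # lipo exits non-zero for non-fat files; ignore those.
                subprocess.run(cmd, check=False,
                               stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL)
    return commands
```

              Calling `thin_bundle("/Applications/Foo.app", "x86_64", dry_run=False)` would perform the actual thinning on a Mac with `lipo` installed.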

            2. 1

              I can definitely understand how confusing it would be to offer multiple architecture downloads. That being said, I would strongly encourage you to at least provide a way to get to the ARM64 version if it’s trivial for you to build. That way, users who seek it out can run your app with the best performance on their machine.

              Honestly I’m surprised that tools like MSIX don’t support multiple architectures.

                1. 1

                  Glad to see that! I figured they’d have some sort of solution there.

            3. 1

              Ended up submitting three PRs, because it seems you didn’t notice that Rust and Firefox literally just work natively on ARM, and Visual Studio Code has it in the beta channel, with release happening within the month too. :-)

              1. 2

                (Just as a note for anyone not following the GitHub discussion)

                VS Code will be marked as available once the ARM64 version is in the stable channel.

                Rust requires an additional step not required by the x86 version, so until the experience is transparent to the user I’m not going to mark it as available. That being said, I’ll be filing an issue with rustup to hopefully get it to install the ARM64 toolchain by default.

                Firefox might get marked as available depending on if Firefox Installer.exe installs the ARM64 version.

            1. 3

              I think that Apple replied in the original post (in the section “A Note On Web Applications Added to the Home Screen”):

              As mentioned, the seven-day cap on script-writable storage is gated on “after seven days of Safari use without user interaction on the site.” That is the case in Safari. Web applications added to the home screen are not part of Safari and thus have their own counter of days of use. Their days of use will match actual use of the web application which resets the timer. We do not expect the first-party in such a web application to have its website data deleted.

              If your web application does experience website data deletion, please let us know since we would consider it a serious bug. It is not the intention of Intelligent Tracking Prevention to delete website data for first parties in web applications.

              As far as I understand, PWAs are not affected so…

              1. 4

                PWAs are not affected if they are installed onto the home screen; if you keep using them inside Safari they are still affected, and so are all the other web sites. It is also a bit confusing because of this wording:

                Web applications added to the home screen are not part of Safari and thus have their own counter of days of use.

                Emphasis added by me to highlight that they are counting days of use for the installed PWA, which makes me wonder whether they are deleting its data as well, or why else they would be counting the days of usage in such cases. I don’t know.

                1. 2

                  I find the text incredibly confusing.

                  have their own counter of days of use.

                  So, what counts as use? Do I have to open the app? And after what number of days is my data deleted?

                  1. 2

                    What counts as use is opening the app from the home screen. Data will be deleted after 7 days without opening the app.

                    1. 2

                      Safari: each day you use it counts as a day. If 7 days of usage have passed without you visiting a specific website, its data is erased.

                      Homescreened webpage: each day you use it counts as a day. If 7 usage days have passed without you visiting the website, its data is erased. But since you visit the website every time you click the icon on your home screen, the counter should never go above 1. (If you’re using some third-party domain to store the data, whether it gets erased depends on how the webpage works and what your user does.)
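                      The two cases above can be made concrete with a toy model (my own reading of the rule as described, not Apple’s actual implementation):

```python
class ScriptStorage:
    """Toy model of ITP's 7-day cap on script-writable storage.

    The counter ticks once per *day of use* of the browsing context
    (Safari, or a home-screen web app); visiting the site resets it.
    """
    CAP = 7

    def __init__(self):
        self.days_without_visit = 0
        self.erased = False

    def day_of_use(self, visited_site):
        if visited_site:
            self.days_without_visit = 0
        else:
            self.days_without_visit += 1
            if self.days_without_visit >= self.CAP:
                self.erased = True


# A home-screen app is visited on every launch, so its counter never grows:
pwa = ScriptStorage()
for _ in range(100):
    pwa.day_of_use(visited_site=True)

# In Safari, seven days of use without visiting the site erases its data:
safari = ScriptStorage()
for _ in range(7):
    safari.day_of_use(visited_site=False)
```

                      After 100 launches the home-screen app’s data survives, while seven Safari days of use without a visit trip the erasure.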

                      1. 1

                        I find it confusing as well.

                  1. 18

                    I would say that both Erlang and OpenCL are made for multicore. They are very different languages, because the problems they solve are very different, but they still target multicore systems. There are different aspects of parallelism that they target.

                    As for taking a serial program and making use of parallelism, this is actually possible with C and Fortran. With gcc you can get, if you’re careful, instruction-level parallelism through auto-vectorization. I believe that Intel’s compiler can even give you automatic multithreading.

                    This historical presentation by Guy Steele is an amazing introduction to the problem: What Is the Sound of One Network Clapping? A Philosophical Overview of the Connection Machine CM-5. The people at Thinking Machines Corporation had it down to an art.

                    1. 3

                      Intel ISPC too (and that supports both SIMD and multicore scaling).

                      1. 2

                        Thank you for that link, it was incredibly educational.

                      1. 1

                        Context: Intel decided to kill the HLE part of TSX on current CPUs via a microcode update… and on Linux it was chosen to kill the other part too instead of relying on mitigations.

                        1. 1
                          1. 1

                            Yes, HLE is only a part of TSX and was disabled outright by Intel in newer microcode.

                            The other part of TSX, explicit TSX, was left enabled by Intel with mitigation recommendations, but Linux chose to disable it outright.

                        1. 4
                          • probably worth (a) million(s) (of) bucks
                          • unpatchable
                          • done w USB before any security mechanisms in place
                          • reboot removes exploit (“tethered”)
                          • this is what Graykey etc. base their business model off of (er, idk)
                          • gives JTAG interface (it’s like a shell for the CPU)
                          • claims of custom firmware to bypass iCloud lock are already out there (and will probably become public eventually now that this exists publicly)
                          • run windows (or anything) on your iPhone
                          1. 1

                            For Windows, it’s a bit more complex because it doesn’t support interrupt controllers outside of a standard GIC for the ARMv8 port, so some patching will be required.

                            I’ve been working on getting a basic Linux port running for a while now, though.

                            For GrayKey and such, this allows them to image the keys and then restore them after the SEP trashes them from NAND once the input attempts are exceeded, making it possible to continue brute-forcing.

                            1. 1

                              Does apple have a custom interrupt controller?? o_0

                              1. 2

                                Yes: unlike pretty much everyone else these days, they use a custom interrupt controller (AIC) instead of the ARM GIC.

                                Also, their CPUs since the A10 implement only EL1 and EL0 (no EL2 or EL3 anywhere in them), plus a metric ton of custom registers, from KTRR through APRR, even WKdm compression extensions and more, plus AMX from the A13 onwards.

                                Also, about non-standard interrupt controllers and Windows: I forgot to mention the Raspberry Pi exception, which was a very special case that didn’t happen twice.

                            2. 1

                              Did I mention that it can bypass iCloud locked devices? (To turn them on with a custom/stock OS, not to break into another person’s OS, see SEP comment in other comment branch “below”)

                            1. 7

                              If every OSS maintainer had a nickel for every time a random person on the internet implicitly accused them of hating freedom because a program behavior does not align with their political beliefs, the problem of funding OSS maintenance would be solved.

                              1. 1

                                Maybe make it so that you have to pay a few cents to file an issue without a patch in an issue tracker? It’s certainly not a perfect solution, but at least people would have to start thinking about the importance of their reports.

                                1. 2

                                  Making people pay money would just leave lots of bugs unreported until the golden master, which wouldn’t be good for testing coverage, especially as businesses tend not to touch prerelease core OS libraries…

                                  It would also make the process of reporting bugs more complex, and make anonymity harder to guarantee for the users who want it.

                                  1. 1

                                    Agreed. More bugs would go unreported, and unfixed, that way. It makes for an interesting thought experiment, however.

                              1. 3

                                Another post in that thread argues that these instructions won’t be user facing. We will find out in a few days.

                                1. 2

                                  I can personally confirm that they are running on the main AP cores.

                                  It’s possible, however, that the public Xcode tools won’t have support for it, in which case apps would only use it through Accelerate.framework. (AMX support in Accelerate.framework has been in place for a long time.)

                                1. 11

                                  And then Google on Firefox shows the Download Chrome ad…

                                  It’s interesting too that extensions aren’t mentioned once in that post.

                                  1. 5

                                    Why would extensions be a selling point any more? Now that the extension mechanism has been commoditized post-Quantum, it can’t be used as a distinguishing factor between the two any more, which was a brilliant move by Google.

                                    1. 9

                                      The article is about desktop, but extensions would be a huge differentiator on Android, where Chrome doesn’t support them. The catch is that I found Firefox unresponsive almost to the point of uselessness, so I use Chrome for most everything (lobsters, etc.) and only use Firefox for selected links I predict will be ad-heavy.

                                      1. 7

                                        Interesting. I use Firefox with ublock origin every day on my hilariously underpowered Nexus 5 and it’s great.

                                        I had no idea mobile Chrome had no extension support; what a nightmare.

                                        1. 3

                                          Sounds like you haven’t tried Chrome much? I suppose if you’re used to Firefox you accommodate and it’s not a problem. For me, I’m used to tapping a link and having it open; Firefox requires pressing and holding everything for at least half a second or it doesn’t react.

                                          1. 9

                                            That’s definitely just you. Another happy Firefox mobile user here. If I press on a link for half a second I get the context menu.

                                            1. 2

                                              I’ve tried Firefox on so many phones, and it was a useless, crashy dumpster fire of an app. They couldn’t get text input and selection right for years. I literally don’t know anyone who liked it other than Firefox enthusiasts online.

                                        2. 3

                                          Default Firefox for Android seems a little sluggish to me, but blocking javascript with uMatrix more than compensates.

                                          1. 3

                                            Fenix is a lot faster than both normal Firefox and Chrome on my phone. At the moment it doesn’t support extensions, but I think they’re on the roadmap.

                                            1. 1

                                              The new mobile Firefox doesn’t have extensions at all… maybe they’ll come at some point.

                                            2. 2

                                              https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Firefox_differentiators

                                              and not an official “differentiator” but I could not get SoundFixer to work on Chromium at all because of permission issues.

                                              1. 1

                                                I’d outright argue that extensions became a distinguishing factor in favour of Chrome or Edge (or Opera/Vivaldi/whatever) at this point… maybe the recent moves by Google around ad blockers will help reverse the trend.

                                                People see other Chromium browsers as alternatives, even though they use the same engine under the hood. That makes the different engine in Firefox a distinguishing feature that works against it when sites aren’t properly tested on Firefox too.

                                              2. 2

                                                None of my family members are heavy extension users, so that wasn’t something I had to get them up to speed on. I have a Pi-hole set up at my parents’ house so they don’t have to think about blocking ads in their browsers.

                                                But for lots of people who do use Chrome extensions, that’s another valid point.

                                              1. 10

                                                I don’t use GitHub and only use GitLab as a mirror. In general, it’s better to avoid features that tie you to the platform in a way that makes it hard to move away later.

                                                1. 3

                                                  Since they were acquired by Microsoft, GitHub has been doubling down on their “value-added” model. There should come a point where those additions are standardised to some extent, though, because that lock-in might become a big issue in the future.

                                                  1. 6

                                                    I don’t think it’s in microsoft’s best interest to ‘standardize’ with other CI services. They want to lock you in.

                                                    1. 6

                                                      There’s a book out there about how big change won’t occur until a disaster strikes. It might be “Lessons of Disaster” but I’m not sure if that was it. It was pretty convincing and gave good examples in history. Most importantly, the book showed how a lot of safety laws are implemented, not when people raise concerns, but after many people die from the lack of such laws. It takes a disaster to implement disaster preventions.

                                                      I think that might happen to a lot of FOSS communities, where people talking about how it’s bad to get locked-in to a proprietor/vendor won’t be taken seriously (to the point of action) until disaster strikes. It probably won’t happen for a while and won’t be as dramatic, but I think there’s a good possibility that without standardization/decentralization, many will eventually be confronted with the pain that is vendor lock-in.

                                                      I think Fossil has the right idea about including the issue tracker, wiki, etc. in the decentralized repos. I hope we see more solutions like that come up and see adoption.

                                                      1. 2

                                                        There should be a point where those additions should be standardised in some extent though, because that lock-in might become a big issue in the future.

                                                        For you, or for the org tasked with maximizing the number of mouths at the feeding trough?

                                                      2. 1

                                                        features which get you stuck

                                                        Are you talking about GitHub actions or GitLab CI here?

                                                        Because I don’t think that’s much of a problem for GitLab CI. Since your jobs are purely script-based, it’s quite easy to transition to a different platform. Yes, you can create stages, job dependencies and whatnot, but still.
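                                                        To illustrate, a typical `.gitlab-ci.yml` really is mostly named shell snippets; everything of substance in this (hypothetical) minimal example sits in the `script:` lines and would copy to another CI system unchanged:

```yaml
stages:
  - build
  - test

build:
  stage: build
  script:        # plain shell commands, portable to any other CI runner
    - make build

test:
  stage: test
  script:
    - make test
```

                                                        The GitLab-specific parts are the stage names and job keys; the actual work is whatever the shell lines do.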

                                                      1. 2

                                                        Seems that it’s already been submitted to staging at this point, on the same day the patent issues got resolved.

                                                        ( https://lkml.org/lkml/2019/8/28/827 )