1. 25

    I bought one last week and have used it for 7 days now. I was in an initial hype phase as well, but I have become more critical and am now debating whether I should return it.

    Performance of native apps is as great as everyone claims, but I think it is a bit overhyped; recent AMD APUs come close in multi-core performance. Of course, the fact that the Air does this with passive cooling is a nice bonus.

    Rosetta works great with ordinary (ahead-of-time compiled) x86_64 applications, but performance is abysmal with JIT-ing runtimes like the JVM. For example, JetBrains currently do not have a native version of their IDEs (they are JVM-based, though I think they also use some non-Java code), and their IDEs are barely usable due to the slowness. If you rely on JetBrains IDEs, wait until they have an Apple Silicon version.

    Also, performance of anything that relies on SIMD instructions (AVX, AVX2) is terrible under Rosetta. So, if you are doing data science or machine learning with heavier loads, you may want to wait. Some libraries can be compiled natively of course, but the problem is that there is no functioning Fortran compiler supported on Apple Silicon (outside an experimental gcc branch) and many packages in that ecosystem rely on having a Fortran compiler.
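
    (As an aside, it is easy to end up running an x86_64 Python under Rosetta without noticing. Below is a rough sketch of a check; the sysctl.proc_translated sysctl is what I have seen documented for detecting Rosetta translation on Big Sur, so treat this as a best-effort probe rather than a guarantee.)

        import platform
        import subprocess

        def rosetta_translated() -> bool:
            """Best-effort check for Rosetta 2 translation on macOS.

            sysctl.proc_translated is reported as 1 when the current process runs
            under Rosetta, 0 when native; the OID does not exist on Intel Macs.
            """
            try:
                out = subprocess.run(["sysctl", "-i", "-n", "sysctl.proc_translated"],
                                     capture_output=True, text=True, check=True)
                return out.stdout.strip() == "1"
            except (OSError, subprocess.CalledProcessError):
                return False

        print("machine:", platform.machine())  # arm64 for a native build, x86_64 under Rosetta
        print("translated by Rosetta:", rosetta_translated())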

    Another issue with Rosetta vs. native in development is that it is very easy to end up with environments where native (arm64) and x86_64 binaries/libraries are mixed (e.g. when doing x86_64 development, CMake builds ARM64 objects unless you set CMAKE_OSX_ARCHITECTURES=x86_64), and then things do not build.
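
    To illustrate, here is a rough sketch (it assumes lipo from the Xcode command line tools is available, and is only meant as the basic idea) that scans a directory and reports which architectures each binary contains; anything showing only x86_64 in an otherwise arm64 tree, or vice versa, is a candidate for the build failures above.

        import subprocess
        import sys
        from pathlib import Path

        def archs(path: Path) -> str:
            """Ask lipo which architectures a Mach-O file contains (e.g. 'x86_64 arm64')."""
            result = subprocess.run(["lipo", "-archs", str(path)],
                                    capture_output=True, text=True)
            return result.stdout.strip() if result.returncode == 0 else ""

        root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
        for path in root.rglob("*"):
            if not path.is_file():
                continue
            # Heuristic: shared libraries, static archives, object files, or anything executable.
            looks_binary = path.suffix in {".dylib", ".so", ".a", ".o"} or path.stat().st_mode & 0o111
            if not looks_binary:
                continue
            arch_list = archs(path)
            if arch_list:
                print(f"{path}: {arch_list}")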

    Then Big Sur on Apple Silicon is also somewhat beta. Every time I wake up my Mac, after a couple of minutes it goes back to sleep 1-3 times (shutting off the external screen as well). When working longer, this issue disappears, but it’s annoying nonetheless.

    If you haven’t ordered one, it’s best to wait a while until all the issues are ironed out. There is currently a lot of (justified) hype around Apple Silicon, but that doesn’t mean that the ecosystem is ready yet, unless all you do is web browsing, e-mailing, and the occasional app from the App Store.

    Aside from this, I think there are some ethical (sorry for the lack of a better term) issues with newer Apple models. For example, Apple excluding their own services from third-party firewalls/VPNs, no extensibility (reducing the lifespan of hardware), and their slow march to a more and more closed system.

    Edit: returned and ordered a ThinkPad.

    1. 9

      it’s best to wait a while

      If you need a MacBook now, for whatever reason, buying one with an ARM chip does sound like the most future-proof option. The Intel ones will be the “old” ones soon, and will then be second-rate. It’s what happened with the PowerPC transition as well.

      1. 2

        If only there were Macs with 32GB of RAM, I would have bought one, as I needed a machine. Because of that, I bought a 32GB 13” MacBook Pro instead. I will wait for the ARM machines to be polished before my next upgrade.

        1. 1

          From what I read, you get way more bang for your RAM in Apple processors. It’s all integrated on the same chip so they can do a lot of black magic fuckery there.

          1. 1

            For native applications, I am pretty sure this works well. However, as an Erlang/Elixir developer I use third-party GCed languages and DBs that can use more RAM anyway. That said, the fact that it is possible to run native iOS and iPad apps could save some RAM on Slack and Spotify for sure.

            1. 2

              What I mean is, they probably swap to NAND or something, which could very likely be similar performance-wise to the RAM you’d find in an x64 laptop (since they have a proprietary connection there instead of NVMe/M.2/SATA). Plus I imagine the “RAM” on the SoC is as fast as an x64 CPU’s cache. So essentially you’d have “infinite” RAM, with 16GB of it being stupid fast.

              This is just me speculating btw, I might be totally wrong.

              Edit: https://daringfireball.net/2020/11/the_m1_macs CTRL+F “swap”

              1. 1

                Just wondering if you had any take on this, idk if I’m off base here

        2. 4

          Lots of valuable insights here and I’m interested in discussing.

          Performance of native apps is as great as everyone claims, but I think it is a bit overhyped; recent AMD APUs come close in multi-core performance. Of course, the fact that the Air does this with passive cooling is a nice bonus.

          Sure, but the thing is that the AMD 4800U, their high-end laptop chip, runs at 45W pretty much sustained, whereas the M1 caps out at 15W. This is a very significant battery life and heat/sustained non-throttled performance difference. Also these chips don’t have GPUs or the plethora of hardware acceleration for video/media/cryptography/neural/etc. that the M1 has.

          Rosetta works great with ordinary (ahead-of-time compiled) x86_64 applications, but performance is abysmal with JIT-ing runtimes like the JVM. For example, JetBrains currently do not have a native version of their IDEs (they are JVM-based, though I think they also use some non-Java code), and their IDEs are barely usable due to the slowness. If you rely on JetBrains IDEs, wait until they have an Apple Silicon version.

          Yeah, I didn’t test anything Java. You might be right. You also mention Fortran though and I’m not sure how that matters in 2020?

          Another issue with Rosetta vs. native in development is that it is very easy to end up with environments where native (arm64) and x86_64 binaries/libraries are mixed (e.g. when doing x86_64 development, CMake builds ARM64 objects unless you set CMAKE_OSX_ARCHITECTURES=x86_64), and then things do not build.

          Based on my experience, this isn’t as big of a problem as it might seem. You pass the right build flags and you’re done. It’ll vanish in time as the ecosystem adapts.

          Then Big Sur on Apple Silicon is also somewhat beta. Every time I wake up my Mac, after a couple of minutes it goes back to sleep 1-3 times (shutting off the external screen as well). When working longer, this issue disappears, but it’s annoying nonetheless.

          Big Sur has been more stable for me on Apple Silicon than on Intel. 🤷

          If you haven’t ordered one, it’s best to wait a while until all the issues are ironed out. There is currently a lot of (justified) hype around Apple Silicon, but that doesn’t mean that the ecosystem is ready yet, unless all you do is web browsing, e-mailing, and the occasional app from the App Store.

          I strongly disagree with this. I mean, the M1 MacBook Air is beating the 16” MacBook Pro in Final Cut Pro rendering times. Xcode compilation times are twice as fast across the board. This is not at all a machine just for browsing and emailing. I think that’s flat-out wrong. It’s got performance for developers and creatives that beats machines twice as expensive and billed as made for those types of professionals.

          Aside from this, I think there are some ethical (sorry for the lack of a better term) issues with newer Apple models. For example, Apple excluding their own services from third-party firewalls/VPNs, no extensibility (reducing the lifespan of hardware), and their slow march to a more and more closed system.

          Totally with you on this. Don’t forget also Apple’s apparent lobbying against a bill to punish forced labor in China.

          1. 19

            You also mention Fortran though and I’m not sure how that matters in 2020?

            There’s really rather a lot of software written in Fortran. If you’re doing certain kinds of mathematics or engineering work, it’s likely some of the best (or, even, only) code readily available for certain work. I’m not sure it will be going away over the lifetime of one of these ARM-based notebooks.

            1. 4

              I’m not sure it will be going away over the lifetime of one of these ARM-based notebooks.

              There will be gfortran for Apple Silicon. I compiled the gcc11 branch with Apple Silicon support and it works, but it possibly still has serious bugs. I read somewhere that the problem is that gcc 11 will be released in December, so Apple Silicon support will miss that deadline and will have to wait until the next major release.

              1. 2

                Isn’t Numpy even written in FORTRAN? That means almost all science or computational anything done with Python relies on it.

                1. 6

                  No, Numpy is written in C with Python wrappers. It can call out to a Fortran BLAS/LAPACK implementation but that doesn’t necessarily need to be Fortran, although the popular ones are. SciPy does have a decent amount of Fortran code.
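
                  If you want to see what your own install is linked against, a quick probe is below; it assumes SciPy is installed, and the exact output of show_config() varies between NumPy/SciPy versions, so take it as indicative only.

                      import time

                      import numpy as np
                      import scipy

                      # Show which BLAS/LAPACK implementation the binaries were built against.
                      # (Keys and formatting differ between NumPy/SciPy versions.)
                      np.show_config()
                      scipy.show_config()

                      # Rough sanity check: a large matmul goes through BLAS (dgemm),
                      # so its speed says a lot about the backend you ended up with.
                      a = np.random.rand(2000, 2000)
                      start = time.perf_counter()
                      a @ a
                      print(f"2000x2000 matmul: {time.perf_counter() - start:.2f}s")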

                2. 1

                  Wow, who knew.

                  1. 23

                    Almost anyone who does any sort of scientific or engineering [in the structural/aero/whatever sense] computing! Almost all the ‘modern’ scientific computing environments (e.g. in python) are just wrappers around long-extant C and Fortran libraries. We are among the ones that get a bit upset when people treat ‘tech’ as synonymous with internet services and ignore (or are ignorant of) the other 90% of the iceberg. But that’s not meant as a personal attack; by this point it’s a bit like sailors complaining about the sea.

                    Julia is exciting as it offers the potential to change things in this regard, but there is an absolute Himalayas’ worth of existing scientific computing code, still building the modern physical world, that it would have to replace.

                3. 5

                  This is a very significant battery life and heat/sustained non-throttled performance difference.

                  I agree.

                  Also these chips don’t have GPUs or the plethora of hardware acceleration for video/media/cryptography/neural/etc. that the M1 has.

                  I am not sure what you mean. Modern Intel/AMD CPUs have AES instructions. AMD GPUs (including those in APUs) have acceleration for H.264/H.265 encoding/decoding, and AFAIR also VP9. Neural depends a bit on what is expected, but you could do accelerated neural network training, if AMD actually bothered to support Navi GPUs and made ROCm less buggy.

                  That said, for machine learning you’ll want to get a discrete NVIDIA GPU with Tensor cores anyway. It blows anything else that is purchasable out of the water.

                  You also mention Fortran though and I’m not sure how that matters in 2020?

                  A lot of the data science and machine learning infrastructure relies on Fortran directly or indirectly, such as numpy.

                  I strongly disagree with this. I mean, the M1 MacBook Air is beating the 16” MacBook Pro in Final Cut Pro rendering times. Xcode compilation times are twice as fast across the board. This is not at all a machine just for browsing and emailing. I think that’s flat-out wrong.

                  Sorry, I didn’t mean that it is not fit for development. I meant that if you are doing development (unless it’s constrained to Xcode and Apple Frameworks), it is better to wait until the dust settles in the ecosystem. I think for most developers that would be when a substantial portion of Homebrew formulae can be built and they have pre-compiled bottles for them.
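
                  (If you want to track that yourself, the sketch below checks a formula for an arm64 bottle via the public formulae.brew.sh JSON API. I am assuming the current layout of that JSON, i.e. bottle -> stable -> files keyed by platform tags like arm64_big_sur, so adjust if it changes.)

                      import json
                      import sys
                      import urllib.request

                      def has_arm64_bottle(formula: str) -> bool:
                          """Check whether a Homebrew formula publishes an arm64 bottle.

                          Assumes the formulae.brew.sh JSON layout: bottle -> stable -> files,
                          keyed by platform tags such as 'arm64_big_sur'.
                          """
                          url = f"https://formulae.brew.sh/api/formula/{formula}.json"
                          with urllib.request.urlopen(url) as resp:
                              data = json.load(resp)
                          files = ((data.get("bottle") or {}).get("stable") or {}).get("files") or {}
                          return any(tag.startswith("arm64") for tag in files)

                      # Formula names here are just examples.
                      for name in sys.argv[1:] or ["cmake", "python@3.9"]:
                          print(name, "arm64 bottle:", has_arm64_bottle(name))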

                  1. 1

                    Sorry, I didn’t mean that it is not fit for development. I meant that if you are doing development (unless it’s constrained to Xcode and Apple Frameworks), it is better to wait until the dust settles in the ecosystem. I think for most developers that would be when a substantial portion of Homebrew formulae can be built and they have pre-compiled bottles for them.

                    My instinct here goes in the opposite direction. If we know Apple Silicon has tons of untapped potential, we should be getting more developers to jump on that bandwagon, especially while toolchains like Homebrew aren’t ready yet, so that there’s momentum towards readying all the toolchains quickly! That’s the only way we’ll get anywhere.

                    1. 16

                      Well, I need my machine for work, so these issues just distract. If I am going to spend a significant chunk of time on this, I’d rather spend it on an open ecosystem than do free work for Apple ;).

                  2. 5

                    Sure, but the thing is that the AMD 4800U, their high-end laptop chip, runs at 45W pretty much sustained, whereas the M1 caps out at 15W. This is a very significant battery life and heat/sustained non-throttled performance difference. Also these chips don’t have GPUs or the plethora of hardware acceleration for video/media/cryptography/neural/etc. that the M1 has.

                    As with all modern laptop chips, you can set the thermal envelope for your AMD 4800U in the firmware of your design. The 4800U is designed to target 15W by default; 45W is the max boost, foot-to-the-floor, damn-the-horses power draw. Also, the 4800U does have a GPU: an 8-core Vega design, IIRC.

                    Apple is doing exactly the same with their chips - the accounts I’ve read suggest that the power cost required to extract more performance out of them is steep & since the performance is completely acceptable at 15W Apple limits the clocks to match that power draw.

                    The M1 is faster than the 4800U at 15W of course, but the 4800U is a Zen2 based CPU - I’d imagine that the Zen3 based laptop APUs from AMD will be out very soon & I would expect those to be performance competitive with Apple’s silicon. (I’d expect to see those officially launched at CES in January in fact, but we’ll have to wait and see when you can actually buy a device off the shelf.)

                  3. 1

                    You say that you returned it and ordered a ThinkPad; how has that decision turned out? Which ThinkPad did you purchase? How is the experience comparatively?

                    1. 2

                      I bought a ThinkPad T14 AMD. So far, the experience is pretty good.

                      Pros:

                      • I really like the keyboard much more than that of the MacBook (butterfly or post-butterfly scissors).
                      • It’s nice to have many more ports than 2 or 4 USB-C ports plus a stereo jack. I can go places without carrying a bunch of adapters.
                      • I like the trackpoint, it’s nice for keeping your fingers on the home row and doing some quick pointing between typing.
                      • Even though it’s not aluminum, I do like the build.
                      • On Windows, battery life is great, somewhere around 10-12 hours in light use. I didn’t test/optimize Linux extensively, but it seems to be ~8 hours in light use.
                      • Performance is good. Single core performance is of course worse than the M1, but having 8 high performance cores plus hyperthreading compensates a lot, especially for development.
                      • Even though it has fans, they are not very loud, even when running at full speed.
                      • The GPU is powerful enough for lightweight gaming. E.g., I played some New Super Lucky’s Tale with our daughter and it works without a hitch.

                      Cons:

                      • The speakers are definitely worse than those of any modern MacBook.
                      • Suspend/resume continues to have issues on Linux:
                        • Sometimes, the screen does not wake up. Especially after plugging or unplugging a DisplayPort alt-mode USB-C cable. Usually moving the TrackPoint fixes this.
                        • Every few resumes, the TrackPad and the left TrackPoint button stop working. It seems (I didn’t investigate further) that libinput believes a button is constantly held, because it is not possible to click windows anymore to activate them. So far, I have only been able to reset this state by switching off the machine (sometimes rebooting does not bring back the TrackPoint).
                        • So far no problems at all with suspend/resume on Windows.
                      • The 1080p screen works best with 125 or 150% scaling (100% is fairly small). Enabling fractional scaling in GNOME 3 works. However, many X11/XWayland applications react badly to fractional scaling, becoming very blurry, even on a 200% scaled external screen. In this department there are also no problems on Windows; fractional scaling works fine there.
                      • The fingerprint scanner works in Linux, but it produces many more false negatives than on Windows.

                      tl;dr: a great experience on Windows, acceptable on Linux if you are willing to reboot every few resumes and can put up with the issues around fractional scaling.

                      I have decided to run Windows 10 on it for now and use WSL with Nix + home-manager. (I always have my Ryzen NixOS workstation for heavy lifting.)

                      Background: I have used Linux since 1994 and macOS from 2007 until 2020; on the Windows side, only Windows 3.1 and, briefly, NT 4.0 and Windows 2000.

                    2. 1

                      Edit: returned and ordered a ThinkPad.

                      That made me chuckle. Good choice!

                      1. 1

                        Every time I wake up my Mac, after a couple of minutes it goes back to sleep 1-3 times (shutting off the external screen as well).

                        Sleep seems to be broken on the latest macOS versions: every third time I close the lid of my 2019 Mac, I open it later only to find that it has restarted because of an error.

                        1. 1

                          Maybe wipe your disk and try a clean reinstall?

                      1. 1

                        Kubecon!

                        1. 2
                          • Catching up on classwork
                          • Getting my desktop dual-booted. I realized the primary benefit of running Windows is gaming, so I might as well run Linux when I’m not playing games.
                          • I think I may try and do the exercises from this blog post on parsing ELF headers (a first rough sketch below)
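
                          A minimal sketch of the first step, based on the ELF spec rather than the blog post itself (just the fields I expect the exercises to start with: magic, class, endianness, machine):

                              import struct
                              import sys

                              # Peek at an ELF header: magic, class (32/64-bit), endianness, e_machine.
                              MACHINES = {0x03: "x86", 0x28: "ARM", 0x3e: "x86-64", 0xb7: "AArch64", 0xf3: "RISC-V"}

                              with open(sys.argv[1], "rb") as f:
                                  ident = f.read(16)                      # e_ident
                                  assert ident[:4] == b"\x7fELF", "not an ELF file"
                                  bits = 64 if ident[4] == 2 else 32      # EI_CLASS
                                  endian = "<" if ident[5] == 1 else ">"  # EI_DATA
                                  e_type, e_machine = struct.unpack(endian + "HH", f.read(4))

                              print(f"{bits}-bit ELF, machine = {MACHINES.get(e_machine, hex(e_machine))}")
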
                          1. 4

                            Company: Twitter

                            Position: PhD (grad student or postdoc) intern

                            Location: Remote, or possibly SF/NYC/Seattle/Boulder/Toronto/Singapore/London when safe.

                            Description: Open-ended research internship on applied “systems-y” team.

                            We had two PhD interns this past year. One has an accepted first-author paper at OSDI and another paper under submission to NSDI. The other has work that will probably turn into one or two first-author papers. Both interns were remote and we expect this to be the case for this coming year as well. Our team was roughly half remote before the pandemic and we have a remote friendly workflow.

                            The team works on a wide variety of systems problems (cache, storage, scheduling, distributed tracing, kernel tracing, automated systems parameter tuning, monitoring, machine health, etc.). We (full-time folks on the team) often don’t have the bandwidth to go after the kind of work that turns into papers and write it up, but we have no shortage of problems for you to work on if that’s something you’re interested in and you can work on open-ended problems mostly independently. In some cases, we have ideas for suggested approaches that we think are promising (this was the case with some of the papers mentioned above).

                            Tech stack: the company is primarily a Java/Scala shop, but we have a lot of problems that are lower level than this (e.g., one of our interns last year primarily worked in C and a significant fraction of his work was related to PMEM applications) as well as higher level (some tech stack independent examples that are arguably “above” our JVM code include data analysis, novel visualizations, and simulation).

                            Contact: [my full name]@twitter.com

                            1. 1

                              How has this changed your overall workflow? One of the things I’ve found from moving from Ubuntu/Manjaro to OSX is that the overall flow with Spotlight Search / Alfred is very smooth. When I tried to go back to Linux everything just felt clunkier. Do you have a similar experience (albeit with BSD)? Are there any tricks that you feel that you have which come from wanting the OSX-like feel?

                              1. 6

                                I’m making the transition from a MacBook to a ThinkBook + Linux in a few weeks. I’ll miss the trackpad, but I’ll have a better keyboard, a faster CPU, a faster GPU, more ports, and more expandability and repairability, for half the price of a new MacBook Pro.

                                Over the past several years, I’ve seen macOS become increasingly developer hostile (from my perspective, writing open source, portable software). Meanwhile, Linux keeps getting better. My friend transitioned after years as a Windows and macOS power user, and swears that the PopOS desktop is the best designed and most productive environment he’s ever used. I’m told it has an exquisitely tuned, developer-centric UI with excellent keyboard shortcuts and minimal customization required to be productive. So I’m going to start with PopOS and see if I like it.

                                @helithumper: Most of the innovation in desktop environments is happening in Linux these days, so Linux should be more productive, but the problem is that there are so many choices, and it’s hard to find the DE that best fits your style. Elementary OS is touted as the best distro for people who want a Mac-like experience (I haven’t tried it, but I might if PopOS doesn’t live up to its hype).

                              1. 9

                                  I use Sublime Text and markdown. I love it too much to move to anything else, as markdown is portable (which means I can write my own tools for it, like this Alfred workflow).

                                My wiki open sourced: https://github.com/nikitavoloboev/knowledge

                                Rendered with GitBook: https://wiki.nikitavoloboev.xyz

                                  The GitBook part I’ll probably change soon, as I want to customize the rendered output more. But using GitBook is nice because I don’t have to tinker with tools and can just focus on the content, so I’d recommend it.

                                  This thread, asked 2 years ago, might be of interest to you :)

                                https://lobste.rs/s/ord0rg/does_anyone_else_keep_their_own_knowledge

                                1. 3

                                    Something that isn’t covered (as far as I can tell) in your wiki but I have been wondering for a while (I’ve seen your wiki a few times on this site): What is your workflow for getting things into the wiki, and isn’t it a bit “out of the way” to open up Sublime and run git commits? Do you use other tools to help you get data into the wiki? How do you quickly insert new links into the wiki from Sublime Text, do you just search the entire thing for relevant keywords? Also, how do you get the notes to link to each other to create the graph you have on the front page?

                                  Overall, it’s a really intensely thorough wiki, but I’ve been wondering what goes into maintaining it?

                                  1. 4

                                      What is your workflow for getting things into the wiki, and isn’t it a bit “out of the way” to open up Sublime and run git commits?

                                      Opening a file takes 2 seconds at most. o+a activates the “search wiki files” Alfred workflow; then I type a few characters of the name and hit return, and Sublime Text opens instantly with vim mode. I do a search & add the thing. If it’s a link, there is a macro that takes the current Safari URL & constructs a link for me, so again 2 seconds max.

                                      Running git commits is 1 second. Press backtick+v and https://github.com/nikitavoloboev/gitupdate runs and everything is committed.
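
                                      (For the curious, the effect is roughly the sketch below. This is a hypothetical stand-in, not gitupdate itself, which is linked above and may well do things differently.)

                                          import subprocess
                                          from pathlib import Path

                                          # Hypothetical stand-in for a one-keystroke "commit everything" helper:
                                          # stage all changes, use the changed file names as the commit message, push.
                                          def commit_everything(repo: str = ".") -> None:
                                              def git(*args: str) -> str:
                                                  done = subprocess.run(["git", "-C", repo, *args],
                                                                        capture_output=True, text=True, check=True)
                                                  return done.stdout
                                              git("add", "-A")
                                              changed = git("diff", "--cached", "--name-only").split()
                                              if not changed:
                                                  return  # nothing to commit
                                              git("commit", "-m", ", ".join(Path(p).name for p in changed))
                                              git("push")

                                          commit_everything()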

                                    Do you use other tools to help you get data into the wiki?

                                    Nope. Everything was added in the manner outlined above.

                                      How do you quickly insert new links into the wiki from Sublime Text, do you just search the entire thing for relevant keywords?

                                      Here is a screenshot of the KM macro that inserts a link as a dashed point. Pressing G in vim mode goes to the bottom of the file, where the # Links are.

                                      how do you get the notes to link to each other to create the graph you have on the front page?

                                      The graph is made with Obsidian. As for interlinking notes with other notes, I use the manage notes workflow. That workflow also includes the search for files outlined above.

                                    I’ve been wondering what goes into maintaining it?

                                      Lots of time. I am slowly building tools to extract insights from the wiki, notes, links etc. I plan to do a little article/course on maintaining wikis with a similar setup.
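
                                      As a taste of the kind of tool I mean, here is a simplified sketch (not my actual code) that walks the wiki’s markdown files and tallies which targets are linked to most often:

                                          import re
                                          from collections import Counter
                                          from pathlib import Path

                                          # Simplified sketch: count outgoing links per target across all markdown
                                          # files, covering both [text](target) links and [[wikilinks]].
                                          LINK = re.compile(r"\[[^\]]*\]\(([^)]+)\)|\[\[([^\]]+)\]\]")

                                          def link_counts(root: str = ".") -> Counter:
                                              counts = Counter()
                                              for path in Path(root).rglob("*.md"):
                                                  text = path.read_text(encoding="utf-8", errors="ignore")
                                                  for md_target, wiki_target in LINK.findall(text):
                                                      counts[md_target or wiki_target] += 1
                                              return counts

                                          for target, n in link_counts().most_common(20):
                                              print(f"{n:4d}  {target}")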

                                      p.s. I love how extending the wiki doesn’t mean writing everything into it myself. I can just link instead, like in this commit I made just now.

                                1. 2

                                  I find the use of Google Drive as a commenting/viewing platform really unique. It makes a lot of sense given the large number of editing features, etc. built into Google Drive.

                                  1. 1

                                    This will be really useful for automated visual web scanning. There is already eyewitness, but this looks to be much more minimal.

                                    1. 6

                                      This is a really neat write-up!

                                      I’ll admit I’ve been rather avoiding Kubernetes and am just barely beginning to get cozy with things like docker-compose and the like, and this article is making me think I should reconsider that choice!

                                      1. 6

                                        I recommend looking into hashicorp’s nomad

                                        1. 1

                                          I adore Hashicorp software, but it would depend upon the goal of working with k8s, wouldn’t it?

                                          If the goal is to deploy a technology as a learning experience because it’s becoming an industry standard, as awesome as I’m sure nomad is, it’s not going to fit the bill I’d think.

                                          I’m still blown away all these years later by Terraform and Consul :) Those tools are just amazing. True infrastructure idempotence, the goal that so many systems have just given up on entirely.

                                          1. 4

                                            To be clear: if your goal is to learn k8s–which is fine; it’s a very marketable skill right now, and I’m 100% empathetic with wanting to learn it for that reason–then I think it makes sense. But for personal use, Nomad’s dramatically simpler architecture and clean integration with other HashiCorp projects is really hard to beat. I honestly even use it as a single-node instance on most of my boxes simply because it gives me a cross-platform cron/service worker that works identically on macOS, Windows, and Linux, so I don’t need to keep track of systemd v. launchd v. Services Manager.

                                        2. 4

                                          Don’t, just don’t… I am trying to avoid k8s in my homelab to reduce the overhead, since I don’t have a cluster or need any feature of k8s that’s missing from a simple docker(-compose) setup.

                                          1. 5

                                            It depends on what you call your “lab”. A couple of years ago I realized that there’s only one way I master things: practice. If I don’t run something, I forget 90% of it in 6 months.

                                            My take on the homelab is to use as much overhead as possible. I run a bunch of static sites, an S3-like server, dynamic DNS and not much else, yet I use more stuff/overhead to run it than obviously necessary.

                                            The thing is, I’ve reached a point where more often than not, I’m using the knowledge from the lab at $WORK, even recycling some stuff such as Ansible roles or Kubernetes manifests.

                                            1. 6

                                              I believe this to be the differentiation between a homelab and “selfhosted services”. The purpose of a homelab is to learn how to do things. The purpose of selfhosted services is to host useful services outside of learning time. That is not to say that the two cannot intersect, but a homelab, in my opinion, is primarily for learning and breaking things when it doesn’t affect anything.

                                              1. 2

                                                Yup I think this is the key.

                                                I’m already using docker-compose for my actual self hosted services because it’s simple and easy for me to reason about, back up the configuration of, etc etc.

                                              2. 3

                                                Agreed, it certainly comes with a rather large overhead. I use Kubernetes at work and rather enjoy it, so it’s great having a lab environment to try things out in and learn; that’s why I bother hosting my own cluster.

                                              3. 3

                                                I started with docker-compose as I began to learn containerized tech, but transitioned to Kubernetes because the company wanted to use it for prod infrastructure. I actually found that K8s is more consistent and easier to reason about. There are a lot of concepts to learn, but they hang together.

                                                Except PersistentVolumeClaims.

                                                1. 2

                                                  Thank you for reading. I’m glad you enjoyed it :)

                                                  I’ll say, picking up Kubernetes at home is a good choice if it’s something you want to learn. It’s really useful to have a lab environment to try things out and build your knowledge with projects.