The Apple ARM chips are really great in terms of performance per watt, but Apple, in my opinion, really dropped the ball in terms of software quality. I had been in the Apple ecosystem for years until I dropped it in 2012 when it became apparent that macOS was on a downward spiral from the excellence I had become used to.
The other operating systems and desktop environments, Windows and Linux, can still learn quite a bit from macOS, but the latter suffers from UI/UX inconsistencies and is unnecessarily locked down. While ten years ago you could be relatively free with any software of your choice (especially OSS) and rarely see breakage between system upgrades, you now have to fight all kinds of gatekeepers, and the system usually wrecks your whole setup with each upgrade.
This might be the main reason why fewer and fewer professionals choose Apple: It becomes less and less justified to pay the Apple tax the more you use your system for actual work.
I dropped it in 2012 when it became apparent that macOS was on a downward spiral from the excellence I had become used to
2012 was eleven years ago, and eight years prior to the introduction of the first MacOS devices running ARM. MacOS software quality has gone up and down over the years, but I don’t think “it sucked over a decade ago on a completely different architecture” is a very useful data point for assessing the quality of MacOS on an M2 machine today.
I have been using macOS as my primary desktop since 2007 (before that, Linux; I also had a two-year part-time Linux excursion around 2018 or so). I would agree that quality suffered after the terrible 2016 MacBooks until about 2019/2020 or so, but the last few releases have been great for me. (And it’s not like early macOS 10.5 or 10.6 releases didn’t have horrible bugs.)
Apple Silicon has been a huge step forward: my machines are lightning fast and last a long time on battery. I also love the work that they are doing on system security, like sealed volumes, memory protection through the Secure Enclave, etc.
With regards to the article: Apple Silicon provides great performance per watt compared to most GPUs. But for some reason people overhyped Apple Silicon GPUs and believe that Apple waved a magic wand and is suddenly competitive with NVIDIA performance-wise. The compute power of the M1 Ultra GPU is 21 TFLOPS; the tensor cores on an RTX 2060 Super do 57 TFLOPS, and that’s a budget card from years ago. If you want to do machine learning, get a Linux machine and put an NVIDIA card in it. GPU training on Apple Silicon is currently only useful for small test runs (if the training process doesn’t burn in a fire due to the bugs that are still in the PyTorch/TensorFlow backends).
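For what it’s worth, this split setup is easy to encode in PyTorch. A minimal device-selection sketch (the `pick_device` helper is my own invention, not a PyTorch API): prefer CUDA on the Linux box, fall back to the MPS backend on Apple Silicon for small test runs, otherwise CPU.

```python
import torch

def pick_device() -> torch.device:
    """Prefer CUDA (NVIDIA), then Apple's Metal backend (MPS), then CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    # torch.backends.mps exists only on recent PyTorch builds, hence getattr.
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
# A small smoke test on whatever accelerator is present.
x = torch.randn(256, 256, device=device)
y = (x @ x).sum()
print(device, y.item())
```

The same script then runs unchanged on the MacBook (hitting MPS) and on the headless NVIDIA box (hitting CUDA).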
I use a MacBook as my desktop, because I get all the nice apps and an extremely predictable environment, and a headless Linux machine with a beefy NVIDIA GPU for training models.
This might be the main reason why fewer and fewer professionals choose Apple: It becomes less and less justified to pay the Apple tax the more you use your system for actual work.
Data?
Yikes, a flat, non-split, non-tented keyboard: a way to kill your wrists and shoulders long-term. Also, who looks down at their keyboard? I can only see some fringe cases where you’d use it for specific applications, but then again, looking down is not great for your neck either.
What a cool story painting VB in such a biased way…
I don’t think we can talk about VB without mentioning a certain Greek place that was the seat of a great oracle.
I haven’t spoken to Anders about this, but given that he was the author of both the home of the oracle and of C# (and TypeScript - totally in awe of someone who can create three popular languages), I wonder if the shift to VB.NET and killing VB6 might have been somewhat personal…
a certain Greek place that was the seat of a great oracle
Would anyone be willing to make explicit this intentionally oblique reference?
What Greek place? What great oracle?
Are we talking about Turbo Pascal / Delphi?
Yeah, Delphi. Though I don’t think it’s biased not to talk about Delphi in a story about Visual Basic. The first version of Delphi was released in 1995, four years after Visual Basic 1.0. Interface Builder is more interesting when it comes to prior work:
The first rule of Delphi club is…
Okay I may be taking this too far. I just wanted to say that both VB and Delphi existed at the same time, competed in the same space and influenced each other. Also Delphi did a ton of things better than VB. Not mentioning Delphi feels like writing about WWII and not mentioning one of the major players in the conflict.
Honestly, it’s sad to see OCaml being replaced by Rust (or being discarded in favor of Rust, like it happens for other projects when they’re choosing a stack), mainly due to popularity.
I understand that Rust has its uses (and low-level systems programming is definitely one of them), but the same area is also covered very well by OCaml (one need only look at MirageOS for proof), so it really ends up being mostly about popularity, in some cases.
I wonder if OCaml will ever be able to overcome this problem.
I think it’s the opposite: Rust is wildly more popular than OCaml because it happens to be just better across many axes related to practical programming. Relative to OCaml, it has:
It really is a shame that we don’t have an ML implementation which has “all the nice things”. I feel like a cluster of applications that is written today in Go/Java/C# and, lately, Rust would’ve been simpler in some vague “modern ML”.
Yeah, and a lot of these things are not really about the actual language at all, but rather the ecosystem around it. (The language has influence on things like stdlib and OS threads of course, but still.) My heart will always have a soft spot for OCaml, but the ecosystem is a bit infuriating.
so it really ends up being mostly about popularity, in some cases.
But that is a fair reason, right? It’s far easier to attract new contributors if a project is in Rust than in OCaml. There simply are many more Rust programmers (I don’t think this needs a citation anymore). Running a large project is all about making compromises, and sometimes that means giving up something you may like more for something that is better in practice. Besides that, the choice could be worse: Rust the compiler has a history in OCaml, and Rust the language is also very clearly influenced by ML/OCaml.
OCaml being replaced by Rust…mainly due to popularity
Why lie about this when it takes reading the article for two minutes to know it’s not true?
Mac Apps:
Unix, etc.
I am saddened to hear that Gandi was bought, apparently by “Montefiore Investment”, about which I can’t seem to find much on the Internet, especially since this holding group appears to invest all over the place (according to their own page) and not focus on a particular market… (I just hope they don’t go the LastPass way after their own acquisition…)
Regarding registrars, I’ll have to take a look at CloudFlare (since I already use them for the DNS), or perhaps NameCheap (one of my colleagues uses it)… But, as the OP said, I’ll definitely stay away from GoDaddy and the like…
Or perhaps I’ll try to find an EU-based registrar that focuses on security and privacy.
P.S.: I always asked myself why do we need to pay yearly for a particular domain registration? Why can’t we buy it once and be done with it? In the end it’s just a record in a database somewhere which, although massive, I think can easily be supported by the international community. If for nothing else, just think of the amount of damage due to expired domains being re-bought for malware or other such purposes…
Apparently Montefiore has merged Gandi with Total Webhosting Solutions:
https://your.online/press-release/
TWS apparently has a history of buying web hosting companies and cranking up the prices.
Cloudflare lacks support for at least some regional TLDs. After they started providing registrar services, I contacted them and asked about plans to introduce the .pl TLD. I was told something along the lines of “soon” in 2018, and the .pl TLD still cannot be registered with them.
I think that if you buy them directly from the RoTLD registry you get them for life / a very long time.
The situation with RoTLD changed about 3 or 4 years ago. Now they want a yearly payment… (And for the same money, I’ve moved to Gandi.)
I purchased my domain in 2006, and at that time I paid perhaps ~100 EUR (I don’t remember exactly) and they promised that it would cover 99 years. :)
P.S.: I always asked myself why do we need to pay yearly for a particular domain registration?
Maintenance isn’t free. Although, I’m fairly sure the prices of some domains go way beyond maintenance costs …
Why do we need registrars to keep paying staff and electricity for the domains we have registered? Might as well pay once and get the single write operation to the domain database in exchange for that money. After that it’s best effort. If the money runs out and domains stop working, well, at least you only paid once.
For example, the registrars could, on a yearly basis, let the actual DNS hosting services pay (if they are large enough) or let the owner pay; otherwise the domain is “delisted”, but without touching the ownership. This way, if the owner can’t pay for it anymore (for a particular time frame), he can always just come back and re-list it at a later time. (As said, in such a “delisted” state it’s just another record in a database somewhere; I’m sure the registrar could come up with a realistic figure for an up-front payment that would cover the storage costs for that record, say, for the next 100 years…)
On the other hand, it’s not like nobody registers completely new domain names, thus covering the day-to-day costs…
Or, as hinted initially, perhaps the ISPs can support / sponsor some global organization to manage all this. (The Internet Archive is a good candidate.)
BTW, how do we pay for the DNS root servers? Why can’t we apply the same model here?
I used VS Code for a few months recently and I was quite happy, but the vim support gradually disintegrated, with more and more functionality breaking (not sure why), and I switched back to Doom when u would only do one step of undo.
I do not understand why “Don’t spy on people without their consent” is such a hard thing for programmers to accept.
On the other hand, I don’t understand how collecting anonymous usage data that is trivial to opt out of is at all equivalent to spying, or how it is harmful to anyone. I was hopeful when reading the original post that having an example of a well-designed anonymous telemetry system would encourage other people to adopt that approach, but given that it wasn’t treated any differently than non-anonymous telemetry by the community, I don’t know why anyone would go through the effort.
There is no such thing as “anonymous data” when it’s paired with an IP address.
Even when it’s trivial to opt out, it’s usually extremely difficult to never use the software in a context where you haven’t set the opt-out flag or whatever. Opting out for one operation might be trivial, remaining opted out continuously across decades without messing up once is non-trivial.
Just. Don’t. Spy. On. People. Without. Consent.
I agree an IP address is non-anonymous, which is why this system doesn’t collect it. Most privacy laws also draw the line at the collection of PII as the point where consent is required, and I think that’s a reasonable place to draw the line.
Most software and websites I use have far more invasive telemetry than this proposal, and I think my net privacy would be higher taking an approach like Go proposed rather than the status quo, which is why I was excited about it being a positive example of responsible telemetry. Good for you if you can go decades without encountering any of the existing telemetry that’s out there.
How does the telemetry get sent to Google’s servers in a way which doesn’t involve giving Google the IP address?
I agree that website telemetry is also an issue, but this discussion is about Go. There is no good example of responsibly spying on users without their consent.
You do have to trust Google won’t retain the IP addresses, but the Go module cache also involves exposing IP addresses to Google. I think the on by default but turn it off if you don’t trust Google is reasonable. I also trust that the pre-built binaries don’t contain backdoors or other bad code, but if you don’t want to trust that you can always compile the binaries from source.
Anyways, I’m not trying to change your mind just trying to explain why some people don’t consider anonymous telemetry that’s opt-out to be non-consensual spying.
guidance of both GDPR and CCPA is that an IP address is not considered PII until it is actively correlated / connected to an individual.
None of the counters that are proposed to be collected contain your name, email, phone number or anything else that could personally identify you.
IANAL, but collecting data associated with an IP address (or some other unique identifier) definitely requires consent under the GDPR.
An IP address or UUID is considered pseudonymous data:
‘pseudonymisation’ means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person;
https://gdpr-info.eu/art-4-gdpr/
Pseudonymous data is subject to the GDPR:
What differs pseudonymisation from anonymisation is that the latter consists of removing personal identifiers, aggregating data, or processing this data in a way that it can no longer be related to an identified or identifiable individual. Unlike anonymised data, pseudonymised data qualifies as personal data under the General Data Protection Regulation (GDPR). Therefore, the distinction between these two concepts should be preserved.
That is some really creative copy pasting you did there. I am also not a lawyer, but I don’t think it is super relevant for this proposal, since they follow the first principle of data collection: “do not collect personal data”.
Imagine the discussion goes like this:
You: “Hello Google, I am a Go user and according to the GDPR I would like you to send me a dump of my personal data that was sent via the Go tooling telemetry. To which I OPTED-IN when it was released.”
Google: “That data is anonymized. It is not connected to any personal data. We have the data you submitted but we cannot connect it to individuals.”
You: “Here is my IP address, will that help?”
Google: “No, we do not process or store the IP address for this data. (But thank you! now we know your IP! Just kidding!)”
You: “Here is the UUID that was generated for my data, will that help?”
Google: “Unfortunately we cannot verify that that is actually your UUID for this telemetry, and thus we don’t know whether you are requesting data for yourself.”
..
That is some really creative copy pasting you did there.
You can find all this in the GDPR. At any rate, I wasn’t criticizing The Go proposal, only the statement:
guidance of both GDPR and CCPA is that an IP address is not considered PII until it is actively correlated / connected to an individual.
But I see now that this is a bit ambiguous. I read it as “analytics associated with IP addresses is not PII”, which is not really relevant, since that is pseudonymisation according to the GDPR, and pseudonymous data is subject to the GDPR. But I think what you meant (which becomes clear from your example) was that in this case there is no issue, because even though Google may temporarily have your IP address (they have to if you contact their servers), they are not storing the IP address with the analytics. I completely agree that the analytics data is then not subject to the GDPR. (Still IANAL.)
Found this musing on the diverse range of upgrade pressures easy to read.
And while not directly related to the main points of the post, I resonated with this observation:
Engineers shouldn’t be defining such requirements [for app features that require upgrades] anyway. If we let that happen, we’d only have command line interfaces for Linux desktops.
… because at $DAY_JOB, as an engineer writing their own requirements, I totally (and happily!) write only for [cross-platform] command-line interfaces — you caught me!
I know, right? But having just gotten my first Mac a month ago for work, after decades of Linux (and especially the last decade), I would say some Linux desktops are almost more consistent than the Macs are. I know I am biased, but I just can’t get the workflows that I want; I have to use the touchpad.
On the other hand, my wife has also started a new job and has gotten a Windows laptop (a ThinkPad P14, so at least performance is okay). She also feels like it’s a step down in some pretty essential things.
Finally, in my experience, engineers end up at least refining the requirements, if not defining them. Otherwise product people (or car salesmen) would still be selling us faster horses.
I have to use the touchpad.
This is one of the annoying defaults in macOS, but if you toggle it the experience is great. Under Accessibility in System Preferences, you can turn on keyboard navigation, and then you can do pretty much everything with no pointing device. In particular, as I recall (and it’s 15 years since I enabled this and forgot about it, so it might have changed), you couldn’t navigate dialog boxes with the keyboard without it. With it, you can: space presses the current button (arrows to navigate), enter presses the default button. The keyboard navigation is always set up so that enter is forward and space is abort by default in any dialog.
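For anyone who prefers the terminal, the same toggle can, to my knowledge, be flipped with `defaults`; treat this as a sketch and verify the key against your macOS version, since it may have moved between releases:

```shell
# Full keyboard access ("keyboard navigation"):
#   3 = all controls, 0 = text boxes and lists only.
# Takes effect for newly launched apps; log out/in to apply everywhere.
defaults write NSGlobalDomain AppleKeyboardUIMode -int 3

# Read the value back to confirm it was written.
defaults read NSGlobalDomain AppleKeyboardUIMode
```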
Also Shortcat:
A keyboard-driven command palette for pretty much all macOS apps.
Storing your Dropbox folder on an external drive is no longer supported by macOS.
As someone who has used Windows, macOS, Linux, and FreeBSD extensively as professional desktop OSs I still don’t understand the love so many hackers have for Apple kit.
Because not everyone needs the same features as you. I like that MacOS behaves close enough to a Linux shell, but with a quality GUI. I particularly like emacs bindings in all GUI text fields, and the ctrl/cmd key separation that makes terminals so much nicer to use. I like the out-of-the-box working drivers, without having to consult Wikis about which brands have working Linux drivers. I like the hardware, which is best in class by all metrics that matter to me, especially with Apple Silicon. I like the iPhone integration, because I use my iPhone a lot. I like AppleScript, and never bothered to learn AutoHotKey. I like that my MacBook wakes up from sleep before I can even see the display. I like the massive trackpad, which gives me plenty of space to move around the mouse. I like Apple Music and Apple Photos and Apple TV, which work flawlessly, and stream to my sound system running shairport-sync. I like Dash for docs, which has an okay-ish Linux port but definitely not the first class experience you get on MacOS. I like working gestures and consistent hotkeys, tightly controlled by Apple’s app design guidelines. I like that I can configure caps lock -> escape in the native keyboard settings, without remembering the X command or figuring out Wayland or installing some Windows thing that deeply penetrates my kernel.
I use Linux for servers. I have a Windows gaming PC that hangs on restart or shut down indefinitely until you cut power, and I don’t care enough to fix it because it technically still functions as a gaming PC. But for everything else I use MacOS, and I flat out refuse to do anything else.
As someone who ran various Linux distros as my main desktop OS for many years, I understand exactly why so many developers use Apple products: the quality of life improvement is staggeringly huge.
And to be honest, the longer I work as a programmer the more I find myself not caring about this stuff. Apple has demonstrated, in my opinion, pretty good judgment for what really matters and what’s an ignorable edge case, and for walking back when they make a mistake (like fixing the MBP keyboards and bringing back some of the removed ports).
You can still boot Linux or FreeBSD or whatever you want and spend your life customizing everything down to the tiniest detail. I don’t want to do that anymore, and Apple is a vendor which caters to my use case.
I am a macOS desktop user and I like this change. Sure, it comes with more limitations, but I think it is a large improvement over having companies like Dropbox and Microsoft (Onedrive) running code in kernel-land to support on-demand access.
That said, I use Maestral; the official Dropbox client has become too bloated, shipping a browser engine, etc.
I don’t follow why the - good! - move to eliminate the need for kernel extensions necessitates the deprecation of external drives though.
I’m not a Dropbox user so I’ve never bothered to analyse how their kext worked. But based on my own development experience in this area, I assume it probably used the kauth kernel API (or perhaps the never-officially-public-in-the-first-place MAC framework API) to hook file accesses before they happened, download file contents in the background, then allow the file operation to proceed. I expect OneDrive and Dropbox got special permission to use those APIs for longer than the rest of us.
As I understand it, Apple’s issue with such APIs is twofold:
These aren’t unreasonable concerns, although the fact that they’re still writing large amounts of kernel code (read: vulnerabilities, panic bugs) themselves somewhat weakens their argument.
So far, they’ve been deprecating (and shortly after, hard-disabling) kernel APIs and replacing them with user-space based APIs which only implement a small subset of what’s possible with the kernel API. To an extent, that’s to be expected. Unrestricted kernel code is always going to be more powerful than a user space API. However, one gets the impression the kernel API deprecations happen at a faster pace than the user space replacements have time to mature for.
In this specific case, NSFileProvider has a long and chequered history. Kauth was one of the very first kernel APIs Apple deprecated, back in macOS 10.15. It became entirely unavailable for us plebs on macOS 11, the very next major release. Kauth was never designed to be a virtual file system API, but rather an authorisation API: kexts could determine whether a process should be allowed to perform certain actions, mainly file operations. This happened in the form of callback functions into the kext, in the kernel context of the thread of the user process performing the operation.
Unfortunately it wasn’t very good at being an authorisation system, as it was (a) not very granular and (b) leaving a few gaping holes because certain accesses simply didn’t trigger a kauth callback. (Many years ago, around the 10.7-10.9 days, I was hired to work on some security software that transparently spawned sandboxed micro VMs for opening potentially-suspect files, and denied access to such files to regular host processes; for this, we actually tried to use kauth for its intended purpose, but it just wasn’t a very well thought-out API. I don’t think any of Apple’s own software uses it, which really is all you need to know - all of that, sandboxing, AMFI (code signing entitlements), file quarantine, etc. uses the MAC framework, which we eventually ended up using too, although the Mac version of the product was eventually discontinued.)
Kauth also isn’t a good virtual file system API (lazily providing file content on access atop the regular file system), but it was the only public API that could be (ab)used for this purpose. As long as the callback into the kext didn’t return, the user process did not make progress. During this time, the kext (or more commonly a helper process in user space) could do other things, such as filling the placeholder file with its true content, thus implementing a virtual file system. The vfs kernel API, on the other hand, or at least its publicly exported subset, is only suitable for implementing pure “classic” file systems atop block devices or network-like mounts.
NSFileProvider was around for a few years on iOS before macOS and was used for the Usual File Cloud Suspects. Reports of problems with Google Drive or MS OneDrive on iOS continue to this day. With the 10.15 beta SDK, at the same time as deprecating kauth, everyone was supposed to switch over to EndpointSecurity or NSFileProvider on macOS too. NSFileProvider dropped out of the public release of macOS 10.15 because it was so shoddy, though. Apple still went ahead and disabled kauth-based kexts on macOS 11. (EndpointSecurity was also not exactly a smooth transition: you have to ask Apple for special code signing entitlements to use the framework, and they basically ignored a load of developers who did apply for them. Some persevered and eventually got access to the entitlement after more than a year. I assume many just didn’t bother. I assume this is Apple’s idea of driving innovation on their platforms.)
Anyway, NSFileProvider did eventually ship on macOS too (in a slightly different form than during the 10.15 betas), but it works very differently than kauth did. It is an approximation of an actual virtual file system API. Because it originally came from iOS, where the UNIXy file system is not user-visible, it doesn’t really match the way power users use the file system on macOS: all of its “mount points” are squirrelled away somewhere in a hidden directory. At least back in the 10.15 betas it had massive performance problems. (Around the 10.14 timeframe I was hired to help out with a Mac port of VFSforGit, which originally, and successfully, used kauth. With that API being deprecated, we investigated using NSFileProvider, but aside from the mount point location issue, it couldn’t get anywhere near the performance the kauth API delivered, which VFSforGit’s intended purpose required: lazily cloning git repos with hundreds of thousands of files. The Mac port of VFSforGit was subsequently cancelled, as there was no reasonable forward-looking API with which to implement it.)
So to come back to your point: these limitations aren’t in any way a technical necessity. Apple’s culture of how they build their platforms has become a very two-tier affair: Apple’s internal developers get the shiny high performance, powerful APIs. 3rd party developers get access to some afterthought bolt-on chicken feed that’s not been dogfooded and that you’re somehow supposed to plan and implement a product around during a 3-4 month beta phase, the first 1-2 months of which the only window in which you stand any sort of slim chance of getting huge problems with these APIs fixed. Even tech behemoths like Microsoft don’t seem to be able to influence public APIs much via Apple’s Developer Relations.
At least on the file system front, an improvement might be on the horizon. As of macOS 13, Apple has implemented some file systems (FAT, ExFAT and NTFS, I think) in user space, via a new user-space file system mechanism. That mechanism is not a public API at this time. Perhaps one day it will be. If it is, the questions will of course be whether
(The vfs subsystem could be used for implementing a virtual file system if you had access to some private APIs - indeed, macOS contains a union file system which is used in the recovery environment/OS installer - so there’s no reason Apple couldn’t export features for implementing a virtual file system to user space, even if they don’t do so in the current kernel vfs API.)
Part of that may also be the hardware, not the software.
My datapoint: I’ve never really liked macOS, and tried to upgrade away from a MacBook to a “PC” laptop (to run KDE on Linux) two years ago. But after some research, I concluded that - I still can’t believe I’m saying this - the M1 MacBook Air had the best value for money. All “PC” laptops at the same price are inferior in terms of both performance and battery life, and usually build quality too (but that’s somewhat subjective).
I believe the hardware situation is largely the same today, and will remain the same before “PC” laptops are able to move to ARM.
macOS itself is… tolerable. It has exactly one clear advantage over Linux desktop environments, which is that it has working fonts and HiDPI everywhere - you may think these are just niceties, but they are quite important for me as a Chinese speaker, as Chinese on a pre-HiDPI screen is either ugly or entirely unreadable. My pet peeve is that the dock doesn’t allow you to easily switch between windows [1], but I fixed that with Contexts. There are more solutions today than 2 years ago.
[1] macOS’s Dock only switches between apps, so if you have multiple windows of the same app you have to click multiple times. It also shows all docked apps, so you have to carefully find open apps among them. I know there’s Expose, but dancing with the Trackpad to just switch to a window gets old really fast.
[macOS] has exactly one clear advantage over Linux desktop environments, which is that it has working fonts and HiDPI everywhere
Exactly one? I count a bunch, including (but not limited to) better power management, better support for external displays, better support for Bluetooth accessories, better and more user-friendly wifi/network setup… is your experience with Linux better in these areas?
My pet peeve is that the dock doesn’t allow you to easily switch between windows
Command+~ switches between windows of the active application.
better power management
This is one area where I’d concede macOS is better for most people, but not for me. I’m familiar enough with how to configure power management on Linux, and it offers many more options (sometimes depending on driver support). Mac does have good power management out of the box, but it requires third-party tools for what I consider basic functions, like limiting the maximum charge.
The M1 MBA I have now has superior battery life but that comes from the hardware.
better support for external displays
I’ve not had issues with support for external monitors using KDE on my work laptop.
The MBA supports exactly one external display, and Apple removed font anti-aliasing, so I have to live with super big fonts on external displays. I know the Apple solution is to buy a more expensive laptop and a more expensive monitor, so it’s my problem.
better support for Bluetooth accessories
Bluetooth seems to suck the same everywhere; I haven’t noticed any difference on Mac: ages to connect, random dropping of inputs. Maybe it works better with Apple’s accessories, of which I don’t have any, so it’s probably also my problem.
better and more user-friendly wifi/network setup
KDE’s wifi and network management is just as intuitive. GNOME’s NetworkManager GUI used to suck, but even that has gotten better these days.
Command+~ switches between windows of the active application.
I know, but
I’ve used a Mac for 6 years as my personal laptop and have been using Linux on my work laptop.
Back then I would agree that macOS (still OS X then) was much nicer than any DE on Linux. But Linux DEs have caught up (I’ve mainly used KDE, but even GNOME is decent today), while to an end user like me, all macOS seems to have done is (1) look more like iOS (nice in some cases, terrible in others) and (2) get really buggy every few releases and return to an acceptable level over the next few versions. I only chose to stay on a Mac because of the hardware; the OS has lost its appeal to me except for font rendering and HiDPI.
better support for Bluetooth accessories
A bluetooth headset can be connected in two modes (or more): one is A2DP, with high-quality stereo audio but no microphone channel; the other is headset mode, which has low-quality audio but a microphone channel. On macOS the mode is switched automatically whenever I join or leave a meeting; on Linux this was always a manual task that most of the time didn’t even work, e.g. because the headset was stuck in one of the modes. I can’t remember having a single issue with a bluetooth headset on macOS, but I can remember many hours of debugging pulseaudio or pipewire just to get some sound over bluetooth.
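For reference, the manual dance on Linux usually boils down to switching the card profile by hand with `pactl`; a sketch, where the card address and profile names are placeholders (they vary by headset, and PulseAudio uses underscores where PipeWire uses hyphens):

```shell
# Find the Bluetooth card name (something like bluez_card.AA_BB_CC_DD_EE_FF).
pactl list cards short

# High-quality stereo playback, no microphone
# (a2dp_sink under PulseAudio, a2dp-sink under PipeWire).
pactl set-card-profile bluez_card.AA_BB_CC_DD_EE_FF a2dp_sink

# Low-quality audio with a working microphone.
pactl set-card-profile bluez_card.AA_BB_CC_DD_EE_FF headset_head_unit
```

If the headset gets stuck in one mode, toggling between these two profiles (or disconnecting and re-pairing) is typically the workaround.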
My pet peeve is that the dock doesn’t allow you to easily switch between windows
It sounds like you found a third-party app you like, but for anyone else who’s annoyed by this, you may find this keyboard shortcut helpful: when you’re in the Command-Tab app switcher, you can type Command-Down Arrow to see the individual windows of the selected app. Then you can use the Left and Right Arrow keys to select a window, and press Return to switch to that window.
This is a little fiddly mechanically, so here’s a more detailed explanation:
(On my U.S. keyboard, the Backtick key is directly above Tab. I’m not sure how and whether these shortcuts are different on different keyboard layouts.)
This seems ridiculous when I write it all out like this, but once you get it in your muscle memory it’s pretty quick, and it definitely feels faster than moving your hand to your mouse or trackpad to switch windows. (Who knows whether it’s actually faster.)
Thanks, I’ve tried this before, but my issue is that this process involves a long hand-eye loop (look at something, decide what to do, do it). On the other hand, if I have a list of open windows, there is exactly one loop: find my window, move the mouse, and click.
I hope people whose brain doesn’t work like mine find this useful though :)
It’s kind of nutty, but my fix for window switching has been to set up a window switcher in Hammerspoon: https://gist.github.com/jyc/fdf5962977943ccc69e44f8ddc00a168
I press alt-tab to get a list of windows by name, and can switch to window #n in the list using cmd-n. Looks like this: https://jyc-static.com/9526b5866bb195e636061ffd625b4be4093a929115c2a0b6ed3125eebe00ef20
Thanks for posting this! I have an old macbook that I never really used OSX on because I hated the window management. I gave it another serious try after seeing your post and I’m finding it much easier this time around.
I ended up using https://alt-tab-macos.netlify.app over your alt tab, but I am using Hammerspoon for other stuff. In particular hs.application.launchOrFocus is pretty much the win+1 etc hotkeys on Windows.
Once you factor in the longevity of Mac laptops vs. PCs, the value proposition becomes even more striking. I think this is particularly true at the pro level.
I use both. But to be honest, on the Linux side I use KDE plasma and disable everything and use a thin taskbar at the top and drop all the other stuff out of it and use mostly the same tools I use on macOS (neovim, IntelliJ, Firefox, etc…).
Which is two extremes. I’m willing to use Linux so stripped down in terms of GUI that I don’t have to deal with most GUIs at all, other than ones that are consistent because they don’t use the OS GUI framework on either Linux or macOS.
There’s no in-between. I don’t like the Ubuntu desktop, or GNOME, or any of the other systems. On macOS I am happy to use the GUIs: they’re consistent and, for the most part, just work. And I’ve been using Linux since they started mailing it out on discs.
I can’t tell you exactly why I’m happy to use macOS GUIs but not Linux based GUIs, but there is something clearly not right (specifically for me to be clear; everyone’s different) that causes me to tend to shun Linux GUIs altogether.
If I cared about hacking around with the OS (at any level up to the desktop) or the hardware, I wouldn’t do it on Apple kit, but I also wouldn’t do it on what I use every day to enable me to get stuff done, so I’d still have the Apple kit for that.
I want to look a bit more deeply into CoreOS. Since I last used Fedora Silverblue, they have added Docker/OCI as a transport method for OSTree, which seems quite interesting.
Besides that, family visits, and maybe trying to make some progress on buying a kitchen (most kitchen companies here are quite scummy).
Messy desk and desktop is just default KDE - my main interest lies in the terminal emulator anyway. The only mildly interesting thing is an e-ink monitor and the monochrome setup for it.
Thanks for the write-up of the monitor! I am still on the fence about whether to spend the money to try it, so reading about the experiences of others is helpful.
That e-ink monitor looks really cool!
I always wonder how people can work at a desk that is not height-adjustable. You tune your chair following ergonomic guidelines, and then the table/keyboard is too high/low to preserve a good 90-degree angle, and then what?
(Admittedly, I have worked with a non-adjustable desk and even chair when I was younger, and I have come to regret it.)
I adjust the chair and my elbows lie on its armrests, yeah. I’m of generic height, so it has somehow still worked for me with generic desks, though I’m tempted to get an adjustable one every now and then.
What keyboard is that in the e-ink monitor image? I’ve been looking for a low profile (choc or similar) split keyboard for a while, and haven’t run across anything that I wouldn’t have had to self-assemble (not really interested in doing that).
Mistel Barocco MD650L - an old model full of mini- and micro-USB connectors :) I think they’ve got newer ones with USB-C by now.
I’m also using an e-ink monitor. For those of you interested, it can be seen in action. The videos are boring, but in part 3 you can see how the display works.
I’m currently alternating between a normal chair and a kneeling chair. I’ve been having a lot of lower back pain lately, and the kneeling chair really helps with that, but it makes my knees and tail bone hurt. Right now I just switch between standing, kneeling and sitting. I write emails standing, and code sitting and kneeling. I was never able to focus on code standing for some reason but I tend to pace when I write emails, so standing for emails works well.
The orange cloth is used to cover my LCD monitor. Sometimes I need color, speed, or just a second monitor. Unfortunately, it takes like 6 seconds for my monitor to turn on. When I need to switch between screens I keep the glowy one covered.
Our WFH desk, very minimalistic to make it easy to switch between sitting/standing. My wife and I can easily switch by exchanging MacBooks and keyboards/pointing devices (she uses a Magic Keyboard, I use a Kinesis Advantage360 Pro). At the office-office (not there today, so no pic), I use a 4k LG screen (hopefully to be replaced some day), Thunderbolt dock, Kinesis Advantage2 and a regular Ikea desk (hope to replace it by a standing desk at some point as well).
I also use a headless Linux GPU machine, but it is stashed away in a closet along with other stuff (home NAS, etc.).
Yes, fellow Kinesis Advantage 360 user! https://lobste.rs/s/wkeaed/lobsters_battlestations_screenshots#c_bsllva
It’s the best keyboard :)
Given the timeframe it’s quite likely that machine included defective caps from the capacitor plague[0]. Any well-made ones, even of the same age, should be significantly better.
Amazing DIY effort!
Check out keymouse (dot com) for the original split with a trackball. It used to be wireless (that’s the one I run), but they gave up with that and went back to wired.
Thanks! Ha, keymouse seems… interesting. I’m not sure I would actually use two trackballs, heh. I’d probably get used to only using one with my right hand.
It took more than a fair bit to get used to, but: an eye tracker combined with a 3D mouse. Scale it down and combine it in a way like this (https://hackaday.com/2017/07/27/unholy-mashup-of-spacemouse-and-sculpt-keyboard-is-rather-well-done/) and it might be something. Twisting left/right is a nice scroll up/down, lifting/pushing zooms in/out, and tilting pans. The eye tracker gets you the coarse initial warp-to point while the 3D mouse adds the missing precision.
I have horizontal and vertical scroll on two different layers, so it’s one thumb press (not even sure which one, since it’s all muscle memory now) and I’m scrolling away.
Are you saying that you actually use an eye tracker for this? And it works well? Holy cow that is freaking amazing!
I have both the original Alpha model and the Track. Both are good, but I far prefer the track. (I have had serious RSI issues over the years.)
I do mostly use the track ball in the right hand, but I do a bit of both. It’s pretty neat … worth trying if you can get your hands on one. And Heber (the founder) is a tech guy who has a passion for it … definitely not a get-rich-quick scheme 🤣
How do you like the thumb clusters? With the distance between the thumb keys and the trackball, it seems like you have to stretch your thumb quite a bit to move between one and the other?
(I thought I had pretty much seen it all after following /r/ergomechboards for a while, but somehow hadn’t seen keymouse yet…)
I have the original and not the current. Thumb clusters are good. Personally, I’d drop the furthest thumb reach (the down and away button) and add two proper keys at the top of the existing cluster. Also, I don’t use the outside pinky column at all (I have a completely custom layout). Any extra reach is really hard on RSI, so I do 99% on 3 rows (I rarely use the numbers row) plus the thumb keys, with only one-key horizontal stretch on either the forefinger (easy) or pinky (less easy). I generally write 50-100kloc per year, plus lots of non-code stuff. No RSI in years now.
Great to hear! I switched to a Kinesis Advantage a couple of months ago and my wrist pains have disappeared. But I am still very much interested in designs that push the state of the art forward.
Kinesis are great keyboards. I have 2 of the Advantage Pros (with foot pedals) 🤣 and that’s all I used for many years.
It’s been a while since I’ve used them, but the foot pedals are mainly for modifier keys or changing layers (e.g. accessing Kinesis macros). The Kinesis models that I have are a bit old now, so they don’t have the amazing level of programmability that you’d expect today from new keyboards with built in ARM chips or whatever. But I did use them to write a few software products (i.e. I personally typed in many hundreds of thousands of lines of code) without inflaming my horrible RSI, so I have a great deal of love and appreciation for Kinesis 😊 … so if you’re in doubt, always give Kinesis the benefit of the doubt by default.
On my Keymouse setup, I’ve added a dedicated layer for each hand to put all of the modifiers (ctrl, shift, alt, cmd) on home row. So one layer turns the left hand into a dedicated modifier set (and leaves the right hand unchanged), and another layer turns the right hand into a dedicated modifier set (and leaves the left hand unchanged). Then I have a layer for num pad (left hand is all modifiers, right hand is num pad), and a layer for function keys (left hand is all modifiers, right hand is all function keys). Here’s my layouts as of a year ago: https://1drv.ms/w/s!Al7tOqyQS2IveWlYnwO2D9msNHE
(Edit: I should explain a bit about the layers. I often have to type crazy combos like shift-cmd-8 or alt-command-f7 or whatever. This is the IDE keystroke hell that programmers have to deal with sometimes to avoid the mouse, e.g. in the amazing IntelliJ IDEA debugger.)
Coincidentally, this week I had hand surgery (unrelated to RSI) and I now have a literal club hand wrapped with an inch of protective stuff with a few fingers semi-sticking out, so for the first time in years, I’m using a normal keyboard 🤣
It would be interesting to list all the actual API/GUI libraries used here.
What am I missing?
MacOS: (essentially 2 layers?), Carbon, Cocoa?
Modern Linux (could be 5 layers?): GTK4, GTK3, Qt6, Qt5, maybe the occasional GTK2 application? (Yes, I am aware of other toolkits, but they may also just be for one-off applications.)
At least on Windows you have the option of all these layers, whereas I believe the only Linux distro even packaging GTK 1.2 these days is Slackware.
Re: macOS, Carbon is long gone and unsupported. The Cocoa API used today is the same one introduced with Mac OS X around 2000. The only big fork since Carbon was Cocoa itself. Cocoa was pretty futuristic at the time, and it has lasted.
Regarding appearance, Cocoa has evolved release by release—hardware accelerating the compositing and then the window drawing, ditching stripes and brushed metal and largely toning down Aqua, introducing translucent sidebars, dark mode, tabs everywhere, and Big Sur’s revised toolbar style. But with few exceptions, the system apps all kept up to date by virtue of being compiled with the latest SDK version and adapting to any breaking changes.
Recently, we also have the option of building with SwiftUI or Catalyst which sit a layer above, but they inherit and depend on Cocoa’s UI rather than forking it. SwiftUI is increasingly emphasized and may be the main developer path soon. You could say that Catalyst adds to Cocoa by providing the option of iOS or macOS appearance, but the iOS appearance is considered to be a half-baked app implementation, just for the convenience of porting your iOS app without design adjustments.
This is part of the reason macOS users may push back harder against alternate UI appearances and behaviors like you find in Electron apps or JetBrains tools, or lament that the best tool for the job behaves uniquely. Total consistency is almost within reach.
Cocoa was pretty futuristic at the time, and it has lasted.
Cocoa in OS X 10.0 had a very small number of changes from the 1992 OpenStep specification. The fact that it was futuristic is a bit depressing.
The big changes were largely hidden from developers. Windows were always buffered (the unbuffered flag was there, it was just ignored) because RAM was cheaper but CPUs hadn’t increased in performance in line with requirements and so it was faster to buffer windows that were hidden than to redraw them on expose events.
Regarding appearance, Cocoa has evolved release by release—hardware accelerating the compositing and then the window drawing, ditching stripes and brushed metal and largely toning down Aqua, introducing translucent sidebars, dark mode, tabs everywhere, and Big Sur’s revised toolbar style. But with few exceptions, the system apps all kept up to date by virtue of being compiled with the latest SDK version and adapting to any breaking changes.
This is something that I’ve become really impressed by as I learned AppKit by modernizing an old 10.6/10.9-era application. There wasn’t much in the way of breaking changes (pretty much just QTKit to AVFoundation) despite tons of deprecations, and adopting the modern UI conventions both made it look like an application from 2022 and cut a ton of code. For someone whose main GUI experience is Win32, it’s been an enlightening experience.
Win16, Win32s, and Win32 are all the same thing: an evolution, or thunks to/from an evolved version. Windows Forms is a .NET wrapper around Win32.
WPF is its own thing, as is whatever the heck UWP does (I lost track).
While Carbon is gone for developers, as other commenters point out, it technically is still around in the dark corners of the OS (menus and the menu bar are Carbon internally). There’s a lot of weirdness from Cocoa and Carbon having to coexist and be integrated into each other.
Carbon was removed in 10.15:
If I get the Molex connectors in time, installing a KinT controller in my second Kinesis Advantage. Also got the printed edition of The RISC-V Reader for Christmas, so probably some reading as well. Other than that, mostly relaxing and doing some stuff around the house that we have been postponing.
You probably don’t want disk encryption on root with Nix. Given the statelessness of the design, anyone should be able to pull your config (which most people put up on a code forge) and create the same machine. You’re effectively leaving “HH” at the end of your message for the Allies to decrypt, but for your disk. A coworker and I decided a better setup would be encrypting only the subsets of the disk that are actually mutated, like the home directory, shared media, and var (which in my case meant setting up a ZFS mount with filesystem encryption and a separate partition to encrypt swap/support hibernation).
Modern encryption systems should be robust against known-plaintext attacks. Different people encrypting disks with the same (or substantially similar) /nix/ paths should have very different random-looking ciphertext, and if they don’t, that’s a serious security vulnerability in the encryption algorithm. Of course, this requires that they have different encryption keys, which means the keys themselves can’t be managed in a completely stateless way with Nix, but that’s necessary regardless.
There’s value in encrypting absolutely everything on a disk as a matter of course, regardless of whether it’s easily regenerated data like /nix store paths from a publicly available Nix configuration. And it’s not like people are obliged to make their configuration.nix files publicly available - they might be modified to have some custom, confidential Nix packages or other Nix configuration in any case!
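The known-plaintext point can be sketched with the openssl CLI: encrypting the same well-known store path under two different random keys yields unrelated-looking ciphertext (AES-256-CBC is just an example cipher here; the store path is made up):

```shell
# Two machines encrypt the same well-known plaintext (a Nix store path)
# with different random keys; the ciphertexts come out unrelated.
printf '/nix/store/abc123-glibc-2.36' > plain.txt
key_a=$(openssl rand -hex 32)   # 256-bit key, "machine A"
key_b=$(openssl rand -hex 32)   # 256-bit key, "machine B"
iv=$(openssl rand -hex 16)      # same IV, to isolate the effect of the key
openssl enc -aes-256-cbc -K "$key_a" -iv "$iv" -in plain.txt -out ct_a.bin
openssl enc -aes-256-cbc -K "$key_b" -iv "$iv" -in plain.txt -out ct_b.bin
cmp -s ct_a.bin ct_b.bin || echo "ciphertexts differ"
```
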
The encryption does more than make things unreadable. It also prevents anyone from modifying the contents of a machine without a login. Then secure boot ties it all together with “you need to authenticate in order to unlock the filesystem” and nothing can be modified before that step.
That’s something I hadn’t thought of. I was thinking more along the threat vector of the device getting stolen and forgetting to wipe the drives when I was done with it, not someone modifying it to give it back to me.
That’s fair, in the end it depends on what are credible threats and for most people it’s just theft.
The initrd is not signed, so an adversary could easily add a keylogger. It primarily protects against theft.
It depends on the distro. It often isn’t, but for proper secure boot it should be. https://haavard.name/2022/06/22/full-uefi-secure-boot-on-fedora-using-signed-initrd-and-systemd-boot/
But we were talking about NixOS specifically here. That said, are there any mainstream distributions that do initrd signing and verification out of the box?
Very neat, I might want to play around more with a steam deck someday. I wonder if it can run Wayland…
However, I’m not sure you get to shame the Steam Deck for not including a default password, then recommend doing curl some-url | sh to install something. :-P Though from the look of it the script escalates with sudo as necessary, and they don’t say to do curl | sudo sh, so this is a nitpick.
I wonder if it can run Wayland…
It can and does by default. When running in game mode, Steam games are run through gamescope, a Wayland compositor.
In their defense, this installation method is taken verbatim from the Nix docs. Still unfortunate, but the blame is elsewhere.
Aha, thanks. The method does seem to be becoming less fashionable, but I do wish it would do so faster, if only to get more people working with Flatpak or AppImage or OS package managers instead of rolling their own thing every time.
I don’t think that’s possible for something like Nix. The Nix installer creates the root /nix directory, configures some build daemon users, sets up a systemd daemon, and changes the user’s .profile. Can this be done within Flatpak or AppImage namespaces?
Even the Nix project managing this complexity for each distribution’s package manager would be a nightmare.
The Nix installer has a bunch of problems but I don’t see a way of doing hugely better.
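To make the comparison concrete, here’s a rough sketch of what a single-user Nix install touches (multi-user installs additionally create build users and a systemd daemon). This is illustrative only, not a replacement for the official installer; the install URL and --no-daemon flag are from the Nix manual:

```shell
# Illustrative sketch only -- use the official installer in practice.
# The store root must live at /nix for binary-cache hashes to match:
sudo install -d -m 0755 -o "$USER" /nix

# Download, inspect, then run the official single-user installer:
curl -fsSL https://nixos.org/nix/install -o nix-install.sh
less nix-install.sh                  # read it before running
sh nix-install.sh --no-daemon        # single-user mode, no systemd daemon

# The installer also appends a hook like this to your shell profile:
#   . ~/.nix-profile/etc/profile.d/nix.sh
```

None of this fits neatly inside a Flatpak or AppImage sandbox, which is presumably why the project ships its own installer.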
There is https://nix-community.github.io/nix-installers/, with packages. Most of the stuff is done in a post-install hook, but at least it cleans up after removal, works with distributions that have SELinux enabled, etc.
I… I use a perfectly average flat keyboard and mouse. At a standing desk, sure, but… am I the weird one? Been doing this for a decade now and my legs hurt long before my hands and wrists do.
Flat keyboards are fine for most people. Sure, a split, tented, ortholinear keyboard is a little better in some cases (like when you’re already susceptible to RSI), but it’s not the Lord and Saviour of all wrists under the Sun, as some imply.
They were fine for me until they weren’t and when your income depends on being able to type on a keyboard, that’s very scary. I switched to a Kinesis Advantage and have been slowly recovering.
The flat ones are fine for me as well. I only get physical trouble if I’m either stressed or worked too much. Both are a good sign to take a break/weekend.
No, I use one as well. When I was 18, I broke my right wrist and ended up typing everything with my left hand for a couple of months. As a result, I developed a typing style where I use my left hand for about 2/3 of the keyboard. I cannot use split keyboards at all because my left hand doesn’t reach the keys.
A few weeks ago I broke my right shoulder and had to type with my left hand for 2-3 weeks. Several of my colleagues were surprised at how quickly I type with only my left hand. With a split keyboard, I’d have tired out my hand moving it between the two halves.
I’m definitely the weird one (in this and other contexts) though.