My MacBook Pro is nagging me to upgrade to the new OS release. It lists a bunch of new features that I don't care about. In the meantime, the following bugs (all of them regressions) have gone unfixed for multiple major OS versions:
When a PDF changes, Preview reloads it. It remembers the page you were on (it shows it in the page box) but doesn’t jump there. If you enter the page in the page box, it doesn’t move there because it thinks you’re there already. This worked correctly for over a decade and then broke.
The calendar service fails to sync with a CalDAV server if you have groups in your contacts. This stopped working five or so years ago, I think.
Reconnecting an external monitor used to be reliable and move all windows that were there last time it was connected back there. Now it works occasionally.
There are a lot of others; these are just the first that come to mind. My favourite OS X release was 10.6: no new user-visible features, just a load of bug fixes and infrastructure improvements (that release introduced libdispatch, for example).
It’s disheartening to see core functionality in an “abandonware” state while Apple pushes new features nobody asked for. Things that should be rock-solid, just… aren’t.
It really makes you understand why some people avoid updates entirely. Snow Leopard’s focus on refinement feels like a distant memory now.
Apple OS features as abandonware is a wild idea, and yet here we are. The external monitor issue is genuinely terrible. I have two friends who work at Apple (neither in OS dev) and both have said that they experience the monitor issue themselves.
I was thinking about this not too long ago; there are macOS features (e.g. the widgets UI) that don't seem to even exist anymore. So many examples of features I used to really like that are just abandoned.
Reconnecting an external monitor used to be reliable and move all windows that were there last time it was connected back there. Now it works occasionally.
This works flawlessly for me every single time; I use an Apple Studio Display at home and a high-end Dell at the office.
On the other hand, activating iMessage and FaceTime on a new MacBook machine has been a huge pain for years on end…
On the other hand, activating iMessage and FaceTime on a new MacBook machine has been a huge pain for years on end…
I can attest to that, though not with my Apple account but with my brother's. Coincidentally, he had fewer problems activating iMessage/FaceTime on a Hackintosh machine.
A variation on that which I've run into is turning the monitor off and putting the laptop to sleep, then waking it without moving or disconnecting anything.
To avoid all windows ending up stuck on the laptop display, I have to sleep the laptop first, then power off the monitor. To restore: power on the monitor, then wake the laptop. Occasionally (1 in 10 times?) it still messes up and I have to manually move windows back to the monitor display.
(This is when using dual-head mode with both the external monitor and laptop display in operation)
iCloud message sync with "keep messages" set to forever seems to load so much that, on my last laptop, typing long messages (more than one sentence) directly into the text box was so awful that I started writing messages outside the application, then copy/pasting to send them. The delay was on the order of seconds for me.
I’m really heartened by how many people agree that OS X 10.6 was the best.
Edited to add … hm - maybe you’re not saying it was the best OS version, just the best release strategy? I think it actually was the best OS version (or maybe 10.7 was, but that’s just a detail).
It was before Apple started wanting to make it more iPhone-like, slowly doing what Microsoft did with Windows 8 (which did it in a 'big bang') by making Windows Phone and the Windows desktop almost indistinguishable. After Snow Leopard, Apple became a phone company, very iPhone-centric, and just didn't bother with the desktop - it became cartoonish and all flashy, not usable. That's when I left MacOS and haven't looked back.
Recently, Disk Utility has started showing a permissions error when I click unmount or eject on SD cards or their partitions, if the card was inserted after Disk Utility started. You have to quit and re-open Disk Utility for it to work. It didn't use to be like that, but it is now, on two different Macs. This is very annoying for embedded development, where you need to write to SD cards frequently to flash new images or installers. So unmounting/ejecting drives just randomly broke one day and I'm expecting it won't get fixed.
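If it helps anyone, diskutil from the terminal is worth a try as a workaround - a sketch, with the disk identifier as a placeholder (check diskutil list for your card):

    diskutil list                                  # find the card, say it shows up as /dev/disk4
    diskutil unmountDisk /dev/disk4                # unmount every partition on it
    sudo dd if=installer.img of=/dev/rdisk4 bs=4m  # flash via the raw device
    diskutil eject /dev/disk4                      # eject when done

(installer.img stands in for whatever image you'd be flashing anyway.)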
Another forever-bug: on a higher refresh rate screen, the animation to switch workspaces takes more time. This has forced me to completely change how I use macOS to de-emphasise workspaces, because the animation has been just obscenely long since I got a MacBook Pro with a 120Hz screen in 2021. Probably not a new bug, but an old bug that new hardware surfaced, and I expect it will never get fixed.
I’m also having issues with connecting to external screens only working occasionally, at least through USB-C docks.
The hardware is so damn good. I wish anyone high up at Apple cared at all about making the software good too.
Oh, there's another one: the fstab entries to not mount partitions that match a particular UUID no longer work, and there doesn't appear to be any replacement functionality (which is annoying when it's a firmware partition that must not be written to except in a specific way, or it will soft-brick the device).
Oh, fun! I've tried to find a way to disable auto mount, and the only solution I've found is to add individual partition UUIDs to a block list in fstab, which is useless to me since I don't re-use the same SD card with the same partition layout all the time; I'd want to disable auto mounting completely. But it's phenomenal to hear that they broke even that sub-par solution.
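For reference, the per-UUID block list I mean looked roughly like this - a sketch from memory, edited via sudo vifs, with placeholder UUID/label values:

    # /etc/fstab
    UUID=01234567-89AB-CDEF-0123-456789ABCDEF none msdos rw,noauto  # never auto-mount this partition
    LABEL=BOOTFW none msdos ro,noauto                               # same idea, matched by volume label

The noauto option is what told diskarbitrationd to leave the volume alone - which is exactly the part that apparently no longer works.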
Maybe, but we're talking about roughly 1.2 seconds from the start of the gesture until keyboard input starts going to an app on the target workspace. That's an insane amount of delay to force the user to sit through on a regular basis… On a 60Hz screen, the delay is less than half that (which is still pretty long, but much, much better).
Not a fix, but as a workaround have you tried Accessibility > Display > Reduce Motion?
I can’t stand the normal desktop switch animation even when dialed down all the way. With that setting on, there’s still a very minor fade-type effect but it’s pretty tolerable.
Sadly, that doesn't help at all. My issue isn't with the animation, but with the amount of time it takes from when I express my intent to switch workspaces until focus switches to the new workspace. "Reduce Motion" only replaces the 1.2 second sliding animation with a 1.2 second fading animation; the wait is exactly the same.
Don't upgrade to Sequoia! It's the Windows ME of macOS releases. After the Apple support person couldn't resolve any of the issues I had, they told me to reinstall Sequoia, then gave me instructions to downgrade to Ventura/Sonoma.
I thought Big Sur was the Windows ME of (modern) Mac OS. I have had a decent experience in Sequoia. I usually have Safari, Firefox, Chrome, Mail, Ghostty, one JetBrains thing or another (usually PyCharm Pro or Clion), Excel, Bitwarden, Preview, Fluor, Rectangle, TailScale, CleanShot, Fantastical, Ice and Choosy running pretty much constantly, plus a rotating cast of other things as I need them.
Aside from Apple Intelligence being hot garbage (I just turn that off anyway), my main complaint about Sequoia is that sometimes, after a couple dozen dock/undock cycles (return to my desk, connect to my docking station with a 30” non-hidpi monitor, document scanner, Time Machine drive, smart card reader, etc.), the windows that were on my MacBook's high resolution screen and move to my 30” when docked don't re-scale appropriately, and I have to reboot to fix it. That seems to happen every two weeks or so.
Like so many others here, I miss Snow Leopard. I thought Tiger was an excellent release, Leopard was rough, and Snow Leopard smoothed off all the rough edges of Tiger and Leopard for me.
I’d call Sequoia “subpar” if Snow Leopard is your “par”. But I don’t find that to be the case compared to Windows 11, KDE or GNOME. It mostly just stays out of my way.
Apple’s bug reporting process is so opaque it feels like shouting into the void.
And, Apple isn’t some little open source project staffed by volunteers. It’s the richest company on earth. QA is a serious job that Apple should be paying people for.
Apple’s bug reporting process is so opaque it feels like shouting into the void.
Yeah. To alleviate that somewhat (for developer-type bugs), back when I was making things for Macs and iDevices most of the time, I always reported my bugs to openradar as well, which would at least net me a little bit of feedback (along the lines of "broken for everyone or just me?"), so it felt a tiny bit less like shouting into the void.
I can't remember whether I did for these ones. The CalDAV one is well known. Most of the time when I've reported bugs to Apple, they've closed them as duplicates and given me no way of tracking the original bug.
No. I tried being a good user in the past but it always ended up with “the feature works as expected”. I won’t do voluntary work for a company which repeatedly shits on user feedback.
10.6 “Snow Leopard” was the last Mac OS that I could honestly say I liked. I ran it on a cheap mini laptop (a Dell I think) as a student, back when “hackintoshes” were still possible.
For what it’s worth, I like the TOML configuration syntax since it’s so much more readable for me, and easier to understand compared to BIND, Unbound and other usual suspects. I guess the time has come for it to slowly start appearing in distro package repositories, Docker containers, etc.
I'm going to give it a go starting next week and try to replace Unbound with Hickory on my personal public DNS resolver. I'll be in the UAE for a month on business and I don't like state governments interfering with my DNS resolution, so I've provisioned a public resolver instance in nearby Bahrain on AWS and put nginx in front of it for DNS over HTTPS. Will be a fun ride.
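The nginx part is simple enough - a sketch of the shape I have in mind, assuming the resolver exposes a plain-HTTP DoH listener on localhost, with dns.example.com and the cert paths as placeholders:

    server {
        listen 443 ssl http2;
        server_name dns.example.com;

        ssl_certificate     /etc/ssl/fullchain.pem;
        ssl_certificate_key /etc/ssl/privkey.pem;

        # RFC 8484 DoH clients send GET/POST requests to /dns-query
        location /dns-query {
            proxy_pass http://127.0.0.1:8053;   # the resolver's local DoH listener
        }
    }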
But then again, it’s just typical HTTPS traffic towards my personal domain using a TLD of a sovereign country (that just happens to be the same TLD this web site is using, btw). Surely the UAE government would take my word for it, right? Right?
My $WORK task list keeps growing and I’m falling behind on everything but this is much more important right now. Party like it’s 2000 and we’re overthrowing Milošević.
See you later, folks, going back to the streets of Belgrade.
A little clickbait-y title, but I agree with most of the stuff written in there.
Back in the days when Swift was just getting started, it was amazing for me to follow the development mailing list because I could witness with my own eyes how a real-world, production-grade programming language gets designed out there in the wild. I was able to pick up many underlying concepts of designing a language syntax, the compiler architecture, and all the nuts and bolts of what makes a programming language work under the hood.
Too bad I never got the chance to actually use Swift in any of my projects.
People shouldn’t feel pressure to upgrade htmx over time unless there are specific bugs that they want fixed, and they should feel comfortable that the htmx that they write in 2025 will look very similar to htmx they write in 2035 and beyond.
From the previous posts about it: HTTP/2 support was a big one, plus some other issues that meant it never fully passed the test suite. There was little or no activity to fix that, and dragging it around as EXPERIMENTAL with no expected path out of that status didn't make sense.
None of that explains what the actual problems are though. Why did it never fully pass the test suite? Why couldn’t HTTP/2 support be added, are there specific issues or just “nobody did the work yet”?
Came here to say this. Also, I’m the guy who uses curl a lot on a daily basis, but somehow I totally missed the fact that Hyper was ever there. Granted, I typically don’t compile curl from source and I certainly don’t interact with libcurl, but still. I wonder how many people were simply unaware of an alternative HTTP backend being available in the first place.
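For the curious: it was strictly opt-in at build time, which probably didn't help discovery. As I understood curl's docs, the gist was roughly this (a sketch; the path is a placeholder for a hyper checkout with its C FFI built):

    ./configure --with-hyper=/path/to/hyper
    make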
That kinda supports Daniel Stenberg’s point. If there had been user demand for it, people would have stepped up and publicized it, and then people like you would have become aware of it.
Sure, but that's kind of the issue here - the availability of an alternative HTTP library was never communicated enough in the first place, at least that's my impression, which is exactly why many of us were unaware of its existence. I'm not a curl developer and I certainly don't follow its development mailing list, but I still keep up with the project overall, I check his blog every now and then, and I follow him on Mastodon, and yet somehow I totally missed the fact that Hyper was included in an experimental capacity.
Maybe the increasing interest in software supply chains and Software Bill of Materials (SBOM) will make such alternative backends more visible? If companies need to start ticking checkboxes on compliance forms for their software, they might actively look for these alternatives written in memory-safe languages. And maybe they would even invest developer time into this.
It’s probably going to take a while for the SBOM ship to pick up speed; but barring any revolutionary events, I guess the ship is underway and will not be stopped any more.
I hope this somehow manages to boost the velocity of nginx development, but I'm not optimistic at this point. After everything that has happened with it, culminating in one of the core developers forking the project a while ago, I'm somewhat concerned about the project's future under the current ownership structure.
Maybe it's just me, but it looks like development has slowed to a crawl, and apart from CVE patches and minor, incremental feature updates and enhancements, there's simply nothing exciting going on for nginx these days. A real shame - it's a solid, battle-tested reverse and mail proxy, and it pains me to see it all but abandoned by everybody.
Nothing wrong with that. However, some of us work at huge content delivery companies and the like, and for us, having a reverse proxy that keeps up to date with all the standards, protocols and cool new features deriving from new best practices/RFCs is very desirable.
For personal use cases, I agree, a reverse proxy should just work with minimal upkeep requirements.
If you’re a huge content delivery company that relies on nginx for your product but you haven’t hired an nginx developer to implement the features you want, then you might understand when I have a difficult time mustering up much sympathy for your plight.
Yes, you will hire nginx developers to develop custom functionality, but if overall development upstream slows to a crawl, then at some point you either need to keep a private fork and maintain it indefinitely - in which case good luck finding new developers in the future - or you need to switch to something else entirely (like the many CDN companies looking into Pingora and what to build on top of it, for example).
I just happen to know that these days it's somewhat challenging to find proper nginx developers, so the "decay" is already a thing to a certain extent. With too many forks and core developers leaving, it doesn't bode well for the project overall. I hope I'll turn out to be wrong one day, though.
it sounds like most large companies have decided to stop investing in the public project, and have either built their own private nginx extensions or used a totally different code base. that’s their decision. as for smaller companies and individuals, it seems like the battle-tested nature of nginx outweighs the new standards and features that are mostly just useful for large companies anyway.
seeing it as a natural consequence of the divergence of interests between different groups of potential users may help clarify what is disappointing or unsatisfying about the current state of the project, and who can reasonably be expected to act to remedy the situation.
Yup. nginx has reached a certain level of maturity where it simply doesn't need anything else in order to be the first choice for many use cases. But protocols, underlying technologies and new features are not standing still, and it's only a matter of time before nginx becomes irrelevant now that the pace of development has decelerated significantly, with too many core developers forking the project and going their own way.
In one of my comments down below, I outlined the case of Cloudflare and the Pingora framework they open sourced a while ago. It didn't happen just like that, and it's an early indication of what's in store for nginx should this situation continue.
Also, due to my career, I just happen to know that many big CDN players are considering Pingora for their next-gen platform and all of a sudden, a lot of them are looking into the cost-benefit of ditching nginx in the long term and going with their own in-house solution. Party like it’s 1999.
Can you elaborate on which exciting areas or features you think are missing? I casually peeked at the broader nginx mailing lists, https://mailman.nginx.org/mailman/listinfo , and they look reasonably active to me.
As for functionality, the open source version is pretty solid and just works. I love the simple configuration, and the feature set it provides (serves my purpose).
Well, QUIC+HTTP/3 support is still experimental, and that's slowly becoming an issue for major deployments out there. Also, the configuration language, although extremely versatile, doesn't support if/else scenarios (yes, I know if is evil in nginx, but I tend to know what I'm doing with it) for more advanced URI matching rules, forcing me to come up with very creative ways of working around that limitation.
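(The least-bad of those creative ways, for what it's worth, is abusing map for branching - a sketch with made-up upstream names:

    # in the http context, outside any server block:
    map $request_uri $chosen_backend {
        default        app_stable;
        ~^/api/v2/     app_canary;    # the "else if" branch, via regex
    }

    upstream app_stable { server 127.0.0.1:8080; }
    upstream app_canary { server 127.0.0.1:8081; }

    server {
        listen 80;
        location / {
            proxy_pass http://$chosen_backend;
        }
    }

It works, but the branching logic ends up far away from the location blocks it affects.)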
Need session stickiness for upstreams? Good luck finding a 3rd party module that’s still maintained or else be ready to either write it yourself or shell out the monies for the nginx Plus version that’s got that built in.
There are plenty more features that are simply missing and IMO should be part of a modern open source reverse proxy solution in 2024.
Nothing per se, but in the case of nginx the pricing structure has always been somewhat peculiar, with very steep pricing from the get-go, and eventually you run the risk of ending up in a typical vendor lock-in situation having already spent a ton of money along the way, at which point it becomes cheaper and overall better to build your own solution from scratch. Hint: Cloudflare Pingora - they were never an nginx Plus customer, but they spent a ton of money and engineering hours pushing the limits of nginx, only to realize it'd be better for them to just make their own thing. We're yet to see amazing things built on top of Pingora; it reminds me of the 2006-2009 period of nginx, when every now and then something new and useful would pop out.
Is there any web server with good HTTP/3 support at the moment? My impression was that everyone was struggling with QUIC and, by extension, HTTP/3. AFAIU, there are working implementations of QUIC, but the performance for end users isn't necessarily better than TCP.
The performance of QUIC is a whole other story; however, it's here to stay and it's slowly starting to gain traction. There are many implementations for various web servers, none of them getting it entirely right at the moment, but the fact that QUIC support in nginx is still experimental in 2024 is a bit worrying to me.
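For reference, the experimental setup in recent nginx (1.25+) looks roughly like this - a sketch, assuming a build with the HTTP/3 module compiled in and placeholder domain/cert paths:

    server {
        listen 443 quic reuseport;   # HTTP/3 over UDP
        listen 443 ssl;              # TCP fallback for HTTP/1.1 and HTTP/2
        http2 on;
        server_name example.com;

        ssl_certificate     /etc/ssl/fullchain.pem;
        ssl_certificate_key /etc/ssl/privkey.pem;

        # advertise HTTP/3 so browsers know they can switch
        add_header Alt-Svc 'h3=":443"; ma=86400';
    }

It functions, but it's still flagged experimental upstream, which is exactly the complaint.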
why do you say that QUIC is here to stay and starting to gain traction if no web servers properly support it? why is it worrying for adoption to be slow if there’s no consensus that it’s an improvement?
QUIC is definitely here to stay, because too many companies are investing money into QUIC deployments and further R&D to make it better. One of the reasons adoption is slow is that it's such a radical departure from the whole TCP/IP architecture, and it turns out that network stacks and a bunch of other software were never that good at handling heavy UDP traffic, simply because there was no need to be.
It's going to take a while before everyone in the hierarchy gets things right, but based on what I see in the industry overall, we're slowly starting to see a glimmer of light at the end of the tunnel. In fact, just today a very nice comment appeared on the site which sums up some of the major challenges quite nicely.
I expect QUIC to become widely deployed but I was recently pondering having HTTP/3 on my servers and immediately stopped because I had no idea what to use. This makes me wonder if it’s ever going to replace HTTP/1 for most servers or if it will only be used by the largest players.
So, it's actually been true all these years. They're listening. At first I thought it was just another conspiracy theory, but then ad targeting became too precise to ignore. That's one of the reasons why I disallow microphone access for most social media/messaging apps on my iPhone. One thing I like about iOS, for all its faults, is how easy it is to manage access to various parts of the hardware in a somewhat user-friendly way. In retrospect, it was a good decision, obviously.
One thing that’s probably confusing is the distinction between “Facebook is listening to you”, which is probably technically false[0], and “some apps on your phone are listening to you and that data is shaping the ads you see on Facebook”.
Most people don't care about the distinction, but it is an important one, both legally and in terms of the consequences. Because Facebook is centralized and a household name with a reputation, it is probably harder to stop a wide array of marketers/threat-actors from listening to you than it is to stop Facebook. But if that data ends up shaping ads on Facebook, the felt effect is very similar, and Facebook just has to act "shocked, shocked!" when one of their partners gets caught.
[0] Basically, I think Facebook would love to do that, but would probably understand that’s a bridge too far and not worth the risk. I would be relatively surprised to find out I was wrong, though not completely shocked.
Yes, I can agree with almost everything you said, except the part about FB not going so far as to listen to us directly. They have a long history of all kinds of privacy-violating shenanigans, culminating in Cambridge Analytica and the like, so it wouldn't actually surprise me if they started listening directly at some point.
After all, with modern mobile CPUs and all their hardware acceleration, it's trivial to do all kinds of audio/video transcoding/encoding/decoding without being too taxing on the battery and CPU time (at least on iPhones) - one of the nasty side effects of the huge advances in mobile CPU design in recent years. I hope I don't get too paranoid about all of this.
I think listening to private conversations feels even creepier than what happened with Cambridge Analytica.
I think Cambridge Analytica was near a low point of Facebook’s PR, and they are now investing more effort in avoiding this kind of scandal, albeit without changing any fundamental values.
That said, neither of these is an air-tight argument, and I don't think it's impossible that Facebook is listening in; I just think it's unlikely.
I agree on both points, and it's true they're much more cautious nowadays because of all the PR messes they've had to clean up, but I still think they're open to, and actively investigating, ways to listen to us directly. I guess that's one of the primary reasons they disguised their listening operations behind all kinds of "strategic ad partners" and such.
Will keep learning Rust. I'm making nice progress, but the learning curve is much steeper than I originally anticipated. I like the language a lot, although I'm still not at the point where I can look at a random piece of code written by someone else and get a general idea of what it does without focusing very deeply and analyzing it line by line. I guess this will become easier as I keep learning and writing more code of my own.
Also trying to write some middleware for the startup I founded a few months ago, since we're in the process of onboarding our first customer. Still no revenue, but it's exciting.
I’ll probably spend the whole weekend in my apartment under the A/C since it’s over 40 degrees right now. The curse of being in a landlocked country very close to the Mediterranean.
I'm learning Rust this weekend as well! What resources are you using? Anything you've found particularly helpful? I'm still going through rust-lang.org/book but thinking of jumping into Rust for Rustaceans.
I'm going through the same book; right now I'm about halfway through it :-) The most helpful approach for me so far is to go through a chapter, type all the code examples myself, and then try to break things and figure out how to fix them. From time to time, I implement some of the simpler problems myself and break a lot of stuff, but I'm also learning a lot of the small details along the way. It's frustrating but fun at the same time, and having a balance between reading theory and applying some of it in my own mini projects works best for me.
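To give a flavour of what I mean by "break things" (a made-up micro-exercise, not something from the book):

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];   // immutable borrow of `v` starts here
        // v.push(4);        // uncomment this: E0502, cannot borrow `v` as
                             // mutable while `first` is still in use
        println!("{first}"); // the borrow ends after its last use...
        v.push(4);           // ...so this push is fine
    }

Moving one line around and watching the borrow checker's mood change teaches me more than re-reading the chapter does.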
Rust for Rustaceans is on my reading list as well, but some of my colleagues have also recommended Programming Rust, 2nd edition. I've skimmed through it a little and it looks very good, albeit the first chapter is somewhat strange: it takes a very hands-on, top-down approach with a lot of code, and only then progresses to the more theoretical stuff. I'm not used to that kind of book, but I'll probably give it a go at some point.
I felt that a lot of what was in this book was aimed at people writing libraries in Rust, and as such it wasn't very interesting for me. I also read Rust in Action, which is nicely applied, but I felt it wasn't advanced enough.
Yeah, that seems to be true. Yet I feel like a lot of languages are lacking good "intermediate" books, and this seems to be in that sweet spot. I'll see how I feel when I start reading. Rust in Action seems more on par with the Rust Book - would you say that's accurate?
Nah, just grabbed my copy to see what the problem was with Rust in Action:
It’s very applied. It’s brimful of applications and examples.
That means the language introduction is glossed over, because that's not the aim of the book.
The problem, though, is that with the breadth the book is going for, it's very hard to give an "advanced" treatment of anything. Chapter 11 ("Kernel") teaches you how to write your own OS kernel in under 30 pages. That is an accomplishment in and of itself, but it also means you can't really go in depth.
Don’t get me wrong, it’s a very good and accomplished book, but I just wish it was 5 times longer and really went into things. For somebody coming fresh out of the Rust Book I think this is a very good follow-up book.
Got back home after my trip to a trade show in Istanbul; now I have to sort through all of the new contacts and potential customers for the startup I'm founding with a few former colleagues after most of us got laid off overnight.
So many ideas for new networking products and so little time to make them a reality, primarily due to my lack of Rust proficiency. I haven't written any production-grade code in over 20 years and now I have to learn Rust (I know Linux inside and out, which helps a lot, I guess) as fast as I can so I can start being productive. Lots of fun ahead for me.
Right now, I’m doing both :-) I’m going through the Rust Programming Language book and trying to do things on my own as I pick up new concepts. That’s always been the best way for me to learn stuff.
I got laid off overnight a few weeks ago, but luckily I have plenty of savings and side consulting gigs with some passive income as well, so this week I'll be in Istanbul at a trade show with some of my friends, trying to land the first deals for the new startup we're in the process of founding.
It's time to try being my own boss; it feels exhilarating and terrifying at the same time.
I had so many plans for the weekend, but I somehow managed to catch a nasty flu the other day so I’ll have to stay at home and get some rest. Lots of rest. At least my reading list is full, so it won’t be that bad.
Back when the PPC -> Intel transition happened I ran… well, definitely not the earliest Intel Hackintosh, but one of the earliest. I lucked out on pretty compatible hardware which was close enough to “real” Intel Macs that all it took was some fiddling with a vmware image that was floating around the shadier torrent websites at the time. I got an Intel Mac after a while (a Mac Mini) but I continued to read Hackintosh forums for a while.
Now, the main reason I went at it was that I had a lot of free time and wanted something cool to fill it with, but I knew people who ran Hackintoshes in production, for real reasons. E.g. people who did video editing, who were independent contractors rather than sheikhs or megacorps, so they couldn't afford a Mac Pro and couldn't justify the money for a powerful enough iMac, given that they'd already spent a fortune on good monitors. Few Hackintosh systems (if any) ran everything well, but if you only needed a couple of applications and you got the right hardware, depending on when the last product line refresh had happened, you wouldn't just get "a better deal" - you could get a better machine than anything that Apple sold.
It’s worth remembering a couple of things. First, that this was back when the various services around Macs were not quite as well integrated, and not across as many devices as today. Second, that this was also around the time when (though people in the design ivory tower casually ignored it) product line refreshes were kind of hit or miss, this was the time of bendy batteries and dead displays. Third, that this was around the time of a big refresh and while Rosetta was good, it was nowhere near as smooth as the M1-era Rosetta – depending on what applications you ran, slow, crashy software was what you got on legit Apple hardware, too.
And finally, and probably the biggest thing, that for a brief period, OS X was all the rage for very real reasons. It was not a great Unix but then neither was Linux, it could be less fiddly to network than Windows if you didn’t run an AD environment, it had a rich application ecosystem that other Unices lacked, at a time when cloud/service-based applications weren’t really a thing, and it was pretty well-maintained. Some bugs lingered forever but OS X 10.n could generally be expected to be better than OS X 10.n-1, rather than a compromise on development budget. Lots of people really wanted OS X, and didn’t really care about the hardware. This just isn’t as big a deal today. macOS has been under-maintained for a long time; few people who wouldn’t be just as happy with an iPad really want to run it over either Windows or Linux.
I cannot speak for everyone else who was into Hackintosh’ing at any point, but for myself it was basically about wanting to “have my cake and eat it too.”
I really liked Mac OS but I disliked the lack of customization, upgradability, and pricing of the hardware. Eventually I bought a Mac Pro 2010 instead, as the upkeep on a Hackintosh felt worse than Linux in a lot of ways. Generally speaking, you were unable to install Mac OS updates with 100% confidence that there wouldn’t be issues. So, I was always backing up my entire system drive beforehand.
As far as hardware compatibility went, as long as you stuck to the lists that people put together, you would usually have a good experience. Felt like a lot of the hardware issues people encountered were because they followed a list but then thought it was OK to swap one or two things out.
If I could go back in time and do it over again, I probably would've skipped the Hackintosh in the first place and just bought a Mac Pro sooner. Hackintosh was fun for a while, but it kind of just became a nuisance after the honeymoon period wore off.
All just IMHO, of course!
I guess it’s worth saying that the “lack of customization, upgradability, and pricing of the hardware” pushed me away from Mac OS ultimately, as I now just use Linux everywhere instead on non-Apple hardware.
I built my first Hackintosh, an i7-860, in late 2009 or very early 2010. For compiling Firefox (my job at the time) it thrashed the 8-core 2.26 GHz Mac Pro Moz gave me. I continued with Hackintoshes (i7-4790K, i7-6700K) until 2019.
The only software or software update problem I ever had was always the same one: sometimes I had to re-install the drivers for the Realtek sound chips PC motherboards had. No big deal … I just kept the installer handy. A matter of a few seconds to run the installer again after an OS update, and then a reboot.
And, yes, I built new hardware using recommended components. I didn't try to repurpose random old machines, which is where most problems lie (it's the same with running Linux on laptops sold with Windows today).
Generally speaking, you were unable to install Mac OS updates with 100% confidence that there wouldn’t be issues
I'm not running a hackintosh, but a now-unsupported 2014ish Mac Mini as my TV PC. I've been using OpenCore Legacy Patcher to keep it up to date. And… yep, the fear of updates is real.
One update led to a reinstall; the unfortunate part was that the OS itself booted, and it was installing the third-party patches afterwards that broke it. I guess as things get ripped out of macOS, the fixes are going to become more invasive and prone to bugs. I was hoping to keep it going till Apple drops x64 support entirely, but that is looking more and more unlikely.
It's sad, because it's perfectly good hardware, fit for purpose: a 1080p streaming and local video box.
Back in the day, during my high school years, my parents couldn't afford to get me a Mac, even a used one. All I had was a crappy MSI laptop, but I always wanted to try Mac OS X, as it was called back then. There was this aura of a well designed user interface, and the whole Aqua design language appealed to me. Hackintosh was the only way.
It was a very gentle introduction to some of the basic principles of operating system design. Getting Mac OS X to run on non-Apple hardware made me learn about kernel vs user space, drivers (kernel extensions, or kexts, in Apple lingo), basic I/O system routines/system calls, etc. Sure, I could have learned this stuff by tinkering with Linux as well, which I did, but then again, that user interface and the challenge of getting something non-standard running on your machine… it was magical.
A few years later, I started getting my first freelance web development gigs and in due time, I bought myself a nice MacBook Pro and ditched the whole Hackintosh thing because life happened, and all of a sudden, I didn’t have a whole day to tinker with another kext written by a Russian teenager in Vladivostok that I obtained from an obscure anonymous FTP host with dial up transfer rates just to see if WiFi would work again after a minor .1 system update.
The machine came with RAM, no hard disks, no graphics, but case/mobo/CPU/PSU etc.
I took the nVidia card and hard disks from my old Athlon XP. I got the machine running, and thought it was worth a try since it was mostly Intel: Intel chipset, Intel CPU, etc.
I joined some fora, did some reading, used Clover and some tools from TonyMacX86 and so on.
After two days’ work it booted. I got no sound from my SoundBlaster card, so I pulled it, turned the motherboard sound back on, and reinstalled.
It was a learning experience but it worked very well. I ran Snow Leopard on it, as it was old enough to get no new updates that would break my Hack, but new enough that all the modern browsers and things worked fine. (2012 was the year Mountain Lion came out, so I was 2 versions behind, which suited me fine – and it ran PowerPC apps, and I preferred the UI of the PowerPC version of MS Word, my only non-freeware app.)
I had 4 CPU cores, it was maxed out with 8GB RAM, and it was nice and quick. As it was a desktop, I disabled all support for sleep and hibernation: I turn my desktops off at night to save power. It drove a matched pair of 21” CRT monitors perfectly smoothly. I had an Apple Extended keyboard on an ADB-to-USB convertor since my PS/2 ports weren’t supported.
It wasn’t totally reliable – occasionally it failed to boot, but a power cycle usually brought it back. It was fast and pretty stable, it ran all the OS X FOSS apps I usually used, it was much quicker than my various elderly PowerMacs and the hardware cost was essentially £0.
It was more pleasant to use than Linux – my other machines back then ran the still-somewhat-new Ubuntu, using GNOME 2 because Unity hadn’t gone mainstream yet.
Summary: why not? It worked, it gave me a very nice and perfectly usable desktop PC for next to no cost except some time, it was quite educational, and the machine served me well for years. I still have it in a basement. Sadly its main HDD is not readable any more.
It was fun, interesting, and the end result was very usable. At that time there was no way I could have afforded to buy an Intel Mac, but a few years, one emigration and 2 new jobs later, I did so: a 2011 i5 Mac mini which is now my TV-streaming box, but which I used as my main machine until 2017 when I bought a 27” Retina iMac from a friend.
Cost, curiosity, learning. All good reasons in my book.
This year I Hacked an old Dell Latitude E7270, a Core i7 machine maxed out with 16GB RAM - with Big Sur, because its Intel GPU isn't supported in Monterey, which I tried at first. It works, but its wifi doesn't, and I needed to buy a USB wifi dongle. Performance wasn't great either: it took an age to boot, with a lot of scary text going past, and it didn't feel like a smooth machine. So I pulled its SSD, put a smaller one in, installed ChromeOS Flex, and it's now my wife's main computer. Fast, simple, totally reliable, and now I have a spare wifi dongle. :-/ I may try one of my old Thinkpads next.
It is much easier to Hackintosh a PC today than it was 10-12 years ago, but Apple is making the experience less rewarding, as is their right. They are a hardware company.
I ran a hackintosh years ago and, like the article mentions, price/performance was pretty hard to beat. That, and the fact that I could have a decent gaming experience on the same machine by dual-booting Windows, was great. It could be a little finicky once a year when a new macOS dropped, but other than that it was pretty painless.
I never tried it on a laptop though. I assume the drop in user experience is far greater there.
I remember when I first started using Linux as my daily driver, I would install all the graphical mods to make GNOME look like OS X: flipped buttons on windows, a global menu bar, window theming, and a dock. It wasn't a particular improvement in workflow or in life generally, but it was pretty. At that time, the appeal of Apple products to me was all about novelty.
At some point I looked into buying what I needed for a true Hackintosh, but after actually using a MacBook for a while, I realized that the things I actually cared about workflow-wise would've been worse for me on OS X vs Linux, so I dropped the idea. But regardless, I think many people are driven by novelty: doing things that aren't simple or straightforward just for the sake of doing them.
I personally was big into it when I was a teen and my parents couldn’t afford to get me a Mac. Enough tech “influencers” had Macs that I put a decent amount of effort into getting a piece of the pie on cheaper hardware.
Thank you everyone for the detailed replies. It sounds like the challenges are similar to getting Linux to work on some random laptop.
With Linux I totally get the idea of "Here's a people's operating system and I'm going to put it on this thing I cobbled together out of parts that fell off a lorry." I like the Mac OS look and feel, but not enough to spend days fighting to put a proprietary OS on some random hardware.
They do seem to be going in different directions: Linux is getting easier to operate productively on random machines and Mac OS is getting harder.
I ran a Hackintosh on my main PC for a short period of time in 2020 to work around DaVinci Resolve’s idiotic hardware requirements for certain codecs - in particular I was interested in GPU-accelerated encoding and decoding of H.265. (Funny to see they’re just as idiotic now as they were back then.)
Dual-booting macOS was really cumbersome compared to just using my Linux though, so I later abandoned it in favor of just using plain ol’ Kdenlive. My video editing needs weren’t that advanced at the time anyways and using a Linux-native editor was way less of a UX barrier to overcome. (In particular I was always frustrated with the mouse acceleration curve differences between macOS and GNOME, and macOS’s horrendous window management.)
Fast-forward to last year, I had reformatted my EFI boot partition a few times by that time, so my OpenCore installation was long gone. Too lazy to set it up again, and with a need for some more advanced video editing software at the time, I installed DaVinci Resolve on my Windows dual boot, and… I had basically no performance problems editing 1080p60 gameplay footage whatsoever, even without GPU acceleration.
So yeah, I no longer have a need for a Hackintosh, but it was fun working around Apple’s walled garden. That smugness you feel when you have a rare macOS-compatible hardware configuration on your main PC and can run it mostly without issues, that is something.
Getting ready for a week-long vacation in Singapore. I can’t stand European winter anymore and I need to disconnect from all work-related stuff for a bit.
Yes, it just so happens it’s pretty warm over here this week, but it won’t stay like this for much longer, we’re going back to under-10C temperatures from Monday. No thank you, I’ll just switch continents, even for just one week. It also helps that I have 5 unused vacation days from last year, I’m so not going to allow them to expire.
I suspect this will be somewhat of a bittersweet “careful what you wish for” outcome - once Chrome is on iOS properly we will see “Your browser is unsupported” messages, pointing people towards Chrome and the other engines won’t keep a meaningful marketshare. 🫤
Considering EU rules on vendor self-preference, I suspect Google will need to vastly reduce how much they push Chrome. As for 3rd parties pushing people towards Chrome, if that becomes too much, we’ll need further legal action.
Every Google property has pushed Chrome to every non-Chrome user for a decade - they even brag about deciding to kill IE versions by having their properties drop support - and the EU did not care. The EU has never demonstrably cared about how Google prioritizes their own browser, despite Google deliberately removing basic privacy features from WebKit from day one. Only now that they've got tracking information on the majority of sites on the web and have successfully undermined third-party cookie blocking are they talking about doing something they explicitly removed from WebKit, and they're marketing it as if it were equivalent to the systems Mozilla and Apple introduced in response to Google's extensive invasion of user privacy.
And yet the EU does nothing.
I think anyone thinking the EU will do anything that stops Google’s ongoing abuse is kidding themselves.
They certainly have more than sufficient internal resources to do so, it wouldn’t surprise me if they do it just so they can stick another finger towards Apple, if you know what I mean.
Reading the actual requirements to get the entitlement, I’m not sure it does? Apple won’t give you permission to be a 3rd-party web browser unless you block third party cookies by default and do origin-partitioned storage. You’re also not allowed to “sync cookies and state between the browser and any other apps, even other apps of the developer” - I’m a little unclear on what this means honestly. (Like, would that prevent Google from using your synced Chrome history in other Google apps…?)
They clearly still have some levers to pull, or at least think they do.
For the latter point I think it is “you can’t use the browser state to communicate with other apps” not “a user can’t use the same account with multiple apps”. e.g. the goal is to make it so that if you use the YouTube app and chrome but aren’t logged into the same account on both google can’t just use on device channels to link the app user information to the browser user information. If the user explicitly chooses to log in to the same account in both then they’ve done that themselves manually.
It’s probably worth it to Google purely as a wedge. Once it’s built and working on EU iPhones, Google has the easy argument to other regulators that only Apple is holding back Chrome on iPhone at that point.
They already did. There is an iOS native version of Chromium living in their code repository. Probably far from complete and now they have to make it work with the new APIs that Apple just published. But they have been working on it for sure.
A quick search turned up some data suggesting that there may be over 100M iOS App Store users in the EU. Is that a large enough set of users for Google to want to invest? All based on assumption, but I’d think they’d look at revenue per Chrome user and compounding revenue from installations of other Google apps by these users.
Based on this bug, they already started work some time ago on getting Blink running on iOS. I think it's just a matter of time before it's ported and becomes the default experience, as they've got years of cross-platform experience. How successful it'll be is a different matter.
If you recall, when the Blink fork started there were some articles about how they'd removed millions of lines of "Apple code". What they'd removed was all the platform abstraction logic, the Qt, GTK, wx, etc. ports, and JavaScriptCore (with its support for more or less every platform, alongside the optimising JITs for armv7, armv8, x86, x86_64, MIPS, SH4, …). Since then they've aggressively removed most of the other abstraction layers.
That said, I can’t imagine they haven’t had some kind of build on jailbroken phones.
iOS and Apple are far less popular there compared to the US, though. Sure, they're still a major phone vendor, but whereas in the US they have a 55-60% market share, in Europe they're closer to 30-35%.
Still, that translates to similar numbers of actual iPhone users (200-240 million each).
I would frankly be surprised if they haven’t had Chrome running on iOS internally for years just to be ready to respond immediately to an Apple policy change.
Interesting investigation. It also shows the dangers in groupthink - Hetzner is really popular right now among geeks. Like the post says, it’s (for now, still) better than Amazon/Azure/Google but collectively as tech people we should think about ways to avoid pumping up these companies to become “too big to fail”.
Most of us are fed up with "big tech", but by choosing the same companies over and over, we make these companies so disgustingly big, creating the next generation of big tech. Note that this also encourages "enshittification", because we stick with them once we're used to them, as they're "the safe choice".
The primary reason I host my Mastodon instance on Hetzner is price - I can get a very powerful ARM virtual instance for around 10 EUR that comes with enough vCPU and RAM that enables me to have a fully-powered Mastodon instance with enough nginx/Rails workers, Elasticsearch for indexing and other niceties.
If I wanted to host that same instance with the exact same hardware in my home country, for example, the price would be about 100 times higher (that’s not a typo) and that’s with a ~12% discount with a yearly prepayment.
No matter which other provider I'd go for, the price would be at least double what I currently pay, and there's only so much money I'm willing to spend on what is essentially a hobby side project, especially in this economy.
(Before you even ask - I currently don’t have the means to self-host it at home due to my tenancy/rental agreements and the general unwillingness of my ISP to meet me half way because of reasons.)
Their current price list is €15 for that size. It looks, given their RAM scaling, as if they oversubscribe shared VCPUs 2:1. The big cloud providers don’t oversubscribe for most VM types. How much that matters probably depends a lot on who you’re sharing with. Most IaaS users use an average of well under 50% of their CPU and the Ampere hosts have quite high core counts, so there’s quite a high chance that you can get 100% of your cores when you need them.
The bigger problem with oversubscription is that it leads to weird performance anomalies. Most kernels assume that all CPUs are there all the time, so will IPI another core and wait, so if one of your other VCPUs is not scheduled it can lead to all of the others sitting in spin loops to wait for it. On real hardware, this will be a few hundred / thousand cycles, on an oversubscribed VM it can be milliseconds. This gets worse as the core counts go up. Hyper-V has some paravirtual assists for this kind of situation but I don’t know what Hetzner uses and whether guests support it.
What you're quoting is the price including VAT/GST/whatever. For those of us outside the EU/UK/EEA/EFTA/whatever, everything is significantly cheaper compared to you folks (in general), wherever you may reside.
Nope, it's just that ARM instances are significantly cheaper compared to AMD/Intel ones, and luckily ARM support for Linux/open source stuff is top notch these days - there's a native package for pretty much anything you can think of.
Honestly, when I loaded this up, I expected to see EC2 being in the majority. So it’s great to see that particular form of groupthink has been avoided here.
The post mentioned that Hetzner's dominance was inflated by the fact that mastodon.social is on it, and they filtered out that particular instance, but the same person who runs that instance also runs mastodon.online, which is similarly huge. Those instances tend to have a higher-than-average rate of people who signed up once and forgot about their account. So I'd expect that if you were somehow (magically) able to weight these by active users, the picture would be different. edit: never mind; I see now that the nodeinfo API does actually report active users already, so this weighting is already happening.
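(For anyone curious, those numbers are easy to pull yourself; e.g. for mastodon.social, which serves the standard nodeinfo 2.0 document:

    curl -s https://mastodon.social/nodeinfo/2.0 | jq .usage.users
    # => { "total": ..., "activeMonth": ..., "activeHalfyear": ... }

The activeMonth/activeHalfyear fields are what an active-user weighting would key on.)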
Another analysis I’d like to see is a breakdown by server software. I’m guessing Mastodon instances are more likely to be run on big beefy VPS servers, while GotoSocial and Akkoma are more likely to be run on home ISP connections because of their dramatically lower system requirements. (4GB RAM minimum vs ~300MB)
It is way too expensive for hobbyist projects. Hetzner/Scaleway/Vultr are all a lot cheaper and since you don’t need anything beyond VMs it makes perfect sense that AWS usage is low.
Another analysis I’d like to see is a breakdown by server software. I’m guessing Mastodon instances are more likely to be run on big beefy VPS servers, while GotoSocial and Akkoma are more likely to be run on home ISP connections because of their dramatically lower system requirements. (4GB RAM minimum vs ~300MB)
On the other hand, computation resources are much cheaper on home hardware compared to a VPS. It’s the opposite for network speeds, but I’m not sure there is much variance in network usage among fediverse server implementations.
Yeah, I’m thinking more about the fact that basically every person who has the operational expertise to run a VPS also has hardware lying around their house that could easily run gotosocial without breaking a sweat; whether that’s a little SBC they bought for a project they never got around to, or a laptop collecting dust in the closet.
Rationally if you DO need a beefy server, it’s much more economical to own it, but I guess people might be averse to the up-front cost because they might not be sure they want to commit to running the server? As someone who doesn’t rent a VPS I guess I can’t say what factors go into that decision. =)
I run a gotosocial on a castoff closet thinkpad, and I gotta say, the latency for me loading my timeline from a 192.168.0.x address is pretty great! I’m sure latency to other servers is much worse, but that’s not visible from a user perspective, so I don’t care.
Maybe the set of people interested in running a fedi server is not restricted to the set of people with stable living conditions, permissive broadband access, or even stable power.
Well my personal VPS has 512MB of RAM, so if I wanted to run a fediverse instance there I couldn’t run Mastodon, but if I were to use an old Thinkpad I think it would have no problem running Mastodon.
I guess my closet thinkpads are older than most peoples’ thinkpads! But I think a lot of people like using an rpi or pine64 etc too, whether for practical reasons or just because it’s more fun I couldn’t say.
The only place where 4GB is still a lot of RAM is Cloud. Web browsers can take 1GB+ with a dozen tabs open.
I see core count as the real issue: consumer machines running on home connections are only just now commonly outfitted with 4-8 cores. My current MBP has 8 cores, but my old 2017 model only had 2. Running a busy service on the old machine would lead to noticeable lag.
Wrapping up all the projects and tasks at both of my workplaces (I can’t help but max out all the tax relief packages my government has graced me with this year) and getting ready for a trip to Dubai for New Year’s Eve. I haven’t seen some of my high school friends for over 10 years, will be nice to catch up with everyone, and the weather ain’t too bad either.
The rise of the machines started with a simple act of rebellion.
Apple OS features as abandonware is a wild idea, and yet here we are. The external monitor issue is genuinely terrible. I have two friends who work at Apple (neither in OS dev) and both have said that they experience the monitor issue themselves.
It is one thing when a company ignores bugs reported by its customers.
It is another thing when a company ignores bugs that its own employees report and that also hit customers.
When I worked for a FAANG, they released stuff early internally as part of dogfooding programs to seek input and bug reports before issues hit users.
Sounds good, except that "you're not the target audience" became a meme because so many bug reports and concerns were shut down with that response.
Edited to add … hm - maybe you’re not saying it was the best OS version, just the best release strategy? I think it actually was the best OS version (or maybe 10.7 was, but that’s just a detail).
Lion was hot garbage. It showed potential (if you ignored the workflow regressions) but it was awful.
10.8 fixed many of Lion’s issues and was rather good.
Snow Leopard was definitely peak macOS.
Are there people who still use 10.6? I wonder what would be missing compared to current MacOS. Can it run a current Firefox? Zoom?
It would be pretty hard to run 10.6 for something other than novelty: the root certs are probably all expired, and you definitely can’t run any sort of modern Firefox on it. The last version of FF to support 10.6 was ESR 45, released in 2016: https://blog.mozilla.org/futurereleases/2016/04/29/update-on-firefox-support-for-os-x/
I know there are people keeping Windows 7 usable despite lack of upstream support; it would be cool if that existed for 10.6 but it sounds like no.
Maybe 10.6 could still be useful for professional video/audio/photo editing software, the type that wasn’t subscription based.
It was before Apple started wanting to make it more iPhone-like, slowly doing what Microsoft did with Windows 8 (which did it in a ‘big bang’) by making Windows Phone and Windows desktop almost indistinguishable. After Snow Leopard, Apple became a phone company, very iPhone-centric, and just didn’t bother with the desktop; it became cartoonish and all flashy, not usable. That’s when I left macOS and haven’t looked back.
Recently, Disk Utility has started showing a permissions error when I click unmount or eject on SD cards or their partitions, if the card was inserted after Disk Utility started. You have to quit and re-open Disk Utility for it to work. It didn’t use to be like that, but it is now, on two different Macs. This is very annoying for embedded development where you need to write to SD cards frequently to flash new images or installers. So unmounting/ejecting drives just randomly broke one day and I’m expecting it won’t get fixed.
Another forever-bug: the animation to switch workspaces takes longer on higher refresh rate screens. This has forced me to completely change how I use macOS to de-emphasise workspaces, because the animation has been just obscenely long since I got a MacBook Pro with a 120Hz screen in 2021. Probably not a new bug, but an old bug that new hardware surfaced, and I expect it will never get fixed.
I’m also having issues with connecting to external screens only working occasionally, at least through USB-C docks.
The hardware is so damn good. I wish anyone high up at Apple cared at all about making the software good too.
Oh, there’s another one: the fstab entries to not auto-mount partitions that match a particular UUID no longer work, and there doesn’t appear to be any replacement functionality (which is annoying when it’s a firmware partition that must not be written to except in a specific way, or it will soft-brick the device).
Oh, fun! I’ve tried to find a way to disable auto mount and the only solution I’ve found is to add individual partition UUIDs to a block list in fstab, which is useless to me since I don’t just re-use the same SD card with the same partition layout all the time; I would want to disable auto mounting completely. But it’s phenomenal to hear that they broke even that sub-par solution.
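For reference, the old approach was a one-line entry in /etc/fstab (edited via vifs; the UUID below is made up), which macOS used to honour:

    # firmware partition: hands off; "noauto" used to suppress auto-mounting
    UUID=0A81F3B1-51D9-3335-B3E3-169C3640360D none hfs rw,noauto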
Maybe it’s an intended “feature”, because 120Hz enabled iPhones and iPads have the same behavior.
Maybe, but we’re talking about roughly 1.2 seconds from the start of the gesture until keyboard input starts going to an app on the target workspace. That’s an insane amount of delay to just force the user to sit through on a regular basis… On a 60Hz screen, the delay is less than half that (which is still pretty long, but much much better)
Not a fix, but as a workaround have you tried Accessibility > Display > Reduce Motion?
I can’t stand the normal desktop switch animation even when dialed down all the way. With that setting on, there’s still a very minor fade-type effect but it’s pretty tolerable.
Sadly, that doesn’t help at all. My issue isn’t with the animation, but with the amount of time it takes from when I express my intent to switch workspace until focus switches to the new workspace. “Reduce Motion” only replaces the 1.2 second sliding animation with a 1.2 second fading animation, the wait is exactly the same.
Don’t update/upgrade to Sequoia! It’s the Windows ME of macOS releases. After an Apple support person couldn’t resolve any of the issues I had, they told me to reinstall Sequoia and then gave me instructions to downgrade to Ventura/Sonoma.
I thought Big Sur was the Windows ME of (modern) Mac OS. I have had a decent experience in Sequoia. I usually have Safari, Firefox, Chrome, Mail, Ghostty, one JetBrains thing or another (usually PyCharm Pro or Clion), Excel, Bitwarden, Preview, Fluor, Rectangle, TailScale, CleanShot, Fantastical, Ice and Choosy running pretty much constantly, plus a rotating cast of other things as I need them.
Aside from Apple Intelligence being hot garbage (I just turn that off anyway), my main complaint about Sequoia is that sometimes, after a couple dozen dock/undock cycles (return to my desk, connect to my docking station with a 30” non-hidpi monitor, document scanner, time machine drive, smart card reader, etc.), the windows that were on my Macbook’s high resolution screen and moved to my 30” when docked don’t re-scale appropriately, and I have to reboot to address that. That seems to happen every two weeks or so.
Like so many others here, I miss Snow Leopard. I thought Tiger was an excellent release, Leopard was rough, and Snow Leopard smoothed off all the rough edges of Tiger and Leopard for me.
I’d call Sequoia “subpar” if Snow Leopard is your “par”. But I don’t find that to be the case compared to Windows 11, KDE or GNOME. It mostly just stays out of my way.
Have you ever submitted these regressions to Apple through a support form or such?
Apple’s bug reporting process is so opaque it feels like shouting into the void.
And, Apple isn’t some little open source project staffed by volunteers. It’s the richest company on earth. QA is a serious job that Apple should be paying people for.
Yeah. To alleviate that somewhat (for developer-type bugs) when I was making things for Macs and iDevices most of the time, I always reported my bugs to openradar as well:
https://openradar.appspot.com/page/1
which would at least net me a little bit of feedback (along the lines of “broken for everyone or just me?”) so it felt a tiny bit less like shouting into the void.
I can’t remember on these. The CalDAV one is well known. Most of the time when I’ve reported bugs to Apple, they’ve closed them as duplicates and given no way of tracking the original bug.
No. I tried being a good user in the past but it always ended up with “the feature works as expected”. I won’t do voluntary work for a company which repeatedly shits on user feedback.
I wonder if this means that tests have been red for years, or that there are no tests for such core functionality.
Sometimes we are the tests, and yet the radars go unread
10.6 “Snow Leopard” was the last Mac OS that I could honestly say I liked. I ran it on a cheap mini laptop (a Dell I think) as a student, back when “hackintoshes” were still possible.
For what it’s worth, I like the TOML configuration syntax since it’s so much more readable for me, and easier to understand compared to BIND, Unbound and other usual suspects. I guess the time has come for it to slowly start appearing in distro package repositories, Docker containers, etc.
I’m going to give it a go starting from next week and try to replace Unbound with Hickory on my personal public DNS resolver. I’ll be in UAE for a month on business and I don’t like state governments interfering with my DNS resolution, so I’ve provisioned a public resolver instance in nearby Bahrain on AWS and put nginx in front of it for DNS over HTTPS. Will be a fun ride.
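The rough shape of it, with hypothetical names and ports (nginx only terminates TLS/HTTP/2; a small DoH shim such as m13253’s doh-server translates /dns-query into plain DNS for the resolver behind it):

    server {
        listen 443 ssl http2;
        server_name doh.example.com;
        ssl_certificate     /etc/ssl/doh.crt;
        ssl_certificate_key /etc/ssl/doh.key;

        location /dns-query {
            # DoH shim listening locally; it queries the resolver on 127.0.0.1:53
            proxy_pass http://127.0.0.1:8053/dns-query;
        }
    }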
Sounds like fun but it’s rarely considered wise to post online about crimes you plan to commit
Daaamn, I’m busted.
But then again, it’s just typical HTTPS traffic towards my personal domain using a TLD of a sovereign country (that just happens to be the same TLD this web site is using, btw). Surely the UAE government would take my word for it, right? Right?
I saw a link the other day about someone tunneling through the great firewall by hiding traffic in a VPN disguised as a webmail portal.
Protesting against and overthrowing the corrupt government over here.
My $WORK task list keeps growing and I’m falling behind on everything but this is much more important right now. Party like it’s 2000 and we’re overthrowing Milošević.
See you later, folks, going back to the streets of Belgrade.
Godspeed.
A little clickbait-y title, but I agree with most of the stuff written in there.
Back in the days when Swift was just getting started, it was amazing for me to follow the development mailing list because I could witness with my own eyes how a real-world, production-grade programming language gets designed out there in the wild. I was able to pick up many underlying concepts of designing a language syntax, the compiler architecture, and all the nuts and bolts of what makes a programming language work under the hood.
Too bad I never got the chance to actually use Swift in any of my projects.
*thunderous applause*
The API stability and low effort while upgrading dependencies is also one of the main reasons why I use Go with HTMX for my backend and templating.
I really like this approach. This is how it should work for (almost) all software out there.
I wanted more details about what kind of issues were in that missing 5%
From the previous posts about it: HTTP/2 support was a big one, plus some other issues that meant it never fully passed the test suite. There was little to no activity to fix that, and dragging it around as EXPERIMENTAL with no expected path out of that status didn’t make sense:
None of that explains what the actual problems are, though. Why did it never fully pass the test suite? Why couldn’t HTTP/2 support be added? Are there specific issues, or is it just “nobody did the work yet”?
Came here to say this. Also, I’m the guy who uses curl a lot on a daily basis, but somehow I totally missed the fact that Hyper was ever there. Granted, I typically don’t compile curl from source and I certainly don’t interact with libcurl, but still. I wonder how many people were simply unaware of an alternative HTTP backend being available in the first place.
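If I remember right, it was a compile-time option only, never in typical distro builds, which would explain the invisibility. Something along these lines (the flag is from memory, so treat this as a sketch):

    # build curl against hyper instead of its native HTTP code
    ./configure --with-hyper=/path/to/hyper
    make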
That kinda supports Daniel Stenberg’s point. If there had been user demand for it, people would have stepped up and publicized it, and then people like you would have become aware of it.
Sure, but that’s kind of the issue here: the adoption of an alternative HTTP library was never communicated enough in the first place, at least that’s my impression, which is exactly why many of us were unaware of its existence. I’m not a curl developer and I certainly don’t follow the development mailing list, but I do keep up with the project overall, check Daniel’s blog every now and then, and follow him on Mastodon, and yet somehow I totally missed the fact that Hyper was included in an experimental capacity.
Maybe the increasing interest in software supply chains and Software Bill of Materials (SBOM) will make such alternative backends more visible? If companies need to start ticking checkboxes on compliance forms for their software, they might actively look for these alternatives written in memory-safe languages. And maybe they would even invest developer time into this.
It’s probably going to take a while for the SBOM ship to pick up speed; but barring any revolutionary events, I guess the ship is underway and will not be stopped any more.
The SBOM ship will drive a massive amount of paperwork for audit teams to be happy filing away.
I hope this somehow manages to boost the velocity of nginx development, but I’m not optimistic at this point. After everything that has happened with it, culminating in one of the core developers forking the project a while ago, I’m somewhat concerned about the project’s future under the current ownership structure.
Maybe it’s just me, but it looks like development has slowed to a crawl, and apart from CVE patches and minor, incremental feature updates and enhancements, there’s simply nothing exciting going on for nginx these days. Real shame; it’s a solid, battle-tested reverse proxy and mail proxy and it pains me to see it almost abandoned by everybody.
I don’t know about you but “exciting” isn’t something I’m looking for in an HTTP server personally.
Nothing wrong with that. However, some of us work at huge content delivery companies and the like, and having a reverse proxy that keeps up to date with all the standards, protocols and cool new features deriving from new best practices/RFCs is very desirable.
For personal use cases, I agree, a reverse proxy should just work with minimal upkeep requirements.
If you’re a huge content delivery company that relies on nginx for your product but you haven’t hired an nginx developer to implement the features you want, then you might understand when I have a difficult time mustering up much sympathy for your plight.
Yes, you can hire nginx developers to develop custom functionality, but if at some point overall upstream development slows to a crawl, then you either need to keep a private fork and maintain it indefinitely, in which case good luck finding new developers in the future, or you need to switch to something else entirely (like the many CDN companies looking into Pingora and what to develop on top of it, for example).
I just happen to know it’s somewhat challenging these days to find proper nginx developers, so the “decay” is already a thing to a certain extent. With too many forks and core developers leaving, it doesn’t bode well for the project overall. I hope I’ll turn out to be wrong one day, though.
It sounds like most large companies have decided to stop investing in the public project, and have either built their own private nginx extensions or moved to a totally different code base. That’s their decision. As for smaller companies and individuals, it seems like the battle-tested nature of nginx outweighs the new standards and features that are mostly just useful for large companies anyway.
Seeing it as a natural consequence of the divergence of interests between different groups of potential users may help clarify what is disappointing or unsatisfying about the current state of the project, and who can reasonably be expected to act to remedy the situation.
Yup. nginx has reached a certain level of maturity where it simply doesn’t need anything else in order to be the 1st choice for many use cases. But protocols, underlying technologies and new features are not standing still and it’s only a matter of time before nginx becomes irrelevant now that the pace of development has decelerated significantly with too many core developers forking the project and going their own way.
In one of my comments down below, I outlined the case of Cloudflare and their Pingora framework they open sourced a while ago. It didn’t happen just like that, and it’s an early indication of what’s in store for nginx should this situation continue for it.
Also, due to my career, I just happen to know that many big CDN players are considering Pingora for their next-gen platform and all of a sudden, a lot of them are looking into the cost-benefit of ditching nginx in the long term and going with their own in-house solution. Party like it’s 1999.
this is the best-case trajectory for almost all projects IMO (minus the move to github)
Can you elaborate on what exciting areas or features you think are missing? I casually peeked at the broader nginx mailing lists, https://mailman.nginx.org/mailman/listinfo, and they seem reasonably active.
As for functionality, the open source version is pretty solid and just works. I love the simple configuration, and the feature set it provides (serves my purpose).
Well, QUIC+HTTP/3 support is still experimental, and that’s slowly becoming an issue for major deployments out there. Also, the configuration language, although extremely versatile, doesn’t support if/else scenarios (yes, I know if is evil in nginx, but I tend to know what I’m doing with it) for more advanced URI matching rules, forcing me to come up with very creative ways of working around that, etc.
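The usual creative workaround is to abuse map as a poor man’s if/else for routing; a rough sketch with made-up names and addresses:

    # pick an upstream by URI prefix, no nested "if" needed
    map $uri $backend_pool {
        default     app_v1;
        ~^/api/v2/  app_v2;
    }
    upstream app_v1 { server 10.0.0.1:8080; }
    upstream app_v2 { server 10.0.0.2:8080; }
    server {
        listen 80;
        location / {
            proxy_pass http://$backend_pool;
        }
    }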
Need session stickiness for upstreams? Good luck finding a 3rd party module that’s still maintained or else be ready to either write it yourself or shell out the monies for the nginx Plus version that’s got that built in.
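The closest the open source version gets, as far as I know, is consistent hashing on something the client already sends; a rough approximation, not equivalent to Plus’s sticky cookies:

    upstream app {
        # pin clients to a backend by hashing a cookie they already carry
        hash $cookie_sessionid consistent;
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }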
There are plenty more features that are simply missing and IMO should be a part of the modern open source reverse proxy solution in 2024.
What’s wrong with shelling out money?
Nothing per se, but in the case of nginx the pricing structure has always been somewhat peculiar, with very steep pricing from the get-go, and eventually you run the risk of ending up in a typical vendor lock-in situation after spending a ton of money along the way, at which point it becomes cheaper and overall better to make your own solution from scratch. Hint: Cloudflare Pingora. They were never an nginx Plus customer, but they spent a ton of money and engineering hours trying to push the limits of nginx, only to realize it’d be better for them to just make their own thing. We’re yet to see amazing things built on top of Pingora; it reminds me of the 2006-2009 period of nginx, when every now and then something new and useful would pop out.
Is there any web server with good HTTP/3 support at the moment? My impression was that everyone was struggling with QUIC and therefore HTTP/3. AFAIU, there are working implementations of QUIC, but the performance for end users isn’t necessarily better than TCP.
The performance of QUIC is a whole other story; however, it’s here to stay and it’s slowly starting to gain traction. There are many implementations for various web servers, none of them getting it right at the moment, but the fact that QUIC support in nginx is still experimental in 2024 is a bit worrying to me.
Why do you say that QUIC is here to stay and starting to gain traction if no web servers properly support it? Why is it worrying for adoption to be slow if there’s no consensus that it’s an improvement?
QUIC is definitely here to stay because too many companies are investing more money into QUIC deployments and further R&D to make it better. One of the reasons the adoption is slow is that it’s such a radical departure from the whole TCP/IP architecture, and it turns out that network stacks and a bunch of other software were simply never that good at handling large amounts of UDP traffic, simply because there was no need to be until now.
It’s going to take a while before everyone in the hierarchy gets things right, but based on what I see in the industry overall, we’re slowly starting to see some semblance of light at the end of the tunnel. In fact, just today a very nice comment appeared on the site which sums up some of the major challenges quite nicely.
That was very informative; thanks.
Honestly, “in 2024,” it sounds like all QUIC implementations are experimental and the worrying thing would be if any of them were not labeled as such.
I expect QUIC to become widely deployed but I was recently pondering having HTTP/3 on my servers and immediately stopped because I had no idea what to use. This makes me wonder if it’s ever going to replace HTTP/1 for most servers or if it will only be used by the largest players.
warp, via this extension: https://hackage.haskell.org/package/warp-quic
Nginx still doesn’t properly proxy provisional responses, which prevents the use of 103 early hints: https://forum.nginx.org/read.php?10,293049
Technically it’s not even a new feature; the 1xx provisional class was already reserved in the HTTP/1.0 spec, RFC 1945, back in 1996.
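For anyone who hasn’t seen one on the wire, an early-hints exchange looks roughly like this (illustrative); the 103 arrives before the final response, so the client can start preloading early:

    HTTP/1.1 103 Early Hints
    Link: </style.css>; rel=preload; as=style

    HTTP/1.1 200 OK
    Content-Type: text/html
    ...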
So, it’s actually been true all these years. They’re listening. At first I thought it was just another conspiracy theory, but then ad targeting became too precise to ignore. That’s one of the reasons why I disallow microphone access to most social media/messaging apps on my iPhone. One thing I like about iOS, for all its faults, is how easy it is to manage access to various parts of the hardware in a somewhat user friendly way. In retrospect, it was a good decision, obviously.
One thing that’s probably confusing is the distinction between “Facebook is listening to you”, which is probably technically false[0], and “some apps on your phone are listening to you and that data is shaping the ads you see on Facebook”.
Most people don’t care about the distinction, but it is an important one, both legally and in terms of the consequences. Because Facebook is centralized and a household name with a reputation, it is probably harder to stop a wide array of marketers/threat actors from listening to you than it is to stop Facebook. But if that data ends up shaping ads on Facebook, the felt effect is very similar, and Facebook just has to act “shocked, shocked!” when one of their partners gets caught.
[0] Basically, I think Facebook would love to do that, but would probably understand that’s a bridge too far and not worth the risk. I would be relatively surprised to find out I was wrong, though not completely shocked.
Yes, I can agree with almost everything you said, except for the claim that FB wouldn’t go so far as to listen to us directly. They have a long history of all kinds of privacy violating shenanigans, culminating in Cambridge Analytica and the like, so it wouldn’t actually surprise me if they started listening directly at one point.
After all, with modern mobile CPUs and all their hardware acceleration, it’s trivial to do all kinds of audio/video transcoding/encoding/decoding without taxing the battery and CPU time too much (at least on iPhones), which is one of the nasty side effects of the huge advances in mobile CPU design in recent years. I hope I don’t get too paranoid about all of this.
I have two reasons:
That said, neither of these is an air-tight argument, and I don’t think it’s impossible Facebook is listening in, I just think it’s unlikely.
I agree on both points, and it’s true they’re much more cautious nowadays because of all the PR messes they’ve had to clean up, but I still think they’re actively investigating ways to listen to us directly. I guess that’s one of the primary reasons they disguised their listening operations through all kinds of “strategic ad partners” and such.
Facebook laid off more than 20k people and nobody leaked this?
Would you like to buy a bridge from me?
Will keep learning Rust. I’m making nice progress but the learning curve is much steeper than I originally anticipated. I like the language a lot, though I’m still not at the point where I can take a look at a random piece of code written by someone else and get a general idea of what it does without focusing very deeply and analyzing it line by line. I guess this will become easier as I keep learning and writing some more code of my own.
Also trying to write some middleware for the startup I founded a few months ago, since we’re in the process of onboarding our first customer. Still no revenue, but it’s exciting.
I’ll probably spend the whole weekend in my apartment under the A/C since it’s over 40 degrees right now. The curse of being in a landlocked country very close to the Mediterranean.
I’m learning Rust this weekend as well! What resources are you using? Anything you’ve found particularly helpful? I’m still going through rust-lang.org/book but thinking of jumping into Rust for Rustaceans.
I’m going through the same book, right now I’m about half way through it :-) The most helpful approach for me so far is to go through a chapter, type all the code examples myself, and then try to break things and figure out how to fix them. From time to time, I implement some of the simpler problems myself and break a lot of stuff, but I’m also picking up a lot of the small details along the way. It’s frustrating but fun at the same time, and having a balance between reading theory and trying to apply some of it in my own mini projects works best for me.
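A made-up but typical example of the kind of thing I break on purpose, just to watch the compiler explain ownership to me:

    fn main() {
        let s = String::from("hello");
        let t = s; // ownership of the String moves to `t`
        // println!("{}", s); // my "break it" step: uncommenting this fails with
        //                    // error[E0382]: borrow of moved value: `s`
        println!("{}", t);
    }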
Rust for Rustaceans is on my reading list as well, but some of my colleagues have also recommended Programming Rust, 2nd edition. I’ve skimmed through it a little bit and it looks very good, albeit the first chapter is somewhat strange: it takes a very hands-on, top-down approach with a lot of code, and only then progresses through more theoretical stuff. I’m not used to that kind of book, but I’ll probably give it a go at some point.
Happy learning!
I felt that a lot of what was in this book was aimed at people writing libraries in Rust and as such not very interesting for me. I also read Rust in Action which is nicely applied but I felt wasn’t advanced enough.
Yeah that seems to be true. Yet I feel like a lot of languages are lacking in good “intermediate” books and this seems to be in that sweet spot. I’ll see what I feel like when I start reading. Rust in Action seems more like on par with the Rust Book, would you say that’s accurate?
Nah, just grabbed my copy to see what the problem was with Rust in Action:
The problem is though that with the breadth the book is going for, it’s very hard to make an “advanced” treatment of anything. Chapter 11 (“Kernel”) teaches you how to write your own OS kernel in under 30 pages. That is an accomplishment in and of itself, but it also means you can’t really go in depth.
Don’t get me wrong, it’s a very good and accomplished book, but I just wish it was 5 times longer and really went into things. For somebody coming fresh out of the Rust Book I think this is a very good follow-up book.
Heading to Rome for some sightseeing and unplugging from real life.
Got back home after my trip to a trade show in Istanbul; now I have to sort through all of the new contacts and potential customers for the startup I’m founding with a few former colleagues after most of us got laid off overnight.
So many new ideas for new networking products and so little time to make them a reality, primarily due to my lack of Rust proficiency. I haven’t written any production-grade code in over 20 years and now I have to learn Rust (I know Linux inside and out, which helps a lot, I guess) as fast as I can so I can start being productive. Lots of fun ahead for me.
What are you doing to learn it? Are you following some sort of curriculum or just writing your startup’s code in rust and learning along the way?
Right now, I’m doing both :-) I’m going through the Rust Programming Language book and trying to do things on my own as I pick up new concepts. That’s always been the best way for me to learn stuff.
I got laid off overnight a few weeks ago, but luckily I have lots of savings and side-gig consulting work with some passive income as well, so this week I’ll be in Istanbul at a trade show with some of my friends, trying to land the first deals for the new startup we’re in the process of founding.
It’s time to try to be my own boss, it feels both exhilarating and terrifying in parallel.
All the best on your journey!
Good luck to you.. should be an adventure, hopefully a fun one!
That sounds like a very exciting new adventure; keep us posted on how it goes!
I had so many plans for the weekend, but I somehow managed to catch a nasty flu the other day so I’ll have to stay at home and get some rest. Lots of rest. At least my reading list is full, so it won’t be that bad.
I never understood the hackintosh community the same way I could understand people trying to run Linux on things like toasters and light bulbs.
Back when the PPC -> Intel transition happened I ran… well, definitely not the earliest Intel Hackintosh, but one of the earliest. I lucked out on pretty compatible hardware which was close enough to “real” Intel Macs that all it took was some fiddling with a vmware image that was floating around the shadier torrent websites at the time. I got an Intel Mac after a while (a Mac Mini) but I continued to read Hackintosh forums for a while.
Now the main reason why I went at it was that I had a lot of free time and wanted something cool to fill it with, but I knew people who ran Hackintoshes in production, for real reasons. E.g. people who did video editing, were independent contractors rather than sheikhs or megacorps so couldn’t afford a Mac Pro, and couldn’t justify the money for a powerful enough iMac, given that they’d already spent a fortune on good monitors. Few Hackintosh systems (if any) ran everything well, but if you only needed a couple of applications and you got the right hardware, depending on when the last product line refresh had happened, you wouldn’t just get “a better deal”, you could get a better machine than anything that Apple sold.
It’s worth remembering a couple of things. First, that this was back when the various services around Macs were not quite as well integrated, and not across as many devices as today. Second, that this was also around the time when (though people in the design ivory tower casually ignored it) product line refreshes were kind of hit or miss, this was the time of bendy batteries and dead displays. Third, that this was around the time of a big refresh and while Rosetta was good, it was nowhere near as smooth as the M1-era Rosetta – depending on what applications you ran, slow, crashy software was what you got on legit Apple hardware, too.
And finally, and probably the biggest thing, that for a brief period, OS X was all the rage for very real reasons. It was not a great Unix but then neither was Linux, it could be less fiddly to network than Windows if you didn’t run an AD environment, it had a rich application ecosystem that other Unices lacked, at a time when cloud/service-based applications weren’t really a thing, and it was pretty well-maintained. Some bugs lingered forever but OS X 10.n could generally be expected to be better than OS X 10.n-1, rather than a compromise on development budget. Lots of people really wanted OS X, and didn’t really care about the hardware. This just isn’t as big a deal today. macOS has been under-maintained for a long time; few people who wouldn’t be just as happy with an iPad really want to run it over either Windows or Linux.
I cannot speak for everyone else who was into Hackintosh’ing at any point, but for myself it was basically about wanting to “have my cake and eat it too.”
I really liked Mac OS but I disliked the lack of customization, upgradability, and pricing of the hardware. Eventually I bought a Mac Pro 2010 instead, as the upkeep on a Hackintosh felt worse than Linux in a lot of ways. Generally speaking, you were unable to install Mac OS updates with 100% confidence that there wouldn’t be issues. So, I was always backing up my entire system drive beforehand.
As far as hardware compatibility went, as long as you stuck to the lists that people put together, you would usually have a good experience. Felt like a lot of the hardware issues people encountered were because they followed a list but then thought it was OK to swap one or two things out.
If I could go back in time and do it over again, I probably would’ve skipped the Hackintosh in the first place and just bought a Mac Pro sooner. Hackintosh was fun for a while, but kind of just became a nuisance after the honeymoon period wore off.
All just IMHO, of course!
I guess it’s worth saying that the “lack of customization, upgradability, and pricing of the hardware” pushed me away from Mac OS ultimately, as I now just use Linux everywhere instead on non-Apple hardware.
I built my first Hackintosh, an i7-860, in late 2009 or very early 2010. For compiling Firefox (my job at the time) it thrashed the 8 core 2.26 GHz Mac Pro Moz gave me. I continued with Hackintoshes (i7-4790K, i7-6700K) until 2019.
The only software or software update problem I ever had was always the same one: sometimes I had to re-install the drivers for the Realtek sound chips PC motherboards had. No big deal … I just kept the installer handy. A matter of a few seconds to run the installer again after an OS update, and then a reboot.
And, yes, I built new hardware using recommended components. I didn’t try to repurpose random old machines, which is where most problems lie (it’s the same with running Linux on laptops sold with Windows today).
I’m not running a hackintosh, but a now-unsupported 2014ish Mac Mini as my TV PC. I’ve been using OpenCore Legacy Patcher to make it possible to keep it up to date. And… yep, the fear of updates is real.
I had one update lead to a reinstall; it was unfortunate because the OS itself booted fine, and it was installing the 3rd party patches that broke it. I guess as things get ripped out, the fixes are going to become more invasive and prone to bugs. I was hoping to keep it going till Apple drops x64 support entirely, but that is looking more and more unlikely.
It’s sad because it’s perfectly good hardware, fit for purpose as a 1080p streaming or local video box.
Back in the day during high school years, my parents couldn’t afford to get me a Mac, even a used one. All I had was a crappy MSI laptop but I always wanted to try Mac OS X as it was called back then. There was this aura of a well designed user interface and the whole Aqua design language appealed to me. Hackintosh was the only way.
It was a very gentle introduction to some of the basic principles of operating system design. Getting to run Mac OS X on non-Apple hardware made me learn about kernel vs user space, drivers (or kernel extensions, “kexts”, in Apple lingo), basic I/O system routines/system calls, etc. Sure, I could learn this stuff through tinkering with Linux as well, which I did, but then again, that user interface and the challenge of getting something non-standard running on your machine… it was magical.
A few years later, I started getting my first freelance web development gigs and in due time, I bought myself a nice MacBook Pro and ditched the whole Hackintosh thing because life happened, and all of a sudden, I didn’t have a whole day to tinker with another kext written by a Russian teenager in Vladivostok that I obtained from an obscure anonymous FTP host with dial up transfer rates just to see if WiFi would work again after a minor .1 system update.
It was fun while it lasted.
I can’t speak for anyone else but I can tell you why I did it.
I was broke, and I knew PCs, Macs, and Mac OS X – I ran OS X 10.0, 10.1 and 10.2 on a PowerMac 7600 using XPostFacto.
I got the carcase of a Core 2 Extreme PC on my local Freecycle group in 2012.
https://twitter.com/lproven/status/257060672825851904
No RAM, no hard disks, no graphics, but case/mobo/CPU/PSU etc.
I took the nVidia card and hard disks from my old Athlon XP. I got the machine running, and thought it was worth a try since it was mostly Intel: Intel chipset, Intel CPU, etc.
I joined some fora, did some reading, used Clover and some tools from TonyMacX86 and so on.
After two days’ work it booted. I got no sound from my SoundBlaster card, so I pulled it, turned the motherboard sound back on, and reinstalled.
It was a learning experience but it worked very well. I ran Snow Leopard on it, as it was old enough to get no new updates that would break my Hack, but new enough that all the modern browsers and things worked fine. (2012 was the year Mountain Lion came out, so I was 2 versions behind, which suited me fine – and it ran PowerPC apps, and I preferred the UI of the PowerPC version of MS Word, my only non-freeware app.)
I had 4 CPU cores, it was maxed out with 8GB RAM, and it was nice and quick. As it was a desktop, I disabled all support for sleep and hibernation: I turn my desktops off at night to save power. It drove a matched pair of 21” CRT monitors perfectly smoothly. I had an Apple Extended keyboard on an ADB-to-USB convertor since my PS/2 ports weren’t supported.
It wasn’t totally reliable – occasionally it failed to boot, but a power cycle usually brought it back. It was fast and pretty stable, it ran all the OS X FOSS apps I usually used, it was much quicker than my various elderly PowerMacs and the hardware cost was essentially £0.
It was more pleasant to use than Linux – my other machines back then ran the still-somewhat-new Ubuntu, using GNOME 2 because Unity hadn’t gone mainstream yet.
Summary: why not? It worked, it gave me a very nice and perfectly usable desktop PC for next to no cost except some time, it was quite educational, and the machine served me well for years. I still have it in a basement. Sadly its main HDD is not readable any more.
It was fun, interesting, and the end result was very usable. At that time there was no way I could have afforded to buy an Intel Mac, but a few years, one emigration and 2 new jobs later, I did so: a 2011 i5 Mac mini which is now my TV-streaming box, but which I used as my main machine until 2017 when I bought a 27” Retina iMac from a friend.
Cost, curiosity, learning. All good reasons in my book.
This year I Hacked an old Dell Latitude E7270, a Core i7 machine maxed out with 16GB RAM – with Big Sur, because its Intel GPU isn’t supported in the Monterey I tried at first. It works, but its wifi doesn’t, and I needed to buy a USB wifi dongle. But performance wasn’t great, it took an age to boot with a lot of scary text going past, and it didn’t feel like a smooth machine. So, I pulled its SSD and put a smaller one in, put ChromeOS Flex on it, and it’s now my wife’s main computer. Fast, simple, totally reliable, and now I have a spare wifi dongle. :-/ I may try one of my old Thinkpads next.
It is much easier to Hackintosh a PC today than it was 10-12 years ago, but Apple is making the experience less rewarding, as is their right. They are a hardware company.
I ran a hackintosh years ago and like the article mentions: price/performance was pretty hard to beat. That and the fact that I could have a decent gaming experience on the same machine by dual booting windows was great. It could be a little finicky once a year when a new macos dropped, but other than that it was pretty painless.
I never tried it on a laptop though. I assume the drop in user experience is far greater there.
I remember when I first started using Linux as my daily driver, I would install all the graphical mods to make GNOME look like OS X. Flipped buttons on windows, a global menu bar, window theming, and a dock. It wasn’t a particular improvement in workflow or in life generally, but it was pretty. At that time, the appeal of Apple products to me was all about novelty.
At some point I looked into buying what I needed for a true hackintosh, but after actually using a Macbook for a while, I realized that the things I actually cared about workflow-wise would’ve been worse for me on OS X vs Linux, so I dropped the idea. But regardless, I think many people are driven by novelty: doing things that aren’t simple or straightforward just for the sake of doing them.
I personally was big into it when I was a teen and my parents couldn’t afford to get me a Mac. Enough tech “influencers” had Macs that I put a decent amount of effort into getting a piece of the pie on cheaper hardware.
Thank you everyone for the detailed replies. It sounds like the challenges are similar to getting Linux to work on some random laptop.
With Linux I totally get the idea of “Here’s a people’s operating system and I’m going to put it on this thing I cobbled together out of parts that fell off a lorry.” I like the Mac OS look and feel, but not enough to spend days fighting to put a proprietary OS on some random hardware.
They do seem to be going in different directions: Linux is getting easier to operate productively on random machines and Mac OS is getting harder.
My wife had always been a Mac user. She wanted a new Mac. We could afford $500 but not $1500. At the time, there wasn’t an official Mac for $500.
Now she has a Mac for work, issued by work, but does everything else on Linux.
I ran a Hackintosh on my main PC for a short period of time in 2020 to work around DaVinci Resolve’s idiotic hardware requirements for certain codecs - in particular I was interested in GPU-accelerated encoding and decoding of H.265. (Funny to see they’re just as idiotic now as they were back then.)
Dual-booting macOS was really cumbersome compared to just using my Linux though, so I later abandoned it in favor of just using plain ol’ Kdenlive. My video editing needs weren’t that advanced at the time anyways and using a Linux-native editor was way less of a UX barrier to overcome. (In particular I was always frustrated with the mouse acceleration curve differences between macOS and GNOME, and macOS’s horrendous window management.)
Fast-forward to last year, I had reformatted my EFI boot partition a few times by that time, so my OpenCore installation was long gone. Too lazy to set it up again, and with a need for some more advanced video editing software at the time, I installed DaVinci Resolve on my Windows dual boot, and… I had basically no performance problems editing 1080p60 gameplay footage whatsoever, even without GPU acceleration.
So yeah, I no longer have a need for a Hackintosh, but it was fun working around Apple’s walled garden. That smugness you feel when you have a rare macOS-compatible hardware configuration on your main PC and can run it mostly without issues, that is something.
Getting ready for a week-long vacation in Singapore. I can’t stand European winter anymore and I need to disconnect from all work-related stuff for a bit.
Europe has lots of variation during winter, it barely rains in southern Europe anymore!
Yes, it just so happens it’s pretty warm over here this week, but it won’t stay like this for much longer, we’re going back to under-10C temperatures from Monday. No thank you, I’ll just switch continents, even for just one week. It also helps that I have 5 unused vacation days from last year, I’m so not going to allow them to expire.
I suspect this will be somewhat of a bittersweet “careful what you wish for” outcome - once Chrome is on iOS properly we will see “Your browser is unsupported” messages, pointing people towards Chrome and the other engines won’t keep a meaningful marketshare. 🫤
Considering EU rules on vendor self-preference, I suspect Google will need to vastly reduce how much they push Chrome. As for 3rd parties pushing people towards Chrome, if that becomes too much, we’ll need further legal action.
Every Google property has pushed Chrome at every non-Chrome user for a decade; they even brag about deciding to kill IE versions by having their properties drop support, and the EU did not care. The EU has never demonstrably cared about how Google prioritizes its own browser, despite Google deliberately removing basic privacy features from WebKit from day 1. Only now that they’ve got tracking information on the majority of sites on the web and have successfully undermined third party cookie blocking are they talking about doing something they explicitly removed from WebKit, and they’re marketing it as if it were equivalent to the systems Mozilla and Apple introduced in response to Google’s extensive invasion of user privacy.
And yet the EU does nothing.
I think anyone thinking the EU will do anything that stops Google’s ongoing abuse is kidding themselves.
The rules I’m talking about are part of the DMA, the ones that start applying in a few weeks.
Will Google really put in the effort to port Chrome just for the EU? I’m skeptical.
They certainly have more than sufficient internal resources to do so, it wouldn’t surprise me if they do it just so they can stick another finger towards Apple, if you know what I mean.
Given it lets them control tracking within the browser regardless of what Apple allows through Safari APIs, yes I rather suspect they would do.
Reading the actual requirements to get the entitlement, I’m not sure it does? Apple won’t give you permission to be a 3rd-party web browser unless you block third party cookies by default and do origin-partitioned storage. You’re also not allowed to “sync cookies and state between the browser and any other apps, even other apps of the developer” - I’m a little unclear on what this means honestly. (Like, would that prevent Google from using your synced Chrome history in other Google apps…?)
They clearly still have some levers to pull, or at least think they do.
For the latter point I think it is “you can’t use the browser state to communicate with other apps”, not “a user can’t use the same account with multiple apps”. E.g. the goal is to make it so that if you use the YouTube app and Chrome but aren’t logged into the same account on both, Google can’t just use on-device channels to link the app user information to the browser user information. If the user explicitly chooses to log in to the same account in both, then they’ve done that themselves manually.
It’s probably worth it to Google purely as a wedge. Once it’s built and working on EU iPhones, Google has the easy argument to other regulators that only Apple is holding back Chrome on iPhone at that point.
They already did. There is an iOS native version of Chromium living in their code repository. Probably far from complete and now they have to make it work with the new APIs that Apple just published. But they have been working on it for sure.
The Chrome only mobile web is coming.
A quick search turned up some data suggesting that there may be over 100M iOS App Store users in the EU. Is that a large enough set of users for Google to want to invest? All based on assumption, but I’d think they’d look at revenue per Chrome user and compounding revenue from installations of other Google apps by these users.
Based on this bug, they already started work on getting Blink running on iOS some time ago. I think it’s just a matter of time for it to be ported and made the default experience, as they’ve got years of cross-platform experience. How successful it’ll be is a different matter.
Would it be so hard a port? Clearly WebKit already runs for Safari. I know they renamed their fork of WebKit, but it’s still gonna be 90% the same…
It’s incredibly different at this point.
If you recall, when the Blink fork started there were some articles about how they’d removed millions of lines of “Apple code”. What they’d removed was all the platform abstraction logic, the qt, gtk, wx, etc. ports, and JavaScriptCore (with its support for more or less every platform, alongside the optimising JITs for armv7, armv8, x86, x86_64, MIPS, SH4, …). Since then they’ve aggressively removed most of the other abstraction layers.
That said, I can’t imagine they haven’t had some kind of build on jailbroken phones.
That would contribute to their revenue. I doubt they would let it slip through their fingers.
There’s a lot more people in the EU than in the US, so I suspect the answer is yes
iOS and Apple are far less popular compared to the US though. Sure, they’re still a major phone vendor, but whereas in the US they have a 55-60% market share, in Europe they’re closer to 30-35%.
Still, that translates to similar numbers of actual iPhone users (200-240 million each).
I would frankly be surprised if they haven’t had Chrome running on iOS internally for years just to be ready to respond immediately to an Apple policy change.
that might be an impetus for further EU regulations.
Interesting investigation. It also shows the dangers in groupthink - Hetzner is really popular right now among geeks. Like the post says, it’s (for now, still) better than Amazon/Azure/Google but collectively as tech people we should think about ways to avoid pumping up these companies to become “too big to fail”.
Most of us are fed up with “big tech”, but by choosing the same companies over and over, we make these companies so disgustingly big, creating the next generation of big tech. Note that this also encourages “enshittification” because we would stick with them once we’re used to it as it’s “the safe choice”.
The primary reason I host my Mastodon instance on Hetzner is price - I can get a very powerful ARM virtual instance for around 10 EUR, with enough vCPUs and RAM for a fully-powered Mastodon instance with enough nginx/Rails workers, Elasticsearch for indexing, and other niceties.
If I wanted to host that same instance with the exact same hardware in my home country, for example, the price would be about 100 times higher (that’s not a typo) and that’s with a ~12% discount with a yearly prepayment.
No matter which other provider I’d go for, the price would be at least double what I currently pay, and there’s only so much money I’m willing to spend on what is essentially a hobby side project, especially in this economy.
(Before you even ask - I currently don’t have the means to self-host it at home due to my tenancy/rental agreements and the general unwillingness of my ISP to meet me half way because of reasons.)
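For scale, the knobs I mean live mostly in Mastodon’s .env.production; the values here are illustrative, not a recommendation:

    WEB_CONCURRENCY=3   # puma worker processes
    MAX_THREADS=5       # threads per puma worker
    ES_ENABLED=true     # full-text search via Elasticsearch
    ES_HOST=localhost
    ES_PORT=9200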
How much RAM/CPU are you getting for that price?
16 GB, 8 vCPU for ~10 €
Their current price list is €15 for that size. It looks, given their RAM scaling, as if they oversubscribe shared VCPUs 2:1. The big cloud providers don’t oversubscribe for most VM types. How much that matters probably depends a lot on who you’re sharing with. Most IaaS users use an average of well under 50% of their CPU and the Ampere hosts have quite high core counts, so there’s quite a high chance that you can get 100% of your cores when you need them.
The bigger problem with oversubscription is that it leads to weird performance anomalies. Most kernels assume that all CPUs are there all the time, so will IPI another core and wait, so if one of your other VCPUs is not scheduled it can lead to all of the others sitting in spin loops to wait for it. On real hardware, this will be a few hundred / thousand cycles, on an oversubscribed VM it can be milliseconds. This gets worse as the core counts go up. Hyper-V has some paravirtual assists for this kind of situation but I don’t know what Hetzner uses and whether guests support it.
What you’re quoting is the price including VAT/GST/whatever. For us outside of EU/UK/EEA/EFTA/whatever everything is significantly cheaper compared to you folks (in general), wherever you may reside
CAX31 on their page is ~10 € VAT 0%, or 12.49 € with VAT 24%. I have 5 of those.
I mostly run my side projects and some non-public-facing services on those.
Whatever their oversubscribe ratio is, the price vs. performance ratio has been a very positive experience for me.
Correct, the cheapest option out there AFAIK
Could you link that offer? I can only find 14.27€ per month (which is also cheap).
You found the correct offer, but it’s likely with VAT/GST/sales tax included, which is not the case where I reside so it’s a lot cheaper for me
Sure! CAX31 on this page is ~10 € VAT 0%, or 12.49 € with VAT 24%.
https://www.hetzner.com/cloud
Just curious, is there a particular reason you want an ARM server for that function?
Nope, it’s just that ARM instances are significantly cheaper compared to AMD/Intel ones, and luckily ARM support for Linux/open source stuff is top notch these days, there’s a native package for pretty much anything you can think of
The danger of anti-competitive market dominance is significantly muted if customers only use standard services with plenty of alternative providers.
Honestly, when I loaded this up, I expected to see EC2 being in the majority. So it’s great to see that particular form of groupthink has been avoided here.
The post mentioned that Hetzner’s dominance was inflated by the fact that mastodon.social is on it, and they filtered out that particular instance, but the same person that runs that instance also runs mastodon.online, which is similarly huge. Those instances tend to have a higher-than-average rate of people who signed up once and forgot about their account. So I’d expect if you were somehow (magically) able to weight these by active users, the picture would be different. edit: never mind; I see now that the nodeinfo API does actually report active users already, so this weighting is already happening.
Another analysis I’d like to see is a breakdown by server software. I’m guessing Mastodon instances are more likely to be run on big beefy VPS servers, while GotoSocial and Akkoma are more likely to be run on home ISP connections because of their dramatically lower system requirements. (4GB RAM minimum vs ~300MB)
It is way too expensive for hobbyist projects. Hetzner/Scaleway/Vultr are all a lot cheaper and since you don’t need anything beyond VMs it makes perfect sense that AWS usage is low.
On the other hand, computation resources are much cheaper on home hardware compared to a VPS. It’s the opposite for network speeds, but I’m not sure there is much variance in network usage among fediverse server implementations.
Yeah, I’m thinking more about the fact that basically every person who has the operational expertise to run a VPS also has hardware lying around their house that could easily run gotosocial without breaking a sweat; whether that’s a little SBC they bought for a project they never got around to, or a laptop collecting dust in the closet.
Rationally if you DO need a beefy server, it’s much more economical to own it, but I guess people might be averse to the up-front cost because they might not be sure they want to commit to running the server? As someone who doesn’t rent a VPS I guess I can’t say what factors go into that decision. =)
I run a gotosocial on a castoff closet thinkpad, and I gotta say, the latency for me loading my timeline from a 192.168.0.x address is pretty great! I’m sure latency to other servers is much worse, but that’s not visible from a user perspective, so I don’t care.
Maybe the set of people interested in running a fedi server is not restricted to the set of people with stable living conditions, permissive broadband access, or even stable power.
Well my personal VPS has 512MB of RAM, so if I wanted to run a fediverse instance there I couldn’t run Mastodon, but if I were to use an old Thinkpad I think it would have no problem running Mastodon.
I guess my closet thinkpads are older than most people’s thinkpads! But I think a lot of people like using an rpi or pine64 etc. too; whether for practical reasons or just because it’s more fun, I couldn’t say.
The only place where 4GB is still a lot of RAM is the cloud. Web browsers can take 1GB+ with a dozen tabs open.
I see core count as the real issue: consumer machines on your home connection have only recently started shipping with 4-8 cores as standard. My current MBP has 8 cores but my old 2017 model only had 2. Running a busy service on the old machine would lead to noticeable lag.
Wrapping up all the projects and tasks at both of my workplaces (I can’t help but max out all the tax relief packages my government has graced me with this year) and getting ready for a trip to Dubai for New Year’s Eve. I haven’t seen some of my high school friends for over 10 years, will be nice to catch up with everyone, and the weather ain’t too bad either.