If I recall properly, the TPM is explicitly not designed to handle local physical attacks like this - there was a decapsulation talk years back about a TPM that was interesting, but also “Firmly outside the design model of the system.”
The main thing to be aware of is that without a password/PIN/etc, the TPM provides no meaningful added security when the system is entirely together. You can just boot the system and read the disk, or sniff the bus, etc.
So I don’t see how this is any worse than a non-encrypted disk. I’m not sure I agree it decreases the security. Though in this case, yes, it’s lower security than the fTPM.
But fundamentally, if you have a “disk encryption key” oracle that spits out the answer to whoever asks, it’s not going to be hard to get keys out of it should someone want them. You have to prove you know something first, or it’s not adding much that’s useful when the system is assembled. It does, however, make it hard to get the disk contents should the disk be removed from the laptop, such as in recycling.
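For the Linux/LUKS flavor of this, here’s a minimal sketch of what “prove you know something first” looks like in practice - binding the unlock to the TPM plus a PIN, rather than to a bare TPM that unseals for whoever boots the box. This assumes systemd-cryptenroll is available, and the partition path is a placeholder:

```python
# Sketch: enroll a LUKS volume against the TPM *with* a PIN, so the TPM
# only unseals the key after something is proven, not for anyone who boots.
# Assumes systemd-cryptenroll (systemd 251+); the device path is a placeholder.
import subprocess

LUKS_DEV = "/dev/nvme0n1p3"  # hypothetical encrypted partition

subprocess.run(
    [
        "systemd-cryptenroll",
        "--tpm2-device=auto",    # use the system TPM
        "--tpm2-pcrs=7",         # also bind to the Secure Boot state
        "--tpm2-with-pin=yes",   # require a PIN before the TPM will unseal
        LUKS_DEV,
    ],
    check=True,
)
```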
Does the system being entirely together include booting connected to the same LAN as when shut down? I’ve always wondered this - it would be an easy way to keep entire workstations from being removed from an office and then read. It would probably result in some false positives, but false-positive BitLocker lockouts were the norm when I used it at work; I probably had to type in my recovery key once a month or more.
Knowing which LAN you’re connected to is hard, if even possible, unless you’re using 802.1x to authenticate the connection - pretty common in business, but rare in home settings.
Speaking of which, I should set up 802.1x for the work portion of my home network now that the kids are getting old enough to be “creative” with their tech.
This does make me regret selling my M1 Mini. Apple pissed me off enough with the whole CSAM/on-device scanning thing (along with a range of other things) that I got rid of it, at some substantial loss, figuring that the Linux projects on it wouldn’t come to anything useful (at the time, they were still working out some basic functionality). Asahi has come a long way since then.
They’ve done a really, really good job with hardware reverse engineering, and have quite exceeded my expectations for what they’d accomplish - I’m glad to see it!
I’m curious as to why you regret selling the hardware. A company proposes awful spyware for its hardware, to which you respond by selling its product, but now you regret it because you can’t test out software that the same company is not supporting or encouraging in the slightest? What is it about Asahi that makes you want an Apple device? I know Apple makes good hardware, but I don’t think the hardware is THAT good that some independently contributed software is a game changer for it. Especially when many other vendors make pretty great hardware that supports the software.
I like ARM chips, and I utterly love what Apple has done with the SoC design - huge caches, and insane gobs of memory bandwidth. The performance out of it backs the theory I’ve run with for a while that a modern CPU is simply memory limited - they spend a lot of their time waiting on data. Apple’s designs prove this out. They have staggeringly huge L1 cache for the latency, and untouchable memory bandwidth, with the performance one expects of that. On very, very little power.
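As a rough illustration of the “CPUs mostly wait on memory” point, here’s a crude STREAM-style copy benchmark - the array size and iteration count are arbitrary and it only gives a ballpark figure, but it makes the bandwidth ceiling visible:

```python
# Crude memory-bandwidth estimate: time large array copies that blow out
# the caches, so the bottleneck is DRAM rather than the ALUs.
import time
import numpy as np

N = 100_000_000            # ~800 MB of float64, far larger than any cache
a = np.ones(N)
b = np.empty_like(a)

best = float("inf")
for _ in range(5):
    t0 = time.perf_counter()
    np.copyto(b, a)        # reads 8*N bytes, writes 8*N bytes
    best = min(best, time.perf_counter() - t0)

gb_moved = 2 * 8 * N / 1e9
print(f"~{gb_moved / best:.1f} GB/s effective copy bandwidth")
```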
Various people have observed that I like broken computers, and I really can’t argue. I enjoy somewhat challenging configurations to run - Asahi on Apple hardware tickles all those things, but with hardware that’s actually fast. I’ve used a variety of ARM SBCs as desktops for years now - Pis, ODroids, I had a Jetson Nano for a while, etc. They’re all broken in unique and novel ways, which is interesting, and a fun challenge to work around. I just didn’t expect Asahi to get this usable, this quickly.
I sold the M1 Mini and an LG 5K monitor at about a $1000 loss over what I’d paid for them (over a very short timescale - I normally buy computers and run them to EOL in whatever form that takes). I didn’t have any other systems that could drive the 5K, so it made sense to sell both and put a lower-resolution monitor in the spot, but the M1/LG was still the nicest computer I’ve ever used in terms of performance, responsiveness, etc. And it would, in Rosetta emulation, run Kerbal Space Program, which was about the one graphically demanding thing I used to do with computers.
In any case, getting rid of it drove some re-evaluation of my needs/desires/security stances, and now I run Qubes on just about everything, on x86. Painful as it is to say, relative to my previous configurations, this all qualifies as “Just works.”
What did you replace it with? The basic Mac Mini M2 is really cheap now at ~$600. I run NixOS on a decade-old NUC, and it is tempting as a replacement. A good fanless x86 is easily 50% more and not nearly as energy efficient as the Mini.
However, right now one needs to keep macOS to do firmware updates. And AFAIK there is limited video output support on the M2.
An ASRock 4x4 Box 5000U system, with a 5800U and 64GB RAM. It dual boots QubesOS and Ubuntu, though almost never runs Ubuntu, and the integrated graphics aren’t particularly good (the theory had been that I could do a touch of light gaming in Ubuntu, but reality is that it doesn’t run anything graphically intensive very well). It was not a well thought out purchase, all things considered, for my needs. Though, at the time, I was busily re-evaluating my needs and assumed I needed more CPU and RAM than I really do.
I’m debating stuffing an ODroid H3+ in place of it. It’s a quad-core Atom x86 single-board computer of the “gutless wonder” variety, and it runs Qubes quite well, just with less performance than the AMD box. However, as I’ve found out in the migration to QubesOS, the amount of computer I actually need for my various uses is a lot less than I’ve historically needed, and a lot less than I assumed I would need when I switched this stuff out. It turns out, the only very intensive things I do anymore are some bulk offline video transcoding, and I have a pile of BOINC compute rigs I can use for that sort of task.
I pretty much run Thunderbird, a few Firefox windows, sometimes Spotify or PlexAmp (which, nicely, run on x86 - that was a problem in my ARM era), a few communications clients, and some terminals. If I have dev projects, I spin up a new AppVM or standalone VM to run them in, and… otherwise, my computers just spend more and more time shut down.
the same company is not supporting or encouraging in the slightest
That’s not quite true — Apple went out of their way to make boot secure for third-party OSes as well, according to one of the Asahi devs. So I don’t see them doing any worse than basically any other hardware company - those just use hardware components that already have a reverse-engineered driver, or in certain cases drivers contributed by the producing company, e.g. Intel. Besides not having much incentive for opening up the hardware, companies often legally can’t do it due to patents not owned by them, only licensed.
And as for the laptop space, I would argue that the M-series is that good — no other device currently on the market gets even into the same ballpark on the performance-efficiency curve, to the point that I can barely use my non-M laptop as a mobile device.
This is the attitude that I don’t understand. To pick four big chip-makers, I would rank Apple below nVidia, Intel, and AMD; the latter three all have taken explicit steps to ensure that GNU/Linux cleanly boots with basic drivers on their development and production boards. Apple is the odd one out.
On my M1 Mac laptop I can work a full 10-hour work day, compiling multiple times, without once plugging in my machine. The compiles are significantly faster than on other machines, and the laptop barely gets above room temperature.
This is significantly better, hardware-wise, than any other machine I could buy or build on the market right now. It’s not even a competition. Apple’s hardware is so far ahead of the competition right now that it looks like everyone else is asleep at the wheel.
If you can run your favorite OS on it and get the same benefits why wouldn’t you?
My most recent laptop purchase was around $50 USD. My main workstation is a refurbished model which cost around $150. My Android phone cost $200. For most of my life, Apple products have been firmly out of budget.
That is certainly a fair consideration. But for those who can afford it, there are many reasons the price is worth it.
They had to, as servers predominantly run Linux and they are in the business of selling hardware components. Apple is not, so frankly I don’t even get the comparison.
Also, look back a bit earlier and their track records are far from stellar, with plenty of painstaking reverse engineering done by the Linux community. And nvidia is not regarded as a good Linux citizen by most people - remember Linus’s middle finger? Only recently have there been changes so their video cards can actually be used by the kernel through the modern display buffer APIs, instead of having to install a binary blob inside the X server that comes with their proprietary driver.
The performance and fanless arguments are icing on the cake and completely moot to me. No previous CPU was ever hindering the effectiveness of computing. While I’m glad a bunch of hardware fanatics are impressed by it, the actual end result of what the premium, locked-down price tag provides is little better in real workload execution. Furthermore, they’re not sharing their advancements with the greater computing world; they’re hoarding it for themselves. All of which is a big whoop-dee-doo if you’re not entranced by Apple’s first-world computing speeds mindset.
Does Nvidia, Intel or AMD actually “share their advancements with the greater computing world”?
AMD definitely does; their biggest contribution is amd64. Even nVidia contributes a fair amount of high-level research back to the community in the form of GPU Gems.
If you count GPU Gems, I don’t see how Apple’s research is not at least equally valuable.
AMD definitely does; their biggest contribution is amd64
Which AMD covered in a huge pile of patents. They have cross licensing agreements with Intel and Via (Centaur) that let them have access to the patents, but anyone else who tries to implement x86-64 will hear from AMD’s lawyers. Hardly sharing with the world. The patents have mostly expired on the core bits of x86-64 now, but AMD protects their new ISA extensions.
Didn’t they back off from this at least, though? It almost seemed like a semi-rogue VP pushed the idea, which was subsequently nixed, likely by Tim Cook himself. The emails leaked during that debacle point to a section of the org going off on its own.
They did. Eventually. After a year of no communications on the matter beyond “Well, we haven’t shipped it yet.”
I didn’t pay attention to the leaked emails, but it seemed an odd hill to die on for Apple after their “What’s on your phone is your business, not ours, and we’re not going to help the feds with what’s on your device” stance for so long. They went rather out of their way to help make the hardware hard to attack even with full physical access.
Their internal politics are their problem, but when the concept is released as a “This is a fully formed thing we are doing, deal with it,” complete with the cover fire about getting it done over the “screeching voices of the minority” (doing the standard “If you’re not with us, you’re obviously a pedophile!” implications), I had problems with it. Followed by the FAQ and follow on documents that read as though they were the result of a team running on about 3 days of no sleep, frantically trying to respond to the very valid objections raised. And then… crickets for a year.
I actually removed all Apple hardware from my life in response. I had a 2015 MBP that got deprecated, and a 2020 SE that I stopped using in favor of a flip phone (AT&T Flip IV). I ran that for about a year, and discovered, the hard way, that a modern flip phone just is a pile of trash that can’t keep up with modern use. It wouldn’t handle more than a few hundred text messages (total) on the device before crawling, so I had to constantly prune text threads and hope that I didn’t get a lot of volume quickly. The keypad started double and triple pressing after a year, which makes T9 texting very, very difficult. And for reasons I couldn’t work out nor troubleshoot, it stopped bothering to alert me of incoming messages, calls, etc. I’d open it, and it would proceed to chime about the messages that had come in over the past hour or two, and, oh yeah, a missed phone call. But it wouldn’t actually notify me of those when they happened. Kind of a problem for a single function device.
Around iOS 16 coming out, I decided that Lockdown mode was, in fact, what I was looking for in a device, so I switched back to my iOS device, enabled Lockdown, stripped the hell out of everything on it (I have very few apps installed, and nothing on my home screen except a bottom row of Phone, Messages, Element (Matrix client), and Camera), and just use it for personal communications and not much else. The MBP is long since obsolete and gets kept around for a “Oh, I have this one weird legacy thing that needs MacOS…” device, and I’ve no plans to go back to them - even though, as noted earlier, I think the M series chips are the most exciting bits of hardware to come out of the computing world in the last decade or two, and the M series MacBook Pros are literally everything I want in a laptop. Slab sided powerhouses with actual ports. I just no longer trust Apple to do that which they’ve done in the past - they demonstrated that they were willing to burn all their accumulated privacy capital in a glorious bonfire of not implementing an awful idea. Weird as hell. I’ve no idea what I’m going to do when this phone is out of OS support. Landline, maybe.
Nice concept, and certainly useful to help cut down on the amount of trash coming into inboxes (which I fully support!).
Though with the amount of “interaction required” stuff I’ve run across, I’m not sure how useful it would be. Most of the unsubscribe pages are some variety of “Click to confirm you want to unsubscribe.”
A few years back, I started on the process manually, just… OK. From this day forward, every email gets processed as something I care about, something to read and delete, or something to unsubscribe from. It’s made a massive difference in how useful my inbox is - I now (mostly) care about everything that comes in. If there’s no working unsubscribe link, spam reporting seems to work pretty well.
I’ve also gone back to “Email lives in an email client, and I check it when I’m around a computer, it doesn’t live on my phone.” That’s been very nice for sanity. I even go entire days without checking email on the weekends, now.
Though with the amount of “interaction required” stuff I’ve run across, I’m not sure how useful it would be. Most of the unsubscribe pages are some variety of “Click to confirm you want to unsubscribe.”
I’ve seen a few where the “Unsubscribe” link in the email body requires interaction, but there’s a list-unsubscribe header that supports one-click unsubscribe via HTTPS POST. ¯\_(ツ)_/¯
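For reference, that one-click flow (RFC 8058) is simple enough to drive from a script: read the List-Unsubscribe and List-Unsubscribe-Post headers and POST the fixed body back. A sketch, assuming the message has been saved as an .eml file:

```python
# Sketch of RFC 8058 "one-click" unsubscribe: if a message carries both
# List-Unsubscribe (with an https URL) and List-Unsubscribe-Post headers,
# a single POST of the fixed body is supposed to unsubscribe you,
# no "click to confirm" page involved.
import re
import urllib.request
from email import policy
from email.parser import BytesParser

def one_click_unsubscribe(eml_path: str) -> bool:
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    unsub = msg.get("List-Unsubscribe", "")
    post = msg.get("List-Unsubscribe-Post", "")
    if "one-click" not in post.lower():
        return False  # header combo not present; fall back to the manual link

    # The header can hold several <...> entries (mailto: and https:); take https.
    urls = re.findall(r"<(https://[^>]+)>", unsub)
    if not urls:
        return False

    req = urllib.request.Request(
        urls[0],
        data=b"List-Unsubscribe=One-Click",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return 200 <= resp.status < 300

# Example: one_click_unsubscribe("newsletter.eml")
```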
It’s an interesting story of how they mostly solved a “The horse has left the barn” sort of problem (Android certainly had, at the time, a “features first, security… ugh, if we must…” sort of feel), but I can’t help seeing it also as a story of “How we solved a problem caused by complexity by adding massive complexity.”
I don’t recognize about half the media file formats that are listed as “things that can be parsed.” The proposed “How we’d attack it now” path talks about finding the weirder, more obscure ones that are less likely to be well-fuzzed and attacking that path - but it raises the question of, “Why should a phone auto-process these in the first place?” Apple’s approach with Lockdown (which everyone should turn on if you can, IMO) is to just refuse to process any of the weird stuff, and to keep attackers on the well chosen, well-fuzzed paths. It makes their job harder, and would anyone notice if their phone no longer plays WMV files in the browser, without any interaction?
I’d rather see the focus on “Do the common things well, and if you’re about to do something weird, at least give the user the option to say no before you parse it.” Or, better, give the user the option to say “Nope. I don’t want you to do anything weird,” as Apple has done.
… but then again, I also disable Javascript JIT in my browsers. So maybe I’m just weird.
Has anyone ever tried making an “overlay protocol” on top of IRC that adds additional features for rich clients, like images and custom emoticons, while still allowing interaction from a normal IRC client?
I’m sure something like this has existed, given IRC’s long history, but I don’t know where I’d find it if it does.
I’m imagining something like this:
A format for including a base URL in the channel topic (must have the same origin as the IRC server)
When this URL is present, the client expects certain well-known HTTPS endpoints to exist under this URL:
JSON description of capabilities
JSON list of custom emoticons
Set user avatar
Get user avatar
Upload media (images, videos)
View media
Voice/video chat channels, if present
Specialized clients will use these endpoints to give the IRC channel features that look like Discord/Matrix
But normal IRC clients will still see a normal channel. In particular, image posts are just URLs that match the “View media” subpath of the base URL, so normal clients will still see a post with a URL and can follow it.
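A rough sketch of how a specialized client might do the discovery side of this - every endpoint path and JSON detail below is invented for illustration, since no such spec exists:

```python
# Hypothetical client-side discovery for the proposed IRC overlay:
# pull the base URL out of the channel topic, require same-origin with the
# IRC server, then fetch the capability document. All endpoint names and
# JSON fields are made up for the sake of the sketch.
import json
import re
import urllib.request
from urllib.parse import urlparse

def find_base_url(topic: str, irc_host: str) -> str | None:
    # Use the *last* URL in the topic as the API entry point.
    urls = re.findall(r"https://\S+", topic)
    if not urls:
        return None
    base = urls[-1].rstrip("/")
    if urlparse(base).hostname != irc_host:
        return None  # must share origin with the IRC server
    return base

def fetch_capabilities(base: str) -> dict:
    with urllib.request.urlopen(f"{base}/capabilities", timeout=10) as resp:
        return json.load(resp)

def is_media_url(base: str, message: str) -> bool:
    # Image posts are plain URLs under the media subpath, so vanilla IRC
    # clients still just see a link they can open.
    return message.startswith(f"{base}/media/")

# Example (all values hypothetical):
# base = find_base_url("Welcome! | https://irc.example.net/overlay", "irc.example.net")
# caps = fetch_capabilities(base)  # e.g. {"emoticons": ..., "avatars": ..., "media": ...}
```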
IRCv3 is in active, if fitful, development: https://ircv3.net/irc/
Various IRC clients over the years have done custom things like this - extensions that show up to other users of the same client. It’s generally been frowned on by “everyone else in the channel,” because they’re a lot of clutter.
I think you’ll find that most users of IRC very actively don’t want any of these features, and so haven’t put any effort into building or supporting things they don’t want in the first place. There’s no shortage of rich clients and chat services (Matrix being a modern self hosted one), but IRC is enjoyable mostly because it remains purely plaintext - and low bandwidth.
Seems like what @ar-nelson suggests would have very little clutter - only a URL in the topic. Maybe the standard could be that you only use the last URL in the topic for the API entry point.
And while Matrix’s bridging is head and shoulders above every other bridge that I’ve ever seen, it’s still not super graceful when it comes to message edits.
Recently they made it so shorter edits get converted into s/foo/bar-style invocations, which is a big improvement, but I’d still prefer to be able to disable message edits for Matrix/IRC channels; sed-style edits should be implemented client-side, and it’s pretty easy to do this.
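The client-side part really is tiny - here’s a sketch of applying an s/foo/bar message to the sender’s previous line (bare-bones on purpose: “/” as the only delimiter, first match only):

```python
# Minimal client-side "sed edit": if a user sends s/old/new, rewrite the
# last message we saw from them instead of showing the edit as a new line.
# Deliberately bare-bones: "/" as the only delimiter, first match only.
import re

last_message: dict[str, str] = {}  # nick -> last message text

def handle_message(nick: str, text: str) -> str:
    m = re.fullmatch(r"s/(.+?)/(.*?)/?", text)
    if m and nick in last_message:
        edited = last_message[nick].replace(m.group(1), m.group(2), 1)
        last_message[nick] = edited
        return f"* {nick} (edited): {edited}"
    last_message[nick] = text
    return f"<{nick}> {text}"

# Example:
# handle_message("alice", "teh quick brown fox")
# handle_message("alice", "s/teh/the/")   -> "* alice (edited): the quick brown fox"
```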
Not to the same extent as what you’re describing, but for a time Slack let you connect with an IRC client. You didn’t get the history or things like that, but for managing day-to-day team communication it was fabulous. And then they killed it.
I wonder how many recursive resolvers have assumptions that don’t match this use of DNS, either in the form of the contents of the records or the size of them.
Me too! But so far, so good? (That said, I’ve been having trouble on one of my machines when it’s connected to Tailscale with MagicDNS. It could be cache poisoning from when I was doing testing during development; with DNS it’s always hard to tell. From a quick google, it looks like Tailscale does have TCP-fallback implemented for DNS, but the symptoms I see are consistent both with cache problems and with lack of correct fallback, so I’ll just wait and see if the problem goes away with time!)
Could you make a debug hostname where you send different responses depending on whether the query arrives over UDP or TCP, or does the DNS server not expose that difference?
I just re-ran it now, with tailscale and magicdns on, and
the first run succeeded, with full records returned. Yay!
the second and subsequent runs were given a truncated record, and so did not work. However, logging on the server side shows that both UDP (truncated) and TCP (nontruncated) requests were made.
So some intermediate caching resolver is making a mistake somewhere. Perhaps tailscale’s local resolver? Perhaps 9.9.9.9? Digging @100.100.100.100 yields the broken behavior, so perhaps it’s a bug there. Speculating: when it cached the response from the server on the first run, perhaps it cached the truncated UDP response instead of the nontruncated TCP response, even though it sent the TCP response back on that first run, and perhaps for the second and subsequent runs it is using its empty cached UDP record.
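One way to narrow that down is to ask each suspect resolver the same question over UDP and over TCP and compare what comes back - a sketch using dnspython, with the record name as a placeholder:

```python
# Compare what a resolver hands back over UDP vs TCP for the same name,
# to spot truncation / bad-fallback / stale-cache behavior. The resolver
# addresses and the TXT name are placeholders for whatever you're debugging.
import dns.flags
import dns.message
import dns.query

NAME = "big-record.example.com."     # hypothetical oversized TXT record
RESOLVERS = ["100.100.100.100", "9.9.9.9"]

for server in RESOLVERS:
    q = dns.message.make_query(NAME, "TXT")
    udp = dns.query.udp(q, server, timeout=5)
    truncated = bool(udp.flags & dns.flags.TC)
    tcp = dns.query.tcp(q, server, timeout=5)

    udp_len = sum(len(r.to_text()) for rrset in udp.answer for r in rrset)
    tcp_len = sum(len(r.to_text()) for rrset in tcp.answer for r in rrset)
    print(f"{server}: UDP tc={truncated} answer~{udp_len}B, TCP answer~{tcp_len}B")
```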
There’s also a chance that some caching resolver isn’t prepared to store 48 KB of data per record and is instead only prepared to store e.g. 2-4 KB per record. When you do the first attempt with a cold cache, you might be served the response directly, but the second time, with a hot cache, you instead get it served from the caching layer, which may have silently truncated the response to fit its (broken) expectation of how much data a DNS response can hold.
As a first-order estimate, “About all of them.”
However, most “modern” configured domains (DNSSEC, various email validation systems, etc.) have a variety of decent-sized TXT records in their configurations - so it’s possible that there’s already a lot of data being stored about various domains, and this wouldn’t inflate the size too much. Hard to say.
I realize this may be a pretty privileged opinion to have, but I find it much easier to just have a separate laptop for work and for personal stuff. I don’t use enough utilities in my personal life that I feel like the mental overhead imposed by Qubes is worth it, and if I really do need an isolated environment to run something sketchy in, it’s not that hard to spin up KVM on Debian or whatever. Qubes feels like a hammer, but all of my problems require a screwdriver.
That, and running stuff on different machines is just fundamentally a more bulletproof compartmentalization mechanism.
Even for personal stuff, being able to separate out random applications from each other has value - and in terms of mental overhead, I’ve been daily-driving it for a few years now and I don’t find much overhead at all. I have a range of VMs set up for different tasks/trust levels, and just use them as I’ve set them up. Random web stuff is in a disposable VM, sysadmin is a VM with my private keys (I’ve not set up split SSH keys, though it’s an option), I stuff media players in a VM that doesn’t have access to anything else, etc.
Separate laptops are certainly a nice option, but with how often we see exploits in browsers and other applications, separating stuff out makes more sense to me than just lumping “work” and “personal” onto different computers (in addition to the pain of hauling them around). Given the somewhat increasing drumbeat of “this or that Node package turns out to be evil and attacks developer machines,” being able to keep that sort of thing away from my SSH keys and email is of value.
It’s certainly not a great solution for every use case, but I think it is a decent solution to a lot of modern computer problems.
When I said “zero hardware acceleration,” I did mean it. Qubes does not support GPU acceleration of anything meaningful - it’s a bunch of raw framebuffers being passed around. The good news is that modern machines can sling framebuffers quickly. The bad news is that if you want to run things that require actual GPU acceleration (OpenGL and the like), you’re out of luck. There are some ways to do GPU passthrough to a VM if you want to run multiple GPUs in the system, but overall… you’re better off dual booting and taking the possible security hits if you want to do that. Your dual boot environment could do something nasty to the boot partition with Qubes, so evaluate as needed for your actual risks. Or simply have one machine that runs Qubes (a high-RAM integrated-graphics system) and something else for games.
I was looking into running Qubes on my daily-use desktop computer not too long ago, and this was the specific thing that stopped me from looking further. I do play video games on it, and having to reboot into a new OS to play a game is annoying. I did this years ago to boot into Windows for games only, before Linux gaming got good enough, and I’m not eager to return to those days.
How well does hibernate support work? If your use case is games, I can imagine that being fine if you can suspend to disk, resume the games OS, then suspend that and switch. For me, the bigger problem is the number of web sites and productivity things that rely on a GPU. I’m a bit surprised that it hasn’t gained GPU support by now, given that modern GPUs support virtualisation.
I’m also quite surprised by the RAM overhead. Modern cloud container systems often do CoW VMs, where you boot a kernel with a base image and then effectively fork it. With FUSE over VirtIO, you can share buffer cache pages between guests and so if every guest runs the same libc (for example) then you have a single copy of this that is read-only shared between all guests. This means that the RAM overhead of a VM is fairly small over the RAM overhead of a separate process.
There is GPU support… sort of. If you want to pass a separate hardware GPU through to a VM, you can hardware accelerate that VM (subject to the usual whims of GPU passthrough and vendors trying very hard to ensure you don’t do this with consumer cards), but you’ll need a separate display output from it. There’s no “copy the framebuffer back” mechanisms available that I’m aware of. So, it’s possible to have a gaming system as a VM, just not a common use case at all. The forum and IRC channels would be good places to start out with that, and assume it will be a bit of fiddling to make it work.
Some websites render slowly, but I don’t find the lack of a GPU to be a general problem in most use - though I’m also happy to not do things that require heavy GPU use these days.
The RAM overhead is mostly a result of Xen not supporting “same page merging” sort of features, and there are some security concerns from it as well (being able to tell what’s in use in other VMs). But I expect it would be offered as an option if Xen were to support something of that nature. It would definitely help on low-RAM systems.