If I ship an app for Windows I don’t have to include the entire Win32 or .NET runtimes with my app.
There’s a misunderstanding about Windows there. While you don’t have to for Win32, using modern .NET does require it, and you’re expected to either ship it with the app (a self-contained package, like AppImage) or install it with the system (a framework-dependent package, like Flatpak). Also, going one layer beyond the Win32 API - you’re expected to ship your own C++ runtime if you use one, and you likely have multiple copies of it, since “library packages” are not very popular on Windows.
The complaint about lack of sharing is meh. Out of 8 apps I’ve got 7 share the base layers. The situation’s not perfect but we can’t expect it to be. People can do things differently and will do it sometimes.
Yep, that bit me a few times. And don’t expect the link for the C++ redistributable to stay the same, either: Microsoft releases a new fix version, and it’ll have a new cryptic link.
I wonder what happens when a hundred of your apps are running with Flatpak. 8 is way too low to be interesting.

I don’t expect the ratios to change much, for two reasons: 1. Once you get to hundreds of apps, you’re likely getting the majority from a single distribution source and they’ll have popular common patterns. 2. There are only so many reasons to do things in a different way. In the same way, out of hundreds of GUI apps, almost all of them will use GTK or Qt, with a long tail of something different for very few. We see the same thing with Docker images - almost every one is based on Debian, Alpine, or CentOS (often providing two to choose from). I don’t see any reason for the Flatpak ecosystem to behave any worse, given that the shared runtime is an actual feature of that system.

On my system with 163 apps: 18 runtimes, and the top 5 cover 128 of the apps. Preinstalled in Endless OS: 58 apps, 13 runtimes.
I think another thing to note is that these are not all actually different runtimes. Some are just different versions, which apps will migrate between over time, and those should be fairly trivial updates that unify the base.
The C++ library shipping is largely down to MS not wanting to ship it in its super-unstable state early in C++’s history, but that then painted them into a corner when it came to shipping it with the OS later. It does mean that they can tie it to different versions of MSVC++ and so can improve performance in ABI-breaking ways, which is nice for them.
But it isn’t required: libc++ is ABI-stable on all its platforms, so Mac apps don’t need their own copy of the standard library. The general ABI stability of macOS is such that apps that aren’t relying on non-API features generally just work. They have made some ABI breaks, but generally with good reason - they dropped bincompat with PPC after a number of years on Intel, they dropped IA32 after 10 years of x86_64, they dropped ARM <v8 after several years of 64-bit iOS. The big difference in Apple vs. MS ABI compatibility is that Apple is much more willing to break bug or non-API compatibility, e.g. if someone ships buggy software on Windows that happens to work, MS will add workarounds to keep it going. By and large Apple won’t - and will contact the dev to get them to fix their software.

The strict ABI compatibility problem is why macOS was stuck with an archaic OpenSSL for such a long time - because OpenSSL kept breaking ABI, Apple could not update it. The perpetual “who cares about ABI” essentially forced Apple to stop using it.
Apollo landed on the moon with 4 kB of RAM and 72 kB of ROM
While I’m as against software bloat as everyone else, and I admire the Apollo guidance software authors as much as everyone else, I find that analogy rather disingenuous.
Apollo guidance software authors managed to get things done with extremely limited hardware of that time by placing a lot of things outside the computer, like the entire user interface. Astronauts interacted with it by entering numeric command codes and looking up its numeric responses in the manual. Its scope was also quite limited, and I assume quite a lot of things were pre-calculated.
There’s no way anyone could make a Voyager or Curiosity with that hardware no matter how dedicated they are to keeping their software small.
While I’m as against software bloat as everyone else, and I admire the Apollo guidance software authors as much as everyone else, I find that analogy rather disingenuous.

…

Apollo guidance software authors managed to get things done with extremely limited hardware of that time by placing a lot of things outside the computer, like the entire user interface.

…

There’s no way anyone could make a Voyager or Curiosity with that hardware no matter how dedicated they are to keeping their software small.
I’m not sure I would call it disingenuous as much as a straw man. The hardware that was available to the Apollo program is different from the hardware that is available today. Every situation is resource-constrained, and engineers/software authors will make as much use of the resources available as they can (hopefully with some acceptable margin of error). Especially so if you are building something that you can’t refresh later.

An analogy I like to use is that hardware is a glass and software/firmware is water. The engineers/software authors pour as much water into that glass as they can, find out when it runs out, then back off by a little bit and work within that amount. If Apollo had had a 32-core processor and 16 GB of RAM that weighed less than the 4 kB machine they actually flew, they would have poured a lot more “water” into the hardware.
An interesting comparison would be to see what percent of the on-board resources Apollo used, as compared to Voyager or Curiosity. My guess is it didn’t change much and we’ll have the same debate in another 50 years: Wow! Look at what they made Curiosity do without quantum computers! Why can’t the kids today squeeze more qubits out of their hardware?
I keep my vote on QEMU (KVM) + QCow2 as the non-distribution-sanctioned packaging and sandboxing runtime. Once ironically, now as faint hope. The thing is, the packaging nuances matter little in the grand scheme of things; heck, Android got away with a bastardised form of .zip and an XML file.
The runtime data interchange interfaces are much more important, and Linux is in such a bad place that even the gargantuan effort Android 5.0 and onwards applied trying to harden theirs to a sufficient standard wouldn’t be enough. No desktop project has the budget for it.
You have Wayland as the most COM-like thing in use (tiptoeing around the ‘merits’ of having an asynchronous object-oriented IPC system) – but that can only carry some meta-IPC (data-device) and graphics buffers in only one direction, and it snaps like a twig under moderate load. Then you attach PipeWire for bidirectional audio and video, but that has no mechanism for synchronisation, interactive input, and other ‘meta’ aspects. Enter xdg-desktop-portal as a set of D-Bus interfaces.
Compared to any of the aforementioned, Binder is a work of art.
Indeed, it seems to me that a lot of companies that jumped on the sandboxing bandwagon have missed a critical point that the sandboxing systems used in the mobile world didn’t. Sandboxing is a great idea, but it’s not going to fly without a good way to talk to the world outside the sandbox. Without one, you either get a myriad of incompatible and extremely baroque solutions which so far are secure by obscurity and more often than not through ineffectiveness (Wayland), or a sandboxing system that everyone works around in order to keep it useful (Flatpak – an uncomfortable amount of real-life programs are run with full access to the home dir, rendering the sandbox practically useless as far as user applications are concerned).
Our industry has a history of not getting these second-order points. E.g. to this day I’m convinced that the primary reason why microkernels are mostly history is that everyone who jumped on the microkernel bandwagon in the 1990s poured a lot of thought into the message passing and ignored the part where the performance will always be shit if the message passing system and the scheduler aren’t integrated. QNX got it and is the only one that enjoyed some modicum of long-term success.
I’m not convinced we’re going to get the sandboxing part right, either. While I’m secretly hoping that a well-adapted version of KVM + QCow2 is going to win, what I think is more likely to happen is that the two prevailing operating systems today will just retrofit a sandboxing system from iOS, or a clone of it, and dump all “legacy apps” into a shared “legacy sandbox” that’s shielded from everything except a “Legacy Document” folder or whatever.
or a sandboxing system that everyone works around in order to keep it useful
That’s the next tier: say you have nice and working primitives, now you just need to design user interfaces that fit these ergonomically. Mobile didn’t even try, but rather just assumed you have the threat modelling capacity of a bag of sand and proposed “just send it to us and we can give it back to you later” – it worked. I am no stranger to picking uphill battles, but designing scenarios for conditioning users to adopt interaction patterns that play well with threat compartmentation is a big no.
Our industry has a history of not getting these second-order points.
There is much behind that, especially so in open source. Rewrite ‘established tech’ for ‘hyped platform’. If you’re a well-funded actor, throw money at marketing the thing and repeat until something sticks. People new to the game might even think that what is being sockpuppeted this time around is actually a new thing and not a tired rehash.
I’m not convinced we’re going to get the sandboxing part right, either. While I’m secretly hoping that a well-adapted version of KVM + QCow2 is going to win
Heavens no. While I believe in whole-system virtualisation for compatibility or performance, the security/safety angle is dead even before decades of hardware lies are uncovered.
What boggles my mind is that we’re almost dipping into triple-digit, well-analysed, big-budget product sandbox escapes and people still go for that as the default strategy. Post-facto bubble wrapping? That’s a parrot pining for the fjords. Design and build for least-privilege separation? Perhaps, but it hardly applies everywhere, and the building blocks in POSIX/Win32/… are … “not ideal”.
There are some big blockers for the KVM/QCow2 angle, getting good ‘guest additions’ and surrounding tooling for interchange and discovery (search) is a major one. The container-generation solution of ssh and/or web comes to mind as the opposite of good here.
what I think is more likely to happen is that the two prevailing operating systems today will just retrofit a sandboxing system from iOS, or a clone of it, and dump all “legacy apps” into a shared “legacy sandbox” that’s shielded from everything except a “Legacy Document” folder or whatever.
That would be the pinnacle of tragedy (until the next one) – the prospect of having all the ergonomics of data sharing between domains on a smartphone with the management and licensing overhead of a desktop.
I know you’re kidding about the KVM + QCow2 sandboxing, but I really think it’s the least bad option that can be built with what we have now and has a chance at industry traction. Guest additions are only a problem if you’re trying to run a kernel built for real hardware, in which case you need “special” drivers. But if one were to devise and implement an ACME Virtual Sandbox QEMU machine, a qemu-based sandboxing engine could just use a kernel with the guest additions baked in. Fine-grained access control is then a matter of mounting the correct devices over a virtual network.
It’s not a good solution but it does have the potential of providing satisfactory solutions for a bunch of thorny problems, not the least of which is dealing with legacy applications that nobody’s going to update for some fancy new sandboxing system. At least in the desktop space, neither of the two major players has any interest in solving much simpler problems. I doubt any of them wants to throw money at solving this problem properly, especially when they both have perfectly good walled gardens that they can sell as security oil. This one’s clunky but at least it’s not Windows Subsystem for Android.
I’m still waiting for the day when we’ll just sell software along with the computer that runs them, and every computer will be the size of an SD card and we’ll just plug the thing into the deckstation and our sandboxing solution is going to be real, physical segregation :-P.
(I also just know someone’s gonna figure out how to break that but hey, that’s the kind of fun that got me into computers in the first place!)
I know you’re kidding about the KVM + QCow2 sandboxing …
Yes and no. So I compartment some things, particularly browsers, by hardware. There’s a cluster. It netboots, gets a ramdisk friendly image, boots into chrome, forwards to my desktop. When I close a ‘tab’, the connection is severed and RST is pulled (unless there’s a browser crash and in those cases I collect the dump and some state history, more than a few in-the-wild 0-days have been found that way). That puts the price point for ‘smash and grab’ing me well beyond what little I am worth, and it opens up for a whole lot of offensive privacy. I think this can be packaged and made easy enough for a large set of users.
Guest additions are only a problem if you’re trying to run a kernel built for real hardware, in which case you need “special” drivers.
They are used for some things that are most readily available in user space: integration with indexing services, clipboard, drag and drop. I didn’t do any requirements engineering for arcan shmif. I picked a set of most-valuable applications and wrote backends to see what I was missing, then iteratively added that.
The first round was emulators, because games and speedruns are awesome, free, tight-timing test sets. The second round was QEMU, for basically the reasons we’re talking about: Linux won’t ever fix its rainforest of broken ABIs, and important legacy applications will break for someone. I don’t agree with that. While my belief is compatibility only, if someone thinks it fits their threat model, I won’t judge (openly).
I’m still waiting for the day when we’ll just sell software along with the computer that runs them, and every computer will be the size of an SD card and we’ll just plug the thing into the deckstation and our sandboxing solution is going to be real, physical segregation :-P.
Cartridges are coming back in style. One project I have in the sinister pile with an investor pitch deck comes distributed on SD cards targeting certain SBCs.
Indeed, it seems to me that a lot of companies that jumped on the sandboxing bandwagon have missed a critical point that the sandboxing systems used in the mobile world didn’t.
I mean, doing any work that involves multiple distinct pieces of software on an Android phone, or even worse on iOS, is completely and utterly impractical because of this, so I’m not sure about the “didn’t”.

Those are good for consuming content, but for producing it? They only show that even the best sandboxing systems mankind has been able to come up with so far are full-blown failures for doing actual work.
Sandboxing is a pipe dream; it’ll never work. Reserve hope for packaging (though not very much). And wrt packaging, the primary issue is the stability, not the quality of the associated APIs, wherefore the steam/flibit approach works decently well in practice. And has the advantage that it is not completely opaque, such that it is easier and more sensible to, say, swap in a patched libSDL2.
I think the reason Android is in better shape than Linux is that it has more clearly defined goals. ‘What do we want to package, and why, and for whom?’—‘Apps written in java, to collect ad revenue, for clueless smartphone users.’
It depends on how you define sandbox and what you expect from it. VMs are sandboxes, Docker is a sandbox, systemd’s dynamic users are a sandbox, most of my system utilities run in an SELinux sandbox, your browser has at least 2 sandboxes, etc. The underspecified “sandbox” description is a problem. We don’t lack sandboxes which actually work.

A sandbox is something I can use to run untrusted code and limit the scope of harm it can deal. No sandbox implemented in software has this property.

Nor does any implemented in hardware, for that matter… Spectre/Meltdown, enclave escapes, etc…

A web browser? DOSBOX? QEMU?

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=chrome
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=firefox
https://en.wikipedia.org/wiki/Row_hammer
https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)
There’s a difference between “we have a system doing X and the implementations have bugs” and “we don’t have a system doing X”. If any past or future problem disqualifies an approach, then we don’t have lightbulbs, cars, agriculture, … They all have failure modes. I don’t think that hard stance is practical.
Modern sandboxes moved the bar from “just put a long string somewhere” (1990s) to “you need a lot of skills or $100k-s to get a temporary exploit”. And we’re not slowing down on improvements.
There’s a difference between “we have a system doing X and the implementations have bugs” and “we don’t have a system doing X”. If any past or future problem disqualifies an approach, then we don’t have lightbulbs, cars, agriculture, … They all have failure modes. I don’t think that hard stance is practical.
A container’s fundamental purpose is to prevent infection of the system and any other systems that can possibly be linked, and thus to prevent exfiltration of secrets or abuse / damage to the machine. The fact is that this is fundamentally impossible to guarantee and very, very costly to assume. While cars have failure modes, they still mostly get you from A to B – they at least mostly do the job that you intended, whereas containers do not and cannot prevent malicious attacks. At best this is like everyone having a car that will only take you half way to where you are driving, and at worst it’s like having half a dam – it completely and utterly defeats the point of having the dam in the first place. Sure, it “sometimes works”, but I’m not very sure I would want to use it, and I certainly wouldn’t sell it to anyone else as a good thing.
Modern sandboxes moved the bar from “just put a long string somewhere” (1990s) to “you need a lot of skills or $100k-s to get a temporary exploit”.
Citation needed. All/Most of the following attacks are fundamentally “long string attacks”, i.e. buffer overflows.

https://morphuslabs.com/attacking-docker-environments-a703fcad2a39
https://hackerone.com/reports/1332433
https://portswigger.net/daily-swig/vulnerabilities-in-kata-containers-could-be-chained-to-achieve-rce-on-host
https://www.trendmicro.com/en_us/research/21/b/threat-actors-now-target-docker-via-container-escape-features.html
Attacking containers is not very different from traditional servers or virtual machines. You can use well-known attacks to exploit vulnerabilities found in a container, for example: Buffer Overflows, SQL Injections or even default passwords. The point here is that you can initially get remote code execution (RCE) in containers using traditional techniques.
The exploit makes use of specially crafted image files that bypass the parsing functionality of a delegates feature in the ImageMagick library. This capability of ImageMagick executes system commands that are associated with instructions inside the image file. Escaping from the expected input context allows an attacker to inject system commands.
A container’s fundamental purpose is to prevent infection of the system and any other systems that can possibly be linked, and thus to prevent exfiltration of secrets or abuse / damage to the machine.
Security is not a binary state. You work with a given threat model, then work to prevent specific classes of attacks/vulnerabilities. There’s no tool that “provides security”. Containers remove some classes and add some new issues to think about. If you want to isolate the filesystem and network between processes on the same host, containers will help you. If you want to mitigate kernel exploits, SQL injections, or people storming your datacenter, containers won’t help you.
Secure / not secure are not real states of the system. You need to define what kind of secure we’re talking about and what abuse is. What you define as abuse may be my business model.
Which leads to:
whereas containers do not and cannot prevent malicious attacks
What I’m trying to say is: what you mentioned is not a fundamental purpose of containers, and trying to discuss things like that is a mistake at step 1. If you’re interested in learning more about those issues, I recommend reading about threat modelling.
Citation needed. All/Most of the following attacks are fundamentally “long string attacks”, i.e. buffer overflows.
I don’t know of anything that summarises the decades of changes, but in short: stack overflows are dead thanks to stack protectors, shadow stacks and many compiler improvements; heap overflows are much harder due to (K)ASLR and various layout mitigations, and also almost dead due to W^X; ROP is kind of there, but CET is getting popular and in general control-flow integrity is a thing we talk about. The most popular breakouts from V8 these days are double-frees / dangling pointers and type confusion, as far as I remember from browsing CVEs. These are multi-step exploits which are often still not reliable or immediate, since they require address leaks first and there’s some luck involved. (And then you need to do a separate browser sandbox escape.) Either way, basic overflows are stone-age tools at this point and rarely work.
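For a concrete picture of what “just put a long string somewhere” meant, here is a minimal sketch of the classic bug (function and names invented for illustration). On most current distros, a default stack canary (-fstack-protector-strong) plus NX/W^X turns this from an instant code-execution primitive into a crash, which is the shift described above:

    #include <stdio.h>
    #include <string.h>

    /* The 1990s-style bug: a fixed-size stack buffer filled from attacker
     * input with no bounds check. Overflowing it used to mean overwriting the
     * return address and jumping into the attacker's string. */
    static void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);              /* the "long string" goes here */
        printf("hello %s\n", buf);
    }

    int main(int argc, char **argv) {
        /* With a stack canary the overflow is detected and the process aborts;
         * with W^X the injected bytes can't be executed even if control flow
         * is somehow hijacked. */
        greet(argc > 1 ? argv[1] : "world");
        return 0;
    }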
This is wholly antithetical to defense in depth. I don’t like Linux containers, and Docker in particular has had a long history of bugs, common misconfigurations and footguns; but sandboxing still complicates the attacker’s task: instead of just exploiting the application, they have to exploit the application and then pivot to a container escape, if one is possible.

One thing to mention in the case of sandboxing an application in a container is that you don’t necessarily get a whole system like you would if the app were running on an OS installed on bare metal. Commonly at my company, and I’m sure at many others, we use distroless images and run the applications in these containers as non-root. This severely limits what an attacker has in their toolkit to exploit a container escape.
Attacking containers is not very different from traditional servers or virtual machines. You can use well-known attacks to exploit vulnerabilities found in a container, for example: Buffer Overflows, SQL Injections or even default passwords. The point here is that you can initially get remote code execution (RCE) in containers using traditional techniques.
The next paragraph:
For me, what differentiates containers from others technologies during a pentest engagement is the Post-Exploitation phase. Docker environments can be very dynamic (containers may be created and destroyed at any time). This can be challenging for attackers as gaining persistence may be difficult. Some of the containers might also not have any services exposed (so how can we access them?).
The post-exploitation phase is important since, as I said, the goalposts have now shifted. The question becomes: how do I pivot from the application I exploited to one of these other containers (which, as the article mentions, may not have an exposed service)?
idk what this has to do with container weaknesses. They left a container management app exposed to the world unconfigured. The application has to have access to the docker runtime to do its job. If we wanna talk about a bare metal equivalent, this would be like leaving cPanel open to the world. Same shit.
“Containers are only as secure as their configuration, and a simple way to improve their security is to drop unused privileges.”
What they’re encouraging as a solution is to aggressively drop privileges that the sandbox doesn’t need. However, this is a problem with Kata Containers and not with the concept of sandboxing itself. The researcher here is arguing for more aggressive sandboxing, not doing away with the idea.
However, we’re currently seeing something completely different — a payload specifically crafted to be able to escape privileged containers with all of the root capabilities of a host machine. It’s important to note that being on Docker doesn’t automatically mean that a user’s containers are all privileged. In fact, the vast majority of Docker users do not use privileged containers. However, this is further proof that using privileged containers without knowing how to properly secure them is a bad idea.
The attack here is on a specific Docker configuration. This configuration is generally quite rare afaik (very few times have I ever run privileged Docker containers, and we don’t run them in prod). Moreover, again the researcher argues here that more aggressive sandboxing is needed, not less.

I’m not sure how any of these point to doing away with sandboxing altogether. I’d argue that instead, they point to problems with specific implementations of Linux containers. And I’d agree! I think the piecemeal way in which you construct Linux containers via various primitives makes it very easy to construct insecure containers, and many implementers (Docker obvs, and I guess here Kata Containers as well) have fallen into that trap. I don’t think that this is damning of sandboxing as a concept though.
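To illustrate the “piecemeal primitives” point: a Linux container is not one thing you switch on but a pile of namespace flags, mount tweaks, capability drops and seccomp filters that have to be combined by hand. A deliberately under-isolated sketch (it needs root or an unprivileged user namespace to run; everything else here is standard Linux API):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Each boundary is its own flag, and the follow-up work (remounting /proc,
     * pivot_root, dropping capabilities, installing a seccomp filter, ...) is
     * separate again. Anything you forget is silently left open. */
    static char child_stack[1024 * 1024];

    static int child(void *arg) {
        (void)arg;
        sethostname("sandbox", 7);           /* isolated only thanks to CLONE_NEWUTS */
        execlp("sh", "sh", (char *)NULL);    /* still sees the host network, users, ... */
        perror("execlp");
        return 1;
    }

    int main(void) {
        /* Deliberately incomplete: no CLONE_NEWNET, CLONE_NEWIPC, CLONE_NEWUSER. */
        int flags = CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNS | SIGCHLD;
        pid_t pid = clone(child, child_stack + sizeof(child_stack), flags, NULL);
        if (pid < 0) { perror("clone"); return 1; }
        waitpid(pid, NULL, 0);
        return 0;
    }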
‘Apps written in java, to collect ad revenue, for clueless smartphone users.’
Basically the only apps I use on my work-issued Android phone are 2FA apps (Okta, Google etc). These at least are considered ok to run on the platform.
This is indeed depressing … My last upgrade was from Ubuntu 16 to 18, not to the current 20 release, because it appears to have less Snap BS on it. Not sure what I’m going to do in a few years :-(
On another note, I recently ran Windows XP on modern hardware and it absolutely flies. As far as desktop apps go, it does basically everything that modern computers do. We used to make fun of Microsoft software for being “bloated” but the Linux world is 1000x worse now. The Microsoft calculator app was never 152 MB :-(
In fact the entire Windows XP installation is under 128 MB! Amazing!!! And it runs in 32 MB of RAM.
I wonder if Linux needs something like COM – stable shared library interfaces. I would prefer something more like IPC than shared libraries (Unix style), but performance is always a concern. Although you did have “DLL hell” back then too, which is what Snap and such are trying to avoid.
But I wonder if “DLL hell” was really people NOT using COM, which should let people know if the interfaces were changed? Or just using it poorly, i.e. changing the semantics of the interface, rather than creating new interfaces when breakage occurs.
Mozilla had XPCOM more than a decade ago but abandoned it for some reason. I think they abandoned true cross language interoperability and went for just JS / C++ interop like WebIDL in Chrome (as far as I understand).
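To make the “creating new interfaces when breakage occurs” discipline concrete, here is a rough sketch of the COM convention in plain C (COM’s binary contract is essentially a frozen vtable behind a pointer); the interface names are invented, and real COM layers IUnknown/QueryInterface and GUIDs on top of this:

    #include <stdint.h>
    #include <stdio.h>

    /* An "interface" is a pointer to a vtable of function pointers. Once
     * published, its layout and semantics never change. */
    typedef struct ICalc ICalc;
    struct ICalcVtbl { int32_t (*Add)(ICalc *self, int32_t a, int32_t b); };
    struct ICalc { const struct ICalcVtbl *vtbl; };

    /* Need different behaviour or a different signature? Don't touch ICalc --
     * publish ICalc2 and keep serving ICalc to existing callers. */
    typedef struct ICalc2 ICalc2;
    struct ICalc2Vtbl { int64_t (*AddWide)(ICalc2 *self, int64_t a, int64_t b); };
    struct ICalc2 { const struct ICalc2Vtbl *vtbl; };

    static int32_t add_v1(ICalc *self, int32_t a, int32_t b) { (void)self; return a + b; }
    static int64_t add_v2(ICalc2 *self, int64_t a, int64_t b) { (void)self; return a + b; }

    static const struct ICalcVtbl v1 = { add_v1 };
    static const struct ICalc2Vtbl v2 = { add_v2 };

    int main(void) {
        ICalc  old_api = { &v1 };   /* what a pre-existing client keeps using */
        ICalc2 new_api = { &v2 };   /* what an updated client asks for instead */
        printf("v1: %d\n", old_api.vtbl->Add(&old_api, 2, 3));
        printf("v2: %lld\n", (long long)new_api.vtbl->AddWide(&new_api, 1LL << 40, 1));
        return 0;
    }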
I recently ran WIndows XP on modern hardware and it absolutely flies
Yeah. Software that was written by people using HDDs runs really well when you run it on SSDs.
I consider it a systemic tragedy that programmers tend to use very fast computers when we’re actually the one group that has the most capability to change things to make slow ones useful.
Yeah I asserted on Hacker News that the Apple M1 would probably have the effect of making the entire web slower.
If you assume that web developers are more likely to buy newer Apple laptops sooner than their audience, and spend more on them, that seems inevitable :-(
I think SSDs were a big jump in hardware performance but CPU and memory are issues as well. Today’s apps use so much memory that people running with 8 GB of RAM can experience slowdowns, let alone older computers with 2GB or 1GB (like at the library, or what many low income people use, etc.)
Fwiw that’s also been true of every new AMD and Intel CPU too so there’s no particular reason to single Apple out. Other than their currently being in front. Obviously I do believe you are correct.
Today’s apps use so much memory that people running with 8 GB of RAM can experience slowdowns,
This is one area where I hoped at one point that web browsers might help a bit, because of per-tab memory limits.

Also, CI environments and serverless (née Platform as a Service) environments tend to charge by the 128 MB-second of RAM, which tempts people to try to fit things in smallish boxes.
I think this is unique because it’s the first time there’s a pretty big differential between Apple and the rest of the industry. And because web developers are more likely to use Apple machines, and the audience is more likely to use Windows.
When Apple was using Intel chips, top of the line Windows laptops had the same CPUs or faster. Now if you’re a Windows user, AFAIK you can’t get a laptop as fast as the Macs that everyone is buying right now.
Obviously this is not Apple’s “fault”; they’re just making faster computers. And I haven’t quantified this, but I still think it’s interesting :)
First off, most usage is via mobiles, and any company worth its salt (i.e. striving to make money) will take this into consideration. Mobile clients are generally not as fast as desktop ones.
Secondly, the M1 line is about a year old. Has there really been a critical mass of web technology developed and deployed during that time, that is acceptable to run on an M1 but not on others, to materially tilt the scale of performance in the wild?
And lastly, this paints an incredibly bleak “us vs. them” picture, wherein Intel, one of the world’s largest companies and one that has been at the forefront of processor development for decades, will never[1] catch up with Apple when it comes to performance. Or that Apple, seeing a gaping hole in the market, won’t move in with more affordable machines using the M1 chips to capture that.
I’m not claiming this is a permanent state of affairs! Surely the CPU market will eventually change, but that’s how it is now.
The first claim about company incentives is empirically false… If mobile app speed mattered more than functionality or time to market, then you wouldn’t see apps that are 2x or 10x too slow in the wild, yet you see them all the time.
Funny story is that at Google, which traditionally had very fast web apps and then fell off a cliff ~2010 or so, people knew this was a problem. There were some proposals to slow down the company network once a week to respect our mobile users. (The internal network was insanely fast, both throughput and latency wise). This never happened while I was there. Seems like a good idea but there was no will to do it.
The truth is that 99% of changes to web apps never get tested on a mobile network or mobile device. It slows down development too much. Why do you think there is the mobile app simulator in desktop Chrome? Because that’s how people test their changes :)
Once there’s a slowdown, it’s fairly hard to back out after a few changes are piled on top. So in this respect Google is like every other company that does web dev. There is no magic. It’s just a bunch of people writing JavaScript on top of a huge stack, and they are incentivized to get their jobs done.
If they did test on mobile, employees generally had the latest phones because they were given out as gifts every year (for a while). And testing on the phone doesn’t solve the problem of testing on an insanely fast network.
Actually, ironically the web solves this problem with process-based concurrency and stable protocols and interchange formats.
So if I have a web calculator app, a web spreadsheet, and a web mail app, I don’t ship the GUI with the app.
Instead I emit protocols and languages that cause the GUI to be displayed by the browser.
In some sense you do move some of the bloat to the browser, but it’s linear bloat and not multiplicative bloat like with Linux desktop apps.
You are forced to do feature detection (with JS) rather than version detection, but that’s good! That is, the flaky model of solving version constraints in a package manager is part of what leads to DLL hell.
Also, it’s much easier to sandbox such a web app than one that makes a lot of direct calls to the OS.
So the web is more Unix-y and avoids a lot of the problems that these Linux desktop apps have. Although I guess we recreated a similar problem again on top of it with JS package managers :-/
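The feature-detection point a couple of paragraphs up has a native analogue that fits the same DLL-hell discussion: probe a shared library for the capability itself rather than comparing version numbers. A small sketch with a hypothetical libfoo and symbol name (link with -ldl on older glibc):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* "Is the library here at all?" -- not "is it version >= X?" */
        void *lib = dlopen("libfoo.so.1", RTLD_NOW | RTLD_LOCAL);
        if (!lib) {
            fprintf(stderr, "libfoo not available: %s\n", dlerror());
            return 0;                      /* degrade gracefully */
        }

        /* "Can it do fancy rendering?" -- probe for the symbol itself. */
        int (*fancy)(const char *) =
            (int (*)(const char *))dlsym(lib, "foo_fancy_render");
        if (fancy)
            fancy("hello");
        else
            puts("falling back to plain rendering");

        dlclose(lib);
        return 0;
    }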
But so many things are just not possible with the web? Good luck having a DaVinci Resolve or SolidWorks (the real thing, not something with 1/100th of the features, format and hardware support) editing things that require 32+ GB of RAM to work semi-comfortably, in a web page.
Definitely true, I’m just pointing out an underexamined benefit of the web architecture for certain (simple) apps. There are plenty of problems with the web for that use case too, in particular that it’s most natural to write everything in JavaScript!
[T]he entire Windows XP installation is under 128 MB…[a]nd it runs in 32 MB of RAM.
That doesn’t sound like a full install. Officially it required 64 MB of RAM but disabled things like wallpaper when below 128 MB; unofficially, if you want to run any software, the real number is much higher. Nonetheless the point remains valid.
I wonder if “DLL hell” was really people NOT using COM…[o]r just using it poorly
AFAICT COM is a bit of a red herring. COM allows for C++ style objects to be expressed with a stable ABI. But a lot of Windows is built on a C ABI where objects are not required. The key part is following the rules of either approach to ensure the ABI remains stable. Unfortunately this creates a situation where one person anywhere who makes a serious mistake can cause misery everywhere - it requires developers to be perfect. Frankly though, the vast majority of the time, ABI stability was achieved by just following the rules, and the rules are not that hard to follow.
I’ve ranted a bit about the lack of ABI stability on Linux libraries before, and agree with the original author’s point about “militant position on free software.” Just like the above, this position doesn’t need to be held universally - if it’s held by any single maintainer of a widely used library, the result is an unstable ABI.
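On the plain-C side, the “rules” mentioned above mostly amount to “never change what you’ve already shipped”: add new functions instead of altering old ones, only append struct fields, and use a size field so a newer library can tell how much of a struct an older caller actually filled in (the Win32 cbSize idiom). A hypothetical sketch, with invented names:

    #include <stddef.h>
    #include <stdio.h>

    /* v1 of a public struct, exactly as first shipped. Its layout is frozen. */
    struct foo_options_v1 {
        size_t cb_size;      /* caller sets this to sizeof(its own struct) */
        int    verbosity;
    };

    /* Later versions may only append fields, never reorder or remove them. */
    struct foo_options {
        size_t cb_size;
        int    verbosity;
        int    use_colour;   /* appended in v2 */
    };

    /* The library checks cb_size before touching newer fields, so a binary
     * built against the v1 header keeps working with a v2 library. */
    static void foo_init(const struct foo_options *opt) {
        int colour = 0;
        if (opt->cb_size >= offsetof(struct foo_options, use_colour) + sizeof(int))
            colour = opt->use_colour;
        printf("verbosity=%d colour=%d\n", opt->verbosity, colour);
    }

    int main(void) {
        struct foo_options_v1 old_caller = { sizeof old_caller, 2 };
        struct foo_options    new_caller = { sizeof new_caller, 1, 1 };
        foo_init((const struct foo_options *)&old_caller);  /* old binary's view */
        foo_init(&new_caller);                              /* new binary's view */
        return 0;
    }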
It might have been 256 MB disk and 64 MB RAM, not sure… But yeah I was surprised when setting the VirtualBox resources how low it was. You can’t run Ubuntu like that anymore.
True, but that happens to me on a daily basis in Ubuntu :-/ Especially with external drives
Ubuntu used to be better but has gotten worse. Ditto OS X. I have had to reboot both OSes because of instability, just like Windows back in the day. 10 years ago they were both more stable IME.
You’d also be missing out on a lot of security stuff. XP was a Superfund site of malware back in the day, before Microsoft started cleaning things up in SP2 and did radical refactoring in Vista.
Sure, I’m not saying we should literally use Windows XP :) I’m just saying it could be a good reference point for efficient desktop software. We’re at least 10x off from that now, and 1000x in some cases. We can be more secure than XP and Vista too :)
Core libraries like glibc (since 2.1) and libstdc++ (since GCC 5) are intending to remain backwards compatible indefinitely.
If you need to distribute a binary built against glibc, you need to build it on a very old distribution so that it can run on any other that your users use (which means you will produce less secure binaries - e.g. due to compiler bugs, new libraries that do not compile, or missing recent hardening like stack protection). That’s because some function symbols carry a version number that may not be available in the older glibc some users have. That is not what you call backward compatible.
And if you think about musl, then it’s a whole separate world: mixing libraries built for glibc with libraries built for musl will break.
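A common workaround when you can’t (or won’t) build on an ancient distribution is to pin individual symbols to an older version, so the produced binary doesn’t require a newer glibc than your users have. A minimal sketch; the version names below are the usual x86-64 ones, so check objdump -T against your own libc before copying them:

    #include <stdio.h>
    #include <string.h>

    /* memcpy gained a new GLIBC_2.14 version; explicitly bind our references
     * to the old x86-64 baseline version instead, so the resulting binary
     * also loads on distributions that only ship the older symbol. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[16];
        memcpy(dst, "hello", 6);
        puts(dst);
        return 0;
    }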
GUI apps built for Windows 95 still work out of the box on Windows 10.
I think the author confuses backward compatibility with forward compatibility. Backward compatibility would mean that apps built for Windows 10 would still work on Windows 95.
A binary compiled against an earlier version of glibc is forward compatible with more recent versions of glibc. A binary compiled against a recent version of glibc is not backward compatible with earlier versions (but still forward compatible with newer versions).
But glibc itself, by supporting the symbols of the past, is backward compatible. glibc is partially forward compatible, for the symbols that exist presently, so that newer versions are backward compatible. This is the same for operating systems that can run old binaries.
Honestly lost me here. x86_64 binaries will work on 80% of up to date systems, and everyone else can maintain a package in their distro of choice if they want it.
Heck, even several year old binary nonfree blobs often work fine, much more than half the time.
Most binaries I download just list the libraries needed and make it my problem to install them. Some ship .so files in the bundle, which seems popular with nonfree especially for some reason.
That’s a pretty bad user experience compared to application distribution on Windows/macOS. Sure, the dev can package it for various distributions or volunteers can, but that still results in a lot of work.
A Debian package is cooperative. It relies on a community to say things like “in bullseye, these libraries at these versions will be available by default; these you need to state a dependency on; anything else, you need to ship”. You can rely on that for a couple of years.
flatpak/snap/containers/VM images are for hostile environments, where the people managing the deployable software don’t trust the people managing the systems that will get the deployment, or vice-versa, or both. “They won’t ship up-to-date versions of libhoopla” is isomorphic to “They can’t be bothered to use the stable version of libhoopla”.
This is what I find odd about the design of these things. They’re trying to solve for the operating environment to be hostile but also the reverse. Mixing sandboxing and dependency containment seems so ambitious and not something particularly well suited to Linux.
As a developer I’m bothered by “can’t be bothered to use the stable version of …”. There’s a balance between “I want compatibility with something N years old” and “I want to actually ship something and not reinvent stuff”. If targeting old stuff gets too annoying, you’ll get Flatpak (or equivalent). In my case that was recently dropping .NET Framework and bundling .NET Core instead, because it saved days of work at the cost of an extra 60 MB.
If by any chance you are packaging a Rust application you might want https://lib.rs/crates/cargo-deb. Amazingly easy: install and run “cargo deb”, that’s it!
Anyone who can tolerate debian/control and debian/rules has a lot more patience than I do. (For comparison, RPM is just as dumb-as-bricks as dpkg, but specfiles are a lot simpler to write by hand.)
There’s also fpm (https://github.com/jordansissel/fpm) which “just works” in many situations if you just need the package but you’re not contributing it upstream.
Unfortunately, it is probably best for the types of packages which don’t need it. As soon as there are dependency differences across distros (e.g. /lib/libfoo.so vs /lib/libfoobar.so or something else) you are out of luck :l (they also haven’t merged my PR)
This is lightly touched on by some other commenters, but what Flatpak/Snap/AppImage all have in common is that they shift the onus of packaging and distribution from the distro maintainers to the developer of the software (in a distro-agnostic way), and (in the case of Snap/Flatpak) without giving up nice things such as auto upgrades. These tools also bundle a lot of other things like sandboxing, reviews, contained libraries, etc., but I don’t think we should ignore the practical value of independent distribution.
This part is mainly me exploring a hypothetical, but I can imagine a distro- (or even OS-) agnostic software registry that app developers can publish to and use to natively integrate auto-updates (à la Chrome/Firefox) and other nice features (such as expressing dependencies). If it could be federated to allow distros (Debian/Arch), large software communities (GNOME/KDE/GNU), companies (Red Hat/Ubuntu/Google), services (GitHub) and individuals (you and me) to run registries for their own software, but also to pull from other registries they choose to depend on/federate with (e.g. Terraform is built with and distributed on GitHub’s registry, and the Debian Terraform registry makes it available in a pass-through or patched mechanism for their installations), then it could dramatically reduce the duplication/maintenance burden in the industry. Use something like TUF (https://github.com/theupdateframework/) to ensure updates are secure and you’ve built a terrific way to get a lot of the practical benefits of Flathub, without enforcing the use of a specific store, service, sandbox tool, or organisation.
This hypothetical already presents a lot of issues, and will push a lot of buttons (for good reason), but I think is an interesting area to explore.
I’m much more interested in how to get Excel and Photoshop on Linux rather than untrustworthy drive-by apps and games
I agree with criticizing Flatpak and the rest of the article (getting my head bitten off on Fedi for linking to it and defending it) but not this sentence. Giving preference to mega-apps over indie apps isn’t what I want.
I’m still trying to figure out what non-free apps I’m supposed to want so much on Linux that Flatpak (etc) are worth the trouble. The only non-free apps I have installed right now are Chrome (only because neither Firefox nor Chromium will cast to my TV), and Microsoft Teams (for work and for child’s school), and both of those are distributed as RPMs that add their updates repository to /etc/yum.repos.d/.
Whenever I accidentally install an application from flatpak (which happens because I run Fedora, for my sins) there’s always some usability problem with it until I realize that that’s what the problem is, uninstall it, and reinstall it from RPM.
I can answer that by doing flatpak list on my system.
Obsidian
Zoom
Two proprietary programs I use by choice (1) or not (2), which are then installed without messing up my system and nicely updated when needed.

I found myself installing free software like Lagrange, Foliate or Calibre through Flatpak because it was easier and they weren’t packaged in Debian/Ubuntu (this has been solved for Calibre but I’m still using the Flatpak version out of laziness).

I mostly agree with all the criticisms of Flatpak but, seriously, it’s really easy and really useful for stuff not in Debian.

Now, I’m asking myself if going all-in on Flatpak is really a good solution. An interesting analysis would be to study the advantages/disadvantages of going all-in versus using it only for some proprietary stuff. For example, protonmail-bridge is on Flathub. It’s easier than downloading a deb from the ProtonMail website. But, afaik, it’s not an official flatpak. Could it be trusted? Those are not easy questions.
Oh! I do also have Zoom installed, or I used to. They also provide RPMs and a yum/dnf repository for updates.
Calibre has been packaged in Fedora forever. I did have Foliate installed through Flatpak until it got packaged in Fedora 34; it had substantial problems with finding files that were solved when I reinstalled it from RPM. Lagrange I really can see installing from Flatpak, but I build it from source. When I had it installed by Flatpak, font and UI scaling was inconsistent depending on whether I was running it on X or Wayland.
I still think “It’s an end-user application not packaged in my distribution” is a reasonable use case for Flatpak, but in my experience, it’s always been strictly worse than using the same software, but packaged by the distribution, and I would absolutely hate for Flatpaks to make up the bulk of my system.
There’s a misunderstanding about windows there.
While you don’t have to for Win32, using modern .net does require it and you’re expected to either ship it with the app (self-contained package like appimage) or install it with the system (framework-dependant package like flatpak). Also going one layer beyond winapi - you’re expected to ship your own C++ runtime if you use one and you likely have multiple copies of it since “library packages” are not very popular on windows.
The complaint about lack of sharing is meh. Out of 8 apps I’ve got 7 share the base layers. The situation’s not perfect but we can’t expect it to be. People can do things differently and will do it sometimes.
Yep that bit me a few times. And don’t you expect the link for the c++ distributable to stay the same. Microsoft released a new fix version, and it’ll have a new cryptic link.
I wonder what happens when a hundred of your apps are running with flatpak. 8 is way too low to be interesting.
I don’t expect the ratios to change much for two reasons: 1. Once you get to hundreds of apps, you’re likely getting majority from a single distribution source and they’ll have popular common patterns. 2. There’s only so many reasons to do things in a different way. In the same way, out of hundreds of GUI apps, almost all of them will use Gtk or qt, with a long tail of something different for very few. We see the same thing with docker images - almost every one is based on Debian, Alpine, or CentOS (often providing two to choose from). I don’t see any reason for flatpak environment to behave any worse, given the shared runtime is an actual feature of that system.
On my system with 163 apps: 18 runtimes, and the top 5 cover 128 of the apps. Preinstalled in Endless OS: 58 apps, 13 runtimes.
(See the merged post!)
I think another thing to note is these are not all actually different runtime types. You also have different versions which the apps will migrate from/to over time. And those should be fairly trivial updates to unify the base.
The the C++ library shipping is largely down to MS not wanting to ship it in its super unstable state early on in C++ history, but that then painted them into a wall with shipping it with the OS later. That does mean that they can tie it to different versions of MSVC++ so can improve perf in ABI-breaking ways which is nice for them.
But it isn’t required, libc++ is ABI stable on all its platforms, for Mac apps don’t need their own copy of the standard library. The general ABI stability of macOS is such that apps that aren’t relying on non-API features generally just work. They have made some ABI breaks, but generally with good reason - they dropped bincompat with PPC after a number of years of intel, they dropped IA32 after 10 years of x86_64, they dropped arm <v8 after several years of 64 bit iOS. The big difference in apple vs. MS ABI compatibility, is that apple is much more willing to break bug or non-api compatibility. e.g If someone ships buggy software on windows that happens to work MS will add workarounds to keep it going. By and large apple won’t - and will contact the dev to get them to fix their software.
The strict ABI compatibility problem is why macOS was stuck with an archaic openSSL for such a long time - because openSSL kept breaking ABI Apple could not update it. The perpetual “who cares about ABI” essentially forced apple to stop using it.
While I’m as against software bloat as everyone else, and I admire the Apollo guidance software authors as much as everyone else, I find that analogy rather disingenuous.
Apollo guidance software authors managed to get things done with extremely limited hardware of that time by placing a lot of things outside the computer, like the entire user interface. Astronauts interacted with it by entering numeric command codes and looking up its numeric responses in the manual. Its scope was also quite limited, and I assume quite a lot of things were pre-calculated.
There’s no way anyone could make a Voyager or Curiosity with that hardware no matter how dedicated they are to keeping their software small.
I’m not sure I would call it disingenuous as much as a straw man. The hardware that was available to the Apollo program is different from the hardware that is available today. Every situation is resource constrained and engineers/software authors will make as much of the resources that are available as they can (hopefully with some acceptable margin of error). Especially so if you are building something that you can’t refresh later.
An analogy I like to use is that hardware is a glass and software/firmware is water. The engineers/software authors pour as much water into that glass as they can, find out when it runs out, then back off by a little bit and work within that amount. If Apollo had a 32-core processor and 16 GB of RAM that weighed less than the 4 kB they had, they would have poured a lot more “water” into the hardware.
An interesting comparison would be to see what percent of the on-board resources Apollo used, as compared to Voyager or Curiosity. My guess is it didn’t change much and we’ll have the same debate in another 50 years: Wow! Look at what they made Curiosity do without quantum computers! Why can’t the kids today squeeze more qubits out of their hardware?
I keep my vote on QEmu(KVM) + Qcow2 as the non-distribution sanctioned packaging and sandboxing runtime. Once ironically, now as faint hope. The thing is, the packaging nuances matters little in the scale of things, heck - Android got away with a bastardised form of .zip and an xml file.
The runtime data interchange interfaces are much more important, and Linux is in such a bad place that even the gargantuan effort Android 5.0 and onwards applied trying to harden theirs to a sufficient standard wouldn’t be enough. No desktop project has the budget for it.
You have Wayland as the most COM-like being used (tiptoeing around the ‘merits’ of having an asynchronous object oriented IPC system) – but that can only take some meta-IPC (data-device), graphics buffers in only one direction and snaps like a twig under moderate load. Then you attach Pipewire for bidirectional audio and video, but that has no mechanism for synchronisation or interactive input, and other ‘meta’ aspects. Enter xdg-desktop-portal as a set of D-Bus interfaces.
Compared to any of the aforementioned, Binder is a work of art.
Indeed, it seems to me that a lot of companies that jumped on the sandboxing bandwagon have missed a critical point that the sandboxing systems used in the mobile world didn’t. Sandboxing is a great idea but it’s not going to fly without a good way to talk to the world outside the sandbox. Without one, you either get a myriad incompatible and extremely baroque solutions which so far are secure by obscurity and more often than not through ineffectiveness (Wayland), or a sandboxing system that everyone works around in order to keep it useful (Flatpak – an uncomfortable amount of real-life programs are ran with full access to the home dir, rendering the sandbox practically useless as far as user applications are concerned).
Our industry has a history of not getting these second-order points. E.g. to this day I’m convinced that the primary reason why microkernels are mostly history is that everyone who jumped on the microkernel bandwagon in the 1990s poured a lot of thought into the message passing and ignored the part where the performance will always be shit if the message passing system and the scheduler aren’t integrated. QNX got it and is the only one that enjoyed some modicum of long-term success.
I’m not convinced we’re going to get the sandboxing part right, either. While I’m secretly hoping that a well-adapted version of KVM + QCow2 is going to win, what I think is more likely to happen is that the two prevailing operating systems today will just retrofit a sandboxing system from iOS, or a clone of it, and dump all “legacy apps” into a shared “legacy sandbox” that’s shielded from everything except a “Legacy Document” folder or whatever.
That’s the next tier: say you have nice and working primitives, now you just need to design user interfaces that fit these ergonomically. Mobile didn’t even try, but rather just assumed you have the threat modelling capacity of a bag of sand and proposed “just send it to us and we can give it back to you later” – it worked. I am no stranger to picking uphill battles, but designing scenarios for conditioning users to adopt interaction patterns that play well with threat compartmentation is a big no.
There is much behind that, especially so in open source. Rewrite ‘established tech’ for ‘hyped platform’. If you’re a well-funded actor, throw money at marketing the thing and repeat until something sticks. People new to the game might even think that what is being sockpuppeted this time around actually a new thing and not a tired rehash.
Heavens no. While I believe in whole-system virtualisation for compatibility or performance, the security/safety angle is dead even before decades of hardware lies are uncovered.
What boggles my mind is that we’re almost dipping into triple digit well-analysed big budget product sandbox escapes and people still go for that as the default strategy. Post facto bubble wrapping? that’s a parrot pining for the fjords. Design and build for least-privilege separation? perhaps, but hardly applies everywhere and the building blocks in POSIX/Win32/… are … “not ideal”.
There are some big blockers for the KVM/QCow2 angle, getting good ‘guest additions’ and surrounding tooling for interchange and discovery (search) is a major one. The container-generation solution of ssh and/or web comes to mind as the opposite of good here.
That would be the pinnacle of tragedy (until the next one) – the prospect of having all the ergonomics of data sharing between domains on a smartphone with the management and licensing overhead of a desktop.
I know you’re kidding about the KVM + QCow2 sandboxing but I really think it’s the least bad option that can be built with what we have now and has a chance at industry traction. Guest additions are only a problem if you’re trying to run a kernel built for real hardware, in which case you need “special’ drivers. But if one were to devise and implement an ACME Virtual Sandbox Qemu machine, qemu-based sandboxing engine could just use a kernel with the guest additions baked in. Fine-grained access control is then a matter of mounting the correct devices over a virtual network.
It’s not a good solution but it does have the potential of providing satisfactory solutions for a bunch of thorny problems, not the least of which is dealing with legacy applications that nobody’s going to update for some fancy new sandboxing system. At least in the desktop space, neither of the two major players has any interest in solving much simpler problems. I doubt any of them wants to throw money at solving this problem properly, especially when they both have perfectly good walled gardens that they can sell as security oil. This one’s clunky but at least it’s not Windows Subsystem for Android.
I’m still waiting for the day when we’ll just sell software along with the computer that runs them, and every computer will be the size of an SD card and we’ll just plug the thing into the deckstation and our sandboxing solution is going to be real, physical segregation :-P.
(I also just know someone’s gonna figure out how to break that but hey, that’s the kind of fun that got me into computers in the first place!)
Yes and no. So I compartment some things, particularly browsers, by hardware. There’s a cluster. It netboots, gets a ramdisk friendly image, boots into chrome, forwards to my desktop. When I close a ‘tab’, the connection is severed and RST is pulled (unless there’s a browser crash and in those cases I collect the dump and some state history, more than a few in-the-wild 0-days have been found that way). That puts the price point for ‘smash and grab’ing me well beyond what little I am worth, and it opens up for a whole lot of offensive privacy. I think this can be packaged and made easy enough for a large set of users.
They are used for some things that are most readily available in user-space, integration with indexing services, clipboard, drag and drop. I didn’t do any requirements engineering for arcan shmif. I picked a set of most valuable applications, and wrote backends to see what I was missing, then iteratively added that.
The first round was emulators because games and speedruns are awesome free tight timing test-sets. The second round was QEMU for basically the reasons we’re talking about. Linux won’t ever fix its rain forest of broken ABIs. Important legacy applications will break for someone. I don’t agree with that. While my belief is compatibility only, if someone thinks it fits their threat model, I won’t judge (openly).
Cartridges are coming back in style. One project I have in the sinister pile with an investor pitch deck comes distributed on SD cards targeting certain SBCs.
I mean, doing any work that involves multiple distinct software on an Android phone or even worse on iOS is completely and utterly impractical because of this so I’m not sure about the “didn’t”. Those are good for consuming content, but for producing it ? They only show that even the best sandboxing systems mankind was able to come up with so far are a full-blown failures for doing actual work.
Sandboxing is a pipe dream; it’ll never work. Reserve hope for packaging (though not very much). And wrt packaging, the primary issue is the stability, not the quality of the associated APIs, wherefore the steam/flibit approach works decently well in practice. And has the advantage that it is not completely opaque, such that it is easier and more sensible to, say, swap in a patched libSDL2.
I think the reason android is in better shape than linux is because it has more clearly defined goals. ‘What do we want to package, and why, and for whom?’—‘Apps written in java, to collect ad revenue, for clueless smartphone users.’
It depends on how you define sandbox and what you expect from it. VMs are sandboxes, Docker is a sandbox, systemd’s dynamic users are a sandbox, most of my system utilities run in an SELinux sandbox, your browser has at least 2 sandboxes, etc. The underspecified “sandbox” label is the problem; we don’t lack sandboxes that actually work.
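For a sense of how small some of these building blocks can be, here’s a minimal sketch (my own illustration, not taken from any of those projects) of one kernel primitive, seccomp strict mode; everything from browser sandboxes to flatpak layers namespaces, seccomp-bpf filters and LSMs on top of primitives like this:

```c
/* Minimal sketch: seccomp "strict mode", one of the oldest kernel
 * sandbox primitives. After the prctl() call this process may only
 * use read(), write(), _exit() and sigreturn(); any other syscall
 * gets it killed with SIGKILL. */
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>

int main(void) {
    printf("before lockdown: pid %d\n", getpid());
    fflush(stdout);

    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");
        return 1;
    }

    /* write() is still on the allowlist... */
    const char msg[] = "inside the sandbox\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);

    /* ...but getuid() is not, so the kernel kills us right here. */
    syscall(SYS_getuid);
    write(STDOUT_FILENO, "never reached\n", 14);
    return 0;
}
```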
A sandbox is something I can use to run untrusted code and limit the scope of harm it can do. No sandbox implemented in software has this property.
Nor does any implemented in hardware, for that matter… Spectre/Meltdown, enclave escapes, etc…
A web browser? DOSBOX? QEMU?
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=chrome
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=firefox
https://en.wikipedia.org/wiki/Row_hammer
https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)
There’s a difference between “we have a system doing X and the implementations have bugs” and “we don’t have a system doing X”. If any past or future problem disqualifies an approach, then we don’t have lightbulbs, cars, agriculture, … They all have failure modes. I don’t think such a hard stance is practical.
Modern sandboxes moved the bar from “just put a long string somewhere” (1990s) to “you need a lot of skill, or hundreds of thousands of dollars, to get a temporary exploit”. And we’re not slowing down on improvements.
A container’s fundamental purpose is to prevent infection of the system and any other systems that may be linked to it, and thus to prevent exfiltration of secrets or abuse of / damage to the machine. The fact is that this is fundamentally impossible to guarantee and very, very costly to assume it holds. While cars have failure modes, they still mostly get you from A to B – they at least mostly do the job you intended – whereas containers do not and cannot prevent malicious attacks. At best this is like everyone having a car that will only take you halfway to where you are driving, and at worst it’s like having half a dam – it completely and utterly defeats the point of having the dam in the first place. Sure, it “sometimes works”, but I’m not very sure I would want to use it, and I certainly wouldn’t sell it to anyone else as a good thing.
Citation needed. All/Most of the following attacks are fundamentally “long string attacks”, i.e. buffer overflows.
https://morphuslabs.com/attacking-docker-environments-a703fcad2a39
https://hackerone.com/reports/1332433
https://portswigger.net/daily-swig/vulnerabilities-in-kata-containers-could-be-chained-to-achieve-rce-on-host
https://www.trendmicro.com/en_us/research/21/b/threat-actors-now-target-docker-via-container-escape-features.html
Security is not a binary state. You work with a given threat model, then work to prevent specific classes of attacks/vulnerabilities. There’s no tool that “provides security”. Containers remove some classes of issues and add some new ones to think about. If you want to isolate the filesystem and network between processes on the same host, containers will help you. If you want to mitigate kernel exploits, SQL injection, or people storming your datacenter, containers won’t help you.
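To make the “isolate the filesystem and network between processes on the same host” part concrete, here’s a rough sketch of the primitive container runtimes are built on. This is my own illustration, needs root (or CAP_SYS_ADMIN), and leaves out everything a real runtime adds on top: cgroups, seccomp filters, capability dropping, pivot_root, user namespaces, and so on.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mount.h>

int main(void) {
    /* New mount, network and hostname namespaces: mounts we make,
     * interfaces we see, and the hostname become private to us. */
    if (unshare(CLONE_NEWNS | CLONE_NEWNET | CLONE_NEWUTS) != 0) {
        perror("unshare");
        return 1;
    }

    /* Make our mount namespace private so nothing propagates back
     * to the host's view of the filesystem. */
    if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
        perror("mount");
        return 1;
    }

    sethostname("sandboxed", 9);

    /* The shell below sees its own hostname and a fresh, empty
     * network namespace (just a downed loopback); the host is untouched. */
    execlp("sh", "sh", (char *)NULL);
    perror("execlp");
    return 1;
}
```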
Secure / not secure are not real states of the system. You need to define what kind of secure we’re talking about and what abuse is. What you define as abuse may be my business model.
Which leads to:
What I’m trying to say is: what you mentioned is not the fundamental purpose of containers, and framing the discussion that way is a mistake at step 1. If you’re interested in learning more about these issues, I recommend reading about threat modelling.
I don’t know of anything that summarises the whole decades of changes, but in short: stack overflows are dead thanks to stack protectors, shadow stacks and many compiler improvements; heap overflows are much harder due to (K)ASLR and various layout mitigations, and the classic inject-and-run-shellcode payloads are almost dead due to W^X; ROP is still kind of there, but CET is getting popular and control-flow integrity in general is a thing we talk about now. The most popular V8 breakouts these days are double-frees / dangling pointers and type confusion, as far as I remember from browsing CVEs. These are multi-step exploits which are often still not reliable or immediate, since they require address leaks first and there’s some luck involved. (And then you still need a separate browser sandbox escape.) Either way, basic overflows are stone-age tools at this point and rarely work.
https://github.com/microsoft/MSRC-Security-Research/blob/master/presentations/2018_02_OffensiveCon/The%20Evolution%20of%20CFI%20Attacks%20and%20Defenses.pdf describes part of that in better detail.
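For anyone who hasn’t followed that evolution, here’s a toy illustration of the first point, assuming a distro gcc/clang that turns on -fstack-protector-strong by default (most do nowadays):

```c
/* The classic "long string" bug. With a stack protector the compiler
 * places a random canary between buf and the saved return address;
 * smashing it aborts the process instead of handing over control flow. */
#include <string.h>

void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);          /* unbounded copy, the 1990s classic */
}

int main(int argc, char **argv) {
    if (argc > 1)
        greet(argv[1]);
    /* Pass a long argv[1] and glibc prints
     * "*** stack smashing detected ***" and aborts, rather than
     * jumping to attacker-controlled data. ASLR, W^X and CFI then
     * make the remaining avenues harder still. */
    return 0;
}
```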
As for the effort/price: a couple of years ago Zerodium was advertising that it would buy a Windows Chrome RCE + sandbox escape for $250k.
This is wholly antithetical to defense in depth. I don’t like Linux containers, and Docker in particular has had a long history of bugs, common misconfigurations and footguns; even so, sandboxing raises the attacker’s workload from just exploiting the application to exploiting the application and then pivoting to a container escape, if one is even possible.
One thing to mention about sandboxing an application in a container is that you don’t necessarily get a whole system like you would if the app were running on an OS installed on bare metal. Commonly at my company, and I’m sure at many others, we use distroless images and run the applications in these containers as non-root. This severely limits what an attacker has in their toolkit to attempt a container escape.
The next paragraph
The post-exploitation phase is important since, as I said, the goalposts have now shifted. The question becomes: how do I pivot from the application I exploited to one of these other containers (which, as the article mentions, may not have an exposed service)?
idk what this has to do with container weaknesses. They left a container management app exposed to the world, unconfigured. The application has to have access to the Docker runtime to do its job. If we wanna talk about a bare-metal equivalent, this would be like leaving cPanel open to the world. Same shit.
Quoting the researcher who found the exploits:
What they’re encouraging as a solution is to aggressively drop privileges the sandbox doesn’t need. However, this is a problem with Kata Containers and not with the concept of sandboxing itself. The researcher here is arguing for more aggressive sandboxing, not for doing away with the idea.
The attack here is on a specific Docker configuration. This configuration is generally quite rare afaik (very few times have I ever run privileged docker containers, and we don’t run them in prod). Moreover, again the researcher argues here that more aggressive sandboxing is needed, not less.
I’m not sure how any of these point to doing away with sandboxing altogether. I’d argue that instead, they point to problems with specific implementations of Linux containers. And I’d agree! I think the piecemeal way in which you construct Linux containers via various primitives makes it very easy to construct insecure ones, and many implementers (Docker obvs, and I guess here Kata Containers as well) have fallen into that trap. I don’t think that this is damning of sandboxing as a concept though.
Basically the only apps I use on my work-issued Android phone are 2FA apps (Okta, Google etc). These at least are considered ok to run on the platform.
This is indeed depressing … My last upgrade was from Ubuntu 16 to 18, not to the current 20 release, because it appears to have less Snap BS on it. Not sure what I’m going to do in a few years :-(
On another note, I recently ran Windows XP on modern hardware and it absolutely flies. As far as desktop apps go, it does basically everything that modern computers do. We used to make fun of Microsoft software for being “bloated”, but the Linux world is 1000x worse now. The Microsoft calculator app was never 152 MB :-(
In fact the entire Windows XP installation is under 128 MB! Amazing!!! And it runs in 32 MB of RAM.
I wonder if Linux needs something like COM – stable shared library interfaces. I would prefer something more like IPC than shared libraries (Unix style), but performance is always a concern. Although you did have “DLL hell” back then too, which is what Snap and such are trying to avoid.
But I wonder if “DLL hell” was really down to people NOT using COM (which should at least let consumers know when an interface changed), or to using it poorly, i.e. changing the semantics of an existing interface rather than creating a new interface when breakage occurs.
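For what it’s worth, the COM discipline in miniature looks something like the sketch below (made-up names - ICalc, ICalc2, calc_query_interface - not real COM plumbing): a published interface and its vtable layout are frozen forever, and new capability means a new interface that callers probe for at runtime.

```c
#include <stdio.h>
#include <string.h>

/* v1, published long ago; its layout and semantics never change again. */
typedef struct ICalc ICalc;
struct ICalcVtbl { int (*Add)(ICalc *, int, int); };
struct ICalc     { const struct ICalcVtbl *vtbl; };

/* v2 is a separate interface, not an edit to ICalc, so binaries built
 * against v1 are untouched. */
typedef struct ICalc2 ICalc2;
struct ICalc2Vtbl {
    int (*Add)(ICalc2 *, int, int);
    int (*Mul)(ICalc2 *, int, int);   /* the new capability */
};
struct ICalc2    { const struct ICalc2Vtbl *vtbl; };

/* Implementation side (the "DLL"). */
static int add1(ICalc *s, int a, int b)  { (void)s; return a + b; }
static int add2(ICalc2 *s, int a, int b) { (void)s; return a + b; }
static int mul2(ICalc2 *s, int a, int b) { (void)s; return a * b; }
static const struct ICalcVtbl  vt1 = { add1 };
static const struct ICalc2Vtbl vt2 = { add2, mul2 };
static ICalc  calc_v1 = { &vt1 };
static ICalc2 calc_v2 = { &vt2 };

/* QueryInterface in miniature: hand out whichever interface the caller
 * asks for, or NULL if this build doesn't implement it. */
void *calc_query_interface(const char *iid) {
    if (strcmp(iid, "ICalc")  == 0) return &calc_v1;
    if (strcmp(iid, "ICalc2") == 0) return &calc_v2;
    return NULL;
}

/* Caller side. */
int main(void) {
    ICalc *c = calc_query_interface("ICalc");        /* old clients keep working */
    printf("2 + 3 = %d\n", c->vtbl->Add(c, 2, 3));

    ICalc2 *c2 = calc_query_interface("ICalc2");     /* newer clients probe for more */
    if (c2)
        printf("2 * 3 = %d\n", c2->vtbl->Mul(c2, 2, 3));
    else
        puts("ICalc2 not available, degrading gracefully");
    return 0;
}
```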
Mozilla had XPCOM more than a decade ago but abandoned it for some reason. I think they abandoned true cross language interoperability and went for just JS / C++ interop like WebIDL in Chrome (as far as I understand).
Yeah. Software that was written by people using HDDs runs really well when you run it on SSDs.
I consider it a systemic tragedy that programmers tend to use very fast computers when we’re actually the one group that has the most capability to change things to make slow ones useful.
Yeah I asserted on Hacker News that the Apple M1 would probably have the effect of making the entire web slower.
If you assume that web developers are more likely to buy newer Apple laptops sooner than their audience, and spend more on them, that seems inevitable :-(
I think SSDs were a big jump in hardware performance, but CPU and memory are issues as well. Today’s apps use so much memory that people running with 8 GB of RAM can experience slowdowns, let alone older computers with 2 GB or 1 GB (like at the library, or what many low-income people use, etc.)
Fwiw that’s also been true of every new AMD and Intel CPU, so there’s no particular reason to single Apple out, other than their currently being in front. Obviously I do believe you are correct.
This is one area where I hoped at one point that web browsers might help a bit because of per tab memory limits.
Also, CI environments and serverless (née Platform as a Service) environments tend to charge by the 128 MB-second of RAM, which tempts people to try to fit things into smallish boxes.
I think this is unique because it’s the first time there’s been a pretty big differential between Apple and the rest of the industry, and because web developers are more likely to use Apple machines while the audience is more likely to use Windows.
When Apple was using Intel chips, top of the line Windows laptops had the same CPUs or faster. Now if you’re a Windows user, AFAIK you can’t get a laptop as fast as the Macs that everyone is buying right now.
Obviously this is not Apple’s “fault”; they’re just making faster computers. And I haven’t quantified this, but I still think it’s interesting :)
Ah. You have a good point. Yes, I’ll concede this!
I don’t find this argument compelling.
First off, most usage is via mobiles, and any company worth its salt (i.e. striving to make money) will take this into consideration. Mobile clients are generally not as fast as desktop ones.
Secondly, the M1 line is about a year old. Has there really been a critical mass of web technology developed and deployed during that time, that is acceptable to run on an M1 but not on others, to materially tilt the scale of performance in the wild?
And lastly, this paints an incredibly bleak “us vs. them” picture, wherein Intel, one of the world’s largest companies and one that has been at the forefront of processor development for decades, will never[1] catch up with Apple on performance, and wherein Apple, seeing a gaping hole in the market, won’t move in with more affordable M1 machines to capture it.
[1] well, in the medium term
I’m not claiming this is a permanent state of affairs! Surely the CPU market will eventually change, but that’s how it is now.
The first claim about company incentives is empirically false… If mobile app speed mattered more than functionality or time to market, then you wouldn’t see apps that are 2x or 10x too slow in the wild, yet you see them all the time.
Funny story: at Google, which traditionally had very fast web apps and then fell off a cliff around 2010 or so, people knew this was a problem. There were some proposals to slow down the company network once a week out of respect for our mobile users. (The internal network was insanely fast, both throughput- and latency-wise.) This never happened while I was there. Seems like a good idea, but there was no will to do it.
The truth is that 99% of changes to web apps never get tested on a mobile network or a mobile device. It slows down development too much. Why do you think desktop Chrome has a mobile device emulator built in? Because that’s how people test their changes :)
Once there’s a slowdown, it’s fairly hard to back out after a few changes are piled on top. So in this respect Google is like every other company that does web dev. There is no magic. It’s just a bunch of people writing JavaScript on top of a huge stack, and they are incentivized to get their jobs done.
Even when they did test on mobile, employees generally had the latest phones, because phones were given out as gifts every year (for a while). And testing on a phone doesn’t solve the problem that you’re still testing on an insanely fast network.
Thanks for clarifying and expanding. My faith in the free market and competition has taken a dent.
Actually, ironically the web solves this problem with process-based concurrency and stable protocols and interchange formats.
So if I have a web calculator app, a web spreadsheet, and a web mail app, I don’t ship the GUI with the app.
Instead I emit protocols and languages that cause the GUI to be displayed by the browser.
In some sense you do move some of the bloat to the browser, but it’s linear bloat, not multiplicative bloat like with Linux desktop apps.
You are forced to do feature detection (with JS) rather than version detection, but that’s good! That is, the flaky model of solving version constraints in a package manager is part of what leads to DLL hell.
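The native-library analogue of that, for what it’s worth (a sketch with a made-up libfoo / foo_fancy_render, not any real library): probe for the capability you actually need at runtime and degrade gracefully, instead of pinning a library version in package metadata.

```c
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    void *lib = dlopen("libfoo.so.1", RTLD_NOW | RTLD_LOCAL);
    if (!lib) {
        puts("libfoo not present, using the built-in fallback");
        return 0;
    }

    /* Does the installed copy have the newer entry point? */
    void (*fancy)(void) = (void (*)(void))dlsym(lib, "foo_fancy_render");
    if (fancy)
        fancy();                   /* feature available: use it */
    else
        puts("old libfoo, falling back to basic rendering");

    dlclose(lib);
    return 0;
}
```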
Also, it’s much easier to sandbox such a web app than one that makes a lot of direct calls to the OS.
So the web is more Unix-y and avoids a lot of the problems that these Linux desktop apps have. Although I guess we recreated a similar problem again on top of it with JS package managers :-/
But so many things are just not possible with the web? Good luck getting a DaVinci Resolve or Solidworks (the real thing, not something with 1/100th of the features, format and hardware support), editing things that require 32+ GB of RAM to work semi-comfortably, into a web page.
Definitely true, I’m just pointing out an underexamined benefit of the web architecture for certain (simple) apps. There are plenty of problems with the web for that use case too, in particular that it’s most natural to write everything in JavaScript!
Well, after a fashion, we do already have COM in Linux, via Wine.
And by sheer headcount, most of the applications I run (via Proton, née Wine) are using COM and the Win32 APIs.
If you want a solid API to bang against on Linux–at least for gaming–use Microsoft APIs. :3
Yeah I think I saw a pithy tweet about that recently!!!
I can’t find it but this is similar: https://twitter.com/badsectoracula/status/1181574065817038850
“the most stable ABI on Linux is Wine”
I’m not really familiar with the gaming world but it seems like this is a common thing: https://news.ycombinator.com/item?id=22922774
So yeah, Linux is bad at API design and stability, so we have converged on something that is known to work, which is Win32 :-/
That doesn’t sound like a full install. Officially it required 64 MB of RAM but disabled things like wallpaper below 128 MB; unofficially, if you want to run any software, the real number is much higher. Nonetheless the point remains valid.
AFAICT COM is a bit of a red herring. COM allows for C++ style objects to be expressed with a stable ABI. But a lot of Windows is built on a C ABI where objects are not required. The key part is following the rules of either approach to ensure the ABI remains stable. Unfortunately this creates a situation where one person anywhere who makes a serious mistake can cause misery everywhere - it requires developers to be perfect. Frankly though, the vast majority of the time, ABI stability was achieved by just following the rules, and the rules are not that hard to follow.
I’ve ranted a bit about the lack of ABI stability on Linux libraries before, and agree with the original author’s point about “militant position on free software.” Just like the above, this position doesn’t need to be held universally - if it’s held by any single maintainer of a widely used library, the result is an unstable ABI.
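A small illustration of what “following the rules” looks like in the plain-C case (a sketch with a made-up library name): keep the struct opaque and only ever append new exported functions, and the ABI stays stable no matter how the internals move around.

```c
#include <stdio.h>
#include <stdlib.h>

/* Public surface: this is the contract that must never change;
 * later versions may only APPEND new functions. */
typedef struct hoopla hoopla;            /* opaque to callers */
hoopla *hoopla_open(int fd);
int     hoopla_level(const hoopla *h);
void    hoopla_close(hoopla *h);

/* Library internals: free to change between versions, because no
 * caller ever sees sizeof(struct hoopla) or its layout. Exposing
 * this struct in the header and then inserting a field is the
 * classic way to break ABI without anyone noticing at build time. */
struct hoopla {
    int fd;
    char *name;      /* added in "v2"; harmless, callers can't tell */
    int level;
};

hoopla *hoopla_open(int fd) {
    hoopla *h = calloc(1, sizeof *h);
    if (h) { h->fd = fd; h->level = 3; }
    return h;
}
int  hoopla_level(const hoopla *h) { return h->level; }
void hoopla_close(hoopla *h)       { free(h); }

int main(void) {
    hoopla *h = hoopla_open(0);
    printf("level: %d\n", hoopla_level(h));
    hoopla_close(h);
    return 0;
}
```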
It might have been 256 MB disk and 64 MB RAM, not sure… But yeah I was surprised when setting the VirtualBox resources how low it was. You can’t run Ubuntu like that anymore.
Right, because COM doesn’t make you add an IHober2, it’s your own internal discipline. Well, at least until you block a UI thread in Explorer…
True, but that happens to me on a daily basis in Ubuntu :-/ Especially with external drives
Ubuntu used to be better but has gotten worse. Ditto OS X. I have had to reboot both OSes because of instability, just like Windows back in the day. 10 years ago they were both more stable IME.
You’d also be missing out on a lot of security stuff too. XP was a superfund site of malware back in the day before Microsoft started cleaning stuff up in SP2 and did radical refactoring in Vista.
Sure, I’m not saying we should literally use Windows XP :) I’m just saying it could be a good reference point for efficient desktop software. We’re at least 10x off from that now, and 1000x in some cases. We can be more secure than XP and Vista too :)
Here’s how the deduplication works. Pretty cool ♥
If you need to distribute a binary built against glibc, you have to build it on a very old distribution so that it can run on any other distribution your users might have (which means you will produce less secure binaries, e.g. due to compiler bugs, newer libraries that won’t compile there, or missing recent hardening like stack protection). That’s because some function symbols carry a version number that may not exist in the earlier glibc some users have. That is not what you call backward compatible.
And if you think about musl, then it’s a whole separate world: mixing libraries built for glibc with libraries built for musl will break.
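To make the symbol-version point concrete: the linker records which version of each glibc symbol you built against, so a binary built on a new distro can demand a symbol version an older glibc simply doesn’t have. Besides building on an ancient distro, the (hacky) workaround people sometimes use is to pin the old version explicitly. This is a sketch, and the exact version string is architecture-specific (GLIBC_2.2.5 is the usual x86_64 baseline), so treat it as an illustration rather than a recipe:

```c
#include <stdio.h>
#include <string.h>

/* Bind our memcpy reference to the oldest version node instead of
 * whatever the build machine's glibc would pick (e.g. GLIBC_2.14). */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

int main(void) {
    char dst[16];
    memcpy(dst, "portable", 9);   /* 8 chars plus the terminating NUL */
    puts(dst);
    return 0;
}
```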
I think the author confuses backward compatibility with forward compatibility. Backward compatibility would mean that apps built for Windows 10 would still work on Windows 95.
Your use is also at odds with how “backward compatibility” is used with, e.g., game consoles.
I got this wrong.
A binary compiled against an earlier version of glibc is forward compatible with more recent versions of glibc. A binary compiled against a recent version of glibc is not backward compatible with earlier versions (but still forward compatible with newer versions).
But glibc itself, by supporting the symbols of the past, is backward compatible. glibc is partially forward compatible, for the symbols that exist presently, so that newer versions are backward compatible. This is the same for operating systems that can run old binaries.
Honestly lost me here. x86_64 binaries will work on 80% of up to date systems, and everyone else can maintain a package in their distro of choice if they want it.
Heck, even several year old binary nonfree blobs often work fine, much more than half the time.
Shipping a binary is easy. Shipping a binary that depends on libraries is where you get into Fun Time due to the varying package managers.
Most binaries I download just list the libraries needed and make it my problem to install them. Some ship .so files in the bundle, which seems especially popular with nonfree software, for some reason.
That’s a pretty bad user experience compared to application distribution on Windows/macOS. Sure, the dev can package it for various distributions or volunteers can, but that still results in a lot of work.
With AppImage things just work in my experience. I build on CentOS 7 and it works on bleeding edge ArchLinux.
This is rather sobering. Hoping someone more familiar can weigh in here as I make custom Debian packages for my own usage. I guess I’m a luddite?
Nah, you’re winning.
A Debian package is cooperative. It relies on a community to say things like “in bullseye, these libraries at these versions will be available by default; these you need to state a dependency on; anything else, you need to ship”. You can rely on that for a couple of years.
flatpak/snap/containers/VM images are for hostile environments, where the people managing the deployable software don’t trust the people managing the systems that will get the deployment, or vice-versa, or both. “They won’t ship up-to-date versions of libhoopla” is isomorphic to “They can’t be bothered to use the stable version of libhoopla”.
This is what I find odd about the design of these things. They’re trying to solve for the operating environment being hostile to the app, and also the reverse. Mixing sandboxing and dependency containment seems awfully ambitious, and not something particularly well suited to Linux.
As a developer I’m bothered by “can’t be bothered to use the stable version of …”. There’s a balance between “I want compatibility with something N years old” and “I want to actually ship something and not reinvent stuff”. If targeting old stuff gets too annoying, you get flatpak (or the equivalent). In my case that recently meant dropping .NET Framework and bundling .NET Core instead, because it saved days of work at the cost of an extra 60 MB.
Excellent point. They serve somewhat different use cases.
If by any chance you are packaging a Rust application you might want https://lib.rs/crates/cargo-deb. Amazingly easy: install and run “cargo deb”, that’s it!
This. Is. Gold! Thanks for the tip!
Anyone who can tolerate debian/control and debian/rules has a lot more patience. (For comparison, RPM is also equally dumb as bricks as dpkg, but specfiles are a lot simpler to write by hand.) There’s also fpm (https://github.com/jordansissel/fpm) which “just works” in many situations if you just need the package but you’re not contributing it upstream.
Unfortunately, it is probably best suited to the types of packages that don’t need it. As soon as there are dependency differences across distros (e.g. /lib/libfoo.so vs /lib/libfoobar.so or something else) you are out of luck :l (they also haven’t merged my PR)
Pkgsrc is the future.
This is lightly touched on by some other commenters, but what Flatpak/Snap/AppImage all have in common is that they shift the onus of packaging and distribution from the distro maintainers to the developer of the software (in a distro-agnostic way), and (in the case of Snap/Flatpak) without giving up nice things such as auto upgrades. These tools also bundle a lot of other things like sandboxing, reviews, contained libraries, etc., but I don’t think we should ignore the practical value of independent distribution.
This part is mainly me exploring a hypothetical, but I can imagine a distro (or even OS) agnostic software registry that app developers can publish to, and use to natively integrate auto updates (ala Chrome/Firefox) and other nice features (such as expressing dependencies). If it could be federated to allow distros (Debian/Arch), large software communities (Gnome/KDE/GNU), companies (Redhat/Ubuntu/Google), services (GitHub) and individuals (you and me) to run registries for their own software but also from other registries they choose to depend on/federate with (e.g. Terraform is built with and distributed on Github’s registry and the Debian Terraform registry makes it available in a pass-through or patched mechanism for their installations), then it could dramatically reduce the duplication/maintenance burden in the industry. Use something like TUF (https://github.com/theupdateframework/) to ensure updates are secure and you’ve built a terrific way to get a lot of the practical benefits of flathub, without enforcing the use of a specific store, service, sandbox tool, or organisation.
This hypothetical already presents a lot of issues, and will push a lot of buttons (for good reason), but I think is an interesting area to explore.
I agree with criticizing Flatpak and the rest of the article (getting my head bitten off on Fedi for linking to it and defending it) but not this sentence. Giving preference to mega-apps over indie apps isn’t what I want.
I’m still trying to figure out what non-free apps I’m supposed to want so much on Linux that Flatpak (etc) are worth the trouble. The only non-free apps I have installed right now are Chrome (only because neither Firefox nor Chromium will cast to my TV), and Microsoft Teams (for work and for child’s school), and both of those are distributed as RPMs that add their updates repository to /etc/yum.repos.d/.
Whenever I accidentally install an application from flatpak (which happens because I run Fedora, for my sins) there’s always some usability problem with it until I realize that that’s what the problem is, uninstall it, and reinstall it from RPM.
I can answer that by doing flatpak list on my system.
Two proprietary applications, one I use by choice (1) and one not (2), which are installed without messing up my system and nicely updated when needed.
I found myself installing free software like Lagrange, Foliate or Calibre through flatpak because it was easier and they were not packaged in Debian/Ubuntu (this has been solved for Calibre, but I’m still using the flatpak version out of laziness).
I mostly agree with all the criticisms about flatpak but, seriously, it’s really easy and really useful for stuff not in Debian.
Now I’m asking myself whether going all-in on flatpak is really a good solution. An interesting analysis would be to study the advantages/disadvantages of going all in versus using it only for some proprietary stuff. For example, protonmail-bridge is on Flathub. It’s easier than downloading a deb from the ProtonMail website. But, afaik, it’s not an official flatpak. Can it be trusted? Those are not easy questions.
Oh! I do also have Zoom installed, or I used to. They also provide RPMs and a yum/dnf repository for updates.
Calibre has been packaged in Fedora forever. I did have Foliate installed through Flatpak until it got packaged in Fedora 34; it had substantial problems with finding files that were solved when I reinstalled it from RPM. Lagrange I really can see installing from Flatpak, but I build it from source. When I had it installed by Flatpak, font and UI scaling was inconsistent depending on whether I was running it on X or Wayland.
I still think “It’s an end-user application not packaged in my distribution” is a reasonable use case for Flatpak, but in my experience, it’s always been strictly worse than using the same software, but packaged by the distribution, and I would absolutely hate for Flatpaks to make up the bulk of my system.
Should be merged into ljsx5r.