This was from 2012. Arguably, we’re already there. Tons of popular computers run signed bootloaders and won’t run arbitrary code. Popular OS vendors already pluck apps from their walled garden on the whims of freedom-optional sovereignties.
The civil war came and went and barely anyone took up arms. :(
It’s not like there won’t always be some subset of developer- and hacker-friendly computers available to us. Sure, iPhones are locked down, but there are plenty of cheap Android phones which can be rooted, flashed with new firmware, etc. Same for laptops: there are still plenty to choose from where the TPM can be disabled or controlled.
Further, open ARM dev boards are getting both very powerful and very cheap. Ironically, it might even be appropriate to thank China and its dirt-cheap manufacturing industry for this freedom since without it, relatively small runs of these tiny complicated computers wouldn’t even be possible.
This is actually the danger. There will always be a need for machines for developers to use, but the risk is that these machines and the machines for everyone else (who the market seems to think don’t “need” actual control over their computers) will increasingly diverge. “Developer” machines will become more expensive, rarer, harder to find, and not something people who aren’t professional developers (e.g. kids) own.
We’re already seeing this happen to some extent. There are a large number of people who previously owned PCs but who now own only locked down smartphones and tablets (moreover, even if these devices aren’t locked down, they’re fundamentally oriented towards consumption, as I touched on here).
Losing the GPC war doesn’t mean non-locked-down machines will disappear; it means the share of people owning them will shrink until they’re socially irrelevant. The challenge is winning the GPC war for the general public, not just for developers. Apathy makes it feel like we’ve already lost.
Arguably iPhones are dev friendly in a limited way. If you’re willing to use Xcode, you can develop for your iPhone all you want at no charge.
Develop for, yes, within the bounds of what Apple deems permissible. But you can’t replace iOS and port Linux or Android to it because the hardware is very locked down. (Yes, you might be able to jailbreak the phone through some bug, until Apple patches it, anyway.)
Mind you, I’m not bemoaning the fact or chastising Apple or anything. They can do what they want. My original point was just that for every locked-down device that’s really a general-purpose computer inside, there are open alternatives and likely will be as long as there is a market for them and a way to cheaply manufacture them.
Absolutely! Even more impressive is that with Android, Google has made such a (mostly) open architecture into a mass market success.
However it’s interesting to note that on that very architecture, if you buy an average Android phone, it’s locked down with vendorware such that in order to install what you want you’ll likely have to wipe the entire ecosystem off the phone and substitute an OSS distribution.
I get that the point here is that you CAN, but again, most users don’t want the wild wild west. Because, fundamentally, they don’t care. They want devices (and computers) that work.
Google has made such a (mostly) open architecture into a mass market success.
Uh, I used to say that until I looked at the history and the present. I think it’s more accurate to say they made a proprietary platform on an open core a huge success by tying it into their existing, huge market. They’ve been making it more proprietary over time, too. So, maybe that’s giving them too much credit. I’ll still credit their strategy with doing more good for open-source or user-controlled phones than their major competitors’. I think it’s just a side effect of the GPL and them being too cheap to rewrite the core at this point, though.
I like to think that companies providing OSes are a bit like states: they have to decide where to draw the line between liberty and safety, and that’s not an easy task.
This is not completely true. There are some features you can’t use without an Apple developer account which costs $100/yr. One of those features is NetworkExtension.
friendly in a limited way.
OK, so you can take issue with “all you want” but I clearly state at the outset that free development options are limited.
Over half a million people, or 2 out of every 100 Americans, died in the Civil War. There was little that innocent folks in the general public could do to prevent it or minimize the losses. Personally, I found his “civil war” to be less scary: the public can stamp these problems out if they merely care.
That they consistently are apathetic is what scares me.
Agreed 100%.
I have no idea what to do. The best solution I think is education. I’m a software engineer. Not the best one ever, but I try my best. I try to be a good computing citizen, using free software whenever possible. Only once did I meet a coworker who shared my values about free software and not putting so much trust in our computing devices - the other 99% of the time, my fellow devs think I’m crazy for giving a damn.
Never mind expecting people without technical backgrounds to give a damn about this stuff. If citizens cared and demanded freedom in their software, society would be positioned much better to handle “software eating the world”.
The freedoms guaranteed by free software were always deeply abstruse and inaccessible for laypeople.
Your GNOME desktop can be 100% GPL and it will still be nearly impossible for you to even try to change anything about it; even locating the source code for any given feature is hard.
That’s not to say free software isn’t important or beneficial—it’s a crucial and historical movement. But it’s sad that it takes so much expertise to alter and recompile a typical program.
GNU started with an ambition to have a user desktop system that’s extensible and hackable via Lisp or Scheme. That didn’t really happen, outside of Emacs.
Your GNOME desktop can be 100% GPL and it will still be nearly impossible for you to even try to change anything about it; even locating the source code for any given feature is hard.
I tried to see how true that is with a random feature. I picked the brightness setting in the system status area. Finding the source for this was not so hard; it took me a few minutes (turns out it is JavaScript). Of course it would have been better if there were something similar to browser developer tools somewhere.
Modifying it would probably be harder since I can’t find a file called brightness.js on my machine. I suppose they pack the JavaScript code somehow…
About 10 years ago (before it switched to ELF) I used Minix3 as my main OS for about a year. It was very hackable. We did something called “tracking current” (which apparently is still possible): the source code for the whole OS was on the disk and it was easy to modify and recompile everything. I wish more systems worked like this.
Remember when the One Laptop Per Child device was going to have a “view source” button on every activity?
This article is a sequel of Lockdown: The coming war on general purpose computing.
On a related note, the user control situation is even worse on mobile devices. You pretty much can’t buy phones or tablets with unlocked firmware that you can easily put your own operating system on.
Well there is the Librem at least.
It is my understanding that even this and Fairphone still require blobs and the baseband is totally opaque. The battle for complete user freedom on mobile still seems to be completely lost.
This is correct. Purism routinely exaggerates about what they are able to provide in terms of openness, without any plausible way of actually delivering. It’s quite tiresome.
Not only will Librem 5 have blobs, they’ve now shamelessly announced they intend to use a loophole to procure FSF RYF certification despite this. If this is allowed to stand, it also makes RYF rather meaningless.
Also Fairphone:
We offer the ability to choose between the Google experience and the freedom of open source. Both versions are officially supported by Fairphone and we will provide continuous software updates.
In addition, and because the code is openly available, everybody is free to work on making other operating systems work on the Fairphone 2. The community already offers alternative operating systems like Sailfish OS, Ubuntu Touch and LineageOS.
Thanks, haven’t seen Fairphone before. I really hope there will be enough of a niche for companies like them and Librem going forward.
As a Fairphone user: the market is made by buying the damned phones.
I wish there was an official Sailfish distro. I’m a happy user of the community port, but I also tolerate some glitches, like not being able to calibrate the proximity sensor or run Android apps.
But, as stated, they do have a non-Google android for those who want to be closer to the mainstream and a Google android for people who don’t care that much.
You can unlock the bootloader on most Android phones and you can run LineageOS or other AOSP forks, sometimes Ubuntu Touch and Sailfish ports, or postmarketOS.
You typically have to run the vendor android kernel fork if you want to have useful functionality, but some devices (Nexus 5, Nexus 7, Xperia Z2, Xperia Z2 Tablet) can run mainline Linux.
I know that you can unlock the bootloader, but I think that’s very far from ideal. Also the tools themselves tend to be closed source, and sketchy. You should be able to decide what runs on your phone without jumping through hoops.
I’m not following the conspiracy theory with Hollywood. What evidence is available for that assertion?
It’s not a conspiracy theory, it’s in plain sight.
Firstly, Intel invented HDCP, a DRM technology. Source: https://www.digital-cp.com/about_dcp
Secondly, Intel then added DRM functionality to their CPUs. Source: https://blogs.intel.com/technology/2011/01/intel_insider_-_what_is_it_no/ (Incidentally, we know this functionality is implemented via the ME, because its architecture is described in the book about the ME written by the guy who designed it, available for free here: https://link.springer.com/content/pdf/10.1007%2F978-1-4302-6572-6.pdf You want chapter 8.)
This technology necessarily involves the development of a contractual relationship between Intel and industries whose interest is in precluding platform owners from controlling those machines fully, namely Hollywood. Intel’s DRM technology appears focused on video and is clearly aligned with the interests of this industry.
Intel tries to frame this relationship as providing a benefit for its customers by enabling access to content Hollywood would otherwise be too skittish to provide; but this relationship is necessarily contrary to the essential interests of the platform owner in controlling their machine, and directly works to oppose it.
Intel Insider on the Intel blog. Their intentions on the Hollywood side are clear. A 2008 article on Microsoft’s side of it. Schneier wrote about TCPA in 2005, too. I’ll note the NSA was in on that, with some of the sessions on its design decisions classified.
Some of that fed into the High Assurance Platform referenced in the OP, which wasn’t actually high assurance: low-to-medium-assurance components like VMware and Red Hat mixed with secret stuff from the NSA. It was sold as the General Dynamics TVE Workstation, but Google and GD are giving me garbage results right now. The high-assurance offerings were separation kernels that ended up failing due to inherent vulnerabilities of desktop hardware. At least all that R&D accidentally gave us an option to partly turn off one backdoor, though. ;) Also, you can use embedded-style hardware with separation kernels for good results.
As far as backdoors go, I kept advocating that people and/or companies raise money to get AMD to do a semi-custom design that removes the backdoor, maybe some legacy baggage, and maybe adds some security extensions from CompSci. They and Intel were doing such customized CPU’s/SoC’s for a lot of companies. The recent Chinese licensing of pretty much the whole of AMD’s processors, which blew my mind, makes me think the semi-custom deal is less far-fetched. Hell, might even be able to do it with the Chinese company even cheaper, or at least on paper to bypass any contractual obligations AMD has.
[Comment removed by author]
See the article. Most of what is said also applies to AMD, and according to the linked Phoronix post, they entered into similar agreements to be able to provide the same DRM functionality.
One idea I had to solve the fundamental insecurity of web-based crypto was to add subresource integrity support to service workers.
Service workers are persistent and are essentially a piece of JavaScript which can intermediate all HTTP requests to an origin, which means a trusted service worker could verify the integrity of all loaded resources according to an arbitrary policy. Then you only need to secure the service worker itself. If you could specify the known hash of a service worker JS file, you could know that all future resource loads from the origin would be intermediated by that JS. Presumably, the service worker JS would change rarely and be publicly auditable. (If never being able to change its hash is too inconvenient, it could chainload a signed service worker file; you can implement arbitrary policies.)
This creates a TOFU environment. One logical extension if this were implemented would be to create a browser extension which preseeds such service workers with their known hashes, similarly to HTTPS Everywhere.
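The verification core of such a scheme is language-agnostic, so here is a minimal sketch of it in Python (in the browser this would of course be JavaScript in a service worker’s fetch handler, using the Web Crypto API). The path and body below are made-up examples:

```python
import base64
import hashlib

def sri_digest(body: bytes) -> str:
    """SRI-style digest string, same format as an integrity= attribute:
    "sha256-<base64 of the SHA-256 of the body>"."""
    return "sha256-" + base64.b64encode(hashlib.sha256(body).digest()).decode()

# Policy the trusted service worker would enforce: a pinned digest per
# resource path. This single entry is a hypothetical example.
PINNED = {
    "/app.js": sri_digest(b"console.log('hello');"),
}

def verify_resource(path: str, body: bytes) -> bool:
    """Accept a fetched body only if its digest matches the pinned one."""
    expected = PINNED.get(path)
    return expected is not None and expected == sri_digest(body)
```

Anything not in the pinned policy (or tampered with in transit or on the server) would be rejected; the hard remaining problem is exactly the one described above, namely pinning the service worker itself.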
I created an issue suggesting this at the Subresource Integrity WG: https://github.com/w3c/webappsec-subresource-integrity/issues/66
I am one of the editors of the SRI spec, but I am currently on a month-long vacation in remote places that don’t allow internet access (except this comment, written on a potato.)
Having said that, you’ll likely be interested in this web hacking challenge of mine from last year. It involves SRI and Service Workers: https://serviceworker.on.web.security.plumbing/
I’ve summarized my findings here: https://frederik-braun.com/sw-sri-challenge.html
A followup has been posted: https://marc.info/?l=openbsd-tech&m=152895192209700&w=2
I want to run a self-hosted issue tracker, which is my favorite thing about GitHub. I do NOT want a replacement for GitHub. This has nothing to do with the Microsoft/GitHub merger. This is purely about the fact that I do not like the fork-and-PR workflow and I don’t like the way that GitHub has implemented code reviews. So I’m not looking to run GitLab, Gitea, Gogs, or any other GHE clone.
I’d rather host my own raw Git server (possibly using Patchwork to manage patches). I just need some sort of issue-tracking software that has the ability to link to specific patches and commits in my Git repos.
Does anybody have any suggestions please?
There are plenty of standalone issue trackers. Bugzilla is the godfather of them all; Request Tracker is similarly venerable but is more often used for IT helpdesks, and only occasionally OSS projects (e.g. Perl).
The trouble with standalone issue trackers is that since issue tracking is the focus of their existence, they tend to end up a lot more complex than something like GitHub issues, if something that simple is what you’re looking for. If you want something GitHub-issues-like, I wonder if mild modification of Gitea to shut off the code hosting aspects would be productive.
Another thing I’ve been thinking about lately is tracking issues in a branch of the repository (similarly to how GitHub uses an unrelated gh-pages branch for website hosting). This would have the not insignificant advantage that the issues would then become as portable as Git itself, and be versioned using standard Git processes. I think there are some tools that do this, but I haven’t looked at them yet.
If those issue trackers are too complex for your needs, I reckon it’d be about an afternoon’s work to throw together a simple one (which might be why there isn’t one packaged - it’s not big enough!). Of course, within a few months you’ll start wanting to add more features…
Agree that tracking issues in a git repo is great.
The poster points out that bcrypt isn’t dependent on password length, but doesn’t point out that this is obviously the case because bcrypt ignores all but the first ~72 bytes of input. Both this limitation and the way it is typically implemented (silently ignoring the rest) are good reasons to avoid bcrypt. bcrypt may be reasonable if combined with a SHA-2 prehash, but that’s nonstandard and little software supports it.
I did not mention it in my post, however, I did use passlib.hash.bcrypt_sha256.hash() in my benchmark. I’ve updated the post to reflect it.
For context, one can prehash the password with SHA-256, encode the result with base64, giving a 44-byte password (including the “=” padding), then send the final string to bcrypt as your “password”. Essentially, in pseudocode:
pwhash = bcrypt(base64(sha-256(password)))
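For illustration, here is a sketch of that prehash step in Python using only the standard library; the final bcrypt call appears only in a comment, since it needs a bcrypt implementation such as the third-party `bcrypt` package:

```python
import base64
import hashlib

def prehash(password: str) -> bytes:
    """SHA-256 the password, then base64-encode the digest so the result
    is printable and always exactly 44 bytes, safely under bcrypt's
    72-byte input limit regardless of the original password's length."""
    digest = hashlib.sha256(password.encode("utf-8")).digest()  # 32 raw bytes
    return base64.b64encode(digest)  # 44 bytes, including the '=' padding

# The 44-byte result is then what you feed to bcrypt as the "password",
# e.g. with the third-party package: bcrypt.hashpw(prehash(pw), bcrypt.gensalt())
```

The base64 step matters: feeding raw digest bytes to some bcrypt implementations can hit NUL-byte truncation bugs, whereas the encoded form is plain ASCII.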
Really fascinating stuff here. So little is still known about some of the innards of the AS/400, or at least so I thought. I’d been wondering if anyone had ever looked at the internals of that tagged memory system… apparently so.
There are a few interesting things that stand out here. It looks like the tagged memory system isn’t used to enforce any security controls at the hardware level; it’s purely informational. Literally, user-level machine code has to execute an instruction that traps if the tag bit in a pointer isn’t set… remove that instruction and your security collapses. The security appears to derive from the fact that the code that translates the “MI” bytecode to Power assembly is trusted, very JVM-like. It’s also stated there’s also a normal (micro)kernel running under everything. Reading between the lines, it sounds like this kernel is used for task switching (and probably swapping from disk, due to the SLS design) but all of its threads run using the same page tables.
If my interpretation is correct, this means pretty much everything running on an AS/400 can be thought of as running under a microkernel but all in the same process(!), with JVM-like techniques used to enforce security between tasks.
Makes me think more and more about the potential of using an OS with a JVM-like design prohibiting the use of native code, like the JVM itself, the CLR, WebAssembly (this even just made the rounds: https://github.com/nebulet/nebulet), or alternatively an SFI-based design like NaCl. It would be very interesting to have a CheriBSD-like OS which doesn’t need special hardware, since (see the CHERI paper) it would allow more flexibility in how security controls are structured than a microkernel, without the overheads of a microkernel or the very coarse access control provided by page tables. Things like Singularity and the AS/400 imply that this could really work.
It looks like the tagged memory system isn’t used to enforce any security controls at the hardware level
It started as a hardware-enforcement mechanism. The customers of most suppliers doing that cared about performance more than security. So, they migrated off the hardware mechanisms that worked on a per operation basis. You can learn more about it by reading System/38 chapter of this book. Burroughs B5000 and Intel i432 were other commercial offerings with hardware-level security. Successors to those concepts are the SAFE and CHERI architectures.
Edit: Writing as I read your comment. I see CheriBSD in it later. :)
“It’s also stated there’s also a normal (micro)kernel running under everything.”
I’m skeptical of that. Even System/38 had around a million lines of high-level code in OS. Microkernels weren’t fast back then either. I’d guess monolithic even if it went for a modular design. A microkernel would be a pleasant surprise, though.
“Makes me think more and more about the potential of using an OS with a JVM-like design prohibiting the use of native code, like the JVM itself”
That would be JX Operating System.
“alternatively an SFI-based design like NaCl”
CheriBSD looks stronger than that since it enforces POLA on protected primitives. However, Criswell’s group doing stuff like SVA-OS had a SFI-like design called KCoFI applied to FreeBSD 9.
I’m pretty generally annoyed with the container ecosystem prevalent today. I don’t want to run an OS or manage containers; I want to upload/push my app to a mainframe in the cloud. To do this you really need mainframe-level isolation: limit access to system resources, limit heap, limit stack, limit cycles per request, essentially. There’s pretty much no language/runtime that allows this level of sandboxing. Some cloud stuff gets close (GAE, Amazon Elastic Beanstalk), but their pricing is per container/instance.
The refrain I get when discussing this is: use containers with cpulimit, x, y, and z. My response is: a container is at minimum one process, and how many processes can one machine host? Hundreds? Thousands? The point is there’s a limit imposed by hardware. A mainframe-like environment would not have this limit, since it could be async, per request, per user account.
Toward that end I’ve been playing around with sandboxing Lua: with a custom allocator you can manage the heap resource, with a debug hook you can manage “cycles”, and the stack is limited also. I don’t know how I feel about writing “mainframe webapps” in Lua though.
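The Lua mechanics are different (a count hook via `lua_sethook` and a custom `lua_Alloc`), but the debug-hook half of the idea can be sketched in any language with a trace facility. Here is a rough Python analogue, where a trace function aborts the guest code once a line-event budget is spent; `run_limited` and the budget numbers are invented for illustration:

```python
import sys

class CycleLimitExceeded(Exception):
    """Raised when guest code exhausts its execution budget."""

def run_limited(fn, max_lines=10_000):
    """Run fn() but abort once it has triggered max_lines 'line' trace
    events, a rough analogue of Lua's instruction-count debug hook."""
    budget = max_lines

    def tracer(frame, event, arg):
        nonlocal budget
        if event == "line":
            budget -= 1
            if budget <= 0:
                raise CycleLimitExceeded("line budget exhausted")
        return tracer  # keep tracing nested frames

    sys.settrace(tracer)
    try:
        return fn()
    finally:
        sys.settrace(None)  # always restore normal execution
```

Well-behaved code returns its result normally (`run_limited(lambda: sum(range(100)))`), while an infinite loop raises `CycleLimitExceeded` instead of hanging. Like the Lua hook approach, this only constrains code running through the interpreter; it can’t stop a long-running native call.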
I’m guessing by “really need” mainframe-level isolation you just mean the feature set. In that case, it is a good feature set that many have tried to approximate in homebrew situations. The last project doing it also ended up exploring Lua. I want to look at your first requirement, though, since I’m not sure it’s necessary.
“My response is; a container is at the minimum one process, how many processes can one machine host, 100s, 1000s? Point is there’s a limit imposed by hardware.”
The traditional solution to this was a load-balancing cluster. They were down to microsecond latencies on some network cards even back in the day. You can have the load balancer distributing the requests across machines or a master machine driving the worker machines. You get as much processing as you have hardware.
The times when this doesn’t work are when one application needs more RAM than one system can have, or when even microsecond latencies are unacceptable. This is rarely a problem given the distributed components we have these days. Yet the solution in the past was not mainframes but NUMA machines like SGI’s UV. They run Linux these days, too. Today’s big x86 servers are a lot like older NUMA machines. There are also special-purpose devices that can connect them into NUMA machines. There were also so-called Distributed Shared Memory (DSM) machines that emulated the NUMA model across a low-latency, high-performance cluster. CompSci had a few prototypes for doing that on x86 that were FOSS-licensed.
So, consider load-balancing clusters, Beowulf clusters, NUMA machines, and DSM before “mainframes”, since mainframes have a very specific meaning with associated cost, restrictions, lock-in, etc.
“Toward that end I’ve been playing around with sand boxing lua”
Although I don’t have original work, I did find the Supple sandbox doing a search. Another approach from high-assurance security was combining a separation kernel or microkernel OS with lightweight runtimes, with each component running in its own partition and communicating over IPC-based middleware. The Barracuda Application Server is the only commercial attempt I know of that used a combo of the INTEGRITY RTOS, C-based apps, and Lua-based apps. If they did typical usage, the Lua-based apps would run in a dedicated partition with time and space partitioning. They could’ve bullshitted, though. FOSS options for that include the Muen separation kernel, Genode, and the L4’s. That’s on top of any Linux-based solution you might go with for maturity and ease of use.
This feels like they went to some brand identity consultancy and got a package deal, and ended up with something that would feel more appropriate for a retail outlet. Someone else put it better than I could: it looks like the logo for a bicycle store. But hey, it’s a nice booklet.
Gotta wonder if this might actually work in our favor with the current de-regulation crazy administration and congress in control?
Thinking about it, I think they’re even MORE BigCorp crazy, so that will Trump the first impulse.
People that want open systems actually buying open systems would be a start. Right now, they buy the closed systems for various advantages they have. Most didn’t start that good, though: they got there through years of R&D and improvements fueled by selling their product. The open products can only get there with our help.
Although RISC-V is the current favorite, there were also non-Intel CPU’s with Open Firmware. A few were even GPL at various times. People didn’t buy them when they were available since a volume product from Intel/AMD/ARM/MIPS was (insert trait here). Between that and prior failures (e.g. BiiN, Itanium), investors stopped fabbing them since they thought nobody would buy them. Advocates of ethical, open hardware didn’t pool money together to get that started either.
Absent regulations, it looks like the market is getting exactly what it should expect buying goods from evil, scheming companies. Then, some of them gripe about the evil schemes that follow. The market side of solution remains: start and/or buy open and/or ethical solutions. For long-term assurances, buy from companies or nonprofits chartered to stay open, avoid lock-in, etc.
I looked at RISC-V boards, but all the currently available devices have firmware blobs for various non-CPU components on the board. From a purist perspective, SiFive’s HiFive board is hardly better than most ARM boards. I am very hopeful for the future of RISC-V, though.
This is arguably happening. The problem is that so few devices meet a purist’s standards, so you typically have to compromise in one way or another. There are a few online stores that traffic in Thinkpad X200s and Asus KGPE-D16s. And of course the Talos II has finally made it to market.
Although true for purists, pragmatists might take the blobs if the open core had stuff like IO/MMU to mitigate some risk. There’s definitely stuff happening on demand side. That’s good news.
I’m always super cautious about ascribing concepts like good or evil to corporations. Corporations exist to make money. Some corporations have figured out that maximizing value to their customers can also mean being good citizens in the ecosystems, nations, and PLANETS in which they operate.
I mean, what this really boils down to is: Is capitalism inherently bad? I almost feel like this impulse towards “Profit = EVIL” should go down as one of the biggest geek social myths of all time.
While I’d love to live in some kind of luxury space communism based society where material things are essentially valueless and we can all have whatever we want whenever we want, we’ve a long way to go before we get there.
(And don’t start talking about how we can 3D print everything now, because we can’t. We can 3D print more and more things every day, but it’s neither easy nor cost effective when you get away from the kinds of plastics that have been commoditized for that purpose.)
There exist companies like System76 and Purism that cater to the “truly open” market, but the fact is most people simply don’t care and arguably they SHOULDN’T care so long as their needs are being met.
“totally open” only matters to us mad scientist types who want to tinker with EVERYTHING. I agree that our needs should be met too, but we shouldn’t project our needs onto the market at large.
Openness also seems to matter to the cloud business? Judging by Google’s interest in things like LinuxBoot and POWER at least.
The topic of this article (and hopefully this discussion :) is general purpose computers. As in, a computer you can walk up to and run random programs on.
Nobody disagrees that openness is important. Lobsters wouldn’t exist without open source, and the Linux universe a huge chunk of us make our living off of depends upon it as well, but SPECIFICALLY talking about general purpose computers that humans buy to perform every day tasks, I’d argue that having a 100% open architecture is utterly meaningless to easily 99% of their userbase.
Do remember the cloud business is already customizing boards and maybe even chips on a regular basis. Intel and AMD allow that through their semi-custom services. The ARM and MIPS suppliers do it routinely. They’re seriously performance-, feature-, and cost-competitive on top of it, with low-level optimizations being part of that. Put it all together, and there are good reasons for the cloud market to look into open CPU’s. I think they’ll need to be fully built, cost-effective on performance, and support easy addition of acceleration engines. Cavium is in the best position to do a RISC-V SoC like this, but they did MIPS and ARM for the ecosystems instead.
(Waited till I got home to respond to this. It deserves more effort. :)
It’s good to be cautious about it. There’s all kinds of ways to look at morality. I feel you on that. As I thought about it, I realized there was a lot of common ground among the majority of people. Focusing on that could help.
So, an easy one to leverage that’s already established in our intuition and legal system is fraud. An evil company promises one thing to the buyer but doesn’t deliver it at all or as promised. This might be performance, quality, support/service, or something else where screwups are easy to assess. An extension of this is the company trying to use legal or technological means to prevent customers from assessing that, or to shut down negative reports. I mean, a fundamental assumption of the market for goods is that you should know what you’re getting, have a chance at assessing its value, and complete a transaction on it.
We could nail lots of companies with just that rule. Especially in EDA or embedded SoC’s where they try to use NDA’s on all kinds of things. From there, I might add protocols or storage formats have to be open to block lockin. It also preserves competitiveness by allowing solutions to be plug and play. We might also reduce copyright, patent, or EULA restrictions on basis that owners only get such protections if they’re acting reasonable. One example is Oracle wanting a billion dollars for a few lines of code in a system depending on millions of them or twenty something per phone when profit is around thirty with their patent being one of 250,000. Obviously, these numbers in no way represent Oracle’s contribution to the platform. Even a dollar a patent would be more than the funding of a startup in that sector. We can look at stuff like that, even progressive schemes where people pay as they grow. We can be flexible. Thing is, the greedy companies are so epically full of shit that even basic, common sense stuff will knock out lots of their schemes while minimally affecting well-run companies or true innovators.
“Is capitalism inherently bad?”
Yes if you’re going by the interpretation of always increasing gain for yourself at expense of others with no limit. It provably leads to evil on a massive scale. When you combine that with capitalist media, it gets worse in a self-sustaining way. One [biased] source I like on it just for the anecdotes is the documentary The Corporation. I listed some highlights from it in this comment answering a similar question.
“There exist companies like System76 and Purism that cater to the “truly open” market, but the fact is most people simply don’t care and arguably they SHOULDN’T care so long as their needs are being met. “totally open” only matters to us mad scientist types who want to tinker with EVERYTHING.”
The people who built the proprietary systems of the richest tech companies usually had source and/or hardware control. The creatives probably wouldn’t do as good a job if their already-paid service started showing them ads more often. The TPM-powered solutions the industry wanted (stopping most forms of sharing, making you pay for stuff multiple times, not letting you record stuff, and so on) would probably be opposed by the masses. Most companies locked in to inferior products that they built stuff on long ago don’t like that fact so much as tolerate it out of necessity. It hurts their ability to move fast and profit off of things.
You can find a lot of damage that always-closed platforms do versus open, tinkerable ones if you focus on people’s needs, wants, and goals. A well-designed commercial platform with source that third parties can extend or integrate will always have more potential for those people than one that’s arbitrarily limited. People don’t care because tech people don’t speak their language or focus on their goals. I’ve been learning to do that over the past few years. I mean, it will still be an uphill battle. I’m just saying things like I just wrote get “Oh yeah, that’s aggravating!” or “That could be really cool!” reactions from people, instead of blank stares wondering whether to be impressed, confused, or annoyed by impenetrable jargon or politics that can’t mean anything in the real world. If the value proposition were the same, people would almost always prefer the device that also lets them fix it cheap, customize it easily (maybe via a friend or company), not leak their stuff, and not force unnecessary upgrades. Or make them buy a new charger. ;)
There are more people that care about “totally open” than tinkerers. You have a coalition between tinkerers, people who believe closed source software and/or lockdown is unethical, and people for whom blobs pose an unacceptable security hazard.
Raptor is apparently a major customer of their own Talos II product, due to untenable security concerns around unauditable blobs on x86.
As to the confluence you speak of - the people in your first paragraph still amount to no more than 1% of the consumer computing market.
As to the next paragraph about companies embracing open - speaking as a worker bee in the employ of a rather large corporate overlord, I can say from experience that there are many varieties of “open”.
There’s “We have published full specs, firmware, circuit diagrams, and microcode on Github”
And then there’s “We will provide YOU, $MEGACORP with source code and materials to all of our products so you can conduct a full security audit”. This happens a LOT.