This was interesting to me, as I know nothing about these iOS limitations. But it also struck me that the author was perhaps trying to fit a square peg in a round hole. An iPhone is designed as an appliance. It LOOKS like a general-purpose computer, but it actually isn’t. It’s amazing what it can do, but it’s a PHONE. There are tradeoffs to all this, and it comes down to marketing and market segments. It seems to me the author should aim not for the iPhone but rather for Android or the desktop.
I think the author’s entire criticism is that the idea of being “designed as an appliance”, whether phone or game console, wastes much of the value of a computer. It’s not a phone. It’s a computer that also happens to run a phone program. The X-Box is a computer with controllers instead of keyboards. When we pretend they’re appliances, we do much more than trade off computer-sciencey features for convenience.
Yes, that’s a good article. Agreed - it’s not just about convenience. It’s about making something that’s as reliable as an appliance. There are serious information-security threats which the no-execute restriction goes a long way to mitigating.
In a strict technical sense, it’s a computer - it has a CPU - but I don’t understand why people feel that every device that has the technical ability to be used the way computers are used, absolutely has to be usable that way as a moral imperative. Where does morality come into the picture? Technology nominally exists to do things people find useful; people are finding it useful to have devices that are restricted; where is this widespread fear of that coming from?
Reading early thoughts on the topic like “The Right to Read”, my impression of the moral dimension is that these things undermine the concept of ownership. I can do anything I like with my coffee table, but a large class of things I would very much like to do with my X-Box are prohibited to me. It doesn’t matter that I paid for them both at a store; I only have complete ownership over one.
I agree that decades ago this wasn’t a big deal, but computerization has been advancing steadily. Our cars are computers with wheels, and we’re not allowed to look at, modify, or fix the software (the first example that comes to mind is John Deere, but this is every vehicle now). We live incredibly intimately with “phones” - really, they’re not phones, they’re powerful multi-purpose assistants that mediate our private relationships, keep staggering amounts of personal information, and are often the first and last things we see every day - and we don’t really own them in the way I own my coffee table. We’re even integrating computers into our bodies. If you have a pacemaker or a powered limb with a computer you don’t own, you’ve lost fundamental bodily autonomy.
“The Right to Read” really looked like campy indulgence in slippery-slope nonsense when it was written, but digital textbook rental is a thriving business now. In some fields it’s a convenient cost-saver, but in advanced and technical courses the prices have leapt right back up to full textbook prices (or the regular textbook has a “digital component”…)
I agree that locked-down, appliance-style computers are generally more reliable, easier to use, and just plain more fun than open, general-purpose computers. But we’re sliding pretty far down that slope now with app stores from OS vendors. They started as an optional, easy way to install and update software, and they deliver real value. But the warnings for software installed outside them ratchet into errors. I’m not an OS X dev, but I get the impression it’s very difficult to make native OS X apps without Apple’s permission to use Xcode.
When we don’t have that control over computers and systems like phones, we see things like the Ukrainian government and the FBI surveilling and intimidating political protesters.
To wind up a long answer to a good question: because computers are ever more powerful and pervasive, it’s ever more important that we own and control them or they’ll be used to control us.
Well, sigh, yes there’s an issue of government control. I’m not sure software architecture can do anything to reduce that threat, though. Leaving things open just means malware can lock them down for its own purposes.
And I commented in depth on the car software article that crossed Lobsters recently, and I’m on your side for that one. In my fantasy world, absolutely everything would use open-source kernels, and the other components would be transparent in function and open-source when possible, and physical equipment owners could always override the root of trust in the bootloader. But those kernels would definitely enforce noexec or something very much like it.
Are people (i.e. users) finding this useful? It feels to me like the corporations that own platforms are finding it useful, and users don’t get a vote. Users use the iOS app store because they have no other choice if they want to run third-party software. On the Mac, where a locked-down app store coexists with several other options for software installation (GUI installers, application bundles, package managers with dependency fulfillment), the app store is little-used.
I was trying to make the case that the vote users have is precisely to choose iOS in the first place. It’s viewed as a luxury brand (rightly or wrongly); nobody has to buy it, and in fact its market share among smartphones is a minority. And, again, Android does not use a technical measure to prevent self-modifying code. So there’s a meaningful alternative.
I guess it comes down to whether you believe that market outcomes are self-evidently correct by virtue of being market outcomes. I don’t. Nobody has to buy iOS, but iOS devices have many advantages over their competitors (of which there are only really a handful–it’s not as though there is a thriving marketplace for mobile OSes in which any need might be met), and choosing a system which allows freer execution of software means giving up other unrelated things. I believe that people choose iOS not because they like the walled-garden model but because they like the high quality of the hardware (and to a lesser extent the bundled software) more than they dislike the app store restrictions.
(Though I suspect most don’t really know what they are giving up at all, have no idea that Apple has these kinds of restrictions on the app store, and wrongly believe that they can run any compatible software they choose on their devices. I suspect that only a minuscule fraction of iOS users are even aware that the iOS version of Chrome uses WebKit instead of Blink, and even most of those do not realize that this is because of Apple’s restrictions rather than for technical reasons. The information asymmetries are vast in this market.)
I don’t believe that markets are inherently correct (or usually anywhere close to correct), in the slightest.
I understand what you mean about the information asymmetries.
I’m not sure I have more to add. :/
To me, it seems the other way around — an iPhone looks like an appliance, but it’s actually a general-purpose computer, with all the dangers that entails when it’s under the control of a malicious third party.
Right - but Android does allow self-modifying code. It’s been used as a marketing point, in fact, in support of the general theme of giving users more control compared to Apple’s setup. And that has sold fairly well.
I really see no reason these can’t coexist; at the abstract level, pretty much everyone who doesn’t understand the technology thinks of it as either “everything but Apple is unusably complicated”, or “Apple doesn’t let you customize anything”. Which are both correct descriptions from their perspective, and really do represent the motivation for the choice. Let’s not try to argue on the assumption that we as technologists are the only ones who can possibly make informed decisions about it.
The thing is that every reasonably sized application needs configuration files that it can read and write.
If you want a general-purpose application, have the read-only part be a bytecode interpreter and the configuration file be the bytes to be interpreted.
Apple or whoever could prohibit that, but determining whether a given program is Turing-complete is an undecidable problem, so it would be a bit hard to stop.
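The interpreter escape hatch described above can be sketched in a few lines. This toy stack machine is purely illustrative (the opcodes and layout are my own invention, not anything from an actual app): the interpreter is the fixed, vetted executable, while the “config file” it reads is, in effect, a program.

```python
# A toy stack-machine interpreter. The interpreter itself is the fixed,
# signed binary; the bytecode it runs is just data the app reads from disk.
# (Hypothetical opcodes, purely for illustration.)

def run(bytecode):
    stack = []
    pc = 0
    while pc < len(bytecode):
        op, arg = bytecode[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "JNZ":
            # Conditional jump: pops the top of stack, branches if non-zero.
            # This is what pushes the format toward Turing-completeness.
            if stack.pop() != 0:
                pc = arg
                continue
        elif op == "HALT":
            break
        pc += 1
    return stack

# A "configuration file" that is really a program: computes 2 + 3.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("HALT", None)]
print(run(program))  # -> [5]
```

Since the executable on disk never changes, a no-self-modifying-code rule can’t mechanically distinguish this sort of “data” from inert configuration without running into exactly the undecidability problem mentioned above.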
Does Emacs run on any of these platforms, for example?
[Comment removed by author]
That, and it’s very obvious from looking at an app’s advertised features whether any of them rely on this. It’s not just that they could prohibit this - they have done so, successfully, for more than five years now. Also, there are no user-editable config files for iOS apps, because there is no user access to the filesystem.
I’ll reply just here though I think my comment applies to all replies to my post above.
Perhaps I fed this a bit, but I think just about all the arguments conflate two issues: the architecture of the iPhone and how that “ends hacking”, and Apple policy and how that might do the same.
I think it’s obvious that the architecture alone still lets you have general-purpose computation (speed is yet another semi-red-herring). Apple policy may well prevent hacking platforms from ever getting to the consumer, but it could do so regardless of the underlying architecture - “it just has to look at the feature list”, yes, and that has nothing to do with architecture.
I agree, and perhaps I didn’t state that clearly.
No - there’s no iOS Emacs. I specify iOS because, of the platforms the author mentions, that’s the only one that isn’t (solely) a game console, which is pretty much why I find the complaint overblown. An xbox is specifically sold as not-a-computer, and that’s actually a selling point because general-purpose computation implies a nightmare of compatibility issues. If all consumers wanted customization, game consoles wouldn’t even exist.
When the iOS policy was first put in place many years ago, it sharply limited the ability of games to sell DLC levels which were not bundled as part of the app itself, in that it forbade the levels from having any sort of behavior scripting in them. They carved out an exemption for that particular situation. They have, over the years, been asked to make additional exemptions, including for third-party web browsers and for languages that can’t function without JIT-based trampolines. They have declined on all counts. (Chrome and Opera for iOS both use Apple’s WebKit, and wouldn’t have been allowed otherwise.)
I’m also not exactly surprised to see feigned surprise today about something that’s been accepted for many years already, I have to say. This is not a new issue, and the author actually has nothing to add about it, other than referring to von Neumann architectures in the title to scare people into reading it.
If it feels like I’m being too generous to Apple? It’s because I find security even with this policy to be woefully inadequate. It’s a necessary, important step. If we ever get to a position where commonly-used OSes go even a couple months without a major remote-code-execution vulnerability, perhaps we can talk about reducing security measures.
An xbox is specifically sold as not-a-computer, and that’s actually a selling point because general-purpose computation implies a nightmare of compatibility issues. If all consumers wanted customization, game consoles wouldn’t even exist.
Game consoles exist now to have a specific platform that every game developer can target. Games get the experience that they targeted for that platform. But that quality can co-exist with allowing (more) general computing. I actually think it would be a nice feature that could set the XBox apart from the competition.
I wonder if the perception of the ‘phone as an appliance’ will change if things like Windows 10 Continuum take off.
Yes, and part of being a specific platform is not allowing installed software to interfere with other installed software. You can’t install device drivers on your XBox; in fact, the only third-party devices you can use it with at all are various forms of game controller (and various televisions, I suppose, but only with the built-in GPU). If you could, game developers would have to write code to do something sensible with those options, and there’d be a dramatically higher level of testing needed, and a dramatically higher level of developer support required from Microsoft.
It’s not that those things aren’t worth the tradeoff. It’s that if you want a desktop OS, you know where to find it. There’s no point dragging a single-purpose platform halfway to desktop status. You’d incur most of the drawbacks, and not enough of the benefits, and desktops have evolved into their niche by finding the right balance for people who want general-purpose stuff.
Personally, by the way, I don’t own a current-gen game console, and doubt I will again; I play my games on the computer. I see the console niche as obsolete. But the market doesn’t appear to agree with me, yet. :) And people are entitled to make that decision for themselves.
A bit melodramatic, isn’t it? One of the very things lauded about OpenBSD in this forum is its ability to separate code from data, by making writable segments non-executable (W^X). I seem to recall that Firefox, one of the examples the author uses, is a blocker for full application of this security feature. I also recall that @tedu is looking to facilitate a fix, which would give the best of both. Perhaps one could hope for something like it for iOS?
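For readers unfamiliar with the mechanism: the W^X discipline boils down to a write-then-seal sequence, where a page is writable or executable but never both at once. Here’s a minimal, Linux-flavored sketch using ctypes to call mprotect(2) directly; the constants and overall flow are my own illustration of the general idea, not OpenBSD specifics.

```python
import ctypes
import mmap

# Stage 1: map an anonymous page as read+write - the "data" stage,
# where a JIT would emit its generated machine code.
page = mmap.mmap(-1, mmap.PAGESIZE)
page[0:4] = b"\xde\xad\xbe\xef"  # stand-in for generated code

# Get the page's address so we can change its protection via libc.
buf = (ctypes.c_char * mmap.PAGESIZE).from_buffer(page)
addr = ctypes.addressof(buf)

# Stage 2: seal the page as read+exec, dropping write permission.
# Under strict W^X the kernel would refuse a read+write+exec mapping,
# so code must flip between the two stages rather than hold both.
PROT_READ, PROT_EXEC = 0x1, 0x4  # values as on Linux/BSD
libc = ctypes.CDLL(None, use_errno=True)
rc = libc.mprotect(ctypes.c_void_p(addr),
                   ctypes.c_size_t(mmap.PAGESIZE),
                   PROT_READ | PROT_EXEC)
print("sealed read+exec, mprotect returned", rc)
```

A JIT like Firefox’s has to toggle pages between these two stages (or keep separate writable and executable mappings), which is exactly the kind of accommodation being discussed.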