1. 17

  2. 9

    Contrary to the comments on Reddit, I’m pretty sure Apple cannot do this unless you have installed an MDM profile…

    Locking, remote wipe, etc. are tied to your iCloud account. There is no equivalent to “Google Play Services”. APNS has no control; it only handles push notifications.

    1. 15

      Contrary to the comments on Reddit, I’m pretty sure Apple cannot do this unless you have installed an MDM profile…

      When the OS is closed source, how would you know?

      1. 12

        If you think Apple has a gaping backdoor in all of their phones which violates the mission of their product line, then please prove me wrong. In fact, take this opportunity to short their stock and prove it to the world. You could make yourself really rich really fast.

        Nobody else has done it, and everything Apple has done with their product line has been to constantly increase user security, not install backdoors for remote control and spying.

        I do not think they are perfect, but this would be a huge blow to their public perception and would certainly tarnish their brand for years to come.

        1. 7

          Objectively, I think that u/user545 has a valid point. When proprietary software is in place, there is no way to verify that the software does what the user expects it to do and nothing more. Just because Apple says it doesn’t spy on its users doesn’t mean the statement is true, and we cannot trust them, because we don’t know what the program does on the inside.

          1. 9

            Perhaps it’s not as severe as user545 says.

            I think the argument can be transposed to anything done by anyone else:

            • I didn’t see how cars were built. So I have to assume the worst.
            • I didn’t see how roads were built. So I have to assume the worst.
            • I didn’t audit this open source project’s source code myself. So I have to assume the worst.
              • Or I only heard from someone that this source code checks out. But I don’t know that person, so I have to assume the worst (that they’re lying to me).
            • I didn’t audit the crypto algorithms. So I have to assume the worst.
            • I didn’t compile it myself. So I have to assume the worst.
            • I didn’t compile my compiler myself. So I have to assume the worst.
            • I didn’t compile my operating system myself with my own compiler. So I have to assume the worst.
            • I didn’t mine and process the raw resources to create my computer. So I have to assume the worst.

            Sure, I can assume the worst, but then I probably wouldn’t live in a society.

            “Assume the worst” feels like an impractical rule to follow. Instead, it’s a practical tradeoff between efficiency (of my time) and the likelihood that I actually need to assume the worst. I’m not discounting the valuable work security researchers do to audit and break into these systems; if they take that adversarial approach, great. But they’re far more qualified and have more resources (e.g., time, money) than I do. I’m not going to blindly assume the worst: that these security researchers are out to trick me.

            I agree with feld. Apple isn’t perfect, and they may change in the future. But Apple seems less likely than Google to implement a backdoor like this, based on the way they position themselves in the market right now.

            1. 5

              You’re missing two things:

              1. “They’re usually defective since suppliers don’t care or have liability.”

              2. “Intelligence agencies and law enforcement are threatening fines or jail for not putting secret backdoors in. The coercive groups also have legal immunity. Their targets can do 15 years if they talk.”

              No. 1 also applies to FOSS. With those premises, I definitely can’t trust closed-source software not to have incidental or intentional vulnerabilities. So we’re back to thorough design and review by parties we trust: multiple, skilled, mutually-suspicious groups.

              1. 2

                Thanks,

                I agree with you on #1, including that it applies to FOSS. I’d argue a supplier has more incentive to fix it if you’re a potentially influential customer than a FOSS project with a disinterested maintainer does (leaving you to fall back on building or auditing it yourself; and to be clear, FOSS is definitely a better option than a non-cooperative supplier with a monopoly). But I’ll admit I can only back that up anecdotally, which isn’t a strong case.

                For #2, couldn’t that also apply to key maintainers in FOSS if they are contributing to the same project? I’d guess that governments wouldn’t find it impossible to coerce a small set of individuals; 15 years would equally scare FOSS maintainers as well. Sure, a geographical barrier may make that more difficult, but I’d guess human-intelligence agencies like the CIA have some related experience here. I agree that FOSS makes it harder to sneak one by reviewers, but maybe not many people need to be coerced to get a backdoor into a release.

                I only tangentially follow security topics, so I’m not sure if that’s a realistic threat or just a tinfoil-hat thought <:-).

                I guess I’m putting more emphasis on getting the typical (non-technical) software user to:

                1. care more about security / privacy
                2. pressure companies they support to have better security/privacy practices

                rather than on distrusting all companies and having a significantly worse experience of using software in general. Non-technical users generally like the fallback of technical support over “figure it out yourself” or “you lost all your data because you couldn’t manage your secrets”.

                I’m curious: if a company allowed you to audit their source code before you approved/used it, would that significantly diminish the advantages FOSS has over proprietary software for you?

                1. 2

                  I’d argue a supplier has more incentive to fix it if you’re a potentially influential customer than a FOSS project with a disinterested maintainer does

                  This hasn’t been the case at all in the mobile space. The supplier has an incentive to not fix things so you buy a new device, whereas FOSS maintainers want your device to last as long as possible.

                  1. 2

                    I’d agree some suppliers are motivated to upsell newer devices, although I don’t really understand the motivation for FOSS maintainers to want your device to last as long as possible. As someone who has maintained iOS libraries, there’s strong motivation to deprecate older devices/platforms, since they’re a maintenance burden that sometimes hinders new feature work (and typically the most active contributors use the latest stuff). And when supporting the latest devices is pitted against supporting the older ones, chances are the newer stuff will win those debates.

                    Thinking through the supplier stuff a bit more, though, it doesn’t make much difference. Sure, upselling doesn’t feel like a great business practice, but it’s also how those companies stay in business. It could be viewed like a maintenance support fee for existing devices. If suppliers offered a retainer fee instead, it would effectively be the same thing, then?

                    1. 2

                      The LineageOS team does amazing work keeping old Android devices on the latest release. It also means app devs don’t have to worry, because those old devices support all the new APIs and features.

                  2. 2

                    “For #2, couldn’t that also apply to key maintainers in FOSS if they are contributing to the same project?”

                    That’s a great observation. I held off mentioning it since people often say, “That’s speculation or conspiracy. Prove it with examples.” And the examples would have secrecy orders, so… I just gave the examples where they can find proof it happened. There very well could be coercive action against FOSS maintainers. Both the Truecrypt developers and someone doing crypto on Linux filesystems kind of disappeared out of nowhere, no longer talking about their projects. Now we’re into hearsay and guesswork, though. They might also be able to SIGINT FOSS with a secrecy order. We might be able to counter that by having people in foreign countries looking for the problem and submitting a fix, with a rule to always take a fix. They’d have to spot a problem that might be outside their domain expertise, though.

                    Plenty of possibilities. I just don’t have anything concrete on mandated FOSS subversion. I will say that one of the reasons I’d never publish crypto under my own name or take money for it is this threat. I think it’s very realistic. I think we haven’t seen it play out because the popular crypto libraries were so buggy that they didn’t need such a setup. If they did, they’d use it sparingly. Those libraries also ran on systems that were themselves riddled with preventable 0-days.

                    As far as open vs. closed with review goes, I wrote an essay on that here.

                    1. 2

                      Thanks for that essay; it was insightful.

                      I roughly remember the Truecrypt incident, and that was suspect, but I never came across the Linux filesystem crypto circumstance. Was it similar to Truecrypt? Was that developer already well known? My googling didn’t turn up any mention of it at all.

                  3. 1

                    There is one thing I am wondering about. Government agencies require backdoors, but I would think they also require that those backdoors be kept secret. How does that work with FOSS? Alright, yes, they could maybe sneak it into the compiled version, but distros are all moving to reproducible builds, so that would be detected.
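
                    As a rough sketch of that detection (with hypothetical file names, and assuming the build environment is pinned the way reproducible-builds projects require), the check comes down to rebuilding from the published source and comparing hashes:

                    ```python
                    import hashlib

                    def sha256_of(path: str) -> str:
                        """Stream a file through SHA-256 and return the hex digest."""
                        h = hashlib.sha256()
                        with open(path, "rb") as f:
                            for chunk in iter(lambda: f.read(65536), b""):
                                h.update(chunk)
                        return h.hexdigest()

                    # Hypothetical paths: the binary the distro shipped vs. one you
                    # rebuilt yourself from the published source in the same pinned
                    # build environment.
                    shipped = sha256_of("dist/package.bin")
                    rebuilt = sha256_of("my-build/package.bin")

                    # A reproducible build must match bit-for-bit; a mismatch means
                    # the shipped binary was not built from that source as claimed.
                    print("ok" if shipped == rebuilt else "MISMATCH: investigate")
                    ```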

                    1. 2

                      Ignoring the Karger/Thompson attack, which has only happened twice that I know of: nation-state attackers will go for low-hanging fruit like other black hats. They also need deniability. So they’re most likely to either (a) use all the bug-hunting tools to find what’s already there or (b) introduce the kinds of defects people already create by accident. With (b), discovery might not even burn the source if they otherwise do good work.

                      For FOSS, they’ll slip the vulnerability into a worthwhile contribution. It can be either in that component or in an interaction between it and others. The error-handling code of a complex component is a particularly good spot, since such paths often contain errors anyway.
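
                      To make that concrete, here’s a contrived sketch (toy code, not from any real project) of the kind of plausibly-accidental, fail-open bug that hides well in an error path. The handler reads as routine defensiveness, but it turns a malformed authentication tag into a successful check:

                      ```python
                      import hashlib
                      import hmac

                      SECRET = b"shared-key"  # hypothetical shared secret

                      def verify(message: bytes, tag) -> bool:
                          """Check an HMAC-SHA256 tag for a message."""
                          expected = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
                          try:
                              return hmac.compare_digest(expected, tag)
                          except TypeError:
                              # Looks like defensive error handling, but it fails OPEN:
                              # pass a tag of the wrong type (e.g. None) and you are
                              # "verified".
                              return True

                      print(verify(b"hello", None))  # True: the deniable backdoor
                      ```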

              2. 11

                They are able to push updates over the internet, and the whole thing is proprietary. I am unable to tell you what the system does because I can’t see it. And at any time Apple can push arbitrary code, which could add a backdoor without anyone knowing.

                When you can’t see what is going on, you have to assume the worst.

                1. 5

                  I can’t tell whether this is 1. a defense of open source in general and Android in particular, or 2. a critique of Apple.

                  Neither works.

                  1. See the example of what just happened, or the Firefox/Mr. Robot partnership recently. Open source does not automatically confer transparent privacy.

                  2. Apple has, in fact, emerged as a staunch defender of user privacy. There are many, many examples of Apple defending users against law enforcement.

                  You can’t just wish Apple into being terrible about privacy and use that as the argument.

                  1. 3

                    Sure you can. They could take money to secretly backdoor the phone for the NSA and use lawyers to tell the FBI to get lost, for image reasons. The better image on privacy leads to more sales. The deal with the NSA puts an upper bound on what the FBI will do to them, since the FBI might just get the data from the NSA.

                    If that sounds far-fetched, remember two things:

                    1. The telecoms were taking around $100 million each from the NSA to give them data, which the NSA sometimes passed on to the feds to use with parallel construction. Publicly, they said they gave it out only with warrants. RSA went further: they said they encrypted the data while weakening the crypto for $30 mil. The Core Secrets leak also said the FBI could “compel” this.

                    2. In the Lavabit trial, the feds argued he wouldn’t have losses if customers didn’t know he gave the feds the master key. He was supposed to comply under court order and then lie about it.

                    Given those two, I don’t trust any profit-motivated company in the US not to hand over data. Except maybe Lavabit, in the past. Any of them could be doing it in secret, either for money they take or under threat of fines/jail.

                    1. 3

                      I would say Apple is more comparable to Lavabit than to the others – they’re actively and publicly taking steps to protect their users’ privacy.

                      I wouldn’t argue that they will never do it, but to paint Apple and Google with the same brush on user privacy is silly and irresponsible.

                      1. 2

                        Well, we know that the secret court meeting was going to put him in contempt or else. He had to shut the business down to avoid it. Apple may have been able to do more due to both its size and its ability to make the case a public debate. Then again, that may have been a one-time victory followed by a secret loss. You can’t know when there are two legal systems operating side by side, one public and one secret. I assume the worst if the secret system is aggressively after something.

                        “I wouldn’t argue that they will never do it, but to paint Apple and Google with the same brush on user privacy is silly and irresponsible.”

                        I agree with this. Apple is a product company. Google is a full-on surveillance company. Google is both riskier for their users now and riskier over time, as they collect more data that more parties get at in various ways.

                    2. 3

                      I am not defending Android at all. As you can see in the OP’s post, Android is absolutely horrible for privacy and control. I also agree that open source is not flawless, of course, but open source gives us the opportunity to inspect the programs we use (usually while contributing features). From what I understand, the Firefox event was pushed through a beta/testing channel and not through the FF source. I would hope all Linux distros have this feature turned off when packaging FF.

                      The OP comment was asking me to prove that Apple is able to change user settings over the network, and I think that is an unreasonable demand when the software is closed source. I also said that it is possible, since Apple is able to push new updates with arbitrary code at any time. So they have the capability to do anything that is possible hardware-wise.

                      1. 2

                        Fair on your 2nd point about responding to the OP, and I don’t know whether they have the capability. However, they seem, at least at the moment, uninterested in taking random liberties with their users’ privacy.

                        1. 3

                          uninterested in taking random liberties with their users’ privacy.

                          I think that’s probably true, but no one in this thread actually knows, and one day it’s quite likely that the US government will force them to backdoor devices, if they haven’t already.

                      2. 1

                        Apple has, in fact, emerged as a staunch defender of user privacy.

                        this has to be a joke

                      3. [Comment removed by author]

                        1. 1

                          I can be sure in the sense that I could find out if needed. With proprietary software I cannot be sure even if I were willing to put in the effort, unless I wanted to spend my whole life reverse engineering a build that would be out of date in a few months.

                          1. 1

                            I’ll add that the move toward tamper-resistant enclaves and integrity checks will make that even harder, since some are about denying you read access or flagging your device on an access attempt. You’re effectively punished for trying to verify their software.

                            1. 2

                              I find these fairly problematic, because the main uses for these systems are DRM and preventing the user from making modifications the OEM doesn’t want, but at the same time they do have genuinely useful features that would be desirable if they were under my control.

                              A lot of other things in IT fall under the same category, I think. My bank offers data showing all the different categories of things you’ve spent on in a month, which is really useful for me to have but really creepy for the bank to have.

                              1. 2

                                Yeah. There are also schemes that put the user in control to get those benefits. That most suppliers don’t implement them tells us a bit about their intent.

                        2. 1

                          How do you know they are able to do that then?

                          Because all the system updates that got installed on my phone came only after I manually approved them. Unless I am unaware of some previously demonstrated capability, this sounds like exactly the same kind of unsubstantiated argument you are arguing against.

                          1. 1

                            What criteria do you use for approving or denying updates, and how would that stop a backdoor from being installed?

                            1. 2

                              It doesn’t matter, since the original argument was that Apple can do the same thing (automatically install/change software on your device), which they cannot. You have to assent to the installation (of updates, a backdoor, or whatever). It may not be a difference you care about, but I do.

                              I agree that black-box software makes it impossible to know whether it can be trusted, but a binary package of open source software is also just a black box if I am not able to generate the same hash when compiling it myself, which in my admittedly dated experience happened a lot.
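
                              As a small illustration of why those hashes so often fail to match even when the source is identical, here’s CPython’s own bytecode compiler doing it: the .pyc header embeds the source file’s mtime, so merely touching the source changes the output.

                              ```python
                              import hashlib
                              import os
                              import py_compile
                              import time

                              def compile_and_hash(src: str, out: str) -> str:
                                  """Byte-compile src to out; return SHA-256 of the result."""
                                  py_compile.compile(
                                      src, cfile=out, doraise=True,
                                      invalidation_mode=py_compile.PycInvalidationMode.TIMESTAMP,
                                  )
                                  with open(out, "rb") as f:
                                      return hashlib.sha256(f.read()).hexdigest()

                              with open("demo.py", "w") as f:
                                  f.write("print('hello')\n")

                              h1 = compile_and_hash("demo.py", "demo1.pyc")
                              time.sleep(1)
                              os.utime("demo.py")  # bump the mtime; content is unchanged
                              h2 = compile_and_hash("demo.py", "demo2.pyc")

                              # Same source, different hashes: the header embeds the mtime.
                              print(h1 == h2)  # False
                              ```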

                              1. 1

                                “You have to assent to the installation”

                                You would need a copy of the source for all privileged hardware and software on their platform to even begin to prove that. You don’t have that. So you don’t know. You’re acting on faith in a profit-motivated company’s promises.

                                I’ll also add that this is a company with enough money to do a secure rewrite or mod of their OS that intentionally doesn’t. They don’t care that much. They’re barely even investing in Mac OS X, from what its users say. Whereas Sun invested almost $300 million into redoing Solaris for version 10. That brought us things like ZFS.

                                A company with around $100 billion that cares less about QA than smaller businesses shouldn’t be trusted at all. They’ve already signalled that wealth accumulation is more important.

                                Meanwhile, tiny OK Labs cranked out mobile sandboxing good enough that General Dynamics bet piles of money on them for Defense use. Several other companies cranked out security-enhanced CPUs, network stacks, DNS, end-to-end messaging, and so on. Quite a few were for sale, especially those nearing bankruptcy. That shows Apple had plenty of opportunities to do the same or buy them. They didn’t care. They’ll make billions anyway.

                                1. 2

                                  I agree with pretty much everything you say, and while it’s interesting, I am not sure how it is relevant to what I said.

                                  I did not argue that one should trust Apple (even though I do think the iPhone has a better track record than Android). My point was simply that, all other things being equal, I prefer platforms that don’t suddenly change on some company’s whim and that let me decide when or if I want to perform an update, and that AFAICT Apple does not push those updates without the user’s consent.

                                  I assume your argument is that consenting is meaningless, as I cannot perform any reasonable security analysis of what I will receive. True, I can’t, but I also value predictability, and speaking from personal experience I feel I lose some of it with auto-updates.

                                  1. 1

                                    I assume your argument is that consenting is meaningless, as I cannot perform any reasonable security analysis of what I will receive. True, I can’t, but I also value predictability, and speaking from personal experience I feel I lose some of it with auto-updates.

                                    I think you are missing the point. Your iPhone has convinced you that it would only ever install an update if you approved it, but you have no way of knowing that there isn’t already a way for Apple to push software without your consent, in a way that you wouldn’t detect.

                                    I’m sure if you looked at the EULA that you agree to when you use an iPhone, Apple has every legal right to do this even if they try to create an image of a company that wouldn’t.

                      4. 4

                        objdump -d
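
                        A sketch of what that looks like in practice (the binary path is hypothetical, and this assumes objdump is installed):

                        ```python
                        import subprocess

                        # Disassemble a shipped binary (hypothetical path) to inspect the
                        # machine code you actually run, independent of any source claims.
                        result = subprocess.run(
                            ["objdump", "-d", "/usr/local/bin/sometool"],
                            capture_output=True, text=True, check=True,
                        )
                        print(result.stdout[:1000])  # first chunk of the disassembly
                        ```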

                        1. 3

                          When the OS is open source, how would you know? Have you personally audited all of Linux? How do you know you can trust third-party audits? I don’t think “it’s open source” provides much in terms of security, all things considered.

                        2. 3

                          How do you know what APNS does?