1. 11

  2. 5

    Little bit confused as to how this was a “mistake” and nobody noticed the data pouring in? Like, ok, you leave the debugging logger active in a production build, but when your log server starts filling up with strangers’ data, isn’t that a hint that you should fix something?

    1. 4

      It was meant to only be enabled for Chinese consumers, not for Americans. That was the “mistake”.

    2. 4

      It’s a company from China whose solution updates firmware “over the air” (FOTA) that’s deployed on over 100 million phones. Of course it’s spyware for the Chinese. Both the attack vector and deployment are ideal for it. I specifically warned of this risk in my smartphone analysis. It’s likely the same vector other nation-states use if the local telecoms are cooperating with this. In the case of Five Eyes, they’re definitely cooperating.

      On Schneier’s blog, someone asked me what smartphone I’d choose to try to avoid all backdoors. I told them that’s not possible given you always have to trust someone. You can only trust them to act in their self-interest. This might be coerced. They have to be strong enough to resist that plus act in their self-interest. What I recommended was assuming all of them were backdoored. If you can’t avoid them, then you use totally Chinese-made, secure phones to hide from Five Eyes surveillance and U.S.-made, secure phones to hide from Chinese or Russians. Like my old remailers that bounced emails through jurisdictions that didn’t cooperate with each other. Similar concept.

      1. 1

        Like my old remailers that bounced emails through jurisdictions that didn’t cooperate with each other.

        That sounds an awful lot like giving the data to all jurisdictions instead of keeping it away from as many as possible…

        1. 2

          Encryption provided the privacy. Remailing/tunneling the anonymity.
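
          To make that split concrete, here’s a toy sketch of my own (names, keys, and route are invented for illustration; XOR stands in for real encryption purely to keep it self-contained): each remailer hop peels exactly one layer, learning only the next hop, while the message stays wrapped until the last layer comes off.

```python
# Toy onion-layering sketch: encryption supplies privacy (nobody mid-route
# reads the message), layered remailing supplies anonymity (no single hop
# knows both sender and destination). XOR is a stand-in for real crypto.

def xor_bytes(data, key):
    # Repeating-key XOR; decrypting is the same operation as encrypting.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message, route):
    # Build layers inside-out: each hop's layer names the next destination
    # and encrypts everything beneath it with that hop's key.
    blob = message
    for name, key in reversed(route):
        blob = xor_bytes(name.encode() + b"|" + blob, key)
    return blob

def peel(blob, key):
    # One hop removes its own layer: it learns the next hop's name and
    # passes along the still-encrypted remainder.
    plain = xor_bytes(blob, key)
    name, rest = plain.split(b"|", 1)
    return name.decode(), rest

route = [("hop1", b"k1"), ("hop2", b"k2"), ("hop3", b"k3")]
blob = wrap(b"hello", route)
for _, key in route:
    name, blob = peel(blob, key)
print(blob)  # b'hello' once every layer is peeled
```

          The key property: any hop that decrypts its layer sees only the next hop’s name, never the plaintext, so cooperating jurisdictions would need every key on the route.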

          1. 2

            Ah. Thank you for the clarification.

          2. 2

            That’s an ongoing concern I have with a lot of schemes. Like two engine aircraft are safer than four engine aircraft, when you might think the opposite. You’ve doubled your chance of failure and a ¾ engine aircraft isn’t actually twice as safe as a ½ engine aircraft.

            I could put up a website hosted in Switzerland if I were somehow afraid the US could shut it down. But really, my exposure is now that the Swiss can shut down my site, and the US can (as before) shut me down. Safer or less safe?

            1. 1

              For the following I shall ignore exploding engines or the fact that a plane can successfully glide its way to safety with no engines running. A two engine plane can continue flying on no less than one engine, and a four engine plane can continue flying on no less than two engines. I am not concerned with the fact that the overall probability of failure is greater, I am concerned with the probability that a critical number of engines will fail: 2 for a two engine plane, 3 or 4 for a four engine plane.

              Assuming the probability of one engine failing is f, then the probability of all engines failing for a two engine plane is f², and the probability of 3 or 4 engines failing for a four engine plane is 4 (1 - f) f³ + f⁴. If you plot 4 (1 - f) f³ + f⁴ - f², you see that if your probability of one engine failing is below 1/3 you are safer flying on a four engine plane, and if it is above 1/3 then it is safer to fly on a two engine plane. I’m willing to wager that the probability of airplane engine failure is less than 1/3.
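
              The crossover can be checked numerically (a sketch I added, not part of the original comment):

```python
# Check of the engine-failure crossover: a two-engine plane crashes when
# both engines fail; a four-engine plane crashes when three or four fail.

def p_two_engine_crash(f):
    # Both of two engines fail, each independently with probability f.
    return f ** 2

def p_four_engine_crash(f):
    # Exactly three of four fail (4 ways to pick the survivor), plus all four.
    return 4 * (1 - f) * f ** 3 + f ** 4

# The difference changes sign at f = 1/3: below it the four-engine plane
# is safer, above it the two-engine plane is safer.
for f in (0.1, 1 / 3, 0.5):
    print(f, p_four_engine_crash(f) - p_two_engine_crash(f))
```

              Algebraically: setting 4 (1 - f) f³ + f⁴ = f² and dividing through by f² gives 3f² - 4f + 1 = 0, whose roots are f = 1 and f = 1/3.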

              Switzerland has a reputation for shutting down fewer websites than the US. If you physically reside in the US, the US will, of course, always be able to take you down; however, if your website is in Switzerland then the US must go through international formalities to take down your website, but if your website is in the US, then they take down your website immediately. It can be seen as a “lesser of two evils” situation, where hosting in Switzerland is the lesser evil.

              1. 1

                Exploding engines are exactly the kind of unanticipated risk that comes from adding safety measures.

              2. 1

                “Like two engine aircraft are safer than four engine aircraft”

                I think that’s a bad example for reasons zgrep already mentioned. More relevant is triple modular redundancy with voter schemes used in aircraft control systems which are individually already designed to be relatively simple with careful coding. Idea being a random fault in one isn’t likely to happen in the others. From there, you add the concept of safety or security through diversity where each is done substantially different from algorithms to OS’s to hardware so it’s very unlikely they’ll share the same failure on any given input. For security, you add ideologically-opposed people to the mix. My original was coding and verification team of Jews and Palestinians that alternated roles with some talented management doing conflict resolution.

                “But really, my exposure is now that the Swiss can shut down my site, and the US can (as before) shut me down. Safer or less safe?”

                The Swiss have strong rules about that sort of thing, though. The situation is objectively better. Far as the last risk, it would mean you screwed up by thinking a Swiss site just means a site located in Switzerland. It has to be Swiss-owned and run for best results. I can’t even recall a case of the U.S. killing anyone over there because they couldn’t take a site or business down. Instead, they try espionage to get secrets and hacking against websites. So, having a trustworthy Swiss person or company control the Swiss site changes the risk from “a police state shuts it down in secret, arbitrarily, with the owner indefinitely detained in a black site while fed rectally, and might hack it too” to “they might hack it or try to convince Swiss authorities to take legal action that ends up in a court.” World of difference. ;)

                Note: I don’t know the extraordinary rendition risk for Switzerland. Always a possibility. Add black site for highest-profile targets there just for good measure. Usually just FBI’s BS, though.

          3. 1

            In other news, a not-so-secret backdoor in ALL U.S. phones sends data to the NSA.

            1. 2

              I don’t think they have a backdoor in all U.S. phones. If they do, you or I aren’t important enough to worry about it since they won’t expose it. The leaks indicated they had 0-days in everything. They probably try to introduce them in critical areas in ways to look deniable like ordinary bugs. It’s possible they also hack specific accounts with fake updates from cooperating manufacturers. I don’t think they have a static backdoor for widespread use, though, except in maybe Blackberry.

              1. 2

                Curious what makes you say:

                except in maybe Blackberry.

                Context for my curiosity: I used to (~7 years ago, peak of the BlackBerry era) work in the BBSec (BlackBerry Security) department when they were still called RIM, and unless it was way over my pay grade, the team and I were unaware of any attempted deployment of a nation-state-backed static backdoor. And BB’s are so irrelevant today that I doubt it’s even worth backdooring. Not to say there weren’t gaping holes in their products' protocols, like the one used for BBM… Oh boy.

                1. 1

                  There’s been a string of articles on Blackberry Messenger being subject to lawful intercept capabilities more than the rest. The architecture even made it better suited for that from what comments said. There was also this on top of that:


                  Far as a subversion risk in the OS, you wouldn’t have to know about it. It would take literally one person who could do decently in an obfuscated C contest. Or whatever language you were using. Blackberry wouldn’t even need to fully endorse it. One person getting a big contract requiring specific features plus one subversive developer putting them in is all it takes. Just two people. It’s the more obvious stuff, like in India, that makes organizations pause. They caved on that, too, though. ;)

            2. 1

              I use a “dumb phone” from TracFone. Got it for free. It makes calls and gets text messages. Probably one of the safer phones to use other than a landline.

              1. 4

                I don’t know. I mean, it says it’s doing less stuff, but it’s still a black box of mystery meat. Waiter, I’d like the “lean portion” of mystery meat. :)

                1. 3

                  Does it have a web browser? If it has a baseband stack and enough juice for multimedia, then it can be backdoored easily. Dumber ones can be backdoored too, but it’s easier with more CPU. The low-cost phones are also sold by companies that make less money. That increases the risk that they might take some kind of bribe to put a deniable backdoor in.

                  Bottom line: don’t trust your phone. Make sure the batteries can be taken out. Don’t have one present if discussing anything sensitive. I wrote up the ways they can be attacked here per my old framework for high-security work:


                  Don’t trust anything to be secure against nation-states unless it addresses everything on that list at a minimum. They’ll hit whatever is available. They also have the funds to do so for hardware.

                  1. 3

                    Ironically, the dumber phones may be more vulnerable to baseband attacks. Smartphones put that on the other end of a link that isn’t DMA-capable. A cheaper, more integrated design probably has fewer barriers between the OS and the network.

                    1. 1

                      That’s a great point. The cheaper processors are also less likely to have an MMU or an OS that does any privilege separation. The libraries are less likely to do expensive checks on memory access, I/O, etc. All the corners they cut due to fewer resources can let them get shredded. Like with embedded platforms.

                      There could be exceptions if they were designed to be. I remember Clive Robinson used MCU’s for his guards with extremely-limited memory. As is typical in high-assurance, he used state machines with static memory usage for whatever transfers and checks he was doing. What he also did was make sure the code and data took up all the available memory, then ensured data couldn’t screw with control flow. This combo meant there basically wasn’t any room left for malware to hide. The system would probably crash if they injected anything, or visibly act weird at worst.

                      Past such designs, though, the dumber stuff almost always has less safety. Dumber stuff, even MCU’s, has plenty of memory and CPU for attacks these days versus the old 8- or 16-bitters. Nothing is safe if its foundation isn’t inherently safe and it still has enough resources for attackers. :)
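
                      A hedged sketch of the fixed-memory state machine idea described above (my reconstruction for illustration, not Clive Robinson’s actual design): all memory is allocated up front, input either fits the expected format or drives the machine into an error state, and data never alters the fixed transition logic.

```python
# Guard as a fixed state machine over a preallocated buffer. Nothing is
# allocated after startup, oversized or malformed input fails loudly,
# and the transition logic never depends on data beyond simple checks,
# leaving no room for injected data to live or redirect execution.

BUF_SIZE = 16  # all memory fixed at build time

class Guard:
    def __init__(self):
        self.state = "idle"
        self.buf = bytearray(BUF_SIZE)   # preallocated, never grows
        self.used = 0

    def feed(self, byte):
        if self.state in ("done", "error"):
            return self.state            # terminal states ignore further input
        if not 0x20 <= byte <= 0x7E and byte != 0x0A:
            self.state = "error"         # reject all but printable ASCII + newline
        elif byte == 0x0A:
            self.state = "done"          # newline terminates the message
        elif self.used >= BUF_SIZE:
            self.state = "error"         # no room left: fail, nothing overflows
        else:
            self.buf[self.used] = byte
            self.used += 1
            self.state = "receiving"
        return self.state

g = Guard()
for b in b"hello\n":
    g.feed(b)
print(g.state, bytes(g.buf[:g.used]))   # done b'hello'
```

                      The point isn’t this exact code; it’s that when every buffer is fixed and every transition enumerable, malformed input can only crash or halt the machine, not repurpose it.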