1. 13
  1. 14

    Sigh. I visited the link for the first example, an Apple backdoor.


    I mean, it’s kind of interesting, but if that’s what we’re calling malware these days, we may as well pack it up and go home.

    Actually, that bug has quite a bit in common with this file permission “backdoor API”: http://www.openwall.com/lists/oss-security/2017/01/24/4

    So on the bright side, conclusive proof that systemd is malware?

    1. 4

      This is only the first link, but there are others, like http://www.telegraph.co.uk/technology/3358134/Apples-Jobs-confirms-iPhone-kill-switch.html

      For me, this only means that Android ROMs like LineageOS are usable.

      1. 3

        There is also something to be said about these ‘backdoor APIs’/bugs being inserted behind closed doors vs. in the daylight.

        I’m sure you are aware of the distinction, but I’m not sure why you seem to condone it.

      2. 6

        Although many of the examples are true, if sometimes overblown, the problem is that you are not going to convince any significant number of people outside a niche. People willingly give their life’s story, kids’ photos, etc. to Facebook, Google, etc. in exchange for attention or free products. Heck, some people even willingly place always-on voice assistants in their homes, made by an advertising company or a marketplace.

        This is my problem with all of the FSF’s campaigns: BadVista, the one that DoS’ed Genius Bars, etc. At best, people are oblivious to them; at worst, they are annoyed by them (imagine all Genius slots being taken when you need to have your laptop repaired).

        It would be better if they put their energy and money into solving the shortcomings of the FLOSS desktop, so that it becomes a good alternative, rather than preaching to the choir.

        (Of course, there is a role for such activism, but e.g. the EFF does this in a far less preachy, more productive manner.)

        1. 2

          So this gets at the root of my displeasure. I have some differences of opinion about the role of proprietary software in society, but even if I agreed, there’s no way I’m sending this link to anyone, out of concern that the recipient would conclude that I don’t know what words like “malware” and “backdoor” mean.

        2. 8

          As a long-time open source person, I’m increasingly frustrated by the partisan refrain that software which isn’t open source cannot be trusted to be non-malicious. This isn’t really true: there’s an entire domain of expertise dedicated to reverse engineering what no-source-available code does, and the folks who do this for a living are really good.

          There are plenty of good reasons to prefer open source, even ones related to trustworthiness: source availability facilitates easier review by people with different expertise (e.g. a cryptographer who is not an expert reverser). However, the idea that closed source is just a total black box is a political argument, not a technical one.

          1. 10

            there’s an entire domain of expertise dedicated to reverse engineering what no-source-available code does

            OK, but the cost of determining whether something is malicious is incredibly high for closed-source software compared to open source. Prohibitively so, for the vast majority of users.

            Any technique you can use to audit closed source software, you can use to audit open source software, right? But you also have the source code, the commit history including who committed what, diffs between versions, code comments, etc.

            Plus there’s the social factor. If somebody at Microsoft adds a backdoor to Windows, outsiders might notice the unusual network traffic, but they have no chance to see the code. All the committers to Windows are under NDA and can be pressured to be quiet. Whereas adding a backdoor to Linux would mean sneaking it past a bunch of people whose only unifying motive is to produce a good OS, and keeping them all from noticing it indefinitely. It’s a much harder task.

            So open source makes it much harder to add backdoors and much easier to find them. It’s not perfectly safe, but it sounds a heck of a lot safer.

            1. 8

              You are technically right in saying that there are domains of expertise dedicated to reverse engineering closed source, and these folks tend to be really good.

              But making this the argument for trusting closed source is moot: there is plenty of heavily used closed source that isn’t actively reversed and audited. Tax software, search engines, you name it …

              1. 3

                As Ken Thompson showed, this argument is also flawed in that even if you can see the source, the binary that’s actually running could have been diddled in some way. And it’s not even that hard to get backdoors into source without people noticing, as Heartbleed, the Underhanded C Contest, and so on show.

                1. 3

                  Still, you can’t really reverse engineer something as huge as Windows.

                  EDIT: Even if you were able to reverse engineer it, you can’t really modify it unless you break the EULA. Even then, it’s a game of cat and mouse: you hack Windows to change its behaviour, then Microsoft patches it, so you need to find another hack.

                  1. 5

                    Nor can you really audit a huge open source project: the OpenSSL fiasco showed that “since it’s open, someone would have noticed” doesn’t work.

                    1. 6

                      Still, being open source allows anyone to fork it (see LibreSSL) and use that instead.

                  2. 1

                    I suggest mentioning 3rd-party evaluations instead of RE. People or organizations you trust get the source, vet it, and sign its hash. That is how security evaluations in government have been done since the ’90s. It can be pretty cheap if the software isn’t humongous.

                    Interestingly, this is still necessary for FOSS, since it gets so little review. As with closed source, most users just trust a 3rd party saying it’s good.
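
                    The vet-sign-verify flow described above can be sketched roughly like this (a hypothetical illustration: the file names and `EVALUATOR_KEY` are made up, and an HMAC over a shared key stands in for the public-key signature a real evaluation scheme would use):

```python
import hashlib
import hmac

# Stand-in for the evaluator's signing key. A real 3rd-party evaluation
# would use an asymmetric signature (e.g. Ed25519) so users only need
# the evaluator's public key; HMAC keeps this sketch stdlib-only.
EVALUATOR_KEY = b"evaluator-secret"

def source_digest(files: dict[str, bytes]) -> bytes:
    """Hash a source tree deterministically: sorted paths, then contents."""
    h = hashlib.sha256()
    for path in sorted(files):
        h.update(path.encode())
        h.update(files[path])
    return h.digest()

def evaluator_sign(files: dict[str, bytes]) -> bytes:
    """The trusted reviewer vets the code, then signs its digest."""
    return hmac.new(EVALUATOR_KEY, source_digest(files), hashlib.sha256).digest()

def user_verify(files: dict[str, bytes], signature: bytes) -> bool:
    """A user checks that what they downloaded is exactly what was vetted."""
    expected = hmac.new(EVALUATOR_KEY, source_digest(files), hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

tree = {"main.c": b"int main(void) { return 0; }\n"}
sig = evaluator_sign(tree)
assert user_verify(tree, sig)        # untampered tree passes
tree["main.c"] = b"int main(void) { /* evil */ return 0; }\n"
assert not user_verify(tree, sig)    # any change breaks the check
```

                    The point of the design is that the expensive step (vetting) happens once, while the cheap step (hashing and comparing) is all a user has to repeat.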

                    1. 0

                      This isn’t really true: there’s an entire domain of expertise dedicated to reverse engineering what no-source-available code does, and the folks who do this for a living are really good.

                      This is like saying terrorism is okay because there’s an entire domain of expertise dedicated to stopping people from blowing up a school bus with a martyr vest, and these experts (at least some of them) are quite good at it.

                      Open source and free software are superior to closed source proprietary software. All else being equal, there is no reason to use a closed software over an open one.

                      Just because you can mitigate the awfulness does not make something good.

                      1. 3

                        This is like saying terrorism is okay because there’s an entire domain of expertise dedicated to stopping people from blowing up a school bus

                        That is ridiculous. The only point of terrorism is destruction to cause some reaction, whereas the point of proprietary software is to solve an actual or perceived problem for users in a way that works well enough. It usually does. Problems are usually recoverable. It almost never kills someone. Shady companies may optionally do a bunch of evil, like lock-in, on top of that. Don’t do business with shady companies, and make sure you have an exit strategy if a supplier becomes one. Meanwhile, enjoy the software.

                        “Open source and free software are superior to closed source proprietary software.”

                        Like hell. You said “all else being equal,” but it rarely is. In the average case, it’s easily proven false in a lot of categories where, at best, open source has knockoffs of some proprietary app that suck in a lot of ways. In many other cases, there’s no open-source app available. Which is better should be decided on a case-by-case basis. As far as security goes, it was mostly the proprietary, high-assurance sector that steadily produced highly robust solutions, because they had the money to put the necessary QA and security work into them. That’s basically unheard of in open source unless paid pros or CompSci people are doing it, with the open-sourcing being incidental. GEMSOS, VAX VMM, KeyKOS, Kestrel Institute’s stuff, OKL4, seL4, Caernarvon, MULTOS, CertiKOS, Eiffel SCOOP, SPARK Ada, CompCert, the Astrée analyzer… all cathedral-model work by people who knew what they were doing and were being paid for it.

                        The closest thing in FOSS with communal development is OpenBSD, with a mix of root-cause fixes and probabilistic mitigations whose effectiveness is unknown, since top talent don’t focus on small market share unless paid to. That’s versus competition using some of their methods plus formal proof, static analysis, exhaustive testing, covert-channel analysis, ensuring object code maintains the source’s properties, SCM security, and 3rd-party pentesting. Open-source security is a joke on the high end in comparison. Although NSA pentesting failed to break a few of the above, they certainly have many of those FOSS apps and FOSS-using services in the Snowden leaks with full “SIGINT-enabling.” They strongly encourage many of these FOSS apps to be used while making it illegal for companies to sell me Type 1-certified crypto or TEMPEST-certified devices. Kind of weird if FOSS quality is so good. ;)

                        Enough myths. Neither side is better by default. What matters are the benefits for the user, at what cost. Sometimes it’s proprietary, sometimes not. Look to FOSS by default, for many good reasons. It’s not always the best, though. If we high-assurance people seem too fringe, just ask any hardware developer: ask the hardware people to tell you which FOSS software is good enough to produce the chip you wrote that comment on, which is “superior to closed-source proprietary software.”