1. 22

    1. 2

      That would require the attacker be inside the building to actually use the information they acquired.

      If they’re running code on the target processor, their code is inside the building.

      This does sound a bit interesting, but only incrementally more interesting than address randomization, which is already quite defeatable. I see no reason to expect this to be any different.

      The only way I really see to make a truly secure system is to eliminate all security vulnerabilities by construction. That requires extensive use of formal methods throughout the full stack—hardware, firmware, and software.

      1. 2

        Why not have two stacks? One just for return addresses, which user code can’t access at all, and one just for parameters? That way, you cannot overwrite return addresses at all. Or am I missing something? I probably am …
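        For what it’s worth, that’s essentially the shadow-stack idea. A toy sketch (purely illustrative; the class and names are my own invention, not anyone’s actual design) of why a return stack that user code can’t address defeats return-address overwrites:

        ```python
        # Toy model: return addresses and data live on separate stacks, and
        # "user code" (overflow_write) can only reach the data stack.
        class TwoStackMachine:
            def __init__(self):
                self._return_stack = []   # only call()/ret() touch this
                self.data_stack = []      # parameters, locals, buffers

            def call(self, return_addr, *args):
                self._return_stack.append(return_addr)
                self.data_stack.extend(args)

            def overflow_write(self, values):
                # An out-of-bounds write can scribble all over the data stack,
                # but there is no addressable path to the return stack.
                self.data_stack = list(values)

            def ret(self):
                return self._return_stack.pop()

        m = TwoStackMachine()
        m.call(0x401000, "arg")
        m.overflow_write([0xDEADBEEF] * 16)  # attacker clobbers the data stack
        assert m.ret() == 0x401000           # return address survives intact
        ```

        (In real hardware the awkward part tends to be code that legitimately manipulates return addresses, such as setjmp/longjmp and exception unwinding.)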

        1. 1

          Our idea was that if we could make it really hard to make any exploit work on it, then we wouldn’t have to worry about individual exploits.

          That just sounds like “security by obscurity” to me. The rest of the interview doesn’t convince me otherwise, but then it’s not very technical.

          1. 6

            they’re re-encrypting the return stack pointer every 100 ms, and implied the same is happening in many more places in memory - how is that obscurity?

            Not trying to be contrary; I just don’t understand the dismissal.

            In the paper we wrote about [the Morpheus concept], we had 504 bits of knobs. In the design that we put into this attack, we had almost 200. And 2^200 is a big space to search.
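            Back-of-envelope on what a 2^200 search space means (my arithmetic, assuming an absurdly well-resourced attacker):

            ```python
            # Assume an attacker testing a trillion keys per second, nonstop.
            GUESSES_PER_SEC = 10**12
            SECONDS_PER_YEAR = 3.15 * 10**7
            years = 2**200 / GUESSES_PER_SEC / SECONDS_PER_YEAR
            assert years > 10**40  # vastly longer than the universe's ~1.4e10 years
            ```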

            The way we do it is actually very simple: We just encrypt stuff. We take pointers—references to locations in memory—and we encrypt them. That puts 128 bits of randomness in our pointers. Now, if you want to figure out pointers, you’ve got to solve that problem.
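            The principle, sketched with a plain XOR keystream (the interview doesn’t say what cipher Morpheus actually uses, so this is only illustrative; the key is forced odd purely so the demo assertions hold):

            ```python
            import secrets

            # Hypothetical domain key, held in hardware and invisible to software.
            KEY = secrets.randbits(128) | 1  # forced odd for the demo

            def encrypt_ptr(ptr: int) -> int:
                # What actually gets stored in registers/memory is ciphertext.
                return ptr ^ KEY

            def decrypt_ptr(ct: int) -> int:
                # Hardware decrypts transparently on a legitimate dereference.
                return ct ^ KEY

            addr = 0x7FFDEADBEEF0
            stored = encrypt_ptr(addr)      # what a leaked pointer looks like
            assert stored != addr           # guaranteed: KEY is odd, so bit 0 flips
            assert decrypt_ptr(stored) == addr

            # The "churn" the parent comment mentions: periodically pick a fresh
            # key and re-encrypt live pointers, so a leaked ciphertext goes stale.
            NEW_KEY = secrets.randbits(128) | 1
            stored = decrypt_ptr(stored) ^ NEW_KEY
            assert stored ^ NEW_KEY == addr
            ```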

            1. 3

              What I’m sceptical of is the seemingly arbitrary “let’s encrypt this, let’s encrypt that”. Their approach to security is complexity, and there are no formal models/proofs that these measures actually enhance the security of the system.

              Of course this is a valid, albeit in my opinion rather poor, approach to security. Instead of a tangible problem formulation (what are the attacker capabilities? what do we want to protect? how do we achieve that?) it’s a bunch of encryption. Which also has me asking: how are they generating the encryption keys? Because once you break or tap into that, you’ve essentially won anyway, regardless of their countermeasures.

              Ah they have a paper, I must have missed that before, my bad. Not really an excuse, but information security is a niche in which claims fly wild and I’ve learned to apply a healthy amount of scepticism to any new shiny thing that comes out.

              At first glance the paper looks reasonable to me - the only questionable security assumption, in my opinion, is the reliance on support from the OS scheduler.