1. 60

  2. 18

    I continue being amazed both by how fragile the security of our systems is and by the ingenuity of security researchers. It seems it’s impossible for anyone to completely understand all the implications of every design decision. Even ECC correction is not enough in this case: it exposes yet another side channel in the latency of reads, giving the attacker the information it needs to know whether there has been a flip or not.
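
    To make the latency channel concrete, here is a rough sketch of the idea in Python (purely illustrative: real measurements need cycle-accurate timers and uncached reads, and the threshold factor is an assumption of mine):

```python
import time

def avg_read_ns(buf, index, reps=10_000):
    # Average latency of repeatedly reading one element. The real attack
    # times single uncached DRAM reads with a cycle-accurate timer.
    t0 = time.perf_counter_ns()
    for _ in range(reps):
        _ = buf[index]
    return (time.perf_counter_ns() - t0) / reps

def looks_corrected(latency_ns, baseline_ns, factor=1.5):
    # Hypothetical decision rule: a read that is much slower than the
    # baseline suggests the controller silently corrected a flipped bit.
    return latency_ns > baseline_ns * factor
```

    The point is only that the defense (ECC) answers slower when it has work to do, and that difference is observable from software.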

    What could be done in order to mitigate side-channels systematically? Is it to go back to simpler, even if slower, systems? I don’t think even that would help, right? Is security really a completely unattainable goal for computing systems? I know the general idea is that perfect security doesn’t exist and that the level of security depends on trade-offs, but hardware side channels are very scary, and I don’t think they are that much about trade-offs anyway (although I am far from knowledgeable in this).

    I used to have this trust in hardware, I don’t really know why, but more and more I’m scared of how many ways there are to get at secret information (even if impractical).

    I think we humans have gotten into levels of complexity we were completely unprepared for, and we will pay for it dearly very soon.

    1. 11

      I continue being amazed both by how fragile the security of our systems is and the ingenuity of the security researchers. It seems it’s impossible for anyone to completely understand all the implications of every design decision.

      Sort of. Applying covert-channel analysis to Intel CPUs in the mid-1990s showed pervasive vulnerability. If you do it at the system level, you’d see even more of these problems. I’d seen folks on HN griping about QA being a low priority when they worked at RAM companies. The problems were mostly ignored due to market and management’s economic priorities: make things faster, smaller, and with less power at max profit. That leads to less QA and more integration instead of separation. Both apathetic users and the companies supplying their demand got here willingly.

      The attacks have been really clever. There were always clever defenses that prevented many of them, too; companies just don’t use them. There’s a whole niche of designs dedicated to treating RAM as untrusted. They define the SoC itself as the security boundary, try to maintain confidentiality/integrity of pages, and typically take a performance hit from the crypto used to do that. Another strategy was using different DIMMs for different applications, with separation kernels flushing the registers and caches on a switch. The RAM controller would get targeted next if that got popular. Others suggested building high-quality RAM that would cost more due to a mix of better quality and the patent royalties the RAM cartel would sue for. It would have to be high-volume, though, if nobody wants to lose massive money up-front. I was looking at sacrificing RAM size to use SRAM, since some hardware people talked like it had fewer risks. I’d defer to experts on that stuff, though.

      “What could be done in order to mitigate side-channels systematically?”

      Those of us worried about it stuck with physical separation. I used to recommend small-form-factor PCs or high-end embedded boards (e.g. PCI cards) tied together with a KVM switch. Keep untrusted stuff away from trusted stuff. Probably safest with a guard for whatever sharing needs to happen. Most people won’t know about those or be able to afford them. However, it does reduce the problem to two things we have to secure at the users’ end: a KVM switch and a guard. Many guards have existed, a few of them high-security. I think Tenix made a security-enhanced KVM. It’s a doable project for open source, a small company, and/or academia. It will require at least two specialists: one in high security with low-level knowledge, and one doing EMSEC, especially analog and RF.

      1. 11

        Is security really a completely unattainable goal for computing systems?

        Well, yes. Not because they are computer systems, but because they are physical systems.

        Let’s take fort-building techniques and materials as an analogy. Suppose you want to protect a crown. There was a pre-fort era: anybody could walk up and take the crown, if they knew where it was. Think dialup access to a prod system; no password. Early forts were a single, short, unconnected wall (designed to halt the progress of foes coming at you from a single point) and they were trivial to defeat: think of a front end with a password and a backend database with no password, also connected to the internet. Let’s fast forward…

        Modern forts have moats and observation towers and doors that are armored, and that armor is engineered to be stronger than the walls, which provides a sort of guarantee that they ain’t gonna breach that door: it’s cheaper for them to go through the wall. Modern forts have whole departments dedicated to simply determining ahead of time how powerful the foe’s strongest weapon is and making sure the armor is at least strong enough to stop that weapon.

        You see where I’m going. A fort is never “done”. You must continue to “fortify”, forever, because your foe is always developing more powerful weapons. Not to mention, they innovate: burrowing under your walls, impersonating your staff, etc.

        That said, there are some forts that have never been breached, right? Some crowns that have never been stolen? This is achieved by keeping up with the Joneses, forever. It’s difficult and it always will be, but it can be done.

        What about physics? Given infinite time, any ciphertext can be brute-forced, BUT according to physics, the foe can not have infinite time. Or, given infinite energy, any armor can be pierced, BUT, according to physics, the foe can not have infinite energy. Well, this isn’t my area, but… does physics say that the foe can not get better at physics? Better keep up…
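
        To put a number on “the foe can not have infinite energy”: the Landauer limit is the theoretical minimum energy per bit flip, and even at that floor, merely counting through a 128-bit keyspace costs an absurd amount of energy. A back-of-the-envelope check (the constants are physics; the 128-bit keyspace is just an example):

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # room temperature, K
landauer_j = k_B * T * math.log(2)  # minimum energy per bit flip, ~2.9e-21 J

keys = 2 ** 128                     # keyspace of a 128-bit cipher
total_j = keys * landauer_j         # energy just to count through it, ~1e18 J
```

        Roughly 10^18 joules: several times the yield of the largest nuclear device ever detonated, just to enumerate the counter, before doing any actual decryption work.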

        The horror we’re facing now with all these side channel attacks is analogous to the horror that the king in that one-wall fort must have felt. “Oh crap, we’re playing on a massive plane, rather than a single line between them and me. I’m basically fort-less right now.”

        (EDIT: moved my last paragraph up one and removed the parens that were wrapping it.)

        1. 3

          What could be done in order to mitigate side-channels systematically?

          Systematic physical separation of everything.

          Provision a new Raspberry Pi for each browser tab :D

          (more practically, never put mutually untrusted processes on the same core, on the same DRAM chip, etc. maybe?)

          1. 4

            There’s not that much impractical about it; I do it on a daily basis, though a Pine64 Clusterboard turned out a bit cheaper (~300 USD for 7 “tabs”) than the Pis. Ramdisk-boot Chromium (or qemu, or Android, or …) as a kiosk in a “repeat: try connecting to the desktop; reboot” kind of loop. Have the DE allow one connection every time you want to spawn your “tab”. A bit more adventurous is collecting and inspecting the crashes for signs of n-days…
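
            A minimal sketch of that “try to connect; otherwise go around again” loop (a Python stand-in for what is presumably a shell script; the desktop address, port, and Chromium invocation are my assumptions, not the actual setup):

```python
import socket
import subprocess
import time

DESKTOP = "192.168.1.10"   # hypothetical address of the trusted desktop

def desktop_reachable(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    # The "try connect" step: probe the remote-desktop port with a TCP connect.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def kiosk_loop() -> None:
    # Keep attaching the disposable node to the desktop; when the session
    # ends or the host is unreachable, go around again.
    while True:
        if desktop_reachable(DESKTOP):
            subprocess.run(["chromium", "--kiosk", f"http://{DESKTOP}/"])
        time.sleep(5)
```

            On the real boards the loop body would end in a reboot back into the ramdisk image rather than a sleep, so every “tab” starts from a clean state.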

            1. 3

              Provision a new Raspberry Pi for each browser tab :D

              Ah yes, the good old “Pi in the Sky” Raspberry Pi Cloud

              1. 3

                Power usage side channels will still leak data from one Raspberry Pi to another. The only larger point I could tie that to is that perfect defense is impossible, but sebboh already said that quite eloquently, so I’ll leave it at that.

                1. 6

                  Most of the more esoteric side channels are not readily available to other systems, however. Even physically colocated systems aren’t hooked into the same power monitor to watch each other.

                  There will be a never-ending series of CPU/RAM performance side channels, because the means of measurement is embedded in the attack device.

                  1. 3

                    Separate battery systems (power), everything stored at least 30cm apart (magnets) in a lead-lined (radiation) soundproof (coil whine) box. Then you’ll want to worry about protecting the lines to the keyboard and monitor…

                    1. 1

                      Is it possible to protect monitor cables / monitors from remote scanning? From what I’ve gathered, there is hardware that can get a really clear picture of what’s on screen from quite a distance. A Faraday cage around the whole unit, and/or around where you are sitting, or what?

                      1. 2

                        From my fairly basic knowledge of the physics, yes. Any shifting current in a wire will make that wire act a little like an antenna and emit radio waves, which is how these attacks work. It’s usually undesirable to have the signal you’re trying to send wander off into the ether, so cables are designed to minimize this, but it will always happen a little. Common coax cables already incorporate braided wire mesh or foil around the signal-carrying bits, for example.

                        But, it can never eliminate it completely. So, it’ll always be another arms race between better shielding and more sensitive detectors.

                        1. 1

                          Ah, so they work against the cable and not the display itself, right? Does this mean that, say, a tablet or a laptop is less susceptible to this kind of attack than a desktop computer?

                          Also, to really be foolproof, would it be useful to build Faraday cages into the walls? I’ve heard that if the metal rods stabilizing the concrete in buildings get in contact with water, that grounds them, creating a Faraday cage, and this explains why cell phones can get really bad reception in big old concrete buildings. Wouldn’t it be a sensible measure for large companies to do exactly this, but on purpose? For cell reception they could have repeaters inside where needed. Wifi is supposed to stay indoors anyway, and yeah, Chinese spies with TEMPEST equipment shouldn’t get their hands on any radiation either.

                          1. 2

                            They’re called emanation attacks. The defense standards are called TEMPEST. Although they claim to protect us, civilians aren’t allowed to buy TEMPEST-certified hardware, since they’d have a harder time spying on us. You can find out more about that stuff here (pdf), this history, this supplier for examples, and Elovici et al’s Bridging the Airgap here for recent attacks.

                            The cat-and-mouse game is only beginning, now that teams like Elovici’s are in the news with tools to develop attacks cheaper and more capable than ever. It’s why Clive Robinson on Schneier’s blog invented the concept of “energy gapping”: every type of matter/energy that two devices share is potentially a side channel. So you have to mitigate every one, just in case. Can’t just buy a product for that. ;)

                            1. 2

                              Yeah, I heard about TEMPEST. There was this fun program I played with forever ago that let you broadcast FM or AM via your CRT: Tempest for Eliza, or something.

                              Messed up that they make laws against things like that.

                              My thinking is to protect the whole house at once, or why not a cubicle, depending on how much you are willing to spend on metal, of course.

                              1. 1


                                As far as the whole house, they do rooms and buildings in government operations. A lot of the rooms don’t have toilets, because the pipes or water might conduct the waves. Air conditioning is another risk. Gotta keep cellphones away from the stuff, because their signal can bounce off the inside of a passively-secured device, broadcasting its secrets. All sorts of issues. Safes/containers and SCIF-style rooms are my favorite solutions, since the scope of the problem is reduced.

                                1. 1

                                  Yeah that’s the one.

                    2. 2

                      I always recommended EMSEC safes with power filters and inter-computer connections that are EMSEC-filtered optical. So, yeah, it’s a possibility. That said, some of these systems might not give firmware, kernel code, or user code the ability to measure those things. If none are this way, new hardware could be designed that way with little to no modification of some existing hardware. Then, a compromise might just be limited to what’s in the system and whatever the code can glean from interactions with hardware APIs. On the latter, we use the ancient mitigations of denying accurate timers, constant-time operations, and masking with noise.
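
                      The last two of those mitigations are easy to show in miniature (illustrative Python; `hmac.compare_digest` is the standard library’s constant-time comparison, and the jitter function is a toy sketch of masking with noise):

```python
import hmac
import random

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaks timing: bails out at the first mismatching byte, so an attacker
    # can recover a secret prefix byte-by-byte from response latency.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    # Constant-time: examines every byte regardless of where mismatches are.
    return hmac.compare_digest(a, b)

def noisy_delay(base_s: float = 0.0) -> float:
    # Masking with noise: add random jitter so individual timings carry
    # less signal (raises the attacker's sample count, doesn't remove it).
    return base_s + random.uniform(0.0, 0.001)
```

                      Note that noise only raises the number of samples an attacker needs, while constant-time code removes the signal instead of burying it.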

                      I think there’s potential for making some of those attacks useless with inexpensive modifications to existing systems. Meanwhile, I’m concerned about them but can’t tell you the odds of exploitation. We do need open designs for EMSEC safes or just containers (not safes), though.

                  2. 3

                    I used to have this trust in hardware, I don’t really know why, but more and more I’m scared of how many ways there are to get at secret information (even if impractical).

                    As long as there’s physical access to a machine, that access will be an attack vector. As long as there’s access to information, that information is susceptible to being intercepted. It comes down to acknowledging and securing against practical attack vectors. Someone can always cut my brakes or smash my windows and take my belongings from my car, but that doesn’t mean I operate in fear every time I park (of course this is a toy analogy: it’s much easier and far less risky to steal someone’s digital information, EDIT: and on second thought, you would immediately know when your belongings have been tampered with).

                    From the paper:

                    We now exploit the deterministic behavior of the buddy allocator to coerce the kernel into providing us with physically consecutive memory

                    Does the Linux kernel currently have any mitigations like randomization within its allocators? I believe this is orthogonal to ASLR.
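
                    For background on the quoted sentence: the buddy allocator serves requests in power-of-two blocks and splits them in a fixed order, which is what makes its placements predictable. A toy model (my own sketch, not the kernel’s code) shows how back-to-back allocations from a fresh pool come out physically consecutive:

```python
class ToyBuddy:
    """Toy model of a buddy allocator over 2**total_order pages.

    Splitting is deterministic: a block splits into a low and a high half,
    and the low half is handed out first, so repeated same-order
    allocations from a fresh pool return physically consecutive ranges.
    """

    def __init__(self, total_order: int):
        self.max_order = total_order
        # free[o] = start addresses (in pages) of free blocks of 2**o pages
        self.free = {o: [] for o in range(total_order + 1)}
        self.free[total_order].append(0)   # one block covering everything

    def alloc(self, order: int):
        # Find the smallest free block that fits...
        o = order
        while o <= self.max_order and not self.free[o]:
            o += 1
        if o > self.max_order:
            return None                    # out of memory
        addr = self.free[o].pop(0)
        # ...and split it down, always keeping the upper buddy free.
        while o > order:
            o -= 1
            self.free[o].append(addr + (1 << o))
        return addr
```

                    With `ToyBuddy(4)`, two successive `alloc(2)` calls return pages 0 and 4: adjacent 4-page blocks, which is exactly the physical contiguity the attack needs.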

                    1. 2

                      Hardware is cheap; use that as your security boundary between trust domains. On-device process separation and virtualization still make a lot of sense for other reasons (compatibility, performance, resilience), but they are about as alive as the parrot in a Monty Python sketch when it comes to security. Rowhammer should have been the absolute last straw in that respect; there were plenty of indicators well before then. What sucks is that the user interfaces and interaction between hardware-separated tasks (part of the more general ‘opsec’ umbrella) are cumbersome at the very best. Maybe that is easier to fix than multiple decades of opaque hardware…

                      1. 4

                        Consumer-grade hardware may be cheap; POWER machines and hardware with ECC RAM support are not so much. With dedicated hardware you are also burning a lot more power per useful computation performed.

                        For this particular attack, AMD’s Secure Encrypted Virtualization (SEV) is an actual solution and is mentioned as such in the paper. Intel’s Multi-Key Total Memory Encryption (MKTME) should be too when it comes out. Unfortunately software support is not really what I would call complete yet.