1. 15

Abstract: “Graphics Processing Units (GPUs) are commonly integrated with computing devices to enhance the performance and capabilities of graphical workloads. In addition, they are increasingly being integrated in data centers and clouds such that they can be used to accelerate data intensive workloads. Under a number of scenarios the GPU can be shared between multiple applications at a !ne granularity allowing a spy application to monitor side channels and attempt to infer the behavior of the victim. For example, OpenGL and WebGL send workloads to the GPU at the granularity of a frame, allowing an attacker to interleave the use of the GPU to measure the side-e”ects of the victim computation through performance counters or other resource tracking APIs. We demonstrate the vulnerability using two applications. First, we show that an OpenGL based spy can !ngerprint websites accurately, track user activities within the website, and even infer the keystroke timings for a password text box with high accuracy. The second application demonstrates how a CUDA spy application can derive the internal parameters of a neural network model being used by another CUDA application, illustrating these threats on the cloud. To counter these attacks, the paper suggests mitigations based on limiting the rate of the calls, or limiting the granularity of the returned information.”
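The interleaved-measurement idea in the abstract boils down to contention timing: the spy repeatedly times a fixed workload on the shared resource, and unusually slow samples suggest the victim is active. A minimal, CPU-only Python sketch of that generic idea (not the paper's actual OpenGL/CUDA spies; the function and thresholds here are invented for illustration):

```python
import time

def probe(n_samples=50, work=10_000):
    """Time a fixed busy-loop repeatedly; on shared hardware,
    unusually slow samples hint that another tenant is using
    the contended resource at that moment."""
    samples = []
    for _ in range(n_samples):
        t0 = time.perf_counter()
        acc = 0
        for i in range(work):
            acc += i
        samples.append(time.perf_counter() - t0)
    return samples

trace = probe()
baseline = min(trace)
# Samples well above baseline are candidate "victim active" windows.
spikes = [t for t in trace if t > 2 * baseline]
```

The paper's spies do the same thing at frame granularity against the GPU, reading performance counters or memory APIs instead of a wall clock.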

  1.  

  2. 3

Repeat after me the lesson from the founders of information security: every system, from individual components to their integration, is insecure until proven trustworthy by sufficient analysis. You also need a precise statement of what secure means to compare the system against. You then apply methods proven to work for various classes of problems. By 1992, they had found everything from kernel compromises to cache-based timing channels using such methods. On this topic: every hardware and software component in every centralized or decentralized system has covert channels leaking your secrets. Now, if you’re concerned or want to be famous, there’s something you can do:

    Shared Resource Matrix for Storage Channels (1983)

Wray’s Method for Timing Channels (1991)

Using such methods was mandatory under the first security regulations, the TCSEC. They found a lot of leaks. High-assurance security researchers keep improving on this, with some trying to build automated tools to find leaks in software and hardware. Buzzwords include “covert channels,” “side channels,” “non-interference proof or analysis,” “information flow analysis (or control),” and “information leaks.” There are even programming languages designed to prevent accidental leaks in the app or to do constant-time implementations for stuff like crypto. Here’s an old Pastebin I did with examples of those, too.

    Go forth and apply covert-channel analysis and mitigation on all the FOSS things! :)
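    The first method on that list can be sketched concretely: Kemmerer’s Shared Resource Matrix tabulates which operations can Reference or Modify each shared resource attribute, then flags any attribute that one operation can modify while a different operation can reference, since that pair is a potential storage channel. A toy Python sketch of the idea (the attributes and operation names are invented for illustration, not taken from the 1983 paper):

    ```python
    # attribute -> {operation: set of access kinds, "R"eference / "M"odify}
    matrix = {
        "file_lock_bit": {"lock_file": {"M"}, "open_file": {"R"}},
        "disk_free":     {"write_file": {"M"}, "stat_fs": {"R"}},
        "file_data":     {"write_file": {"M", "R"}},  # mediated by normal policy
    }

    def potential_channels(matrix):
        """Attributes some op modifies and a *different* op references."""
        flagged = []
        for attr, ops in matrix.items():
            modifiers = {op for op, kinds in ops.items() if "M" in kinds}
            readers = {op for op, kinds in ops.items() if "R" in kinds}
            if modifiers and (readers - modifiers):
                flagged.append(attr)
        return flagged

    print(potential_channels(matrix))
    ```

    The real method is a manual (later tool-assisted) audit of a system’s full interface spec; the matrix walk above is just the core bookkeeping.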

    1. 4

      Shared computers are shared. :)

      1. 3

I like that, but it oversimplifies things. My go-to explanation, smaller than most, is Brian Snow’s from We Need Assurance (2005). I think he really nailed it with this excerpt, with just enough info to frame most of the problem:

        “The problem is innately difficult because from the beginning (ENIAC, 1944), due to the high cost of components, computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is SHARING. In the security realm, the one word synopsis is SEPARATION: keeping the bad guys away from the good guys’ stuff!

So today, making a computer secure requires imposing a “separation paradigm” on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels. We really need to focus on making a secure computer, not on making a computer secure – the point of view changes your beginning assumptions and requirements!”

        1. 2

          That is always a good paper.

          1. 1

I still probably quote it in some situations. It’s got a nice “Ahhhh…” moment or face-palm trigger built into it, depending on my audience.

    2. 2

      Some weird reverse-OCR bug turning “tt” and “fi” into “!” …?

      1. 2

PDF to text? They render as ligatures, but copy-paste as “!”.

        1. 1

I don’t know. I’ve seen two letters get turned into one symbol on at least two papers recently. On the other, it was multiple pairs of letters that did that. I fixed them before submitting that time.

          I didn’t think they were OCR’d but I don’t know what that looks like now. I know there’s a lot of use of non-PDF formats or applications in academic circles that get exported to PDF. The exporter (translator) could be the source of the errors.
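          For standard Unicode ligature codepoints the damage is mechanical and reversible; the real trouble is that glyphs like a font-private “tt” ligature have no codepoint at all, so a broken ToUnicode map in the PDF emits whatever character the exporter picked (here, “!”). A minimal Python sketch of the reversible case, using the standard fi-ligature:

          ```python
          import unicodedata

          s = "de\ufb01ne"  # "define" typeset with the Unicode fi-ligature U+FB01
          # NFKC compatibility normalization expands standard ligatures back out.
          fixed = unicodedata.normalize("NFKC", s)
          print(fixed)  # define
          ```

          When the extracted text contains “!” instead of a ligature codepoint, the original letters are simply gone and no normalization can recover them.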