1. 10

An interesting paper that surveys symbolic execution techniques; there is now an accompanying set of slides that may be interesting for a re-submission as well.

The article talks about what symbolic execution is, techniques for symbolic execution, handling path selection & explosion, &c. It’s an area of interest for me, as well as something that my company actively works on.
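To give a flavour of the path-explosion problem the survey spends time on, here is a minimal toy sketch of my own (Python, not code from the paper): every independent branch doubles the number of path conditions a symbolic executor has to build and solve.

```python
# Toy illustration of path explosion: each independent branch doubles the
# number of feasible paths, and a naive symbolic executor must construct
# (and hand to a constraint solver) one path condition per path.
from itertools import product

def enumerate_paths(num_branches):
    """Model branch i by the symbolic condition x[i] > 0 and enumerate
    every true/false combination of branch outcomes."""
    paths = []
    for outcomes in product([True, False], repeat=num_branches):
        path_condition = [
            f"x[{i}] > 0" if taken else f"x[{i}] <= 0"
            for i, taken in enumerate(outcomes)
        ]
        paths.append(path_condition)
    return paths

for n in (1, 2, 3, 10):
    print(f"{n} branches -> {len(enumerate_paths(n))} paths")
# 1 -> 2, 2 -> 4, 3 -> 8, 10 -> 1024: this exponential growth is what the
# path-selection heuristics surveyed in the paper try to keep manageable.
```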

  2. 2

    Always nice to see this sort of comprehensive overview, since I always feel like the space is a bit overwhelming and it’s hard to know where to start learning.

    One question, not necessarily related to the material at hand, but something that stuck out to me:

    Soundness prevents false negatives, i.e., all possible unsafe inputs are guaranteed to be found, while completeness prevents false positives, i.e., input values deemed unsafe are actually unsafe.

    Did anyone else learn these definitions as switched from the above? In my education (and in informal usage of the terms), “sound” meant “if you’re given an answer, it is actually valid”, whereas “complete” meant “if it’s valid, it’s guaranteed to be given as an answer” (e.g. certain logic programming languages might be sound but not complete), which is the opposite. Do different sub-disciplines use these terms the other way around from how I learned them? (Or did I learn it incorrectly?)

    1. 2

      Sorry for the late reply, this week has been trying!

      Wikipedia says it better than I will:

      In mathematical logic, a logical system has the soundness property if and only if every formula that can be proved in the system is logically valid with respect to the semantics of the system.

      “Complete” is a bit more complex, but basically a system is complete when you can use it to derive every logically valid formula within it (so, roughly: soundness is “provable implies valid”, completeness is “valid implies provable”). There are slight differences based on which notion of completeness you’re discussing, such as completeness of formal languages vs completeness of logical semantics.

      I don’t think you learnt it incorrectly, you probably just focused on the area you were learning. Wrt the section you quoted, the difference there is that you can either detect all possibly unsafe inputs (soundness: every input that could be unsafe is reported, so nothing is missed, even if some reports turn out to be fine) OR you can ensure that everything found is actually unsafe (completeness: every report genuinely witnesses “un-safety” with respect to the program’s semantics, even if some unsafe inputs are missed).
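      To make that concrete, here is a toy example I just made up (Python, nothing from the paper): the only genuinely unsafe input to the function f below is x == 5.

      ```python
      # Toy example: the only unsafe input to f is x == 5 (division by zero).
      def f(x):
          return 10 // (x - 5)

      def sound_checker():
          """Over-approximates: guaranteed to report every unsafe input
          (no false negatives), but may also flag safe ones (false positives)."""
          return {4, 5, 6}  # catches the real bug at 5, plus two false alarms

      def complete_checker(tested_inputs):
          """Under-approximates: only reports inputs it has concretely seen crash
          (no false positives), but can miss unsafe inputs it never tried."""
          unsafe = set()
          for x in tested_inputs:
              try:
                  f(x)
              except ZeroDivisionError:
                  unsafe.add(x)
          return unsafe

      print(sound_checker())              # {4, 5, 6} -> sound, not complete
      print(complete_checker({1, 2, 3}))  # set()     -> complete, but missed x == 5
      ```

      In the paper’s terms the first checker is sound (it never misses an unsafe input) and the second is complete (everything it reports really is unsafe), which is the usual over- vs under-approximation trade-off.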

      Does that make more sense? It’s quite early here and I’m still ingesting caffeine, so I apologize if not…
