1. 22
    1. 3

      The article hypothesizes:

      1. Compilers will achieve great performance
      2. Compilers can be made to stop breaking the constant-time properties of the code

      These two seem at odds with each other. Even the breakage example in the article is caused by an optimization pass.

      1. 4

        The author mentions one pass that breaks the constant-time properties: the “x86-cmov-conversion.” He proposes extending the compiler to allow programmers to selectively disable this pass on certain variables.

        Now, all of this depends on a sufficiently smart compiler™, but it seems to me that this would have no performance impact: the pass would only be disabled for variables marked as secret, and code handling those variables couldn't safely benefit from it anyway.
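
        For example, here's roughly the shape of code at stake (select_secret and select_masked are made-up names, not from the article):

        ```c
        #include <stdint.h>

        /* Intended to be constant time: pick one of two values based on a
         * secret bit. On x86 the compiler typically lowers this to a CMOV,
         * which has no secret-dependent control flow... */
        static uint32_t select_secret(uint32_t secret_bit, uint32_t a, uint32_t b)
        {
            return secret_bit ? a : b;
        }

        /* ...but the x86-cmov-conversion pass may decide a predicted branch
         * is faster and rewrite the CMOV into a conditional jump, making the
         * timing depend on secret_bit. The usual source-level workaround is
         * an explicit mask, though the optimizer can canonicalize this back
         * into a select as well: */
        static uint32_t select_masked(uint32_t secret_bit, uint32_t a, uint32_t b)
        {
            uint32_t mask = (uint32_t)0 - (secret_bit & 1);  /* 0x0 or 0xFFFFFFFF */
            return (a & mask) | (b & ~mask);
        }
        ```

        The proposed opt-out would just tell the backend to leave the CMOV alone for values derived from a secret, which is why it shouldn't cost anything for the rest of the code.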

        1. 2

          Of course, if a human can write fast constant-time code, then a sufficiently smart compiler can too. But in practice it's very hard: the stricter the semantics, the harder it is to optimize. Compilers split optimization into passes because that keeps the complexity manageable, but it makes individual passes less smart about the global view and the higher-level semantics of the code.

          Passes are designed to be independent for simplicity, yet they interact to achieve better results together. For example, common subexpression elimination can leave unused code behind and rely on dead store elimination to clean it up. But dead store elimination, running later, has no way to know whether it's removing leftovers of folded subexpressions or an attempt to clear secrets from memory. You could try to add metadata to track these things across passes, but preserving that metadata through every transformation makes things harder (e.g. LLVM already struggles with preserving aliasing info, which prevents Rust from optimizing better).
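
          To make that concrete, a minimal sketch of the classic case (use_key is a made-up name):

          ```c
          #include <stddef.h>
          #include <string.h>

          /* Illustrative only: mix a secret into a message, then try to wipe it. */
          unsigned char use_key(const unsigned char *secret,
                                const unsigned char *msg, size_t msglen)
          {
              unsigned char key[32];
              unsigned char acc = 0;
              size_t i;

              memcpy(key, secret, sizeof key);
              for (i = 0; i < msglen; i++)
                  acc ^= (unsigned char)(msg[i] ^ key[i % sizeof key]);

              /* To dead store elimination this is just a store to memory that
               * is never read again before `key` goes out of scope, so it may
               * legally be deleted; the pass can't tell a secret wipe apart
               * from leftovers of an earlier optimization. explicit_bzero and
               * C11's memset_s exist to carry the "really erase this" intent
               * that a plain memset can't express. */
              memset(key, 0, sizeof key);
              return acc;
          }
          ```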