1. 23

  2. 10

    This is great. Simultaneously answering the questions “how do I get started with Coq?” and “why hasn’t anybody verified my browser’s box model with Coq?” :)

    Formal methods are great, as far as they go, and I’m happy they seem to be getting more attention. The past year or two has seen a resurgence of interest in software correctness, with much of that interest focused on proofs. Alas, too many people with no experience seem to expect other people to do all the proving.

    I suppose it’s cyclical. Some years ago the answer to every security problem was fine-grained ACLs. And when that didn’t work, the answer was finer-grained ACLs! “We can use this tool to solve all our problems, we just need to make it a little harder to use.”

    1. 3

      I have this idea that Big Data could be a great inroad for formal methods. The reasoning: the scale of Big Data limits the operations you can perform on it (especially if you’re doing some kind of stream processing), and that same scale means there is no way to know whether you’ve made a mistake beyond some statistical checks. I have no idea how this would actually go, but it seems like a possible window.

      1. 1

        Formal methods are great, but they can only model effects that you have, well, formalized. My sense is that most big data work happens where you’re not quite sure what you know or what you’re doing, never mind having a model rigorous enough to implement on a computer.

        I’d expect to see formal methods succeed in the same places the more extreme end of functional languages succeeds - which AFAIK is mostly finance?

        1. 1

          In most Big Data frameworks, however, the operations you can perform that still make sense are extremely limited. You can’t just do anything (although people certainly try).

          1. 1

            Maybe, but not in a way that’s formally modelled - otherwise the type system would rule out more of the operations that don’t make sense. Indeed, the biggest reason I think formal methods won’t work here is that people are able to try things that simply wouldn’t compile in non-big-data systems. For example, when writing a Hadoop mapper or reducer, it’s not enough to understand Java and implement the interfaces in ways that make sense - you also have to understand the internals of Hadoop and precisely how it will be called. That suggests to me that there isn’t any formal notion of how a Hadoop mapper is supposed to work.

            1. 1

              > you also have to understand the internals of Hadoop and precisely how it will be called.

              IMO you are confusing the fact that Hadoop is a terrible piece of software with what Big Data can be. Many other Big Data systems try to be more semantics-driven. Look at Summingbird, for example, which bases its entire model around monoids (or did, at least). The biggest problem is mindset: people tend to just want to explore big data without any principles. But with enough well-defined operations, it could be possible to give them the tools for that.

              Regardless, as far as I have seen, handling big data at scale and getting meaningful results out is really quite restrictive, and that restrictiveness is an opportunity to try something new and outrageous, because the scope would be so much smaller.
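
              A minimal sketch of what that could look like, assuming you formalize the contract in Coq (my own illustration, not Summingbird’s actual API): if the only aggregation a framework exposes is a monoid, the laws that make it safe to split work across machines and recombine partial results are exactly the kind of thing a proof assistant can state and check.

              ```coq
              (* Hypothetical formalization: a monoid packages an operation
                 with the laws that let a framework regroup and recombine
                 partial aggregates freely. *)
              Require Import Arith.

              Record Monoid (A : Type) := {
                mempty  : A;
                mappend : A -> A -> A;
                mappend_assoc : forall x y z : A,
                  mappend x (mappend y z) = mappend (mappend x y) z;
                mempty_left   : forall x : A, mappend mempty x = x;
                mempty_right  : forall x : A, mappend x mempty = x
              }.

              (* Counting events is one such aggregation: nat with + and 0.
                 The laws are discharged by standard library lemmas. *)
              Definition nat_sum_monoid : Monoid nat := {|
                mempty := 0;
                mappend := Nat.add;
                mappend_assoc := Nat.add_assoc;
                mempty_left := Nat.add_0_l;
                mempty_right := Nat.add_0_r
              |}.
              ```

              With something like this, a fold over a partitioned dataset can be regrouped in any order, and that guarantee is proved rather than assumed.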

    2. 5

      It sure would be nice to see a #formalmethods tag to add to this.

      1. 2

        randomhacks.net has been around since 1998! Wow, you don’t see a lot of blogs with that kind of age!

        1. 1

          How does Coq know that H_A_True has type A and H_B_True has type B? Convention on names?

          1. 1

            No, the types come from the goal; the names are arbitrary.

            That is, if the proof script had “intros A B H_B_True H_A_True”, then H_A_True would have type B and H_B_True would have type A.
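
            A tiny made-up example (not the theorem from the original post) makes this concrete: the types are assigned purely by position in the goal, and the names are whatever you wrote.

            ```coq
            Theorem and_swap : forall A B : Prop, A -> B -> B /\ A.
            Proof.
              (* The goal supplies the types in order: Prop, Prop, A, B.
                 The hypothesis names below are free choices. *)
              intros A B H_A_True H_B_True.
              (* Here H_A_True : A and H_B_True : B. With
                 "intros A B H_B_True H_A_True" instead, H_B_True would
                 have type A and H_A_True would have type B. *)
              split.
              - exact H_B_True.
              - exact H_A_True.
            Qed.
            ```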