The Quint README on GitHub is more informative than the landing page.
That’s (sadly) often the case
It looks very cool indeed.
Here’s what I found about the licensing / sustainability / who pays for this, since that’s not summarized in one place. Summary: It looks fine, even after noting the company is blockchain-adjacent.
Developed by Informal Systems. Code is Apache-licensed. No trace of an ‘open core’ or tiered pricing model. AFAICT this is a tool Informal Systems themselves use in their consulting work, and hopefully it also brings them buzz and business.
Informal Systems earn their money consulting for various blockchain companies. But: a big part of the work they do is audits and formal verification work, and that skill should transfer to other sectors if and when blockchain technology deflates. And they don’t seem speculative or rent-seeking themselves.
Yep, dev here: everything is open source; you download it and run it locally on your machine (it will fetch Apalache from GitHub, but that’s all through a public repository and you can do it yourself if needed).
Regarding Informal Systems: we are indeed using this for our auditing services, especially when complicated protocols are involved. We also use it in regular engineering-level services, like in 1, where we have to handle BFT consensus protocols, which is a really interesting distributed systems problem and not actually blockchain-specific.
And although we don’t have a significant user base outside of Informal Systems yet, it is slowly growing. See the blog post in 2 from a former employee who is still using Quint after leaving.
Yeah the README mentions “blockchain” as the very first example application, yikes 🚩. Thanks for providing some more context.
Not sure why crypto is a red flag? It’s funding a lot of formal methods work and has generally advanced the field.
Oh, this looks like exactly what I’ve been looking for the past couple of years. Though, what gives me pause is that perusing the quick start and FAQ doesn’t answer the key question: what are they actually doing with the specs? Is it just some kind of randomized fuzzing? Does it try to exhaustively enumerate the reachable states to get some measure of coverage (or guaranteed full coverage)? Or are there symbolic proofs?
Edit: fanf’s comment answers my question!
My understanding of the flow for model checking is:

Quint -> Apalache IR (TLA+ model checker) -> SMT solver (Z3)

And for the simulation:

Quint -> Internal simulator

But I think I am wrong somewhere… (trying to compare this to the TLA toolbox: https://lamport.azurewebsites.net/tla/tools.html). I’d appreciate it if somebody could clarify.
You got it right! Dev here.
quint verify -> sends the spec to Apalache and uses Z3 to verify that a property holds in all possible executions of up to N steps (N=10 by default)

quint run -> internally runs 10000 (default) random executions of up to N steps and checks whether the property holds in all states.

The TLA toolbox uses TLC (another model checker), which also has two similar options. But TLC doesn’t use Z3 like Apalache; it does state enumeration.
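A toy sketch of what the simulator side does conceptually (a hand-written illustration only, not Quint’s actual implementation): run many random executions of a small transition system and check an invariant in every state along the way.

```python
import random

# Toy model: an account balance that should never go negative.
def init():
    return 0

def step(balance, rng):
    # Deliberate bug: withdraw does not check available funds.
    if rng.random() < 0.5:
        return balance + rng.randint(1, 10)   # deposit
    return balance - rng.randint(1, 10)       # withdraw

def invariant(balance):
    return balance >= 0

def simulate(runs=10_000, max_steps=10, seed=0):
    """Random simulation: check the invariant in every state of every run."""
    rng = random.Random(seed)
    for _ in range(runs):
        state = init()
        for _ in range(max_steps):
            if not invariant(state):
                return state  # counterexample found
            state = step(state, rng)
        if not invariant(state):
            return state
    return None  # no violation observed (NOT a proof of correctness)

violation = simulate()
```

Note the asymmetry the dev describes: a random simulator can only ever report “no violation observed”, while the bounded verifier covers all executions up to N steps.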
Now, if I could generate real-world business logic Go code from this, I would pay serious cash.
I think this is always going to be an anti-goal. Specs are most useful when they’re extremely high-level and abstract. Too much information is lost at that level for it to be reconstituted at the implementation level.
For me, I wouldn’t need any of the implementation details, just auto-generated stubs that put everything in the right place and the right order. Just have it call empty functions that I can fill in with my own code, but call them in the right order, at the right time, etc.
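As far as I know nothing like this exists in Quint today, but the scaffolding idea above could look roughly like this hand-written sketch (all names here, ConsensusStubs/propose/vote/commit, are made up for illustration): the generated part fixes the call order and timing, and the user fills in the empty hooks.

```python
# Hypothetical scaffolding: the spec would contribute only the control
# flow; each hook is an empty stub for the user to implement.

class ConsensusStubs:
    """Fill these in with real logic; the driver fixes the call order."""
    def propose(self, round_no): pass
    def vote(self, round_no):    pass
    def commit(self, round_no):  pass

def run_round(stubs, round_no, trace):
    # The ordering below is the part a spec could generate for you:
    # propose, then vote, then commit -- never in any other order.
    trace.append(("propose", round_no)); stubs.propose(round_no)
    trace.append(("vote", round_no));    stubs.vote(round_no)
    trace.append(("commit", round_no));  stubs.commit(round_no)

trace = []
run_round(ConsensusStubs(), 1, trace)
```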
This has been an idea in my head for quite a while. The benefits of modeling and specification are clear, but I think executability as a constraint is a huge win. In practice, you often add bounds to any model so that you can reasonably check it anyway. But executability means that you can start to use the model in a tactile way.
From their github: “enabling tests, trace generation, and exploration of your system”
This exploration is the key for me. An ergonomic toolchain for doing this is definitely something that could get people to actually use such “lightweight” formal methods.
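For contrast with random simulation, bounded exhaustive exploration of a small model can be sketched as a breadth-first search over reachable states (again a toy illustration; Apalache itself works symbolically via SMT rather than enumerating states like this):

```python
from collections import deque

# Toy transition system: a counter modulo 5 that can increment or reset.
def initial_states():
    return [0]

def next_states(s):
    return [(s + 1) % 5, 0]

def explore(max_steps=10):
    """Bounded breadth-first exploration: enumerate every state reachable
    within max_steps transitions (feasible only for small/finite models)."""
    seen = set(initial_states())
    frontier = deque((s, 0) for s in initial_states())
    while frontier:
        state, depth = frontier.popleft()
        if depth == max_steps:
            continue  # depth bound reached; stop expanding this branch
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

reachable = explore()
```

Unlike random simulation, this gives full coverage within the bound, which is exactly why bounds are added to make models checkable at all.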
Hmm, the getting started example finds no violation. I tried both with the verifier and the simulator.
Are you providing the invariant parameter as instructed in the getting started guide?
Yes. Interestingly, it shows some states, and lots of them have negatives, so it seems it is ignoring the invariant?
I honestly have no idea what might be going wrong. I’d appreciate it a lot if you could share more details (i.e. a copy of the entire output), either here or with an issue: https://github.com/informalsystems/quint
Will do later when I’m on my computer again :)
I know now what happens, and it relates to how npm exec seems to take parameters: npm consumes the --invariant parameter and never passes it to the quint command.

Thank you so much for reporting! I’ve replied on GitHub :)
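For anyone who runs into the same thing: npm stops parsing its own options at a -- separator, so flags after it are forwarded to the wrapped command untouched (sketch only; the invariant name and spec file below are placeholders, not from the thread):

```shell
# npm may swallow --invariant here:
#   npm exec quint run --invariant=myInvariant spec.qnt
# Everything after -- is passed through to quint as-is
# (myInvariant and spec.qnt are placeholder names):
npm exec -- quint run --invariant=myInvariant spec.qnt
```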