1. 3

    @redjamjar,

    You may not be aware that on lobste.rs, tags act as filters. When you add as many tags to a story as you’ve done here, you reduce the number of people who will see it rather than increase it. If an account has elected to filter any of the tags you’ve selected, they won’t see the story. It doesn’t show up on my own recent page, for instance.

    1. 2

      Hey,

      Ok, thanks — that’s helpful. Can I adjust that now?

      1. 2

        Yeah, so I can. I went with compsci, though I’m not sure what’s best. Perhaps just programming actually makes most sense!

    1. 2

      The Bernstein paper is a real classic — a must read, IMHO!!

      1. 2

        As many here have said, “formal methods” could be a lot of different things. People will have different motivations, too. Some may want to master something difficult or esoteric for its own sake, or for reasons of ego and status. Others are genuinely in search of practical ways to make more trustworthy software, but there are lots of possibilities there too. Are they designing a system, or writing a spec? How much control do they have over the implementation? The most difficult verification projects I can think of involve proving that a pre-existing implementation (written for performance, probably in C) obeys a frozen, pre-existing spec (hopefully in some formal language!). As far as I know, even FM experts can only do this for very small, well-defined systems.

        On the other hand, merely choosing to build your software in a language that offers some safety guarantees can avoid some headaches altogether. Gaining proficiency and fluency in these languages necessarily involves learning to think mathematically, with both precision and discretion. These habits of mind have not been fully appreciated by most practitioners in industry, nor are they given much incentive there. (Academic computer science, where logical rigor can become an end in itself, has its own perverse incentives, which may help explain why its artifacts are so often inaccessible.)

        “Almost anything in software can be implemented, sold, and even used given enough determination. There is nothing a mere scientist can say that will stand against the flood of a hundred million dollars. But there is one quality that cannot be purchased in this way — and that is reliability. The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.” – Sir Tony Hoare

        Managing complexity is an essential skill, and difficult in that it involves a discipline uncommon in our modern resource-rich computing culture, where orders of magnitude worth of code bloat have been progressively normalized for generations. Worse, the people making the decisions about which new features to implement rarely have much insight into, or responsibility for, the cumulative cost of these features. I think that engineers investigating formal methods should first try to attain reasonable expectations for these techniques.

        You’re absolutely right that there’s a continuum from easy to difficult, with different payoff curves for different kinds of projects. The perfect is the enemy of the good, and effective engineering solutions shouldn’t be conflated with mathematically elegant proofs. If you have control over both the program spec and its implementation, there are plenty of gradual approaches possible. If your project doesn’t have any tests, write some. If unit tests can’t cover enough, try property testing. From there it’s not a big leap to design by contract: check your assertions at runtime, not just testing time. If you decide that runtime exceptions are not acceptable, you’ll be more ready for something like Daphny, where an SMT solver runs against your code as it co-evolves with its spec. Having come that far, Coq and similar “industrial strength” verification systems won’t seem so intimidating any more. But even if you don’t get that far (and most projects really shouldn’t bother), your program will be somewhat safer, and you’ll understand it somewhat better.
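        To make the runtime-assertion step concrete, here’s a minimal sketch in Python (the function and numbers are my own, purely illustrative): the same property serves as a unit assertion, a design-by-contract postcondition, and a property test.

```python
import random

def max_of(a: int, b: int) -> int:
    r = a if a >= b else b
    # DbC-style postconditions, checked at runtime rather than only at test time:
    assert r >= a and r >= b   # result bounds both inputs
    assert r == a or r == b    # result is one of the inputs
    return r

# Property testing: random inputs exercise the same contract that
# unit tests and runtime checks rely on.
for _ in range(1000):
    x, y = random.randint(-100, 100), random.randint(-100, 100)
    max_of(x, y)  # any contract violation raises AssertionError
```

        The point is that the assertions cost almost nothing to write once, and each later step (property testing, SMT-backed verification) reuses the same properties.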

        1. 2

          If you decide that runtime exceptions are not acceptable, you’ll be more ready for something like Daphny, where an SMT solver runs against your code as it co-evolves with its spec

          Yeah, Dafny is a great example here (https://github.com/Microsoft/dafny) and also Whiley (http://whiley.org). Check out this demo, which shows formal methods don’t have to be that hard:

          https://www.youtube.com/watch?v=dGsD_K-IAgA&t=693s

          1. 2

            Thanks for the spelling correction, and the Whiley link! Looks quite interesting.

            1. 1

              I was about to ask why you push Whiley in this thread so much when we have SPARK and Dafny in quite a usable state. SPARK has already been applied in IRONSIDES DNS and industry. MS Research is doing some amazing stuff on top of or using Dafny. Too much to write up. Then, I see your profile.

              Well, pleasure to have you here. Also, thanks for contributing effort to high-assurance software esp in a language people might use for a productivity boost on practical stuff down the line. I think I still want to ask you, albeit nicely, in terms non-verification-experts can understand, “What advantages or disadvantages does Whiley have compared to the other two?” It’s worth asking given the small ecosystems of both verified code and tooling to support it building up around those languages. Those make SPARK and Dafny my default recommendations in this space if it’s about using stuff for practical reasons. There’s still people just enjoying a language for various reasons or contributing to one that’s different than others. Maybe even practical advantages if we’re lucky. :)

               EDIT: Sorry, I forgot that yours was object/functional, not imperative. I warn that I got serious memory problems so I slip on that. Still want to hear your selling points on current or future state, though.

              1. 2

                What advantages or disadvantages does Whiley have compared to the other two?

                Ah, this is the question I get asked all the time!! It’s a fair question. There are different answers:

                • Compared with SPARK, I think there are a lot of differences. Compile-Time Verification in SPARK is still relatively limited in what can be done. My goal is to go way beyond this. Of course, SPARK has a largish community and has been used in lots of real-world industrial projects. It is hard to compete with that.

                • Compared with Dafny, well actually they are quite comparable in many ways. Whiley and Dafny actually started around the same time (roughly 2009). When I started, I simply didn’t know about it! There are some differences in the syntax which I personally believe are significant (but I am biased here). The main thing though is that Dafny builds on Z3, whereas Whiley builds on its own theorem prover. That’s good and bad, but it just provides another point in the space. It’s easy to work with Whiley and extend it as you want. At the moment, the theorem prover in Whiley is not as good as Z3, though I am catching up pretty fast now.

                • Ultimately, the real answer to your question: my goal in building Whiley was just to learn everything about building such a system. That’s why it’s a start-from-scratch project and also why I built the theorem prover as well.

                I warn that I got serious memory problems so I slip on that.

                Well, it has imperative syntax and you shouldn’t underestimate the benefit this gives (with respect to memory). For example, when compiling arrays to Java they go to Java arrays. The only question then is how often you need to clone these arrays (to achieve functional semantics) and, in many cases, you hardly ever have to. But, I do still have more work to do on efficient backends …
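                For what it’s worth, the clone-elision point can be sketched outside Whiley too. This is an illustrative Python sketch, not the actual Whiley backend: value semantics require a copy on update, but when the compiler can prove the original is never read again, the copy can be elided.

```python
def update_cloned(xs: list, i: int, v: int) -> list:
    ys = xs[:]     # clone to preserve value (functional) semantics
    ys[i] = v
    return ys

def update_in_place(xs: list, i: int, v: int) -> list:
    xs[i] = v      # safe only when the original is provably never used again
    return xs
```

                The interesting compiler work is deciding when the second form is safe, so the common case pays no cloning cost.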

                1. 2

                  In SPARK and Frama-C, they use Why3 to leverage ongoing SAT and SMT solver advancements, which deliver boosts over time. Then, the number of projects building on Coq, Isabelle/HOL, and HOL4 means they’re advantageous for how your efforts might get easier over time. Smart frontends or heuristics might be built for them to better suit your language.

                  So, why did you decide to write your own prover? I’m curious since trying to exceed SPARK in industry or Dafny in MS-funded R&D is already pretty challenging.

                  1. 2

                    So, why did you decide to write your own prover?

                    Well, just because I wanted to know how they work. I don’t want my system to rely on a black box that I don’t understand and/or cannot fix/update, etc. And I can provide the best experience possible by having a tight integration between my language and my prover.

                    I’m curious since trying to exceed SPARK in industry or Dafny in MS-funded R&D is already pretty challenging.

                    I think you might be surprised. MS is not investing heavily in either Dafny or Z3 any more, though they are still being developed. Building a theorem prover is not as hard as you might think, despite what everyone says. Especially in the context of a language like Whiley / Dafny where the generated verification conditions are pretty small. Something like Z3 can handle boolean formulas with 100K+ variables, but you’ll never see something like that when verifying Whiley/Dafny programs. For SPARK, well, I know someone who was on the team for the SPARK 2014 update. There’s no rocket science going on here. It’s not magic. They have some people like me working on it 9-5. But, the mythical man month always applies and they have other concerns to worry about as well.

                    1. 1

                      “Well, just because I wanted to know how they work.”

                      Makes sense. I feel you on the black box thing as ground-up, trustworthy systems is one of my research areas. I thought about trying to learn the stuff myself at one point. People told me the book The Little Prover, by the Little Schemer people, was nice. I’m trying to avoid the heavy texts. I also thought studying and trying to extend Davis’ Milawa prover might be fruitful given what it already accomplished. Maybe put a Prolog in it that ties into the prior ones that were partially verified. Then, type systems, verification generators, or some other things might be prototyped in something that verifies them.

                      Note: You might wonder at some point why I talk about more angles than I could possibly be building on. I’ll say upfront that I mostly work on a high level so I can run through as many concepts as possible to filter down to good recommendations for specialists that do build things. Also, integrate things in fields that are often siloed. I’m basically like an applied-science version of meta-studies who tries to facilitate serendipity in R&D among hands-on researchers. I don’t build a lot myself. Of sub-fields, I’m most ignorant in hands-on formal verification and hardware development but still get and give useful ideas here and there talking to specialists. :) I can send you a few of my summary write-ups that came from that process if you’re curious. Back to the conversation.

                      “I think you might be surprised. MS is not investing heavily in either Dafny or Z3 any more, though they are still being developed. “

                      WHAT!? That’s crazy. MS Research seemed to be in the lead developing so much jaw-dropping stuff on every part of the stack, shallow vs deep verification, and even some stuff easier to learn (eg Dafny). I expected MS to piss on their own good work but not MSR. They seemed wiser…

                      “There’s no rocket science going on here. It’s not magic. They have some people like me working on it 9-5.”

                      It’s good to know. You’ll probably be fine on existing capabilities of something like SPARK over time in that case. There are still apparently challenges handling algorithms with lots of pointers or dynamic allocation. The examples I’ve seen take separation logic at the least. Rust’s borrow-checker cheats, turning it into basically a heuristic exercise of figuring out what it will accept. I told Yannick McCoy they need to get that into Ada or SPARK pronto to remain reigning champ in safe, systems languages. I wonder what the odds are of getting the equivalent of a borrow-checker into a useful subset of C. You using a language with a GC might let you dodge that one but it’s a huge TCB. You might want to consider compiling with a borrow-checker to Clight or something at some point. Your current model might at least get significant impact in mission-critical Java, a market SPARK doesn’t serve.

                      1. 1

                        I can send you a few of my summary write-ups that came from that process if you’re curious

                        Sure, interested.

                        MS Research seemed to be in the lead developing so much jaw-dropping stuff on every part of the stack

                        Well, Spec# didn’t make it into production. I guess they don’t make much money off Z3, though they must make some. They definitely don’t make any off Dafny, and the lead guy has moved to Amazon (I think it is). Maybe they’ll continue development.

                        You might want to consider compiling with a borrow-checker to Clight or something at some point

                        Rust is definitely interesting to me. See:

                        http://whiley.org/2016/05/28/reference-lifetimes-in-whiley/
                        http://homepages.ecs.vuw.ac.nz/~djp/files/MSCThesisSebastian.pdf

                        1. 1

                          Sure, interested.

                          I can share some of my experiences with TLA+ and Alloy, if that interests you too. Throw me a message!

            2. 2

              That’s a great quote by Hoare. I also like your incremental approach. I’ve been leaning in that direction, advocating a default of DbC w/ property-based and random testing. They can always selectively do more where they want to. That’s the thing I liked about Cleanroom, where all the structure was the same but they encouraged people to be reasonable about how each part was verified. Probably should include a reminder about this in the next version of this article, as some will interpret the push as an all-or-nothing method, like how some languages do OOP or functional programming.

            1. 2

              Great advice! Thinking about your three points made me think of the following, I hope you find it useful.

              In your comment you linked, I think the list of all the projects is extremely useful to see what’s going on. Related to your point (1): each of these projects is impressive because they built something within a theorem prover, e.g., no one would be impressed if someone wrote a C compiler in C++, but verifying non-trivial theorems about a C compiler is impressive.

              Perhaps a better way to look at the problem of “how to get into formal methods?” is to look at it as “what do you want to get from your efforts?”.

              This, I believe, expands on your point (2): if you want to reduce the number of bugs in your code, perhaps just fuzzing is the way to go. This probably has the highest return on investment for existing projects. I know this is probably no longer “formal” methods, but I would consider fuzzing at least a distant cousin. Extending beyond this are more heavyweight techniques like bounded model checking, abstract interpretation, and concolic execution: these again are not full-blown verification but provide some formal guarantees. These are maybe stepping stones before the lightweight stuff mentioned in your comment, or perhaps are around the same area.
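              To illustrate the fuzzing point, here’s a toy sketch in Python (the parser and its bug are invented for illustration): random inputs against a lightweight oracle, no full spec required.

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    # Toy parser with a deliberate bug: it trusts the length byte and
    # silently truncates instead of rejecting short input.
    if not data:
        return b""
    n = data[0]
    return data[1:1 + n]

def fuzz(target, runs=1000):
    failures = []
    for _ in range(runs):
        blob = bytes(random.randint(0, 255) for _ in range(random.randint(0, 8)))
        try:
            out = target(blob)
            # Lightweight oracle: claimed length must match actual length.
            if blob and len(out) != blob[0]:
                failures.append(blob)
        except Exception:
            failures.append(blob)
    return failures
```

              Even with no spec beyond “must not crash, and the length byte must be honored”, random inputs find the truncation bug almost immediately.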

              If you want to learn how to encode proofs into your type system, then maybe the first few chapters of Software Foundations or some other theorem prover textbook would be great: you won’t build anything “real”, but perhaps will have some extra appreciation for types. This relates to your point (1).

              Related also to (1) is the difficulty in formalizing “correctness” of a program: often writing the specification for a program is more difficult than writing the program itself. A contrived example is specifying the correctness of int max(int a, int b). Especially for a “real engineering” problem where the problem itself is not very well specified, coming up with a notion of correctness is in and of itself difficult. Stuff like using the runtime sanitizers (ASAN, UBSAN, MSAN) makes encoding some notion of correctness into your existing tests easy.

              Anyway, I feel this comment was a bit rambling, but perhaps the ideas will be useful to you: I agree that equating formal methods with dependent types is maybe too far, and wanted to show some of the other techniques for reducing the number of bugs, which sometimes are not officially included in “formal methods”. (As you can maybe tell, my own interests are in fuzzing/bounded model checking/symbolic execution/abstract interpretation instead of dependent types ;)

              1. 2

                To illustrate then, here’s the max function written in Whiley:

                function max(int a, int b) -> (int r)
                ensures (r >= a) && (r >= b)
                ensures (r == a) || (r == b):
                    //
                    if a >= b:
                        return a
                    else:
                        return b
                

                (try it out for yourself at http://whileylabs.com)

                But, yes, in this case the specification is as long as the body itself. Here’s another example: the well-known indexOf function:

                function indexOf(int[] items, int item) -> (int r)
                // If valid index returned, element matches item
                ensures r >= 0 ==> items[r] == item
                // If invalid index return, no element matches item
                ensures r <  0 ==> all { i in 0..|items| | items[i] != item }
                // Return value is between -1 and size of items
                ensures r >= -1 && r < |items|:
                    //
                    int i = 0
                    while i < |items|
                        where i >= 0
                        where all { k in 0 .. i | items[k] != item }:
                        //    
                        if items[i] == item:
                            return i
                        i = i + 1
                    //
                    return -1
                

                Again, overall, probably as much spec stuff as code. But, in the end, is it such a problem? Think about the lines of unit test code you would write to test this function. Likewise, the lines of documentation you would write for it anyway.

                1. 2

                  Thanks for the thorough response with the examples.

                  In retrospect the max example was probably not the best for the point I was trying to make: I didn’t mean to say that “writing the spec is too hard”. I definitely agree, especially with comments+tests, that it’s preferable to write the spec and prove it correct rather than write tests.

                  But, what about cases where it’s hard to determine what correct even means? For example, what if you wanted to write a spec that captures an arbitrary webpage loading? You can’t look at stuff like exceptions being thrown and not caught since so many webpages do this without being “incorrect.” This is probably a moot point when talking about verifying stuff since it implies you need to know the specification first: so, what I was trying to say is that often actually coming up with the spec is hard, let alone proving it’s correct, thus making the problem even more difficult.

                  1. 2

                    what if you wanted to write a spec that captures an arbitrary webpage loading?

                    Right, I see. Yeah, so this is really about whether or not we always try to make our specifications “complete” in some sense. The key to remember here is that we can still get value from it without specifying absolutely everything. In your example, it could be too hard to specify that. And, that’s fine! Or, we could try to capture it with e.g. a boolean flag to indicate when it’s loaded or not. Then we can specify things over this flag (e.g. cannot call this method unless flag is true, etc). That’s a kind of proxy approach.
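                    A rough sketch of that proxy approach in Python (the class and method names are made up for illustration): the flag stands in for the full page-load semantics, and preconditions are specified over the flag alone.

```python
class Page:
    def __init__(self) -> None:
        self.loaded = False

    def load(self) -> None:
        # ...fetching and rendering elided...
        self.loaded = True

    def query(self, selector: str) -> str:
        # Precondition over the proxy flag: caller must load first.
        assert self.loaded, "precondition violated: page not loaded"
        return f"<element matching {selector}>"
```

                    Nothing here says what “loaded” really means, and that’s the point: the partial spec still rules out a whole class of call-ordering bugs.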

                    Other things like e.g. “this page must load within 200ms” are also pretty hard to specify. So, it’s not a silver bullet … but it can still be useful (much like types are).

                    1. 2

                      Sorry, just to add to that. My pet peeve (which I think is relevant to the original post) is that formal methodists do often seem to get carried away with always trying to specify everything. I personally think we need to move away from that mindset to a more middle ground.

                      1. 3

                        I’m gonna start calling myself a formal methodist from now on.

                        Also, I completely agree. This is my favorite “don’t get carried away” piece on verification.

                        1. 1

                          I’m gonna start calling myself a formal methodist from now on

                          You’ll like this quote then:

                          it is clear to “Formal Methodists” like ourselves, for some of whom formal methods can be something of a “religion”, that introducing greater rigor into software development will improve the software development process, and result in software that is better structured, more maintainable, and with fewer errors

                          https://pdfs.semanticscholar.org/60d8/aeb3f1c6a90b05ade53b638350463ae6d357.pdf

                          1. 3

                            Oh wait, that’s good for a separate reason.

                            https://www.microsoft.com/en-us/research/publication/tlz-abstract/

                            I told Ron Pressler (pron) that his vision of future formal methods was the past. That is, verifying the algorithms, not code, in formal specifications w/ combo of model-checkers and provers depending on utility. He said TLA+ was superior because it was basic mathematics: set theory and (iirc) first-order logic with concurrency/time support. I told him he basically just described Z + CSP/SPIN used for high-assurance systems back in the day. The work proceeded to go deeper than algorithm level due to the problems at code, compiler, and assembly levels they ran into. He’s right there’s still tons of mileage to get out of that level as prior work showed with low defects.

                            Then, I kept telling @hwayne that one might have substituted TLA+ for SPIN combined with Z for its benefits on sequential. Otherwise, emulated same benefits in TLA+ itself if capable (not specialist enough to know). Then, your link has the TLA+ inventor himself doing exactly that. Wow. Then, the audience has no interest in it. Z’s tool support combined with TLA+‘s ease of use could’ve sped up deployment of lightweight methods that would’ve drawn more funding to them after successes came in like with ASM’s, SPIN, and Alloy. And they had no interest. Unbelievable…

                            Well, I appreciate you drawing it to my attention since it was neat to see he tried it. He was right that he did have the benefit of no Z baggage. Maybe it was for the better they didn’t listen and he made TLA+. Had Z really taken off, the opposite might have been true, as in the Worse-is-Better effect of hitching a ride on a bandwagon.

                            1. 2

                              Then, I kept telling @hwayne that one might have substituted TLA+ for SPIN combined with Z for its benefits on sequential.

                              I don’t remember you telling me that and now I feel really bad for forgetting. Sorry dude :(

                              Regardless, though, I agree. I’m a huge fan of TLA+, but I recognize that it’s in part due to sentimental reasons. TLA+ is the first formal method I learned and also how I’ve managed to contribute back to tech. I wrote Learn TLA+ because there weren’t good TLA+ resources but also because I didn’t really know about any other formal methods techniques when I started. If I started with, say, Alloy, I probably would have seen how much harder TLA+ was to learn and how much smaller the community was, gone “nah”, and never learned it.

                              I still think TLA+ is amazing, but now I recognize that 1) it’s “the best tool in a lot of cases” and not “the only tool in all cases” and 2) most of the difficulty using it is UI/UX, not essential complexity. It’s a rough diamond that needs to be cut.

                              Alternatively, someone may be inspired by it to make something better. That’d be awesome, too.

                              1. 1

                                I don’t remember you telling me that and now I feel really bad for forgetting. Sorry dude :(

                                I tell people here about a lot of stuff. Forgetting something is inevitable. No worries.

                              2. 2

                                He was right that he did have the benefit of no Z baggage.

                                Right, and I think that’s a big deal. Z is very big and somewhat bloated, I think. Even attempts to mechanise the language were challenging, with tools like CZT appearing to help. That’s why B spun out with its tool support, and then eventually Event-B.

                              3. 1

                                Hey,

                                What book is that chapter you’ve referenced from? I found another chapter:

                                http://www.cypherpunks.to/~peter/05_verif_implementation.pdf

                                … ?

                                1. 3

                                  Peter Gutmann. He’s super nice! I emailed him gushing about how much I loved that essay and he didn’t even call me weird for reading his PhD thesis for fun.

                                  1. 3

                                    ok, no worries I figured it out:

                                    https://www.cs.auckland.ac.nz/~pgut001/pubs/thesis.html

                                    Another New Zealander no less!!

                                    1. 3

                                      It was Peter Gutmann. I eventually give it to people to try to keep them from being overly optimistic about what formal verification can accomplish in general or alone. Of course, high-assurance security required a lot of assurance activities that all acted as a check against each other. Formal verification itself was done only on the small, central component enforcing the security policy. As CompSci operates in silos, many in the formal methods community thought they’d solve everything that way, much like we saw with OOP and unit testing zealots.

                                      Old saying goes, “Everything in moderation.” Just seems more true the longer I’m around. I do dig the whole-stack efforts like CLI, Verisoft, and DeepSpec, though. We can keep some teams trying to verify all the core or at least practical stuff while the rest of us exercise a bit more sense for more productivity. I expected trickle-down effects to kick in eventually, which were delivered at least with CompCert and CakeML given usefulness in security-critical C and CompSci tooling (esp compilers or extracted code). Push-button simple, eliminating whole classes of errors. Probably should count those checking the SCOOP concurrency model or certified WCET analyses given obvious benefits of both in various sectors.

                                      So, final outlook is formal verification should be usually done to simplest, most-critical parts of probably hardly any software projects. Most do stuff like DbC w/ generated tests in memory-safe languages. That with the modifier that even those people can benefit if some teams push heavy stuff as hard as possible.

                                2. 1

                                  That was the position of reviews in Cleanroom methodology. They kept everything structured for easy analysis but level of formality depended on what they actually needed.

                          2. 1

                            On top of it being in a theorem prover, most of them are done in a “LCF-style” theorem prover designed to make it hard to screw up the proof process. Both Isabelle and Coq also have work done on foundations where the verifiers or extractors can be verified themselves to some degree. So, people can focus more on the specs and extracted code than with a prover not designed that way. If the specs are peer-reviewed and right, this leads to a level of confidence that’s hard to achieve without formal methods. I dare say near impossible for at least optimizing compilers based on empirical evidence.

                            Far as reducing bugs, you first have to know what your code is supposed to do. That’s why I push behavioral contracts a la Design by Contract. This can be combined with methods such as fuzzing to help you spot where it broke with laser precision. Those same contracts can also be used for generation of tests (e.g. Eiffel) or verification conditions (e.g. SPARK). Harder to do it the other way around. Hence, my recommending DbC combined with automated analysis and tests. That’s not a substitute for review or manual tests where appropriate (esp hard to specify stuff).
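                            As a sketch of that direction (Python, with a made-up decorator rather than Eiffel’s native support): one behavioral contract can both check at runtime and drive generated tests.

```python
import random
from functools import wraps

def contract(pre, post):
    """Attach a runtime-checked pre/postcondition pair to a function."""
    def wrap(f):
        @wraps(f)
        def inner(*args):
            assert pre(*args), "precondition failed"
            r = f(*args)
            assert post(r, *args), "postcondition failed"
            return r
        return inner
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r >= x for x in xs))
def largest(xs):
    return max(xs)

# The same contract drives generated tests: random valid inputs,
# with pass/fail decided by the postcondition.
for _ in range(100):
    largest([random.randint(-50, 50) for _ in range(random.randint(1, 10))])
```

                            The precondition doubles as an input generator’s validity filter, and the postcondition doubles as the test oracle, which is the “harder to do it the other way around” point.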

                            The problem you mention on difficulty of specification is real. My shortcut as above is to just tell people to switch to tests for anything that’s hard. They can Google prior work if they want to see if anyone did it and how. As minimax was suggesting, might mention something about this when describing tradeoffs. Far as abstract interpretation, Astree Analyzer was one of my favorite results of the software assurance field given the prevalence of C in embedded. I wonder what it would take for a CompSci team to crank out a FOSS version with equivalent capabilities. Far as dependent types equivalence, it’s definitely too far since people who give up on Coq and Idris often like TLA+, Alloy, SPIN, or even Z.

                          1. 13

                            I would absolutely love a “beginners guide to formal methods” resource. Right now the field is mostly impenetrable, but it doesn’t have to be. I think a lot of that is because what formal methods used to be useful for is much more limited than what they’re useful for now, but resources are still aimed at that narrow domain. Here’s my thoughts on the current structure you’re outlining:

                            Tell them how hard it is. Most don’t really want to deal with that.

                            I don’t think formal methods are necessarily that hard. Like most of us understand type systems, and that’s a kind (if not a very robust kind) of formal method. There’s a spectrum of difficulty/power, and I think that’s worth emphasizing. So maybe we could adjust this to:

                            • There are many different kinds of formal methods, just like there are many different kinds of type systems or bicycles.
                            • Flyweight Methods, like TLA+ or Alloy, are much easier to learn and easier to use. Formal Verification is more rigorous, but much harder and slower.
                            • Start with flyweight methods. For 99% of people, they will be much more useful in your problem domain. For the remaining 1%, they will give you the solid foundation you’ll need for learning formal verification.
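                            To make “flyweight” concrete: tools like TLA+ and Alloy find bugs by exhaustively exploring a small model’s state space. A toy version of that idea, sketched in plain Python rather than TLA+ (the racy mutex protocol below is invented for illustration):

```python
from collections import deque

# Each process moves "idle" -> "ok" (passed the check) -> "crit" -> "idle".
# The check ("is the other process in crit?") and the actual entry are
# separate steps, so the protocol is racy -- exactly the kind of bug a
# model checker finds by brute-force exploration of interleavings.
def successors(state):
    for i in (0, 1):
        me, other = state[i], state[1 - i]
        nxt = None
        if me == "idle" and other != "crit":
            nxt = "ok"           # check passed, but entry is a later step
        elif me == "ok":
            nxt = "crit"         # enter the critical section
        elif me == "crit":
            nxt = "idle"         # leave
        if nxt is not None:
            s = list(state)
            s[i] = nxt
            yield tuple(s)

def check_mutual_exclusion(init=("idle", "idle")):
    """Breadth-first search over all reachable states for a violation."""
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        if state == ("crit", "crit"):
            return state         # mutual exclusion violated
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

violation = check_mutual_exclusion()
```

Real tools add temporal properties, fairness, and counterexample traces, but the core activity is this kind of exhaustive search over a small abstract model rather than proof about the implementation.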

                            Tell them about formal methods that are easy to learn with immediate benefit. Suggest they start with those.

                            Woop! I’d add to this that it’s worth explaining why it’s easier and more beneficial.

                            Tell them what resources to use to learn the hard stuff. Tell them it will take lots of time to learn.

                            This is a bit more extreme, but what if this part is extracted out? Instead of saying “here are the resources for learning verification”, we say “go here to find the resources for formal verification”. It’s a small change and adds a bit more maintenance, but it might make it more obvious that it’s not a great starting point.

                            EDIT: I’d be happy to write a brief compare/contrast on TLA+ vs Alloy if that would help the expansion.

                            1. 1

                              Like most of us understand type systems, and that’s a kind (if not a very robust kind) of formal method.

                              Agreed. I have seen it suggested that static type systems are the most widely used kind of formal method. That, of course, is debatable … still, food for thought.

                              1. 1

                                I’ll look into how to easily explain the benefit of lighter methods. The heavy part could be extracted out into its own introductory article. David A. Wheeler’s High Assurance FLOSS article already has some material to draw on for both of these. Regarding compare/contrast, that would be pretty nice to write up, as I’ve only seen maybe one of those. There are a lot of people who will ask how they compare to each other if they keep seeing them together.

                              1. 20

                                Look, here’s the thing. If you’re holding 30 million dollars in 250 lines of code that you haven’t audited, then it’s on you. Seriously. It takes any half-decent appsec guy less than one man-day to fleece those 250 lines. At most, that would cost them a few thousand dollars. They didn’t do it because they wanted it all for free. They didn’t do it because they’re greedy and cheap. They absolutely deserve this.

                                I kinda agree with this, honestly. :-\

                                1. 2

                                  I kinda agree with this, honestly. :-\

                                  That’s because, as your post history on Lobsters has established, you need to get you some ethics and morals.

                                  I kinda agree with the top comment in the article:

                                  “Look, here’s the thing. If you’re holding 30 million dollars in 250 lines of code that you haven’t audited, then it’s on you.”

                                  Look here’s the thing. If you’ve parked your car on the street like a pleb instead of buying a house with a garage, then it’s on you.

                                  Look here’s the thing. If you’re holding a PC and a TV and a washing machine in a house with single glazing on the rear windows, then it’s on you.

                                  Whilst this was an extremely interesting read and I’m sure awesome fun to pull off, theft is theft. The rule of law is the rule of law. You know that these ETH belong to other people and you have taken them for yourself. That’s theft, and I hope the law catches up with you.

                                  1. 13

                                    But the entire point of “smart” contracts is that the code IS the contract, right? Your analogy is flawed. It’s not like stealing a car, it’s like finding a loophole in an agreement (or “dumb” contract) and exploiting it in the courts. That happens literally every day, and it is perfectly legal.

                                    The difference is that when you have actual humans making the decisions instead of computers you can make more subtle arguments about what was intended instead of being beholden to the most pedantic possible interpretation of the contract.

                                    1. 14

                                      This is the correct interpretation. The “smart contract” hype is built around the concept that the blockchain is the judge and the jury: it’s all built on the assumption that the blockchain is incorruptible and perfect. To quote from Gavin Wood’s paper “Ethereum: A Secure Decentralised Generalised Transaction Ledger:”

                                      [Ethereum has attributes] not often found in the real world. The incorruptibility of judgment, often difficult to find, comes naturally from a disinterested algorithmic interpreter.

                                      Further:

                                      …natural language is necessarily vague, information is often lacking, and plain old prejudices are difficult to shake.

                                      Most ominously, perhaps:

                                      …the future of law would be heavily affected by [smart contract] systems… Ethereum may be seen as a general implementation of such a crypto-law system.

                                      Based on these concepts, the idea that they’re building a perfect replacement for law, they implemented a Turing-complete language with no concept of or provision for proofs, and run it on a distributed VM from which no malicious programs can be purged. Brilliant!

                                      1. 4

                                        Is it brilliant? I’m not so sure: what sovereign citizens and computer geeks alike seem to believe is that the law is a sequence of perfectly defined rules - which is why the former loves to look for the magical series of words that exempts them from it.

                                        But in reality the law is often about intent and judgment. If I found a bank that let me put my name on everyone’s account and I did so with the purpose of withdrawing their savings, the court would hold a dim view of me saying “but they let me do it!”

                                        1. 4

                                          That was sarcasm. :)

                                          1. 3

                                            Thank god. But like the best sarcasm (and I say this with complete sincerity), it’s indistinguishable from what people are claiming both here and in the article.

                                            1. 1

                                              Well, to be clear, only the “Brilliant!” part was sarcasm. The rest was literally quoting a seminal paper in the space.

                                        2. 2

                                          Hopefully the interest in contract languages on blockchains will encourage more folks to get involved in formal verification.

                                        3. 3

                                          But the entire point of “smart” contracts is that the code IS the contract

                                          Agreed. The analogies given above were ridiculous:

                                          Look here’s the thing. If you’ve parked your car on the street like a pleb instead of buying a house with a garage, then it’s on you.

                                          This is not a comparison. Try this instead:

                                          Look here’s the thing. If you’ve parked your limited edition McLaren F1 on the street instead of in your garage, then yeah, that was dumb.

                                          But this is still a rubbish analogy because in Ethereum: Code is Law.

                                          1. 8

                                            The correct analogy would be to leave the thing unlocked, with the keys in a plastic box inside, and with a notarized affidavit that reads, ‘I, goodger, hereby transfer ownership of this vehicle and its contents to whomsoever may open this box’.

                                            1. -1

                                              Bingo!!

                                        4. 19

                                          That’s because, as your post history on Lobsters has established, you need to get you some ethics and morals.

                                          Says the guy who posted 9/11 truther conspiracies from his blog. Angersock has ethics and morals, and I’m a little disheartened that your ad hominem attack got upvoted.

                                          1. 6

                                            There are a few types of stories, regarding politics and cryptocurrencies, that seem to bring out a group of extremely angry and aggressive posters who don’t seem to want anything but traditional internet yelling. “Get morals” has been yelled at me any time the US government is brought up, and it always seems to be heavily upvoted.

                                            1. -5

                                              Says the guy who posted 9/11 truther conspiracies from his blog

                                              And what is wrong with that?

                                              9/11 Truthers are called 9/11 Truthers because they aren’t 9/11 Frauds.

                                              EDIT: BTW, those downvoting this as “off-topic” might want to downvote @ngoldbaum’s post instead. I didn’t bring up 9/11, he did. I’ll defend myself if called out, and so, to quote from elsewhere: it’s been 16 years now and over $300k in research by multiple teams has refuted NIST multiple times; enough is enough.

                                              and I’m a little disheartened

                                              That’s too bad.

                                              It’s what happens to people who don’t understand basic physics.

                                              Have fun with the paid sock puppets though.

                                              1. 2

                                                Damn, I’m a sock puppet after all… Also ad hominem.

                                                1. 2

                                                  me too! #sockpuppet

                                                  1. -4

                                                    Keep it up, y’all are going to spend the end of your lives in a prison of your own making.

                                                    You think smart people can’t see past these fake votes?

                                                    1. 5

                                                      It must be very hard living a life where you think every time someone disagrees with you it’s because of a huge conspiracy.

                                                      I encourage you to talk to a mental health professional.

                                                      1. -2

                                                        It must be very hard living a life where you think every time someone disagrees with you it’s because of a huge conspiracy.

                                                        You misunderstand, I don’t think that.

                                                        But 9/11 is a huge conspiracy, so on this particular topic it’s perfectly sensible to think that.

                                                      2. 2

                                                        I know that this is futile and I’m shouting into the void, but why would you assume that everyone who disagrees with you is a sock puppet? These aren’t fake votes; I think people are disagreeing with your aggressiveness. There is no reason for this to be a psy-ops campaign just to mess with you.

                                                        1. -4

                                                          but why would you assume that everyone who disagrees with you is a sock puppet?

                                                          See my response to your sock puppet friend’s identical question.

                                                          But, tell me (since now with the fake downvotes nobody can see your response), how much do you get paid to write this stuff?

                                                          Are you an American? If so, is it enough to sleep at night, knowing that you’re supporting the terrorists who attacked this country on 9/11?

                                                          1. -3

                                                            You gonna answer my question or just exercise your downvote button?

                                                            Think McFly!

                                                  2. -5

                                                    Angersock has ethics and morals

                                                    Yeah, theft is cool man. Totally ethical. Totally moral. And your upvotes totally didn’t appear simultaneously as a bunch of sock puppets upvoted your comment.