1. 5
  1.  

  2. 4

    Fagan Inspections, Mills’ Cleanroom, Meyer’s Eiffel Method, and Praxis’ Correct by Construction all worked, if we’re talking about developers delivering products on an acceptable timescale with low defects. They all let developers do their job iteratively while providing extra tools or restrictions that help ensure quality and/or maintainability. There’s a difference between methods that are shown with evidence to help developers and methods that are merely marketed as such; write-ups on methodologies should distinguish between the two. It’s also possible these people have thought about methodologies for a long time without ever discovering the ones above.

    1. 3

      Indeed. All the complaints about methodologies I read are about heavily marketed ones, and they seem geared more toward the benefit of the consultant pushing them than toward actual outcomes.

      Thanks for pointing me to some things I had not heard of before.

      1. 6

        I repeat them a lot here, so I left off links. If they’re new to you, here are a few:

        http://www.mfagan.com/pdfs/software_pioneers.pdf

        http://infohost.nmt.edu/~al/cseet-paper.html

        http://se.ethz.ch/~meyer/publications/acm/eiffel_sigplan.pdf

        http://www.anthonyhall.org/c_by_c_secure_system.pdf

        Note: They also came up with a safe concurrency model for Eiffel called SCOOP that was easy to use. Ada, which was also about systematically removing errors, had the Ravenscar profile, with SPARK (formal verification) adding support for it recently. Such things by themselves would’ve stopped all kinds of problems and saved money on debugging.

        1. 3

          I’ve been reading these links. They all seem to assume a world where you’re creating new software from scratch. Are you aware of any work on maintaining existing software? Say you modify an existing project. What parts do you need to collegially and manually verify to gain confidence you haven’t caused a regression?

          In general, my complaint for a lot of this stuff is that the idea that software is read far more often than it’s written (and read after it’s deployed) gets short shrift.

          Are you aware of any work applying these methods to a webservice? I’d be interested to see if it works in a world of continuous delivery, where requirements can change in unanticipated ways after the fact. The challenge then becomes not avoiding bugs at a single point in time, but avoiding regressions over time.

          (cc @smalina)

          Edit: The Fagan article is particularly illuminating for the clichéd comparison with hardware. Hardware has a fundamentally different cadence from software, caused by one crucial fact: once it’s deployed, there’s no question of changing it. That one constraint has radical implications for the economics, to the point where comparing the two is meaningless. Bringing a hardware sensibility to software is like a chess master playing Go.

          Could you point me at a description of his inspection process itself?

          1. 3

            “Could you point me at a description of his inspection process itself?”

            Here’s the original article from the IBM Systems Journal. I have not read it yet, but it looks more substantial than the one above.

            http://www.mfagan.com/pdfs/ibmfagan.pdf

            1. 2

              “Say you modify an existing project. What parts do you need to collegially and manually verify to gain confidence you haven’t caused a regression?”

              Yes, they’re usually about making new things, since the code has to be designed with verification in mind: certain structures and docs make that easier, whereas random legacy code might be barely designed at all, much less amenable to low-defect methods. The common wisdom would be to refactor it piece by piece with good documentation and interface checks, while writing the new code for such a system with a methodology I described; it slowly becomes something better over time. I’ve seen approaches like those I described applied to legacy code but can’t remember references off-hand. It was a rare find, for sure.

              I think Design-by-Contract, contract-based generation of tests, unit/acceptance tests for what’s hard to formalize in contracts, and equivalence tests between old and modified code would be the best methods to use on legacy code. The DbC method is partly designed to catch breakage during software maintenance, especially refactoring, and it can be emulated in most languages with asserts, conditionals, or object constructors/destructors, so it’s widely applicable (see the sketch below). Also, Fagan’s approach of noting common problems in a system/codebase and periodically inspecting the rest for them should help, since the developers probably reuse the same constructs, which will produce the same defects or at least code smells.
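
              As a concrete illustration (a hypothetical Python sketch; the names are invented, not taken from any of the papers above), emulating contracts with asserts and pairing them with an equivalence test might look like:

              ```python
              # Hypothetical sketch: Design-by-Contract emulated with plain asserts.
              # Preconditions state the caller's obligations; postconditions catch
              # breakage introduced during later maintenance or refactoring.

              def withdraw(balance: int, amount: int) -> int:
                  # Preconditions: what the caller must guarantee.
                  assert amount > 0, "precondition: amount must be positive"
                  assert amount <= balance, "precondition: cannot overdraw"

                  new_balance = balance - amount

                  # Postcondition: what this routine guarantees in return.
                  assert new_balance >= 0, "postcondition: balance never goes negative"
                  return new_balance

              # Equivalence test between old and modified code: drive both versions
              # with the same inputs and require identical outputs. `withdraw_legacy`
              # stands in for whatever implementation is being replaced.
              def test_equivalent_to_legacy(withdraw_legacy):
                  for balance in range(100):
                      for amount in range(1, balance + 1):
                          assert withdraw(balance, amount) == withdraw_legacy(balance, amount)
              ```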

              “I’d be interested to see if it works in a world of continuous delivery, where requirements can change in unanticipated ways after the fact.”

              Both Cleanroom and Design-by-Contract made code easier to modify without breakage, so they should work fine for a web service. A quick Google at least turned up one academic paper on DbC for web services:

              http://www.thinkmind.org/download.php?articleid=soft_v5_n12_2012_5

              I’d imagine it’s so standard for Eiffel developers that they don’t write much about it. Use of contracts is also hit-and-miss among professional developers, since they rush things. You might look at Eiffel Web Services/Framework to see if they at least use contracts internally in the framework for correctness. Praxis’ Correct by Construction is for stuff that doesn’t change often, with heavy, up-front investment; I’d not use it for a web service unless it was a slow-moving, critical one. That one guy writing IRONSIDES DNS in SPARK does indicate it may be possible to apply a subset of it to servers or key components, though.
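
              For the web-service case, here’s a minimal sketch of the same idea (hypothetical Python; the handler name and payload shape are invented): contract checks at the service boundary turn silently violated assumptions into loud failures as requirements drift.

              ```python
              # Hypothetical sketch: contract-style checks at a web-service boundary.

              def handle_transfer(request: dict) -> dict:
                  # Precondition on the incoming payload: reject malformed requests
                  # before any business logic runs.
                  assert isinstance(request.get("amount"), int) and request["amount"] > 0, \
                      "precondition: payload must carry a positive integer amount"

                  # Business logic would go here; echo a receipt for the sketch.
                  response = {"status": "ok", "amount": request["amount"]}

                  # Postconditions on the outgoing response: the shape callers rely on,
                  # kept stable even as the implementation underneath changes.
                  assert response["status"] in ("ok", "error"), "postcondition: unknown status"
                  assert response["amount"] == request["amount"], "postcondition: receipt must match"
                  return response
              ```

              In production you’d likely map a contract failure to an error response and a log line rather than a crash, but the contract still documents the interface and flags regressions release after release.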

              1. 2

                Thanks for that link. My question was about modifying even code written with the same ideal methodology, not just legacy code. But yes, Eiffel DbC does seem immune to my criticism.

                Perhaps the synthesis here is to use contracts for correctness and tests for communicating a design to others.

                (cc @smalina)

            2. 2

              Thanks for the links. I had heard of Meyer’s stuff before, as well as Correct by Construction, but not the other two.

        2. 3

          All methods work if you are willing to spend enough money.

          If only clients would stop asking for low cost, low defects, and short timescales, then software developers would stand a chance of delivering what was wanted.

          1. 1

            My experience is that agile really is that much better than not; the success of the various employers and clients I’ve had over the years correlates directly with the extent to which they followed the manifesto, kept cycle times short, and so on (not particularly with the extent to which they claimed to be following agile, and negatively with the level of consultancy/certification going around). Maybe I’ve just been lucky or unlucky, but it makes me very skeptical of this whole “no methodology works better than any other or makes any difference” narrative.

            1. 1

              The real problem is that software engineering in general is under-studied because research on methodology effectiveness isn’t as sexy as computer science.

              1. 1

                I don’t think it’s sexiness so much as difficulty; we’re terrible at measuring productivity even within a single methodology, so it’s very hard to compare.

                1. 1

                  Who is “we” here? You and I: laypeople on lobsters? Sure – I’m suggesting scientific study with rigour :) There is a prof in Toronto who does some, but in general it’s not a popular field of research.

                  Fields of social science that are much harder to quantify get research done, so why can’t development practices?

              2. 1

                “My experience is that agile really is that much better than not.”

                For the most part, I also see an “agile” approach as better than other approaches, based on my experience: such projects have been reasonably productive and sometimes successful.

                Sadly, that’s pretty much all we have in the study of it: anecdotes.

                By the way, I put agile in quotes because I don’t think it has any agreed-upon definition. I’ve been through 5+ “agile” methods, all of them different. Its amorphousness may in fact be its strong point, but it makes talking about it and finding common ground difficult.