1. 20
  1.  

  2. 6

    My first response was too much of a troll for sure. Here’s a more detailed and constructive review.

    “A large class of errors are caught, earlier in the development process…”

    Caught in an earlier stage, perhaps, but not necessarily earlier in time. Dynamic languages do not have a compile stage. That large class of errors can be caught in different ways with a disciplined interactive dynamic-language methodology.

    “closer to the location where they are introduced”

    Again, interactive programming provides a means of working “closer to the problem”. Which is closer? There’s no information about how to tell.

    “The types guide development and can even write code for you—the programmer ends up with less to specify”

    Which is true of declarative programming of various flavors, even the dynamic kind. Again there’s no comparison with other dynamic means of declarative programming, so again we do not know which leads to less code or less churn.

    “More advanced dependently typed languages routinely derive large amounts of boring code with guidance from the programmer”

    Again this is not in comparison to anything. What is the cost in time or effort, and what kinds of problems are most suitable for current dependent type systems? Perhaps that boring code is not needed and a little exploration in a rapid prototyping system with a customer can rule out the need for it. Perhaps the code in question has already been developed and validated, and there’s no value in rewriting it in a dependently typed system.

    I’ve no experience with dependent types, but my sense is there’s a cost curve for many situations where Hoare logic semi-formally applied could be more cost-effective in practice. But either way there’s no evidence presented.

    “One can refactor with greater confidence…”

    Greater confidence than what? The Smalltalk refactoring browser was the first, and the most fully featured for years. Simple, expressive languages with good tools can be more effective than languages with more rules. So this argument, without evidence, sounds theoretically true, but there’s sufficient experience in the field over decades to suggest it’s not necessarily true in practice.

    “Static types can ease the mental burden of writing programs, by automatically tracking information the programmer would otherwise have to track mentally in some fashion”

    This is a false dichotomy. Some dynamic languages have very good tools that programmers in some static languages would envy.

    “Types serve as documentation for yourself and other programmers and provide a ‘gradient’ that tells you what terms make sense to write”

    These tend not to be sufficient for understanding dynamic systems. And again, good tools are able to discover most of this in a dynamic-language system. And good practice, including documentation, can be cost-effective. Once again there’s no way to tell from this article; it’s just a proclamation of what the author believes or feels based on whatever experiences they’ve had but failed to explain.

    “Dynamic languages executed naively…”

    Anything done naively will have faults. But there are many situations where a naïve approach can be the most effective overall. And there are many situations where a dynamically typed system can perform close to, or better than, C. There’s nothing to be made of this point in the article.

    “You often need some sort of JIT to get really good performance…”

    Some dynamic languages support ahead-of-time compilation plus an interpreter for the truly dynamic runtime needs. Also, whole-program deployment tools can eliminate many of the costs of an otherwise dynamic runtime.

    “learning how best to carve a program up into types is a skill that takes years to master”

    The same is true of dynamic language systems. Yet we take the bad examples as the norm for them. Programming is hard no matter what. It takes a lot of experience and discernment.

    1. 12

      A few points in your reply that my experience disagrees with:

      closer to the location where they are introduced

      Again, interactive programming provides a means of working “closer to the problem”. Which is closer? There’s no information about how to tell.

      What do you mean here? One can do interactive programming in statically typed languages as well, and statically typed languages will have the opportunity (through using lots of types) to make the problem visible closer to its introduction point.

      A large class of errors are caught, earlier in the development process…

      Caught in an earlier stage, perhaps, but not necessarily earlier in time. Dynamic languages do not have a compile stage. That large class of errors can be caught in different ways with a disciplined interactive dynamic-language methodology.

      IME that discipline looks a lot like one would get “for free” from a statically typed language. What kind of discipline are you talking about?

      One can refactor with greater confidence…

      Greater confidence than what? The Smalltalk refactoring browser was the first, and the most fully featured for years. Simple, expressive languages with good tools can be more effective than languages with more rules. So this argument, without evidence, sounds theoretically true, but there’s sufficient experience in the field over decades to suggest it’s not necessarily true in practice.

      I think the claim is roughly that the more expressive the type system, the more confidence one can have in a refactoring. FWIW, my experience in the industry has been that this is absolutely true. My career has almost entirely been refactoring existing code to make it a little less worse, and the difficulty of the task is inversely proportional to the power of the type system of the language the refactoring takes place in. Refactoring complex business logic in Python is absolutely miserable. I think the nature of refactoring is clearly on the side of static type systems, and the only tools that make refactoring more palatable in a dynamically typed language are mimicking what one gets with a statically typed language.

      1. 1

        “One can do interactive programming in statically typed languages as well”

        Agreed. My claim is not that one is clearly superior to the other. Rather, my claim is that, done well, both can be effective. The article seems to claim superiority but offers no evidence.

        “statically typed languages will have the opportunity (through using lots of types) to make the problem visible closer to its introduction point.”

        Again, it is appealing to think this would be the case, but it seems there are ample proponents of both static and dynamic that claim a preference for one or the other. Seems based more on personal experience and preference than anything else.

        “IME that discipline looks a lot like one would get “for free” from a statically typed language.”

        Nominal checking of syntax is not necessarily free, nor sufficient. Incremental, dynamic, contract-based design and feedback (the hard part) is needed no matter what the implementation language. Again I’m not claiming superiority, rather lack of evidence beyond personal preference for one or the other.

        “I think the claim is roughly that the more expressive the type system, the more confidence one can have in a refactoring”

        I understand the claim in theory. The problem is the Smalltalk refactoring browser has been around longer than any other, and has for decades served as a counter example in widespread use.

        “Refactoring complex business logic in Python is absolutely miserable”

        I believe you. I’ve never seen any decent tools for Python.

        “the only tools that make refactoring more palatable in a dynamically typed language are mimicking what one gets with a statically typed language.”

        You’re incorrect. The Smalltalk refactoring browser embraces the nature of Smalltalk and pre-dates the implementation of refactoring tools for statically typed languages by many years.

        1. 7

          IME that discipline looks a lot like one would get “for free” from a statically typed language.

          Nominal checking of syntax is not necessarily free, nor sufficient. Incremental, dynamic, contract-based design and feedback (the hard part) is needed no matter what the implementation language. Again I’m not claiming superiority, rather lack of evidence beyond personal preference for one or the other.

          What does syntax checking have to do with this? Types provide proofs about your program and a checker for them; that is not syntax, that is semantics.

          I think the claim is roughly that the more expressive the type system, the more confidence one can have in a refactoring

          I understand the claim in theory. The problem is the Smalltalk refactoring browser has been around longer than any other, and has for decades served as a counter example in widespread use.

          Perhaps Smalltalk is a counterexample but it’s quite hard to know. I just don’t see Smalltalk deployed to the millions of lines of code as the Python I’ve had to clean up. But I think you’re conflating a few things here. Even if the Smalltalk refactoring browser is great, types will still give you a more powerful tool to determine the correctness of a refactoring because Smalltalk simply can’t. It is the nature of types that gives this confidence. Smalltalk might be good enough and have other benefits but that is a separate argument.

          But the real win, which it’s unfortunate the link doesn’t put as #1, is that the semantics of any dynamic language can be represented in a statically typed language with a sufficiently powerful type system. C# has had a dynamic type for a long time, and GHC has recently gotten one.
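
          That last claim can be modeled even in Python: a dynamically typed value is just a value paired with a runtime type tag, and extraction is a checked cast. The sketch below only mirrors the general shape of GHC’s Data.Dynamic (toDyn/fromDynamic); the names are illustrative, not a real API binding.

```python
# A model of how a static language embeds dynamic values: every dynamic
# value travels with a runtime type tag, and extraction is a checked cast.
# This mirrors the shape of GHC's Data.Dynamic and C#'s `dynamic`.

class Dynamic:
    """A value paired with its runtime type representation."""
    def __init__(self, value):
        self.value = value
        self.tag = type(value)

def to_dyn(value):
    """Wrap any value as a Dynamic."""
    return Dynamic(value)

def from_dyn(dyn, expected):
    """Checked extraction: return the value if the tag matches, else None."""
    return dyn.value if dyn.tag is expected else None

d = to_dyn(42)
print(from_dyn(d, int))   # 42   -- the tag matches
print(from_dyn(d, str))   # None -- wrong type; extraction fails safely
```

          The host language only ever sees the single Dynamic type; all the variation hides behind the checked extraction, which is what lets a sufficiently powerful static type system subsume the dynamic case.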

          1. 0

            “Types provide proofs about your program and a checker for them, that is not syntax, that is semantics”

            By and large they are nominal or structural checks of self-consistency. Until you get into dependent types, the semantics are pretty limited. And it’s not yet clear where the cost/benefit lies for dependent types vs. semi-formal runtime contracts.

            “I just don’t see Smalltalk deployed to the millions of lines of code as the Python I’ve had to clean up.”

            There’s a lot more Python than Smalltalk, but there are definitely millions of lines of good Smalltalk in production. I’m not here to defend the industry’s track record. Nor am I convinced that legions of bad Python programmers will suddenly create flawless typed FP programs any time soon.

            “types will still give you a more powerful tool to determine the correctness of a refactoring because Smalltalk simply can’t”

            Your bias is flying in the face of a reality that has been in practice for decades. I’m not intending to fight with people’s belief systems. I’m interested in evidence.

            “the semantics of any dynamic language can be represented in a statically typed language with a sufficiently powerful type system”

            Agreed, regarding the end result program. The experience of how a team effectively gets to the end result is different based on the language, the tools, and the team. It’s not enough to say that in the end the programs are the same.

            Anyway, I’m not interested in a long back and forth on this. I’ve been through it with you and others many times.

            1. 7

              By and large they are nominal or structural checks of self-consistency. Until you get into dependent types, the semantics are pretty limited. And it’s not yet clear where the cost/benefit lies for dependent types vs. semi-formal runtime contracts.

              I’m not exactly sure what you mean here, but correct-by-construction and whatever one’s equivalent of newtype is go very far in helping with correctness. In this case, the constructor acts as a witness that something is correct, and it travels around the program. Maybe that is limited, but the impact is large.
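
              A sketch of the witness idea, in Python for concreteness (the ValidatedEmail example is hypothetical): make the checking constructor the only way to build the value, and every holder of the value is evidence the check already ran.

```python
# Correct-by-construction sketch: the constructor enforces the invariant,
# so any ValidatedEmail reaching the rest of the program is a witness
# that validation already happened.  (Illustrative example only.)

class ValidatedEmail:
    def __init__(self, address):
        if "@" not in address:
            raise ValueError(f"not an email address: {address!r}")
        self.address = address

def send_welcome(email):
    # No re-validation here: possessing a ValidatedEmail carries the proof.
    return f"Sent welcome mail to {email.address}"

print(send_welcome(ValidatedEmail("ada@example.com")))

try:
    ValidatedEmail("not-an-address")   # rejected at the boundary
except ValueError as e:
    print("rejected:", e)
```

              In a statically typed language the compiler additionally guarantees that nothing else can construct the type; in Python the discipline holds only by convention, which is exactly the distinction being argued in this thread.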

              Your bias is flying in the face of a reality that has been in practice for decades. I’m not intending to fight with people’s belief systems. I’m interested in evidence.

              This isn’t a bias, it is a factual statement. The type checker will validate whatever the type system offers. Smalltalk does not have this. So the refactoring tool might be good enough for any programmer to use but it is less powerful than a type system in guaranteeing the correctness of a program.

              Thanks for the chat! It was a pleasure.

              1. 3

                This isn’t a bias, it is a factual statement. The type checker will validate whatever the type system offers. Smalltalk does not have this. So the refactoring tool might be good enough for any programmer to use but it is less powerful than a type system in guaranteeing the correctness of a program.

                I don’t disagree that this is a fact, but it’s easy to see how Smalltalk’s refactoring tools might actually do the same kind of thing to make similar guarantees.

                There is a JavaScript code-analysis system called ternjs that can do automatic refactoring with confidence, and other truly impressive things. It works, partially, by building a type tree, but at each node there is a possibility of multiple types. I think this post describes the process as of a couple of years ago. I am not at all sure how Smalltalk’s stuff works, but I imagine it’s similar.
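
                The “multiple types at each node” idea can be sketched as a tiny abstract interpreter that propagates *sets* of possible types through expressions. This toy shows the general shape only; it is not ternjs’s actual algorithm.

```python
# Toy flow analysis in the spirit of ternjs: each expression node gets a
# *set* of possible types rather than a single one, so dynamically typed
# code can still be analyzed mechanically.  (Illustrative sketch only.)

def infer(node, env):
    """Return the set of possible types of a tiny expression AST."""
    kind = node[0]
    if kind == "lit":                     # ("lit", 3) or ("lit", "hi")
        return {type(node[1]).__name__}
    if kind == "var":                     # ("var", "x")
        return env.get(node[1], {"unknown"})
    if kind == "add":                     # ("add", left, right)
        left, right = infer(node[1], env), infer(node[2], env)
        out = set()
        if "int" in left and "int" in right:
            out.add("int")
        if "str" in left and "str" in right:
            out.add("str")
        return out or {"error"}
    raise ValueError(f"unknown node kind: {kind}")

# x may be an int or a str depending on the program path taken:
env = {"x": {"int", "str"}}
print(infer(("add", ("var", "x"), ("lit", 1)), env))    # {'int'}
print(infer(("add", ("var", "x"), ("lit", "!")), env))  # {'str'}
```

                A refactoring tool built on this can ask questions like “could x ever be a string here?” without a static type system, at the cost of the answer sometimes being a set of possibilities rather than a single type.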

                1. 4

                    Thanks!

                    This looks pretty neat, but it does mention that it doesn’t strive to be automatic all the time. And it doesn’t really seem to do anything all that special: parse-tree transformations, reflection, and dynamic analysis at runtime via method wrappers. It also mentions that these capabilities are only as good as your test suite.

                    While this is pretty much true of every sort of automatic fiddling with code, I can’t see how Smalltalk can make guarantees that are stronger than, or even as strong as, those of the refactoring tools @apy suggests. The dynamic analysis looks at actual values via method wrappers, but is there a guarantee that all possible program paths are executed and analyzed?

                    1. 0

                      Right, there is a difference between being effective and giving guarantees. It is effective in practice even though Smalltalk is a very simple and expressive language. But a language like Smalltalk and its tools will no more guarantee success than will a statically typed language and its tools. Badly designed systems can type check. That doesn’t make them easy to improve.

                      1. 5

                        Badly designed systems can type check. That doesn’t make them easy to improve.

                        This is not the same argument, however. The claim is that a static type system lets one refactor with more confidence that the outcome is correct. A static type system will guarantee something about your program before you run it, and that something is dependent on how good your types are. A dynamically typed language will not guarantee anything. To reiterate: the refactor browser might be quite high quality and work well, but in terms of guarantees it can give the developer it is clearly less than what a statically typed language can offer.

          2. 6

            “ but it seems there are ample proponents of both static and dynamic that claim a preference for one or the other. Seems based more on personal experience and preference than anything else.”

            It’s not a preference: it’s mathematically provable. Programming essentially produces a bunch of state machines interacting with various input and output patterns. Certain patterns or states are a problem. You can detect these at compile time or runtime. You can detect these with analysis or testing. The difficulty of the runtime and testing approaches goes up dramatically as complexity increases, until they quickly reach combinatorial explosion. Analysis approaches constrain the expression of the program in a way that catches chunks of them before it’s even run (static analysis), spots others quickly with fewer tests (dynamic analysis), often applies to all executions rather than just the test cases you thought of, and with hopefully less work as types & specs become second nature. These checks range in power and complexity as you go up from basic types to interface types to dependent types to whatever else they come up with. And in tooling, from basic type checkers to static analyzers to provers.
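
            The detection-point difference can be made concrete with a toy state machine (a hypothetical door protocol). A runtime check only discovers an illegal transition when that exact path executes, which is why testing has to chase the combinatorics; a static approach can instead reject the transition before the program runs.

```python
# Toy state machine (hypothetical door protocol) showing the runtime
# detection point: an illegal transition is only discovered when that
# exact path executes, so tests must cover the combinations themselves.

LEGAL = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

def step(state, event):
    """Runtime check: validates one transition as it happens."""
    if (state, event) not in LEGAL:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")
    return LEGAL[(state, event)]

print(step("closed", "open"))        # opened

try:
    step("locked", "open")           # a path a test suite might miss
except ValueError as e:
    print(e)
```

            In a statically typed sketch of the same machine, each state would be its own type and `open` simply wouldn’t exist on the locked state, so the bad call couldn’t compile in the first place.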

            Yet, most published evidence in the literature of software verification… academic, commercial, and FOSS… shows stronger guarantees of correctness for more inputs and outputs done in the constrained, static languages. Also easier compiler optimization. There is research and existing tooling on doing such things for dynamic languages. The ones I’ve seen are much more complex and expensive in analysis than, say, Oberon or Design-by-Contract. One day, they might have methods to equal static typing in ease of analysis or correctness. So far, the evidence is against that.

            “The problem is the Smalltalk refactoring browser has been around longer than any other”

            Smalltalk is an interesting case. It’s exceptional in quite a few ways. The reason it’s easy to do refactoring in Smalltalk is that it’s designed to put stuff like that at a higher priority. Things like dynamic types, extreme decomposition with message passing, reflection, and a runtime for bytecode add up to make it easier. What it traded off was high performance, ease of implementation of compiler, and provable correctness or properties of apps and compiler given it’s more difficult to analyze apps made this way. Languages wanting those + refactoring will likely be statically typed since it’s just easier to build the tooling. That’s the path Eiffel took with better, overall results.

            https://en.wikipedia.org/wiki/Eiffel_(programming_language)

            Although, Smalltalk probably still has it beat on rapid prototyping and refactoring. My guess. :)

            1. 2

              “It’s not a preference: it’s mathematically provable.”

              A type system can be proven logically. That is a long way from proving an application will be built more effectively (time, budget, quality, features) in practice.

              “Certain patterns or states are a problem. You can detect these at compile time or runtime. You can detect these with analysis or testing. The difficulty of the runtime and testing approaches goes up dramatically as complexity increases”

              This seems to assume there are two ways to solve the problem, type system X or ad hoc. In fact I’ve developed state machine generators and interpreters in Smalltalk and Lisp for things such as multiuser collaborative systems and manufacturing planning.

              “There is research and existing tooling on doing such things for dynamic languages. The ones I’ve seen are much more complex”

              Complexity is not a stranger to much of the software world. Unfortunately. Typed or not.

              As a counter example to your experience, see ObjecTime. A tool for designing realtime, distributed, communicating actors and state machines. Written using Smalltalk, it also supported C, C++, etc. It was eating IBM’s lunch in the realtime market in the day, so IBM bought the company.

              https://drive.google.com/file/d/0B0cKsRm-3yprYlJyeTI0dTduTGc/view?usp=drivesdk

              “Smalltalk is an interesting case. It’s exceptional in quite a few ways”

              Funny that type-system proponents tend to position state-of-the-art type systems in theory against average teams with bad dynamic languages such as Python in the field. Bad teams will use good languages poorly, period.

              “The reason it’s easy to do refactoring in Smalltalk is that it’s designed to put stuff like that at a higher priority”

              Exactly. I’m arguing the best in dynamic language systems and the best in typed languages can be equally effective, and there’s little evidence to show otherwise.

              “That’s the path Eiffel took with better, overall results.”

              Eiffel was a nice system. It’s arguable about better overall results.

              “Smalltalk probably still has it beat on rapid prototyping and refactoring. My guess. :)”

              Yes, I don’t recall Eiffel ever having any refactoring tools or even a good interactive development environment.

              1. 5

                “A type system can be proven logically. That is a long way from proving an application will be built more effectively (time, budget, quality, features) in practice.”

                The published studies in industry always showed that strongly typed languages with interface checks improved quality and reduced debugging time for modules and integrations. Formal methods did as well. There’s time and quality. The lightweight methods, esp. strong typing + design-by-contract, didn’t have a significant impact on labor for basic annotations but reduced it for debugging. So, there’s budget. Features require the ability to extend or modify the program rapidly and with minimal breakage. Strong, dynamic typing dominates on speed of development, with most correctness of new features coming from those using strong, static typing or analysis. So, those are the field results surveying defect rates and such.

                I don’t have that many empirical studies on strong, dynamic languages’ ability to do affordable development plus prove the absence of entire classes of errors. If you have them, I’d appreciate the links for further evaluation.

                “This seems to assume there are two ways to solve the problem, type system X or ad hoc. In fact I’ve developed state machine generators and interpreters in Smalltalk and Lisp for things such as multiuser collaborative systems and manufacturing planning.”

                I’m sure you have. People did in assembly, too. The point of the strong, static types is to ensure guarantees across the software no matter what people are doing with it, so long as the code is type-safe. This can be used to knock out entire classes of problems with no extra work for the developer. You have to manually check for those if it’s dynamic. Many developers won’t, though, so the baseline will suck in those areas. Hence, standardizing on type systems that force a better baseline.

                “As a counter example to your experience, see ObjecTime. A tool for designing realtime, distributed, communicating actors and state machines.”

                Looks neat. We’ve been talking about what one can do with a static or dynamically-typed language. You just introduced a CASE tool whose DSL…

                https://www.eclipse.org/etrice/

                …looks like a statically-typed language that gets transformed into the necessary Smalltalk, etc., combined with a runtime, to get the job done. I’m not sure how a CASE tool with a static language replacing your dynamic language for model-driven development supports your point that (a) dynamic-language users have comparable benefits within the language or (b) statically typed languages/tools aren’t needed. I could’ve easily linked Ensemble’s protocol composition, Verdi’s correct-by-construction generators for protocols, numerous works in Coq/HOL that generate real-time software, the Atom specification language in Haskell, or Hamilton et al’s 001 toolkit that generated whole systems from static, logical specs with OOP and timing info. Tons of third-party tools use DSLs or MDD for more robust software, all written in arbitrary stuff. It might be indicative of something that the most robust ones are done with statically typed languages. Those using dynamic ones usually just do thorough testing at best.

                “Bad teams will use good languages poorly, period.”

                The original studies on quality by MITRE, DOD, SEI, etc. compared professionals on various projects using languages like assembly, Fortran, C, C++, Ada, and Java. The last three, due to typing/safety, consistently had more productivity and lower defects than the others. Ada always had the fewest defects due to its strongest typing, except one study I saw where C++ and Ada projects matched. An anomaly. Also, one of the earliest correct-by-construction approaches that got tested was Cleanroom. People hit very low defect rates on the first try while remaining productive and flexible. The main way it did that was by constraining how programs were expressed for easy composition and verification. Like static typing and analysis.

                What bad teams do is a strawman I’ve never relied on in promoting software-assurance methods. The results of pros who got more done told me what I needed to know. The consistent lack of anything similarly achieved on the other side, outside some verified LISPs, told me some more. This forms a status quo. It’s the dynamic side that needs to prove itself, with bulletproof code with proofs of properties and/or empirical studies against good, static toolchains.

                “I’m arguing the best in dynamic language systems and the best in typed languages can be equally effective, and there’s little evidence to show otherwise.”

                Show me an ultra-efficient, dynamic-language, imperative application or component that’s been verified down to assembly to be correct and terminate properly. There’s a bunch of them for strong, statically typed imperative languages and a few for functional ones, with caveats. Bring on the evidence that dynamic languages are achieving just as much, with proof it works plus testing, rather than just some testing where it appears to work.

                1. 1

                  The published studies in industry always showed that strongly typed languages with interface checks improved quality and reduced debugging time for modules and integrations.

                  I’m always interested in reading good studies if you have 2-3 at hand.

                  Formal methods did as well.

                  Of course formal methods can be and have been used in conjunction with dynamic languages.

                  The lightweight methods, esp. strong typing + design-by-contract, didn’t have a significant impact on labor for basic annotations but reduced it for debugging. So, there’s budget.

                  I’d really like to read these studies, especially if they considered lightweight methods, esp. DbC, along with a high-quality dynamic language that has good tool support such as Lisp or Smalltalk.

                  I don’t have that many empirical studies on strong, dynamic languages’ ability to do affordable development plus prove the absence of entire classes of errors. If you have them, I’d appreciate the links for further evaluation.

                  Here’s at least one study in the following link that puts Smalltalk ahead of Ada, C++, C, PL/I, Pascal, etc. on a significant sized project.

                  https://drive.google.com/file/d/0B0cKsRm-3yprYTR5YTRaRFBfR28/view?usp=sharing

                  This can be used to knock out entire classes of problems with no extra work for the developer. You have to manually check for those if it’s dynamic.

                  Many of those kinds of problems are covered by tests that would be needed in either case. Some of those kinds of problems can be addressed through tools and metaprogramming, even optional type systems, e.g. Strongtalk. I’ve used that same optional type system a fair bit with Dart.

                  Many developers won’t, though, so the baseline will suck in those areas. Hence, standardizing on type systems that force a better baseline.

                  I have doubts that a good type system will make good programmers out of bad ones, just as OOP and dynamically typed FP have not. Bad programs can be type checked.

                  https://www.eclipse.org/etrice/ …looks like a statically-typed language…

                  That’s a second, later implementation of the same ROOM methodology. ObjecTime is decades older, implemented in Smalltalk-80.

                  Show me an ultra-efficient, dynamic-language, imperative application or component that’s been verified down to assembly to be correct and terminate properly.

                  The closest I can think of off the top of my head is https://en.wikipedia.org/wiki/ACL2

                  But that’s not really my point. I’m not arguing dynamic languages are the best choice in all cases, but that they can be at least as good of a choice in most cases. Very few software systems need this level of verification. In fact most software systems are more concerned with discovering what the spec should be, and achieving an economical, maintainable implementation of a semi-formal specification.

                  1. 3

                    There is a great review of empirical studies of programming here: http://danluu.com/empirical-pl/

                    1. 2

                      Appreciate the list. Many of the studies have problems but there’s good information in the overall collection.

                    2. 3

                      “I’m always interested in reading good studies if you have 2-3 at hand.”

                      I just reinstalled my system due to a crash a few days ago. I’m still getting it organized. Maybe I’ll have the links in a future discussion. Meanwhile, I found quite a few of them by Googling terms like defects, Ada, study, programming languages, comparison. Definitely include Ada, OCaml, Haskell, ATS, or Rust, given they use the real power of strong, static typing. C, Java, etc. do not. It’s mostly just a burden on developers that assists the compiler in those languages. It’s why study results rarely find a benefit: they’re not that beneficial. ;)

                      Thomas Leonard’s study of trying to rewrite his tool in many languages is actually illustrative of some of the benefits of static typing. He was a Python programmer who ended up on OCaml due to the type system’s benefits plus a performance gain. On the page below, under Type Checking, he gives many examples of where static typing caught problems he’d have had to think up a unit test for in Python:

                      http://roscidus.com/blog/blog/2014/02/13/ocaml-what-you-gain/

                      He doesn’t even have to think about these in the static language. He just defines his stuff in a typical way allowed by the language, and it knocks out all kinds of problems invisibly from there. That he was an amateur following basic tutorials without much due diligence, but still got the results, supports the benefit the static types had. The MLs are also close to dynamic languages in length, due to type inference and syntactic sugar.

                      “Here’s at least one study in the following link that puts Smalltalk ahead of Ada, C++, C, PL/I, Pascal, etc. on a significant sized project.”

                      It actually doesn’t, since it leaves off data on defect introduction, time for removal, and residual defects. It basically measures speed without quality. In those studies, including some in stijlist’s link, the LISPers have a larger productivity boost over C++ than Smalltalk does here. I ignore LISP over strong, static types for the same reason: few comparisons on defect introduction and removal during development or maintenance. The studies I saw on Ada showed it way ahead of other 3GLs on that. Dynamic languages usually weren’t in the defect comparisons of the time, though.

                      In yours, the Smalltalk program gets developed the fastest (expected), with unknown quality. The Ada95 program… whose compiler catches all kinds of algorithmic and interface errors by default… cost 75.17 person-months more than the Smalltalk one but was 24.1 faster than the C++ one. So, it looks good in this study compared to the flagship imperative and dynamic languages of the time, given it’s a straitjacket language. Smalltalk’s design advantages mean Ada can’t ever catch up in productivity. The logical next step is redoing the same thing with modern Smalltalk and OCaml (or something similar). As the link above shows, OCaml has strong typing with benefits close to Ada but conciseness and productivity more like Python. That would be a nice test against Smalltalk. Actually, Smalltalk’s better tooling means OCaml would be handicapped against it. So, maybe not as nice a test, but any comparable or winning result on OCaml’s side would mean more if handicapped, eh?

                      “Many of those kinds of problems are covered by tests that would be needed in either case. ”

                      You don’t need the test if the type system eliminates the possibility of the error ever happening. Java programmers aren’t testing memory safety constantly. Modula-3 programmers didn’t worry about array, stack, function type, or linker errors. Rust programs that compile with linear types don’t have dangling pointers. Eiffel and Rust are immune to data races. Yeah, go test those out faster than Eiffel programmers prevent them. ;) The annotation method in Design-by-Contract documents requirements in code, catches obvious errors quickly at compile time, supports static analyzers, and can be used to generate unit tests that provably test specs. As code piles up, the kinds of things you have to test go down, vs dynamic stuff where you have to hand-test just about everything. Good annotations, which cover every case in a range, mean large programs of 100,000+ lines simply can’t compile with the interface errors. Imagine… although you’ve probably done it… trying to code up tests representing every way a behemoth’s software might interact incorrectly, plus the runtime cost those tests impose after each change.

                      So, part of the argument is reducing maintenance by balancing QA requirements across static types, annotations, static analysis, and tests. Ensure each property holds by using whichever of them is easiest to do, maintain, and apply to maximum number of execution traces or states.
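
                      To make the “type system replaces the test” point concrete, here’s a minimal Rust sketch (the `find_user` and `greeting` names are my own invention for illustration): once absence is encoded as `Option`, the compiler forces every caller to handle the missing case, so the null-check unit test a dynamic language would need has no failure mode left to cover.

                      ```rust
                      // Sketch: encoding "may be absent" in the type removes the need for
                      // a null-check test; deleting the None arm below is a compile error,
                      // not a latent runtime bug.
                      fn find_user<'a>(users: &'a [(u32, &'a str)], id: u32) -> Option<&'a str> {
                          users.iter().find(|&&(uid, _)| uid == id).map(|&(_, name)| name)
                      }

                      fn greeting(users: &[(u32, &str)], id: u32) -> String {
                          match find_user(users, id) {
                              Some(name) => format!("hello, {}", name),
                              None => "unknown user".to_string(), // compiler demands this arm
                          }
                      }

                      fn main() {
                          let users = [(1, "ada"), (2, "grace")];
                          println!("{}", greeting(&users, 1));
                          println!("{}", greeting(&users, 9)); // missing id: handled, not crashed
                      }
                      ```

                      The design point is that the “user might not exist” property is ensured by the cheapest mechanism available here, the type, rather than by a test that has to be maintained forever.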

                      “I have doubts that a good type system will make good programmers out of bad ones, just as OOP and dynamic typed FP has not. Bad programs can be type checked.”

                      I’m sure the average Rust programmer will tell you a different story about how many memory errors they experience vs coding in C. They don’t test for them: it’s simply impossible to compile code that has them in safe mode. Same with SafeD and SafeHaskell. Common cases in Ada and the Wirth languages also have checks inserted automatically. If it’s common and should fail-safe, you shouldn’t have to think about it. Pointers, arrays, strings, function arguments, handles for OS resources, proper linking… these things come to mind. Strong, static typing lets you just use them correctly without thinking past the initial definitions. No need to even write tests for that.
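
                      As a rough sketch of the “handles for OS resources” point, in Rust terms (the `handle` module below is hypothetical, standing in for any resource API): hiding the representation behind a type with a private field means a raw integer can never be passed where a validated handle is expected, so that misuse is a type error rather than something to write tests for.

                      ```rust
                      // Sketch: a newtype with a private field stops callers from forging
                      // or substituting a raw integer where a validated handle is expected.
                      mod handle {
                          pub struct Handle(u32); // field private: only this module makes one

                          pub fn open(_path: &str) -> Handle {
                              Handle(42) // hypothetical: pretend we validated a real resource
                          }

                          pub fn read(h: &Handle) -> String {
                              format!("data from handle {}", h.0)
                          }
                      }

                      fn main() {
                          let h = handle::open("config.txt");
                          println!("{}", handle::read(&h));
                          // handle::read(&7); // rejected at compile time: u32 is not Handle
                      }
                      ```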

                      “That’s a second, later implementation of the same ROOM methodology. ObjecTime is decades older, implemented in Smalltalk-80.”

                      I know. My point didn’t change. You supported what Smalltalk can do by using a 3rd-party tool whose DSL (ROOM) uses static, strong types that drive the checking and code-generation process. The tool supports my argument. Imagine if it let you assign interfaces, ports, states, and so on to various variables without checking or labeling them. I imagine correctness would involve more thought by developers than the constrained process I saw, which prevented classes of problems by design through variable types and enforced structuring. Did ObjecTime not have labels representing types, restrictions on what variables could contain, and similar structure to force sane expression of FSM’s, etc.?

                      “The closest I can think of off the top of my head is ACL2”

                      I knew you’d go for that. I made the same mistake on Hacker News, where ACL2 users grilled me. They pointed out that, despite starting with LISP, ACL2 is a heavily constrained version of it because dynamic LISP couldn’t be verified so easily. Further, if you check the docs, you’ll see it has all kinds of static types that are used during development. Leave them off and it won’t do much for you at all. It’s hard to classify ACL2, but it’s more like a statically-typed LISP than a dynamic one.

                      “but that they can be at least as good of a choice in most cases. Very few software systems need this level of verification.”

                      I agree that they can be used and used well. I disagree on the last part, given the Github issues and CVE’s indicate the average app needs better built-in support for correctness. The weak, static languages and dynamic ones seem to catch fewer problems in small and large apps. This results in much time wasted figuring out why, or in residual defects because developers don’t want to invest the time to figure it out. So, even the average developer needs a good baseline in their language and tooling. Code away in the dynamic languages if you want, but strong, static ones catch more of the problems you’ll run into. If you’re disciplined and experienced, you might prevent the same ones with your strategies of testing and designing safer constructions. Evidence indicates most won’t go that far, though, due to less discipline or experience. Plus, different, average developers each rolling their own way of doing basic checks instead of using built-in ones often leads to nightmares in debugging or integration.

                      1. 2

                        OCaml is a nice language. The problem with that anecdote is it compares a really good statically typed language (OCaml) with a really bad dynamic language (Python).

                        [The study I cited] “Basically measures speed without quality.”

                        True, there are no clear citations of defect data. But you are reading into that differently than I am. My assumption is that if a language is cited as implementing a function point in N time units, then that implies the function point is correct as implemented in that time. What would be the point of stating a function point was implemented in N time units but full of bugs? I would give the study the benefit of the doubt but admit the gap. These kinds of studies are always full of questions, which is why I’m interested in any you have available.

                        “ACL2 is a heavily constrained version of it because dynamic LISP couldn’t be verified so easily.”

                        Well, right, because one approach is to add some static tools above a dynamic language. If the argument was Lisp or Smalltalk could statically check exactly as OCaml does then you’d be right. But if the argument is there are large categories of applications where simple dynamic languages can be augmented with certain practices and tools, including some optional static checking to great effect, then you’d not be right.

                        “You supported what Smalltalk can do by using a 3rd party tool whose DSL (ROOM) uses static, strong types that drive the checking and code generation process.”

                        You are continuing to refer to a specific, second implementation of ROOM that has nothing to do with the original implementation. The original implementation does not use that DSL. The implementation of ObjecTime is almost entirely in Smalltalk. The executable models consist of structures built by the tool (graphically), augmented by code written in “any supported” programming language… almost always C, but also C++ or Smalltalk.

                        “I disagree on the last part given the Github issues and CVE’s indicate the average app needs better built-in support for correct applications.”

                        I for one certainly do not care a zip about the average GitHub project. And I would remain skeptical that OCaml or Haskell or Idris or whatever may come along in the next 10 years would do much of anything for the average GitHub project. A type checked mess is still a mess.

                        “Logical next step is redoing the same thing with modern Smalltalk and Ocaml (or something similar).”

                        I agree. As I wrote above, I like OCaml, and would certainly not be unhappy if it turned out to be the best choice for some project I participated in. If you read everything I’ve written under this headline article, I’ve never claimed the superiority of a dynamic language over a statically typed language. I’ve suggested the evidence does not clearly favor one over the other. Good simple languages appeal to me. I’ve used Smalltalk and Lisp a lot and know they fit the bill for the kinds of systems I’ve worked on for 35 years. I’ve done enough OCaml and other MLs to suspect they would largely meet my needs. Haskell’s another story, personally. I used Haskell-98-era a good bit, but I’m just not interested in the last 10+ years of what’s been going on there.

                        I appreciate what you are saying about Ada, OCaml, etc. and the benefits of static typing. I am disagreeing with how you are positioning dynamic languages as requiring more effort to test, etc. due to the lack of type checking. I’ve just not experienced that, but I understand a lot of people have not had the same experiences I’ve had with good languages, tools, and practices. But neither am I convinced those same programmers will end up with significantly better programming effectiveness in a good static language. You think they will, I think they won’t. I’m not sure we’re going to resolve those differences, or need to.

                        1. 2

                          I appreciate you clarifying your points. I think it’s time to tie this one up. Your comments, including the last one, have helped me plenty in understanding how we might better assess dynamic vs static typing in future studies. I’ll also lean more on Smalltalk and its professional practices in future research for comparisons. The proper comparison will be best practices of it vs something like Ocaml, plus consideration of what work static analysis, formal verification, etc. required and with what results.

                          Enjoy the rest of your day! :)

          3. 3

            Dynamic languages do not have a compile stage.

            This is a major pet peeve of mine; it really bugs me that anyone still believes there is any relation between a language’s type system and whether a language is compiled or interpreted. Can’t we be done with this tired misinformation?

            1. 2

              Fine. It’s a terminology thing. I misspoke. Clearly statically checked languages have a stage where checks are made prior to running the program. That’s what I meant. I started programming in Lisp in the early 1980s and Smalltalk in 1986. I’m fully aware dynamic vs static has nothing to do with compiler or interpreter.

            2. 2

              Some dynamic languages have very good tools that programmers in some static languages would envy.

              Could you provide some examples?

              1. 2

                One example, re: semi-formal tools + simple languages vs more complex languages and types… Smalltalk has the best “lint” tool I’ve seen. Combined with other tools and practices, this goes a long, inexpensive way toward keeping code clean and well designed (which then makes testing, refactoring, etc. much easier). The equivalent in a statically typed language would be a kind of “type advisor”, which I think would carry those languages a good way into the mainstream.

                1. 1

                  Thanks. I’m especially curious about refactoring. Are you aware of any video showing how to rename a function and add a parameter to a function signature in Smalltalk?

                  1. 1

                    “Two Decades of the Refactoring Browser” https://youtu.be/39FAoxtmW_s

                2. 1

                  Dialyzer for Erlang also comes to mind. It catches a lot of bugs that many simplistic static type systems (like Java’s or Google Go’s) have no chance at.

              2. 0

                Here’s a challenging proposition for some people:

                Write an article about static typing without mentioning Haskell or functional programming.

                1. 10

                  Barnes wrote a whole book on it, without mentioning those, about a technology that systematically eliminates real-world errors with typing and restrictions. It also improved the maintenance phase, especially integrations, given that typing caught many problems.

                  http://www.adacore.com/knowledge/technical-papers/safe-and-secure-software-an-invitation-to-ada-2012/

                  Writings on Modula and Oberon show simpler languages whose type systems caught errors C wouldn’t, while compiling faster. All kinds of compiler optimizations in the literature can use annotations, such as types, to improve performance on top of that. Models like Concurrent Pascal and SCOOP in Eiffel were also race-free if the program compiled.

                  So, it’s well-established that strong, static typing has all kinds of benefits without the FP or Haskell references. Those languages just get more benefit out of the work on typing and provers than imperative programs do, among other things.
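
                  The “race-free if it compiles” idea above can be sketched in Rust rather than Concurrent Pascal or SCOOP (a deliberate substitution on my part; the mechanisms differ, but the guarantee is the same flavor): shared mutable state must go through a synchronized wrapper, or the program is rejected at compile time, so there is no lost-update bug to test for.

                  ```rust
                  // Sketch: shared mutable state across threads must be wrapped in a
                  // synchronization type; a bare `&mut u32` shared between threads
                  // would be rejected at compile time, not found later in testing.
                  use std::sync::{Arc, Mutex};
                  use std::thread;

                  fn main() {
                      let counter = Arc::new(Mutex::new(0u32));
                      let mut handles = Vec::new();

                      for _ in 0..4 {
                          let counter = Arc::clone(&counter);
                          handles.push(thread::spawn(move || {
                              // The lock is the only compile-approved path to mutation.
                              *counter.lock().unwrap() += 1;
                          }));
                      }
                      for h in handles {
                          h.join().unwrap();
                      }
                      println!("{}", *counter.lock().unwrap()); // always 4: no lost updates
                  }
                  ```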

                  1. 3

                    Why?

                  2. [Comment removed by author]

                    1. 4

                      Instead of loosely speculating on how the article could be bad, why not address the parts that you think make it weak directly? Posturing and dismissing things like this without reason is just a waste of time.

                      1. 1

                        “A large class of errors are caught, earlier in the development process…”

                        Caught in an earlier stage, perhaps not earlier in time. Dynamic languages do not have a compile stage. That large class of errors can be caught in different ways with a disciplined, interactive dynamic-language methodology.

                        “closer to the location where they are introduced”

                        Again, interactive programming provides a means of working “closer to the problem”. Which is closer? There’s no information about how to tell.

                        “The types guide development and can even write code for you—the programmer ends up with less to specify”

                        Which is true of declarative programming of various flavors, even the dynamic kind. Again there’s no comparison with other dynamic means of declarative programming, so again we do not know which leads to less code or less churn.

                        “More advanced dependently typed languages routinely derive large amounts of boring code with guidance from the programmer”

                        Again this is not in comparison to anything. What is the cost in time or effort, and what kinds of problems are most suitable for current dependent type systems? Perhaps that boring code is not needed, and a little exploration in a rapid prototyping system with a customer can rule out the need for it. Perhaps the code in question has already been developed and validated, and there’s no value in rewriting it in a dependently typed system.

                        I’ve no experience with dependent types, but my sense is there’s a cost curve for many situations where Hoare logic, semi-formally applied, could be more cost-effective in practice. But either way there’s no evidence presented.

                        “One can refactor with greater confidence…”

                        Greater confidence than what? The Smalltalk Refactoring Browser was the first, and the most fully featured for years. Simple, expressive languages with good tools can be more effective than languages with more rules. So this argument without evidence sounds theoretically true, but there’s sufficient experience in the field over decades to suggest it’s not necessarily true in practice.

                        “Static types can ease the mental burden of writing programs, by automatically tracking information the programmer would otherwise have to track mentally in some fashion”

                        This is a false dichotomy. Some dynamic languages have very good tools that programmers in some static languages would envy.

                        “Types serve as documentation for yourself and other programmers and provide a ‘gradient’ that tells you what terms make sense to write”

                        These tend not to be sufficient for understanding dynamic systems. And again, good tools are able to discover most of this in a dynamic language system. And good practice, including documentation, can be cost-effective. Once again there’s no way to tell from this article; it’s just a proclamation of what the author believes or feels based on whatever experiences they’ve had but failed to explain.

                        “Dynamic languages executed naively…”

                        Anything done naively will have faults. But there are many situations where a naïve approach can be the most effective overall. And there are many situations where a dynamically typed system can perform close to, or better than, C. There’s nothing to make of this point in the article.

                        “You often need some sort of JIT to get really good performance…”

                        Some dynamic languages support ahead-of-time compilation plus an interpreter for the truly dynamic runtime needs. Also, whole-program deployment tools can eliminate many of the costs of an otherwise dynamic runtime.

                        “learning how best to carve a program up into types is a skill that takes years to master”

                        The same is true of dynamic language systems. Yet we take the bad examples as the norm for them. Programming is hard no matter what. It takes a lot of experience and discernment.