1. 7
narkisr.com

2. 14

If documents and models are unstructured, then consider using an unstructured model. An untyped language is actually typed with a single all-encompassing type. This can be simulated in any sufficiently powerful type system.

Consider, for example, Gson: it allows conversion into POJOs, thereby making use of existing data structures. But it is also possible to use Gson’s built-in all-encompassing type, and explore and navigate the unstructured JSON directly.
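Python makes the same trade-off easy to see: its json module parses any document into one all-encompassing structure of dicts, lists, strings, and numbers, which you can navigate directly. A minimal sketch (the data here is made up):

```python
import json

# Parse into the "all-encompassing type": nested dicts, lists, strings, numbers.
doc = json.loads('{"user": {"name": "Ada", "tags": ["admin", "ops"]}}')

# Navigate the unstructured tree directly, without defining any classes first.
name = doc["user"]["name"]          # "Ada"
first_tag = doc["user"]["tags"][0]  # "admin"
```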

I don’t see any problem.

1. 2

You can also add a variant class to make typed code somewhat typeless; e.g., at my work we have a C++ wrapper around jsonc that puts everything into a tree, with a variant class for the values that represents a bool, int, float, or string. You can create a variant that represents as many types as you want to support, with run-time type safety. You can also assert or debug if the wrong type coercion is attempted.
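A rough Python sketch of that pattern; the real thing is a C++ variant over a jsonc tree, so the class and methods below are made up purely to illustrate the run-time checks:

```python
class Variant:
    """Holds one of bool, int, float, or str, with checked coercions."""
    ALLOWED = (bool, int, float, str)

    def __init__(self, value):
        if not isinstance(value, self.ALLOWED):
            raise TypeError(f"unsupported type: {type(value).__name__}")
        self._value = value

    def as_type(self, expected):
        # Run-time type safety: fail loudly if the wrong coercion is attempted.
        if not isinstance(self._value, expected):
            raise TypeError(
                f"stored {type(self._value).__name__}, asked for {expected.__name__}")
        return self._value

v = Variant(42)
v.as_type(int)  # fine; v.as_type(str) would raise TypeError
```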

1. 1

To me, it seems like the type system equivalent of the safe vs. unsafe debate in languages. The best practice on the safe side is to make things safe-by-default, with an unsafe keyword that gives the developer freedom very selectively (e.g. per module). It becomes a big warning sign that the component needs strong code/data verification and validation. In this case, dynamic typing could be the “unsafe” part of an otherwise statically-typed language that the program is implemented in. Only the things that actually change are marked with this unconstrained type, with anything touching those variables validating the data they contain. Maybe some extra logging, too.

2. 4

Usually the Haskell people will define the types, and then have the compiler generate the serialization code. See the Serialization part of my article:

http://www.michaelburge.us/2017/08/17/rolling-your-own-blockchain.html

Why use ADTs rather than e.g. S-expressions or JSON? Because it lets you fully resolve ambiguity earlier rather than later. If the data you’re parsing is from a web service, that may not be important - they might be generating it from an XSD with rigid typing. Machines often don’t emit ambiguous data.

But if the data you’re parsing is e.g. source code that a user wrote, it’s better to go from parse tree to ADT early on. Anything near humans needs an ADT.

1. 1

The article makes a really important observation that’s often overlooked: code should express its intent clearly, because the key to working with code effectively lies in your ability to understand its purpose. Static typing can have a big impact on the way we structure code, and in many cases it can actually obscure the bigger picture, as the article illustrates.

1. 20

I don’t see how the article illustrates that. The article’s argument is that converting between the serialization format and your business objects requires writing and maintaining extra code, and that costs something.

In my experience, using one’s serialization layer directly as the business object has other costs, which I’ve found more expensive:

1. The expected fields and their contents tend not to be well documented. You can just convert the JSON and use it as a dict or whatever, but it’s hard for someone who didn’t write the code to know what should be there. With static types, even if one doesn’t document the values, they are there for me to see.
2. The semantics of my serialization layer may not match the semantics of my language and, more importantly, what I want to express in my business logic. For example, JSON’s lack of integers.
3. The serialization format changes across versions of the software, but if there is a conversion to the business object that still makes sense, I can do it at the border and not affect the rest of my code.
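Point 3 is the one I lean on most. A border conversion in Python might look like this sketch (the Order shape, field names, and version change are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    total_cents: int  # JSON has no integer type; normalize at the border

def parse_order(raw: dict) -> Order:
    # Absorb a hypothetical version change here: v2 renamed "id" to "order_id".
    order_id = raw["order_id"] if "order_id" in raw else raw["id"]
    # JSON numbers may arrive as floats; fix the semantics once, at the edge.
    return Order(order_id=int(order_id), total_cents=int(raw["total_cents"]))

order = parse_order({"id": 7, "total_cents": 1999.0})
```

Everything past a function like this can rely on the business object and never look at the raw JSON again.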

The call-out to clojure.spec I found a bit odd as well; isn’t that just what any reasonable serialization framework is doing for you?

As a static type enthusiast, I do not feel that the conversions at the edges of my program distract from some bigger picture. I do feel that past the boundary of my program, I do not want to care what the user gave me; I just want to know that it is correct. If one’s language supports working directly with some converted JSON, fine.

On another note, I don’t understand the Hickey quote in this article. What is true of all cars that have driven off a road with a rumble strip? They went over the rumble strip. But cars drive off of roads without rumble strips too, so what’s the strength of that observation? Does the lack of a rumble strip somehow make you a more alert driver? Do rumble strips have zero effect on accidents? I don’t know, but in terms of static types, nobody knows, because the studies are really hard to do. This sort of rhetoric is just distracting and worthless.

In my experience, the talk of static types and bugs is really not the strength. I find refactoring and maintenance to be the strength of types. I can enter a piece of code I do not recognize, which is poorly documented, and start asking questions about the type of something and get trustworthy answers. I also find that looking at the type of a function tends to tell me a lot about it. Not always, but often enough for me to find it valuable. That isn’t to say one shouldn’t document code, but I find dynamically typed code tends to be as poorly documented as any other code, so I value the types.

If one doesn’t share that experience and/or those values, then you’ll disagree, and that’s fine. I’m not saying static types are objectively superior, just that I tend to find them superior. I have not found a program I wanted to write that I preferred to express in a dynamic language. I say this as someone who spends a fair amount of time in both paradigms. The lack of studies showing strengths in one direction or another doesn’t mean dynamic types are superior to static or vice versa; it just means that one can’t say one way or the other, and most of these blog posts are just echoes of one’s experience and values. I believe this blog post falls into the trap of trying to find some explanatory value in the author’s experience when, in reality, it’s just the author’s experience. I wish these blog posts started with a big “In my experience, the following has been true….”

Disclaimer: while I think Java has some strengths in terms of typing, I much prefer spending my time in OCaml, whose type system is generally what I mean when I say “static types.” But I think in this comment, something like Java mostly applies as well.

1. 3

The call-out to clojure.spec I found a bit odd as well; isn’t that just what any reasonable serialization framework is doing for you?

My experience is that Clojure spec addresses the three points. However, Spec can live on the side as opposed to being mixed with the implementation, and it allows you to only specify/translate the types for the specific fields that you care about. I think this talk does a good job outlining the differences between the approaches.

On another note, I don’t understand the Hickey quote in this article.

What you ultimately care about is semantic correctness, but type systems can actually have a negative impact here. For example, here’s insertion sort implemented in Idris, it’s 260 lines of code. Personally, I have a much easier time telling that the following Python version is correct:

def insertionsort(aList):
    for i in range(1, len(aList)):
        tmp = aList[i]
        k = i
        while k > 0 and tmp < aList[k - 1]:
            aList[k] = aList[k - 1]
            k -= 1
        aList[k] = tmp


I can enter a piece of code I do not recognize, which is poorly documented, and start asking questions about the type of something and get trustworthy answers.

I’ve worked with static languages for about a decade. I never found that the type system was a really big help in this regard. What I want to know first and foremost when looking at code I don’t recognize is the intent of the code. Anything that detracts from being able to tell that is a net negative. The above example contrasting Idris and Python is a perfect example of what I’m talking about.

Likewise, I don’t think that either approach is superior to the other. Both appear to work effectively in practice, and seem to appeal to different mindsets. I think that alone makes both type disciplines valuable.

It’s also entirely possible that the language doesn’t actually play a major role in software quality. Perhaps, process, developer skill, testing practices, and so on are the dominant factors. So, the right language inevitably becomes the one that the team enjoys working with.

1. 8

That’s not a simple sort in Idris; it’s a formal, machine-checked proof that the implemented function always sorts. Formal verification is a separate field from static typing and we shouldn’t conflate them.

For the record, the python code fails if you pass it a list of incomparables, while the Idris code will catch that at compile time.

I’m a fan of both dynamic typing and formal methods, but I don’t want to see misconceptions of the latter used to argue for the former.

1. 1

machine checked proof that the implemented function always sorts.

And if comparing spec vs. test sizes apples to apples, that means we need a test for every possible combination of every value that function can take to be sure it will work for all of them. On 64-bit systems, that’s 2^64 (18,446,744,073,709,551,616) possible values per variable, with a multiplying effect when they’re combined with potential program orderings or multiple variable inputs. It could take a fair amount of space to encode all that as tests, with execution probably requiring quantum computers, quark-based FPGAs, or something along those lines if the tests must finish running in reasonable time. There’s not a supercomputer on Earth that could achieve with testing the assurance some verified compilers or runtimes got with formal verification of formal, static specifications.

Apples to apples, the formal specs with either runtime checks or verification are a lot smaller, faster, and cheaper for total correctness than tests.

2. 4

For example, here’s insertion sort implemented in Idris, it’s 260 lines of code. (…) Personally, I have a much easier time telling that the following Python version is correct: (snippet)

An actually honest comparison would include a formal proof of correctness of the Python snippet. Merely annotating your Python snippet with pre- and postcondition annotations (which, in the general case, is still a long way from actually producing a proof) would double its size. And this is ignoring the fact that many parts in your snippet have (partially) user-definable semantics, like the indexing operator and the len function. Properly accounting for these things can only make a proof of correctness longer.

That being said, that a proof of correctness of insertion sort takes 260 lines of code doesn’t speak very well of the language the proof is written in.

What I want to know first and foremost when looking at code I don’t recognize is the intent of the code.

Wait, the “intent”, rather than what the code actually does?

1. 1

An actually honest comparison would include a formal proof of correctness of the Python snippet.

That’s not a business requirement. The requirement is having a function that does what was intended. The bigger point you appear to have missed is that a complex formal specification is itself a program! What method do you use to verify that the specification is describing the intent accurately?

Wait, the “intent”, rather than what the code actually does?

Correct, these are two entirely different things. Verifying that the code does what was intended is often the difficult task when writing software. I don’t find static typing to provide a lot of assistance in that regard. In fact, I’d say that the Idris insertion sort implementation is actually working against this goal.

1. 5

The bigger point you appear to have missed is that a complex formal specification is itself a program!

This is not true. The specification is a precondition-postcondition pair. The specification might not even be satisfiable!

What method do you use to verify that the specification is describing the intent accurately?

Asking questions. Normally users have trouble thinking abstractly, so when I identify a potential gap in a specification, I formulate a concrete test case where the specification might not match the user’s intention, and I ask them what the program’s intended behavior in this test case is.

I don’t find static typing to provide a lot of assistance in that regard.

I agree, but with reservations. Types don’t write that many of my proofs for me, but, under certain reasonable assumptions, proving things rigorously about typed programs is easier than proving the same things equally rigorously about untyped programs.

1. 1

This is not true. The specification is a precondition-postcondition pair. The specification might not even be satisfiable!

A static type specification is a program, plain and simple. In fact, lots of advanced type systems, such as the one found in Scala, are actually Turing complete. The more things you try to encode formally, the more complex this program becomes.

Asking questions. Normally users have trouble thinking abstractly, so when I identify a potential gap in a specification, I formulate a concrete test case where the specification might not match the user’s intention, and I ask them what the program’s intended behavior in this test case is.

So, how is this different from what people do when they’re writing specification tests?

1. 3

A static type specification is a program plain and simple.

Not any more than a (JavaScript-free) HTML document is a program for your browser to run.

In fact, lots of advanced type systems. such as one found in Scala, are actually Turing complete.

I stick to Standard ML, whose type system is deliberately limited. Anything that can’t be verified by type-checking (that is, a lot), I prove by myself. Both the code and the proof of correctness end up simpler this way.

So, how is this different from what people do when they’re writing specification tests?

I am not testing any code. I am testing whether my specification captures what the user wants. Then, as a completely separate step, I write a program that provably meets the specification.

1. 1

I stick to Standard ML, whose type system is deliberately limited. Anything that can’t be verified by type-checking (that is, a lot), I prove by myself. Both the code and the proof of correctness end up simpler this way.

At that point it’s really just degrees of comfort in how much stuff you want to prove statically at compile time. I find that runtime contracts like Clojure Spec are a perfectly fine alternative.

I am testing whether my specification captures what the user wants.

I’ve never seen that done effectively using static types myself, but perhaps you’re dealing with a very different domain from the ones I’ve worked in.

1. 2

At that point it’s really just degrees of comfort in how much stuff you want to prove statically at compile time.

This is not a matter of “degree” or “comfort” or “taste”. Everything has to be proven statically, in the sense of “before the program runs”. However, not everything has to be proven or validated by a type checker. Sometimes directly using your brain is simpler and more effective.

I am testing whether my specification captures what the user wants.

I’ve never seen that done effectively using static types myself

Me neither. I just use the ability to see possibilities outside the “happy path”.

1. 1

However, not everything has to be proven or validated by a type checker.

I see that as a degree of comfort. You’re picking and choosing what aspects of the program you’re going to prove formally. The range is from having a total proof to having no proof at all.

Sometimes directly using your brain is simpler and more effective.

Right, and the part we disagree on is how much assistance we want from the language and in what form.

1. 2

You’re picking and choosing what aspects of the program you’re going to prove formally.

I’m not “picking” anything. I always prove rigorously that my programs meet their functional specifications. However, my proofs are meant for human rather than mechanical consumption, hence:

• Proofs cannot be too long.
• Proofs cannot demand simultaneous attention to more detail than I can handle.
• Abstractions are evaluated according to the extent to which they shorten proofs and compartmentalize details.

Right, and the part we disagree on is how much assistance we want from the language and in what form.

The best kind of “assistance” a general-purpose language can provide is having a clean semantics and getting out of the way when it can’t help. If you ever actually try to prove a program[0] correct, you will notice that:

• Proving that a control flow point is unreachable may require looking arbitrarily far back into the history of the computation. Hence, you want as few unreachable control flow points as possible, preferably none.

• Proving that a procedure call computes a result of interest requires making assumptions about the procedure’s precondition-postcondition pair. For second-class (statically dispatched) procedure calls, these assumptions can be discharged immediately—you know what procedure will be called. For first-class (dynamically dispatched) procedure calls, these assumptions may only be discharged in a very remote[1] part of your program. Hence, first-class procedures ought to be used sparingly.

[0] Actual programs, not just high-level algorithm descriptions that your programs allegedly implement.

[1] One way to alleviate the burden of communicating precondition-postcondition requirements between far away parts in a program is to systematically use so-called type class laws, but this is not a widely adopted solution.

1. 1

Right, and the other approach to this problem is to use runtime contracts such as Clojure Spec. My experience is that this approach makes it much easier to express meaningful specifications. I’m also able to use it where it makes the most sense, which tends to be at the API level. I find there are benefits and trade-offs in both approaches in practice.

1. 2

Right, and the other approach to this problem is to use runtime contracts such as Clojure Spec.

This is not a proof of correctness, so no.

I’ve already identified two things that don’t help: runtime checks and overusing first-class procedures. Runtime-checked contracts have the dubious honor of using the latter to implement the former in order to achieve nothing at all besides making my program marginally slower.

My experience is that this approach makes it much easier to express meaningful specifications.

I can already express meaningful specifications in many-sorted first-order logic. The entirety of mathematics is available to me—why would I want to confine myself to what can be said in a programming language?

I’m also able to use it where it makes the most sense, which tends to be at the API level.

It makes the most sense before you even begin to write your program.

1. 2

This is not a proof of correctness, so no.

I think this is the key disconnect we have here. My goal is to produce working software for people to use, and writing a proof of correctness is a tool for achieving that. There are other viable tools that each have their pros and cons. My experience tells me that writing proofs of correctness is not the most effective way to achieve the goal of delivering working software on time. Your experience clearly differs from mine, and that’s perfectly fine.

I can already express meaningful specifications in many-sorted first-order logic. The entirety of mathematics is available to me—why would I want to confine myself to what can be said in a programming language?

You clearly would not. However, there are plenty of reasons why other people prefer this. A few reasons off the top of my head:

• It’s much easier for most developers to read runtime contracts, which makes it easier to onboard and train people.
• The contracts tend to be much simpler and more expressive, which makes them easier to read and understand.
• They allow you to trivially express things that are hard to express at compile time.
• Contracts can be used selectively in places where they make the most sense.
• Contracts can be open while types are closed.

It makes the most sense before you even begin to write your program.

Again, we have very divergent experiences here. I find that in most situations I don’t know the shape of the data up front, and I don’t know what the solution is ultimately going to be. So, I interactively solve problems using a REPL-integrated editor. I might start with a particular approach, scrap it, try something else, and so on. Once I settle on the way I want to do things, I’ll add a spec for the API.

Just to be clear, I’m not arguing that my approach is somehow better, or trying to convince you to use it. I’m simply explaining that having tried both, I find it works much better for me. At the same time, I’ve seen exactly zero empirical evidence to suggest that your approach is more effective in practice. Given that, I don’t think we’re going to gain anything more from this conversation. We’re both strongly convinced by our experience to use different tools and workflows. It’s highly unlikely that we’ll be changing each other’s minds here.

Cheers

1. 3

The contracts tend to be much simpler and more expressive. (…) Contracts can be open while types are closed.

I said “first-order logic”, not “types”. Logic allows you to express things that are impossible in a programming language, like “what is the distribution of outputs of this program when fed a sample of a stochastic process?”—which your glorified test case generator cannot generate.

Just to be clear, I’m not arguing that my approach is somehow better, or trying to convince you to use it. I’m simply explaining that having tried both, I find it works much better for me.

I’m not trying to convince you of anything either, but I honestly don’t think you have tried using mathematical logic. You might have tried, say, Haskell or Scala, and decided it’s not your thing, and that’s totally fine. But don’t conflate logic (which is external to any programming language) with type systems (which are often the central component of a programming language’s design, and certainly the most difficult one to change). It is either ignorant or dishonest.

2. 1

You can do more with a formal spec than with a test. Specs can be used to generate equivalent tests, as in EiffelStudio. They can be used with formal methods tools, automated or interactive, to prove they hold for all values. They can also be used to aid optimization by the compiler, as in the examples ranging from Common LISP to Strongtalk I gave you in another comment. There’s even been some work on natural language systems for formal specs, which might be used to generate English descriptions one day.

So, one gets more ROI out of specs than tests alone.

1. 2

That’s why I find Clojure Spec to be a very useful tool. My experience is that runtime contracts are a better tool for creating specifications. Contracts focus on the usage semantics, while types, I find, have only an indirect relationship with them.

At the same time, contracts are opt in, and can be created where they make the most sense. I find this happens to be at the boundaries between components. I typically want to focus on making sure that the API works as intended.
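A minimal Python sketch of that kind of opt-in boundary contract; Clojure Spec is far richer, and the decorator and predicates here are made up just to show the shape:

```python
import functools

def contract(pre, post):
    """Opt-in runtime contract: check inputs and outputs at an API boundary."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            assert pre(*args), f"precondition failed for {fn.__name__}"
            result = fn(*args)
            assert post(result), f"postcondition failed for {fn.__name__}"
            return result
        return wrapper
    return deco

# Contract applied only at the API boundary; internal helpers stay unchecked.
@contract(pre=lambda xs: all(isinstance(x, int) for x in xs),
          post=lambda r: all(r[i] <= r[i + 1] for i in range(len(r) - 1)))
def sort_ints(xs):
    return sorted(xs)
```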

1. 2

Forgot to say thanks for the Clojure spec link. That was a nice write-up. It’s good they added that to Clojure.

2. 1

“A static type specification is a program plain and simple.”

I don’t think so, but I could be wrong. My reading on formal specs showed they are usually precise descriptions of what is to be done. The program is almost always a description of how to do something in a series of steps. Those are different things. Most formal specs aren’t executable on their own either, since they’re too abstract.

There are formal specifications of how something is done in a concrete way that captures all its behaviors, as with seL4. You could call those programs. The other stuff sounds different, though, since it’s about the what, or too abstract to produce the result the programmer wants. So, I default to “not a program,” with some exceptions.

1. 2

It’s a metaprogram that’s executed by the compiler, with your program as the input. Obviously, you can have very trivial specifications that don’t really qualify as programs. However, you can also have very complex specifications. There’s even a paper on implementing a type debugger for Scala. It’s hard to argue that specifications that need a debugger aren’t programs.

1. 1

I think this boils down to the definition of “program.” We may have different ones. Mine is an executable description of one or more steps that turns concrete inputs into concrete outputs, optionally with state. A metaprogram can do that: it does it on program text/symbols. A formal specification usually can’t, due to it being too abstract, non-executable, or having no concept of I/O. Specs are typically input to some program or embedded in one. I previously said some could be programs, especially in tooling like Isabelle or Prolog.

So, what is your definition of a program so I can test whether formal specs are programs or metaprograms against that definition? Also, out of curiosity too.

1. 2

My definition is that a program is a computational process that accepts some input, and produces some output. In case of the type system, it accepts the source code as its input, and decides whether it matches the specified constraints.

1. 1

Well, I could see that. It looks like an abstract equivalent of mine. I’ll hold off debating that until I’m more certain of what definition I want to go with.

3. 4

That’s not a business requirement. The requirement is having a function that does what was intended

You’re comparing two separate things, though! The Idris is a proven-correct function, while the Python is just a regular function. If all the business wants is a “probably correct” function, the Idris code would be just a few lines, too.

1. 1

The point here is that formalism does not appear to help ensure the code is doing what’s intended. You have to be confident that you’re proving the right thing. The more complex your proof is, the harder it becomes to definitively say that it is correct. Using less formalism in Python or Idris results in code where it’s easier for the human reader to tell the intent.

1. 4

You can say your proof is correct because you have a machine check it for you. Empirically, we see that formally verified systems are less buggy than unverified systems.

1. 2

A machine can’t check that you’re proving what was intended. A human has to understand the proof and determine that it matches their intent.

1. 3

A human has to check the intent (validate) either way. A machine can check that the implementation is correct (verify).

As in the Idris example, I have to validate that the function is supposed to sort. Once I know that’s the intention, I can be assured that it does, in fact, sort, because I have a proof.

1. 1

What I’m saying is that you have to understand the proof, and that can be hard to do with complex proofs. Meanwhile, other methods, such as runtime contracts or even tests, are often easier to understand.

1. 2

That’s not how formal verification works, though. Let’s say my intention is “sort a list.” I write a function that I think does this. Then I write a formal specification, like “\A j, k \in 1..Len(sorted): j < k => sorted[j] <= sorted[k]”. Finally, I write the proof.

I need to validate that said specification is what I want. But the machine can verify the function matches the specification, because it can examine the proof.

1. 1

The fact that we’re talking past each other is a good illustration of the problem I’m trying to convey here. I’m talking about the human reader having to understand specifications like \A j, k \in 1..Len(sorted): j < k => sorted[j] <= sorted[k], only much longer. It’s easy to misread a symbol, and misinterpret what is being specified.

1. 3

You did say “have to understand the proof” (not specification) before. I strongly agree with the latter - the language we use for writing specs can easily get so complex that the specs are more error-prone than their subjects.

I once wrote a SymEx tool for PLCs and found that specs in my initial choice of property language (LTL) were much harder to get right than the PLC code itself. I then looked at the properties that would likely need to be expressed, and cut the spec language down to a few higher-level primitives. This actually helped a lot.

Even if restricting the property language isn’t an option, having a standard library (or package ecosystem) of properties would probably get us rather close - so instead of \A j, k \in ... we could write Sorted(s) and trust the stdlib / package definition of Sorted to do its name justice.
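In executable form, such a named property is just a reusable predicate. A Python sketch (Sorted here is my own illustrative definition, not anything standard):

```python
def Sorted(s):
    # \A j, k \in 1..Len(s): j < k => s[j] <= s[k], as a runtime check.
    # Checking adjacent pairs is equivalent for a total order.
    return all(s[j] <= s[j + 1] for j in range(len(s) - 1))

assert Sorted([1, 2, 2, 5])
assert not Sorted([2, 1])
```

The reader then only has to trust that Sorted does its name justice, rather than re-parse the quantified formula at every use site.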

2. 5

For example, here’s insertion sort implemented in Idris, it’s 260 lines of code

When this code sample has been brought up before (perhaps by you), it’s been pointed out that this is not expected to be a production implementation, but more of an example of playing with the type system. There is plenty of Python golf code out there too that one could use to make a point. But if we are going to compare things, the actual implementation of sort in Python is what, several hundred lines of C code? So your Python insertion sort might be short and sweet, but it’s no more the production code people use than the Idris one is. But if the Idris one were the production implementation, I would rather spend time understanding it than the Python sort function.

It’s also entirely possible that the language doesn’t actually play a major role in software quality. Perhaps, process, developer skill, testing practices, and so on are the dominant factors.

That is likely true, IMO. I think it’s interesting that one could replace “type system” in the Hickey quote with “testing” or “code review” and the statement would still be true, but people seem to zero in on types. No one serious says that we shouldn’t have testing because we still have bugs in software.

I never found that the type system was a really big help in this regard. What I want to know first and foremost when looking at code I don’t recognize is the intent of the code.

My experience has definitely not been this. Right now I’m doing maintenance on some code and it has a call in it: state.monitors.contains(monitor), and I don’t have a good way of figuring out what state or monitors is without grepping around in the code. In OCaml I’d just hit C-t and it’d tell me what it is. I find this to be a common pattern in my life, as I have tended to be part of the clean-up crew on projects lately. The intent of that code is pretty obvious, but that doesn’t help me much with the refactoring I’m doing. But experiences vary.

1. 2

When this code sample has been brought up before (perhaps by you) it’s been pointed out that this is not expected to be a production implementation and more of an example of playing with the type system.

The point still stands, though: the more properties you try to encode formally, the more baroque the code gets. It sounds like you’re agreeing that it’s often preferable to avoid such formalisms in production code.

But, if we are going to compare things, the actual implementation of sort in Python is what..several hundred lines of C code? So your Python insertion sort might be short and sweet, but no more the production code people use than the Idris one.

The sort implementation in Python handles many different kinds of sorts. If you took the approach of describing all those hundreds of lines of C with types in Idris, that would result in many thousands of lines of code. So, you still have the same problem there.

No-one serious says that we shouldn’t have testing because we still have bugs in software.

People argue regarding what kind of testing is necessary or useful all the time though. Ultimately, the goal is to have a semantic specification, and to be able to tell that your code conforms to it. Testing is one of the few known effective methods for doing that. This is why some form of testing is needed whether you use static typing or not. To put it another way, testing simply isn’t optional for serious projects. Meanwhile, many large projects are successfully developed in dynamic languages just fine.

My experience has definitely not been this. Right now I’m doing maintenance on some code and it has a call in it: state.monitors.contains(monitor), and I don’t have a good way of figuring out what state or monitors is without grepping around in the code.

In Clojure, I’d just hit cmd+enter from the editor to run the code in the REPL and see what a monitor looks like. My team has been working with Clojure for over 8 years now, and I often end up working with code I’m not familiar with.
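The same workflow can be sketched in Python, used here only as a stand-in for a dynamic language with a live REPL (the `Monitor` and `State` classes are hypothetical, invented to mirror the `state.monitors` example above):

```python
class Monitor:
    def __init__(self, name, resolution):
        self.name = name
        self.resolution = resolution

class State:
    def __init__(self):
        self.monitors = [Monitor("DP-1", (2560, 1440))]

# At the REPL, evaluate the value and inspect it directly
# instead of grepping for a declaration.
state = State()
m = state.monitors[0]
print(type(m).__name__)  # Monitor
print(vars(m))           # {'name': 'DP-1', 'resolution': (2560, 1440)}
```

The point being made is that in a REPL-driven workflow the answer to “what is a monitor?” comes from the running data itself rather than from a type annotation.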

1. 4

Sounds like you’re agreeing that it’s often preferable to avoid such formalisms in production code.

At the moment, yes. I am not a type theorist, but as far as I have seen, dependent types are not at the point where we know how to use them effectively in a production setting yet. But I do make pretty heavy use of types elsewhere in the codebases I work on, and try to encode what invariants I can in them when possible (which is pretty often).

If you took the approach of describing all of the hundreds of lines of C of that with types in Idris, that would result in many thousands lines of code. So, you still have the same problem there.

Maybe! I don’t actually know. The types in the Idris implementation might be sufficient to get very performant code out of it (although I doubt it at this point).

In Clojure, I’d just hit cmd+enter from the editor to run the code in the REPL …

I don’t know anything about Clojure, in the case I’m working on, running the code is challenging as the part I’m refactoring needs a bunch of dependencies and data and constructs different things based on runtime parameters. Even if I could run it on my machine I don’t know how much I’d trust it. The power of dynamic types at work.

1. 2

I don’t know anything about Clojure, in the case I’m working on, running the code is challenging as the part I’m refactoring needs a bunch of dependencies and data and constructs different things based on runtime parameters. Even if I could run it on my machine I don’t know how much I’d trust it. The power of dynamic types at work.

There is a fundamental difference in workflows here. With Clojure, I always work against a live running system. The REPL runs within the actual application runtime, and it’s not restricted to my local machine. I can connect a REPL to an application in production, and inspect anything I want there. In fact, I have done just that on many occasions.

This is indeed the power of dynamic types at work. Everything is live, inspectable, and reloadable. The reality is that your application will need to interact with an outside world you have no control over. You simply can’t predict at compile time everything that could happen at runtime. Services go down, APIs change, and so on. When you have a system that can be manipulated at runtime, you can easily adapt to changes without any downtime.
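A minimal sketch of this kind of live patching, in Python rather than Clojure (the `handlers` module and `handle` function are hypothetical; a real system would reload changed source files, e.g. via `importlib.reload`):

```python
import sys
import types

# A stand-in for a module whose code we want to patch while the
# process keeps running.
mod = types.ModuleType("handlers")
exec("def handle(req):\n    return 'old: ' + req", mod.__dict__)
sys.modules["handlers"] = mod

print(mod.handle("ping"))  # old: ping

# Hot-patch the definition in the live module; callers holding a
# reference to the module pick up the new behavior immediately.
exec("def handle(req):\n    return 'new: ' + req", mod.__dict__)
print(mod.handle("ping"))  # new: ping
```

No process restart occurs between the two calls, which is the property being claimed for REPL-connected production systems.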

1. 1

That sounds like a good argument for manipulating something at runtime, but not dynamic types. You can build statically-typed platforms that allow runtime inspection or modification. The changes will just be type-checked before being uploaded. The description of StrongTalk comes to mind.

1. 2

Static type systems are typically global, and this places a lot of restrictions on what can be modified at runtime. With a dynamic language you can change any aspect of the running application, while arbitrary eval is problematic for static type systems.
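As an illustration of the kind of modification a global static check would have to re-verify, here is a Python sketch (the `Service` class is hypothetical) where a method on a live class is rebound and existing instances immediately see the change:

```python
class Service:
    def status(self):
        return "broken"

svc = Service()
print(svc.status())  # broken

# Redefine the method on the live class; existing instances see the
# new behavior at once -- no restart, no whole-program recheck.
def fixed_status(self):
    return "ok"

Service.status = fixed_status
print(svc.status())  # ok
```

In a language with a global static type system, an edit like this would typically require re-checking (and often redeploying) everything that mentions `Service`, which is the restriction being described.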

2. 1

When you have a system that can be manipulated at runtime, you can easily adapt to the changes without having any downtimes.

There are architectural choices that address this point better in most situations, IME. That is, a standard setup of a load balancer in front of the application servers, and something like CARP on the load balancers. For street cred: I’ve worked as an Erlang developer.

1. 1

Sure, you can work around that by adding a lot of complexity to your infrastructure. That doesn’t change the fact that it is a limitation.

1. 1

In my experience, if uptime is really important, the architecture I’m referring to is required anyway to deal with all the forms of failure other than the code having a bug in it. So, again in my experience, while I agree it is a limitation, it is overall simpler. But this whole static vs. dynamic thing is about people being willing to accept some limitations for other, perceived, benefits.

1. 1

My experience is that it very much depends. I’ve worked on many different projects, and sometimes such infrastructure was the right solution, and in others it was not. For example, consider the case of the NASA Deep Space 1 mission.

1. 2

I’m not sure how Deep Space 1 suits the point you’re making. Remote Agent on DS1 was mostly formally verified (using SPIN, I believe) and the bug was in the piece of code that was not formally verified.

1. 1

The point is that it was possible to fix this bug at runtime in a system that could not be load balanced or restarted. In practice, you don’t control the environment, and you simply cannot account for everything that can go wrong at compile time. Maybe your chip gets hit by a cosmic ray, maybe a remote sensor gets damaged, maybe a service you rely on goes down. Being able to change code at runtime is extremely valuable in many situations.

1. 1

The things you listed can be accounted for at build time. Certainly NASA doesn’t send chips that are not radiation-hardened into space saying “we can just remote debug it”. Sensors getting damaged is expected, and expecting services one relies on to go down is table stakes for a distributed system. And while I find NASA examples really cool, I do not find them compelling. NASA does a lot of things that the vast majority of developers don’t and probably shouldn’t do. Remember, NASA also formally verifies some of their software components, but you aren’t advocating for that, which makes the NASA example confusing as to which lesson one is supposed to take from it. And those cosmic rays are just as likely to bring down one’s remote debugging facility as they are to break the system’s other components.

1. 1

I think you’re fixating too much on NASA here. The example is just an illustration of the power of having a reloadable system. There are plenty of situations where you’re not NASA and this is an extremely useful feature. If you can’t see the value in it, I really don’t know what else to say here.

1. 1

I’m responding to the example you gave; if you have other examples that are more compelling, then I would have expected you to post those.

1. 1

What’s compelling is in the eye of the beholder. It’s pretty clear that there’s nothing I can say that you will find convincing. Much like I’m not convinced by your position.

2. 1

Using ‘Map String a’ for JSON received from a server is not an entirely bad idea. Consider field permissions, for example: if a user requests FieldA on an object but they don’t have permission to view that field, then it will not be returned at all. Sure, languages which have lots of ways to describe that in types can find a way, but people using other languages will fall back on a map type which may simply lack the requested fields. It’s an interesting topic, for sure.
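A small sketch of the map-based fallback in Python terms (the payload and field names are hypothetical). A permission-stripped field is just an absent key, handled at the call site rather than modeled in a type:

```python
import json

# Response where "email" was stripped server-side because the caller
# lacks permission to view that field.
payload = json.loads('{"id": 42, "name": "alice"}')

# Treating the document as a plain map: a missing field is handled
# with a local default instead of a dedicated optional-field type.
email = payload.get("email", "<hidden>")
print(email)  # <hidden>
```

This is the essentially the ‘Map String a’ approach: every lookup acknowledges that the key may be missing, for whatever reason the server had.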