1. 1

    I have trouble convincing mathematicians that these little notation improvements matter, and will make our lives easier in the long run.

    1. 3

      Have you tried recommending Iverson’s Notation as a Tool of Thought?

      1. 1

        No, but if I did, they wouldn’t care. Mathematicians put a lot of thought into how to convey abstract ideas efficiently, but much less so into how to make those boring and tedious calculations less error prone. Also, the idea that a well chosen notation can lessen the mental burden of performing tricky calculations is basically heresy.

        1. 2

          a well chosen notation can lessen the mental burden of performing tricky calculations is basically heresy.

          Why is that?

          1. 2

            Because, supposedly, once you get the “big ideas” (granted, ideas are a pretty big deal in mathematics), you would not be bothered with such “trifles” as

            • “Your formulas are peppered with +1s and -1s, making them an invitation to make off-by-one errors.” “Who cares, dealing with calculation details is routine.”

            • “Your definition works in all cases except the trivial one.” (e.g., it works for all sets, except the empty set) “Who cares, the trivial case is boring.”

            Fortunately, this attitude will change when more mathematicians learn to program. It will inevitably happen.
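
            To make the first bullet concrete, here is the classic example (a tiny Python sketch of my own, not anything Knuth wrote): conventions like half-open intervals make the ±1s disappear.

            def length_inclusive(a, b):
                # closed interval [a, b]: the +1 is easy to forget
                return b - a + 1

            def length_half_open(a, b):
                # half-open interval [a, b): no correction term, and adjacent
                # intervals [a, m) and [m, b) compose with no ±1 bookkeeping
                return b - a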

            1. 2

              Interesting. I’m sympathetic to prioritizing big ideas and hand-waving over piddly details.

              But imo that is precisely the reason to care about notation advances like the ones Knuth is promoting. They allow you to spend less time thinking about those unimportant details, both when reading and writing mathematical formulas.

    1. 2

      I’m simultaneously educated and greatly confused.

      If α and β are arbitrary entities and ℝ is any relation defined on them, the relational statement αℝβ is a logical variable which is true if and only if α stands in the relation ℝ to β. For example, if x is any real number, then the function (x > 0) - (x < 0) assumes the values 1, 0 or -1 according as x is strictly positive, 0 or strictly negative.

      Both these sentences make sense in isolation, but I don’t understand how they connect. What are α, β and ℝ in the example?

      I understand most of the next 7 pages. But I don’t understand how the beginning connects up with them, and why Knuth keeps referring to this notation as “Iverson’s convention.”

      1. 3

        Kenneth Iverson, of APL and J fame, invented the convention of using 1 to represent true and 0 to represent false, and then allowing all the ordinary integer operations on them. This is still how both J and APL work today. The 2nd sentence from your quote shows how this convention allows you to easily define the “signum” operator on any x as (x > 0) - (x < 0). In what follows, Knuth lists other (unexpected but happy) advantages of this notation.
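
        For instance, Python inherits a version of this convention, since its booleans are the integers 1 and 0, so the definition above works verbatim (a minimal sketch):

        def signum(x):
            # (x > 0) and (x < 0) are booleans, which Python treats as 1/0
            return (x > 0) - (x < 0)

        print(signum(3.5), signum(0), signum(-2))  # 1 0 -1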

        Does that answer your question?

        1. 3

          Ohh, I elided the “(equal to 1)” when transcribing, but it’s load-bearing. Thanks!

        2. 3

          The expression (x > 0) - (x < 0) actually contains two examples. In the subexpression x > 0, we have α = x, β = 0, and ℝ is the relation “greater than” on numbers. In the subexpression x < 0, we have α = x, β = 0, and ℝ is the relation “less than” on numbers.

          1. 2

            Thank you!

        1. 0

          REPL and Scripting interface for CS1.

          No, no, no! A thousand times no! Remember that these are newbies. Just like you don’t let kids drink or smoke (even though these can be fun) until they are old enough, you also don’t let freshmen program in any way other than the disciplined one (think first, only then code) until they have passed their first course in data structures and algorithms. The fun trial-and-error stuff can wait until they are mature enough to revert to disciplined mode when necessary.

          Static type checking and lots of syntax errors instead of runtime errors or logic errors.

          Static types are a good thing, but simplicity is more important than static types for newbies. What you want is, ideally, a gotcha-free language. Such a language is necessarily a toy one, but it can be obtained from a real one by removing features. For example, you could start with Standard ML and then

          • Remove modules and functors.
          • Remove exception handlers. (Raising exceptions is okay, but it will immediately terminate the program.)
          • Restrict type and exception declarations to the top level.
          • Restrict function definitions to the top level.

          And then maybe this has some hope of being simple enough for newbies. Similar proposals can be made that start with, say, Scheme or Pascal, and then remove features that newbies do not need to learn yet. But, if you start with a monstrosity like Scala, there will simply be too much to remove.

          Syntax/semantics allow coverage of things I care about including const/mutability, block scope, OO, subtyping, parametric types, etc.

          I agree that you want block scope, of course. But OO and subtyping? When choosing an introductory language, one must cast their personal preferences aside, and think of what is actually good for the students. Let them decide what technology stack they prefer (when the time comes), but for now teach them to think. What they need to learn is “How do I find the loop invariant?”, not “Should method argument types be covariant or contravariant?”
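
          To make that concrete, here is the kind of exercise I have in mind, sketched in Python (my own illustration; any simple language would do):

          def array_sum(xs):
              # Invariant: at the top of each iteration, s == sum(xs[0:i])
              s, i = 0, 0
              while i < len(xs):
                  s += xs[i]
                  i += 1
              # On exit, i == len(xs), so the invariant gives s == sum(xs)
              return s

          Finding and stating that invariant is the skill worth drilling; variance rules can wait.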

          Access to the full JVM for libraries.

          This is a non-argument.

          Expressive syntax that combined with libraries that allow me to give interesting assignments that don’t take thousands of lines to code.

          There are lots of interesting assignments for newbies that can be solved without using fancy libraries. Maybe you want to give them a simple GUI toolkit for their final project, but that’s about it.

          1. 4

            fun trial-and-error stuff can wait

            Trial-and-error is a great way of learning. If you don’t provide a REPL, people will still use trial and error; they’ll just waste time wrapping the thing they want to try in boilerplate. I don’t think anyone who learns in a REPL is going to end up any less “disciplined” than someone who had to write out a full program before getting any feedback. They will, however, know lots of things that they just tried out of curiosity in the REPL, because they could.

            simplicity is more important than static types for newbies. What you want is, ideally, a gotcha-free language

            The ability to pass a value of the wrong type around for ages before eventually being told that it’s not a duck seems like a pretty big gotcha to me. I think it’s simplest for the user if you can keep the failure as close to the problem as possible, which is what static typing does. It also imposes that discipline you were so enthusiastic about one paragraph earlier.

            1. 2

              I don’t think anyone who learns in a REPL is going to end up any less “disciplined” than someone who had to write out a full program before getting any feedback. They will, however, know lots of things that they just tried out of curiosity in the REPL, because they could.

              An interactive environment can be used the way you suggest, to try new ideas and get immediate feedback about them. An example would be using the REPL to discover ways to generate ASCII art. This is unquestionably a good thing.

              But a REPL can also be used in a way that I consider less good for freshmen, namely, to allow themselves to think sloppily and let the REPL find the edge cases that their reasoning has not covered. This will get in the way when they have to learn data structures and algorithms, which is why I am saying REPLs have to wait until they have passed that course.

              The ability to pass a value of the wrong type around for ages before eventually being told that it’s not a duck seems like a pretty big gotcha to me.

              Freshmen are not going to manipulate super duper complicated data types. There is no shortage of algorithms that only manipulate arrays of ints that can keep them busy for a whole course. Types do at best a mediocre job of helping you convince yourself that such algorithms are correct.

              I think it’s simplest for the user if you can keep the failure as close to the problem as possible, which is what static typing does.

              It is what static typing sometimes does and sometimes doesn’t. Some form of static typing is probably a good thing, but a monstrosity like Scala, as the original article suggests, is definitely not the way to go.

              It also imposes that discipline you were so enthusiastic about one paragraph earlier.

              I’m not so sure about that. When I was in college, my favorite language was C++, and I was horribly undisciplined, jumping to implement the next generalized abstraction that popped into my mind, even though the one I was currently working on was not even half completed. Even today, I rely too much on type inference to tie up loose ends for me.

          1. 7

            The section “Finite Field Definition” is unfortunately imprecise. The right way to look at algebraic structures is that you have a fixed set S and then define operations on it:

            • A nullary (0-ary) operation is just a fixed element of S.
            • A unary operation is a function S -> S.
            • A binary operation is a function S x S -> S.
            • A ternary operation is a function S x S x S -> S.
            • And so on.

            The way the author phrases the “closed property” makes it seem as though operations could, at least in principle, return things that are not elements of S. This is not a very good way to look at algebraic structures. Math is not like programming, where a function that “supposedly returns int” could have, ahem, “interesting results” like throwing an exception, or your program remaining stuck in an endless loop, or even returning a string if you are using a funny enough programming language.

            In addition to a set and a bunch of operations, we need axioms, or else there is not much that one can do with the resulting structures. The list of field axioms is a little bit long (I think it has about 10 axioms, although it can be shortened at the price of using even heavier abstraction), and very few of them are mentioned in the article. Sadly, the axioms are not something you can omit, because they are a part of the definition of the structure. Without the axioms, you do not get the theorems, and the theorems are precisely what you need to justify that the algorithms actually work.

            The way the examples are presented is also a bit iffy. Sadly, here ring theory has a history of abuse of notation that makes its conventions hard to explain. (Ring theory is not particularly bad. Other parts of mathematics are worse.) The symbols -2, -1, 0, 1, 2, 3, etc. do not mean the usual integers, but rather their images under the canonical ring map Z -> A (where Z is the ring of integers and A is an arbitrary ring). So, in general, it does not make sense to say “1 + 2 = 3, hence the set {0,1,2} is not closed under addition”. If you have a ring of characteristic 3, then 3 = 0 and 4 = 1 in that ring, hence 1 + 2 = 0 and 2 + 2 = 1, hence the set {0,1,2} is closed under addition in that ring! In particular, all finite fields are rings of characteristic p for some prime number p.
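
            To see the characteristic-3 arithmetic concretely, here is a quick check in Python (just an illustration, with arithmetic mod 3 standing in for the image of Z in a ring of characteristic 3):

            p = 3
            for a in range(p):
                print([(a + b) % p for b in range(p)])
            # [0, 1, 2]
            # [1, 2, 0]   <- 1 + 2 = 0 in characteristic 3
            # [2, 0, 1]   <- 2 + 2 = 1 in characteristic 3
            # Every entry lands back in {0, 1, 2}: closure holds by construction.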

            The author’s assertion that the order of a finite field is always a prime number is incorrect. For every prime p and every positive integer n, there exists a field of order p^n. (And it is unique up to isomorphism.) But maybe it is the case that only fields of order p are of interest in cryptography.

            Back to proper programming concerns, I do not think it is particularly useful to declare specific data types for doing modular arithmetic. Types (whether static or dynamic) are supposed to give you automated checking that you are only passing around arguments that make sense. However, in number-theoretic routines, the proposition “passing this argument makes sense” is usually a much deeper theorem than what a normal type system can prove. So there is no substitute for clear documentation, and for actually understanding why and how the algorithms work.
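
            For instance, here is the documentation-over-types approach I mean, sketched in Python (mod_inverse is an illustrative name of my own):

            def mod_inverse(a, p):
                """Multiplicative inverse of a modulo p.

                Precondition: p is prime and a is not divisible by p.
                No type system checks this; it is a theorem the caller must
                ensure. Uses Fermat's little theorem: a**(p-2) == a**(-1) (mod p).
                """
                return pow(a, p - 2, p)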

            1. 2

              But maybe it is the case that only fields of order p are of interest in cryptography.

              Prime-order fields are of primary interest for elliptic curve cryptography, yes. However, extension fields (BLS signatures and other pairing-based cryptography) are sort of popular in some niches recently. Binary fields are also found on occasion for elliptic curve implementations (though the current speed winners are all GF(p) curves).

            1. 4

              I think where it ends up is right:

              In practice you would probably just compile a version that was passed a pointer to the type information, because the type information gives you size, alignment, and pointer information all in one place with only a single argument.

              But, just as a curiosity, I think you could do a copy with only a size. The only member besides size that the typedmemmove source accesses is ptrdata, which, though the name sounds super general, only says how far into the object you need to look to be sure you’ve found all the pointers. Using that instead of the object size here seems to be an optimization: if ptrdata is 1, for instance, the runtime can quit worrying about possible pointers in an object after the first word, and if it’s zero it needn’t scan at all. You could write memmove code that conservatively acts as if any word of the object might be a pointer; you’d just potentially be wasting some effort.

              The detailed data about which words of the allocation have pointers/need scanning comes from a GC bitmap that’s set up at allocation time. (You can just use an address to look a word up in this bitmap.) But that means that to allocate you need pointer/(no)scan information to set the bits. If allocating just to copy data you could in theory copy the GC bitmap from source to dest before you copy the data, but you’d still need the type’s alignment to get a properly aligned slot in memory and…yeah, maybe at that point we just pass a type pointer around instead.

              This all makes me wonder what choices the team will make about compilation of generics: max speed of compiled code (by compiling as many optimized versions of the code as needed) vs. a dynamic implementation to avoid hurting compile time or binary size (so the resulting machine code looks as if you’d used interfaces). I can see the case for either: maybe these are a specialized tool for max performance for sorts, collections, etc., or maybe they’re mostly to make source better-checked and clearer. Or maybe we start with the dynamic approach (possibly quicker to implement?) and then tune the generated output over future releases. I haven’t followed the discussions super closely; if someone knows what has been said about this, I’m interested.

              1. 2

                Yeah I wonder if there will be any implementation problems due to the combination of monomorphized generics and a potential explosion of GC bitmaps per type.

                I think most of the languages with monomorphized generics, like C++ and Rust, don’t have GC. Although I guess D is an exception. Not sure what they do exactly, but it probably helps that they have their own back end rather than LLVM.

                Not sure what C# does either. I think it has more VM support.

                1. 2

                  .NET generics use monomorphization for value types and a shared instantiation for reference types.
                  The new() constraint is handled with reflection.

                  1. 1

                    This might provide useful background on the topic.

                  2. 2

                    I believe they were intentionally careful not to specify so that they could experiment & potentially offer multiple compile-time options.

                    1. 2

                      Yes, the design document’s implementation section explicitly leaves the question open (and is worth reading).

                      Curious what they do!

                    2. 1

                      Besides reducing the code bloat and avoiding the need for a special intermediate representation of compiled but unspecialized generics, the dynamic approach has the added benefit (at least from the POV of Go’s goals) that it discourages excessively fine-grained abstractions (e.g., how Arc and Mutex have to be separately applied to get Arc<Mutex<T>> in Rust), because it would have too much runtime overhead.

                    1. 1

                      This seems a little bit like a contradiction in terms. The whole point to procedural abstraction is that the data structure that represents the procedure is irrelevant - the abstract value of a procedure is its own behavior. (Never mind that this abstraction is leaky in practice, especially in dynamic languages, as the author acknowledges in the first footnote of chapter 2.) But the whole point to this thesis is undoing that? Maybe you want to use ordinary data structures after all…

                      1. 2

                        Thanks for your reply!

                        It’s a good point. In chapter 1 he argues that these representations for pictures are all bad:

                        1. a big bag of line segments
                        2. an opaque procedure that draws onto a canvas
                        3. an object with two methods: draw(), which draws onto a canvas, and decompose(), which returns a tree explaining how the picture was built

                        This isn’t very compelling because it’s easy to imagine a better OO design than option 3. The objects are already a tree: why convert them to a different tree? Queries like getBoundingBox() or isEmpty() can be methods on the object itself. This is kind of like using an ADT to represent pictures, and writing draw() as an interpreter. (Which reminds me of this really neat design: Monoids: Theme and Variations (Functional Pearl).)
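
                          For instance, a sketch of that “ADT plus interpreter” idea in Python (the names are mine, not from the thesis):

                          from dataclasses import dataclass

                          @dataclass(frozen=True)
                          class Line:
                              x1: float
                              y1: float
                              x2: float
                              y2: float

                          @dataclass(frozen=True)
                          class Beside:
                              left: object   # sub-pictures
                              right: object

                          def width(pic):
                              # queries like getBoundingBox() become interpreters over the tree
                              if isinstance(pic, Line):
                                  return abs(pic.x2 - pic.x1)
                              return width(pic.left) + width(pic.right)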

                        Some applications I find more compelling are:

                        • sending a procedure to the GPU (in a way that includes all its helper functions)
                        • sending a procedure to a database (in a way that includes all its helper functions)
                        • re-optimizing a closure, treating the closed-over values as constant. This seems like it would be a very powerful tool for writing a compiler.

                        There are a lot of applications like this in Compiling to Categories:

                        • hardware circuits
                        • automatic differentiation
                        • incremental computation
                        • interval analysis

                        And an even more basic example: why can’t the Python REPL print a lambda the same way it prints numbers, lists, etc? Instead of:

                        >>> make_adder = lambda x: lambda y: x + y
                        >>> make_adder(5)
                        <function <lambda> at 0x10502cdd0>
                        

                        I would like to see:

                        >>> make_adder(5)
                        (lambda y: 5 + y)
                        
                      1. 2

                        This is incredible! I had no idea this effort was going on! Can anyone speak to the performance implications? I assume the back-end is making minimal use, currently, but am I correct in thinking that, in theory, this could enable performance of ergonomic linear Haskell to meet or exceed the performance of C/C++/Rust?

                        1. 8

                          Linear types do not suddenly turn Haskell into a language in which performance is easy to reason about. In particular, (a) it is still a lazy language, (b) it still does not allow you to control how much indirection is used in the representation of your data structures, (c) it still has features like polymorphic recursion, which restrict the compiler’s ability to monomorphize all types and get rid of the vtables in polymorphic code.

                          1. 1

                                Since you don’t seem to be someone who is interested in the language at all, I find it odd that you are responding here with completely different concerns of your own which, I’ll add, aren’t even related to the article topic.

                            1. 2

                              First of all, I only meant to reply to a specific question. I did not intend to do any of the following: (a) suggest Haskell is a bad language to use, (b) suggest any of the raised “issues” [if they could be called that] is a terribly big deal for most use cases, (c) hurt anyone’s feelings.

                              Second, I am interested in all languages with substructural types. More precisely, I am interested in lightweight tools for algorithm verification, and there is a qualitative jump from what you can verify without substructural types, to what you can verify with them.

                              1. 1

                                ok

                            2. 1

                              Ahh, yeah, those are all still potential sources of performance problems. At the very least, this should be a good stepping stone and way to experiment with linear types. I’m excited to read more about it and play with it.

                            3. 2

                              I’m no expert but here is the original paper (2018)! https://doi.org/10.1145/3158093

                                    One of the performance applications is making read and write operations work directly on the bits of packed (serialized) ADTs. That removes the need to keep a decoded copy of the data around, avoiding a serialization/deserialization step. They include a graph showing how this improves summing and mapping over a binary tree.

                            1. 8

                              Isn’t there a difference between functional code and side-effect-free code? I feel like, by trying to set up all of the definitions just right, this article actually misses the point somewhat. I am not even sure which language the author is thinking of; Scheme doesn’t have any of the three mentioned properties of immutability, referential transparency, or static type systems, and neither do Python nor Haskell qualify. Scouring the author’s websites, I found some fragments of Java; neither Java nor Clojure have all three properties. Ironically, Java comes closest, since Java is statically typed in a useful practical way which has implications for soundness.

                              These sorts of attempts to define “functional programming” or “functional code” always fall flat because they are trying to reverse-engineer a particular reverence for some specific language, usually an ML or a Lisp, onto some sort of universal principles for high-quality code. The idea is that, surely, nobody can write bad code in such a great language. Of course, though, bad code is possible in every language. Indeed, almost all programs are bad, for almost any definition of badness which follows Sturgeon’s Law.

                              There is an important idea lurking here, though. Readability is connected to the ability to audit code and determine what it cannot do. We might desire a sort of honesty in our code, where the code cannot easily hide effects but must declare them explicitly. Since one cannot have a decidable, sound, and complete type system for Turing-complete languages, one cannot actually put every interesting property into the type system. (This is yet another version of Rice’s theorem.) Putting these two ideas together, we might conclude that while types are helpful to readability, they cannot be the entire answer of how to determine which effects a particular segment of code might have.

                              Edit: Inserted the single word “qualify” to the first paragraph. On rereading, it was unacceptably ambiguous before, and led to at least two comments in clarification.

                              1. 7

                                Just confirming what you said: Did you say that Haskell doesn’t have immutability, referential transparency, or a static type system?

                                1. 3

                                  I will clarify the point, since it might not be obvious to folks who don’t know Haskell well. The original author claims that two of the three properties of immutability, referential transparency, and “typing” are required to experience the “good stuff” of functional programming. On that third property, the author hints that they are thinking of inferred static type systems equipped with some sort of proof of soundness and correctness.

                                  Haskell is referentially transparent, but has mutable values and an unsound type system. That is only one of three, and so Haskell is disqualified.

                                        Mutable values are provided in not just IO, but also in ST and STM. On one hand, I will readily admit that the Haskell Report does not mandate Data.IORef.IORef, and that only GHC has ST and STM; but on the other hand, IORef is provided by GHC, JHC, and UHC, with UHC reusing some of GHC’s code. Even if one were restricted to the Report, one could use basic filesystem tools to create a mutable reference store using the filesystem’s innate mutability. In either case, we get true in-place mutation of values.

                                  Similarly, Haskell is well-known to be unsound. The Report itself has a section describing how to do this. To demonstrate two of my favorite examples:

                                  GHCi, version 8.6.3: http://www.haskell.org/ghc/  :? for help
                                  Prelude> let safeCoerce = undefined :: a -> b
                                  Prelude> :t safeCoerce
                                  safeCoerce :: a -> b
                                  Prelude> data Void
                                  Prelude> let safeVoid = undefined :: Void
                                  Prelude> :t safeVoid
                                  safeVoid :: Void
                                  

                                  Even if undefined were not in the Report, we can still build a witness:

                                  Prelude> let saferCoerce x = saferCoerce x
                                  Prelude> :t saferCoerce
                                  saferCoerce :: t1 -> t2
                                  

                                  I believe that this interpretation of the author’s point is in line with your cousin comment about type signatures describing the behavior of functions.

                                  1. 4

                                          I don’t really like Haskell, but it is unfair to compare the ability to write a non-terminating function with the ability to reinterpret an existing object as if it had a completely different type. A general-purpose programming language is not a logic, and the ability to express general recursion is not a downside.

                                    1. 3

                                      A “mutable value” would mean that a referenced value would change. That’s not the case for a value in IO. While names can be shadowed, if some other part of the code has a reference to the previous name, that value does not change.

                                      1. 1

                                        Consider the following snippet:

                                        GHCi, version 8.6.3: http://www.haskell.org/ghc/  :? for help
                                        Prelude> :m + Data.IORef
                                        Prelude Data.IORef> do { r <- newIORef "test"; t1 <- readIORef r; writeIORef r "another string"; t2 <- readIORef r; return (t1, t2) }
                                        ("test","another string")
                                        

                                              The fragment readIORef r yields two different results within this scope. Either this fragment is not referentially transparent, or r is genuinely mutable. My interpretation is that the fragment is referentially transparent, and that r refers to a single mutable storage location; the same readIORef applied to the same r results in the same IO action on the same location, but the stored value can be mutated.

                                        1. 1

                                          The value has been replaced with another. It is not quite the same thing as mutating the value itself.

                                      2. 2

                                        From your link:

                                        When evaluated, errors cause immediate program termination and cannot be caught by the user.

                                                That means that soundness is preserved: a program can’t continue running if its runtime types are different from its compile-time types.

                                        1. 1

                                          If we have to run the program in order to discover the property, then we run afoul of Rice’s theorem. There will be cases when GHC does not print out <loop> when it enters an infinite loop.

                                          1. 1

                                            Rice’s theorem is basically a fancier way of saying ‘Halting problem’, right?

                                            In any case, it still doesn’t apply. You don’t need to run a program which contains undefined to have a guarantee that it will forbid unsoundness. It’s a static guarantee.

                                    2. 5

                                          Thank you for bringing up this point. Unfortunately, “functional programming” is almost always conflated, today, with lack of side effects, immutability, and/or strong static typing. None of those are intrinsic to FP. Scheme, as you mentioned, is functional, and has none of those. In fact, the ONLY language seeing any actual use today that has all three (enforced) is Haskell. Not even OCaml does anything to prevent side effects.

                                          And you absolutely can write Haskell-ish OOP in, e.g., Scala, where your object methods return ReaderT-style return types. It has nothing at all to do with functional vs. OOP. As long as you do inversion of control and return “monads” or closures from class methods, you can do all three of immutable data, lack of side effects, and strong types in an OOP language. It’s kind of ugly, but I can do that in Kotlin, Swift, Rust, probably even C++.

                                      1. 2

                                        Scheme, as you mentioned, is functional, and has none of those.

                                        Why is Scheme functional? It’s clearly not made of functions:

                                        Lisp Is Not Functional

                                        A functional language is a programming language made up of functions

                                        What defun and lambda forms actually create are procedures or, more accurately still, funcallable instances

                                        I would say Haskell and Clojure are functional, or at least closer to it, but Scheme isn’t. This isn’t a small distinction…

                                        1. 2

                                              That’s a good point, and I actually do agree completely. The issue, I think, is that most programmers today will have a hard time telling you the difference between a procedure and a function when it comes to programming. And it’s totally fair: almost every mainstream programming language calls them both “function”.

                                          So, Scheme is “functional” in that it’s made up of things-that-almost-everyone-calls-functions. But you’re right. Most languages are made of functions and procedures, and some also have objects.

                                          But with that definition, I don’t think Clojure counts as functional either. It’s been a couple of years, but am I not allowed to write a “function” in Clojure that takes data as input and inside the function spawns an HTTP client and orders a pizza, while returning nothing?

                                          It would appear that only Haskell is actually a functional language if we use the more proper definition of “function”

                                          1. 1

                                            But with that definition, I don’t think Clojure counts as functional either. It’s been a couple of years, but am I not allowed to write a “function” in Clojure that takes data as input and inside the function spawns an HTTP client and orders a pizza, while returning nothing?

                                            Hey, the type for main in Haskell is usually IO (), or “a placeholder inside the IO monad”; using the placeholder type there isn’t mandatory, but the IO monad is. Useful programs alter the state of the world, and so do things which can’t be represented in the type system or reasoned about using types. Haskell isn’t Metamath, after all. It’s general-purpose.

                                            The advantage of Haskell isn’t that it’s all functions. It’s that functions are possible, and the language knows when you have written a function, and can take advantage of that knowledge. Functions are possible in Scheme and Python and C, but compilers for those languages fundamentally don’t know the difference between a function and a procedure, or a subroutine, if you’re old enough. (Optimizers for those languages might, but dancing with optimizers is harder to reason about.)

                                          2. 1

                                            That article is about Common Lisp, not Scheme. Scheme was explicitly intended to be a computational representation of lambda calculus since day 1. It’s not purely functional, yes, but still functional.

                                            1. 2

                                              If anything that underscores the point, because lambda calculus doesn’t have side effects, while Scheme does. The argument applies to Scheme just as much as Common Lisp AFAICT.

                                              Scheme doesn’t do anything to control side effects in the way mentioned in the original article. So actually certain styles of code in OO languages are more functional than Scheme code, because they allow you to express the presence of state and I/O in type signatures, like you would in Haskell.

                                              That’s probably the most concise statement of the point I’ve been making in the thread …

                                              1. 2

                                                I take it we’re going by the definition of ‘purely functional programming’ then. In that case, I don’t understand why Clojure, a similarly impure language, gets a pass. Side-effects are plentiful in Clojure.

                                                1. 2

                                                  Well I said “at least closer to it”… I would have thought Haskell is very close to pure but it appears there is some argument about that too elsewhere in the thread.

                                                  But I think those are details that distract from the main point. The main point isn’t about a specific language. It’s more about how to reason about code, regardless of language. And my response was that you can reap those same benefits of reasoning in code written in “conventional” OO languages as well as in functional languages.

                                                  1. 1

                                                    That’s fair. It’s not that I disagree with the approach (I’m a big fan of referential transparency!) but I feel like this is further muddying the (already increasingly divergent) terminology surrounding ‘functional programming’. Hence why I was especially confused by the OO remarks. It doesn’t help that the article itself also begs the question of static typing.

                                        2. 2

                                          Isn’t there a difference between functional code and side-effect-free code?

                                          It depends on who you ask. :)

                                          You may be interested in Van Roy’s famous organization of programming paradigms: https://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf. Original graphical summary: https://continuousdevelopment.files.wordpress.com/2010/02/paradigms.jpg, revised summary: https://upload.wikimedia.org/wikipedia/commons/f/f7/Programming_paradigms.svg.

                                          (reposted from https://lobste.rs/s/aw4fem/unreasonable_effectiveness#c_ir0mnq)

                                          1. 1

                                            I like your point about the amount of information in type signatures.

                                            I agree that the type can’t contain everything interesting to know about the function.

                                            I do think you can choose to put important information in the type. In Haskell it’s normal to produce a more limited effect system, maybe one for database effects only, and another for network effects only, and then connect those at the very top level.

                                            So, you can put more in the type signature if you wish, and it can be directly useful to prevent mixing effects.

                                          1. 4

                                            Code that relies too much on indirect procedure calls (virtual methods, first-class functions) can be just as opaque as (if not more than) code that heavily relies on mutation. Functional programming fans are often guilty of playing awkward games with the control flow for the sake of the abstractions they know and love. I’m looking at you, lenses and transducers!

                                            As sources of complexity, mutation is “less bad” than indirect procedure calls. At least in some languages, it is easy to use mutation locally in order to reap global efficiency benefits. With excessively higher-order functions, the tradeoff is the other way around: the control flow has to jump all over your program (including its dependencies), just so that you can write shorter and cuter code for this one small loop.

                                            1. 6

                                              I mostly agree but I would say that OO is evolving toward FP, and they aren’t that far apart (at least in certain circles, maybe not in 20-year-old code).

                                              Good OO is also principled about state and I/O, just like FP.

                                              Related comment I wrote on “How to Avoid the Assignment Statement”

                                              https://news.ycombinator.com/item?id=22835750

                                              Older comment on “Disadvantages of Purely Functional Programming”

                                              https://news.ycombinator.com/item?id=11841893

                                              So while I agree with the general gist, I would add the caveat that it’s easier to do FP in OO languages than OO in functional languages. And the former is pretty natural.


                                              Another way I think about it is “dependency inversion of state and I/O”, which I wrote about here:

                                              http://www.oilshell.org/blog/2020/04/release-0.8.pre4.html#dependency-inversion-leads-to-pure-interpreters

                                              And I also think in many programs it’s useful to “cheat” a little to get it working, and then refactor to a more principled architecture.

                                              1. 9

                                                There’s a fundamental difference between OO and FP. With FP the core idea is to keep data and code separate. Programs are written as pipelines of functions that you pass data through, and get another piece of data at the other end. In modern FP, data is also immutable, so the functions are referentially transparent. The big advantage here is that data is inert by nature because it doesn’t have any behaviors associated with it, and it’s transparent. What you see is what you get because it’s just data.

                                                And you operate on this data by combining a common set of functions from the standard library. Once you learn how these functions work, it becomes trivial to transform any kind of data using them. It’s akin to having a bunch of Lego pieces that you can put together in many different ways to accomplish different tasks.

                                                On the other hand, objects are state machines that have some internal state with the methods as the API for interacting with that state. An application written in OO style ends up being a graph of opaque interdependent state machines, and this tends to be quite difficult to reason about. This is why debugging is such a big part of OO development. It’s impossible to reason about a large OO application because there’s just too many things that you have to keep in your head. So, the only thing you can do is put a break point, get the application in the desired state, and try to figure out how you got there.

                                                Meanwhile, classes are really ad hoc DSLs: each class defines its own custom API and behaviors in the form of its methods, and knowing how one class works tells you absolutely nothing about the next class. The more classes you have in your program, the more stuff you have to keep in your head to work with it effectively.

                                                This also makes it much more difficult to learn APIs for libraries. When you call a method, and you get an object graph back, now you have to learn about each of these objects, and how they behave. When your API is data driven, this problem doesn’t exist. You call a function, get some data back, and that’s the end of the story.
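
                                                As a toy illustration of the difference (my own sketch, in Python for neutrality):

                                                # Data-driven style: plain data in, plain data out.
                                                def add_item(cart, item):
                                                    return {**cart, "items": cart["items"] + [item]}

                                                cart = {"items": []}
                                                cart2 = add_item(cart, "book")   # cart itself is untouched

                                                # Stateful-object style: an opaque state machine behind methods.
                                                class Cart:
                                                    def __init__(self):
                                                        self._items = []              # internal state
                                                    def add_item(self, item):
                                                        self._items.append(item)      # mutates in place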

                                                Rich Hickey describes this problem here very well, and that matches my experience very closely.

                                                1. 6

                                                  No, that is sort of an old way of thinking about OO. There are a lot of people who don’t write it that way, including me. You can do something much closer to FP in an OO language.

                                                  That is the point of the linked comments. Excerpt:

                                                  I use “functions and data” (a la Rich Hickey).

                                                  Except my functions and data are both CLASSES. Data objects are basically structs, except they can do things like print themselves and answer simple queries based on their values, which helps readability (e.g. 1 liners, like word.AsFuncName() ).

                                                  Function objects are simply classes with configuration passed to constructors. They usually have a single method, but multiple methods are also often useful. Calling this method is basically equivalent to calling a curried function. But this is supremely useful for both compilers and servers, because often you have config/params that is constant once you reach main(), and then you have params that vary per request or per file processed. Many functions depend on both, and it’s cleaner to separate the two kinds of params.

                                                  So both “functions and data” are effectively and usefully implemented as classes.

                                                  The Oil interpreter started out maybe 60% in this style, and is approaching 90% of that style. It’s tens of thousands of lines of code, so it’s not a small change.

                                                  There are a few classes that are state machines, but they are explicitly limited to about 10% of the interpreter, just as you would do in a functional language. Most of it is parsing, which has “local” mutable state inside and an interface that’s basically a function.

                                                  Again, from the comments, the thing I found funny is that, for lexing and parsing, languages like OCaml just borrow the exact same mutable algorithms from C (LALR parsing and DFAs for regexes). The mutable escape hatch of OCaml is essential.

                                                  Lexing and parsing are inherently stateful. As long as it’s localized, it’s no problem. FP and OO both agree on that.

                                                  1. 1

                                                    Thing is, in practice you rarely pass functions around in Clojure. The vast majority of the time you’re passing around plain data; you pass that data through different functions to transform it, and get new kinds of data back. There is very little reason to pass functions around, in my experience. So, yes, you can do similar stuff to OO with functions and closures, but that’s not generally how you end up structuring your applications.

                                                    And yes, you can write FP style code in OO languages, but then you’re really not making the most out of the paradigm the language was designed for. You’re much better off doing FP in an actual FP language.

                                                  2. 3

                                                    This also makes it much more difficult to learn APIs for libraries. When you call a method, and you get an object graph back, now you have to learn about each of these objects, and how they behave. When your API is data driven, this problem doesn’t exist. You call a function, get some data back, and that’s the end of the story.

                                                    It creates another problem though: exposing implementation details. Your clients may start depending on details of the data structure that later need to change. This article tells the story of an API that exposed a list [1]. It turned out the list was too slow, and they wanted to change it. Unfortunately the API’s clients assumed it was a list. The solution given in the article? Hide the data behind a constructor and selectors. That’s basically a class definition.

                                                    1: https://itnext.io/information-hiding-for-the-functional-programmer-b56937bdb789

                                                    1. 2

                                                      Not really, because I choose what data I return from the API functions in a library. Meanwhile, your anecdote could’ve happened just as easily with an OO API. In fact, I would argue that it’s a far more common problem in OO since you return a graph of objects, and if you ever need to change any of them down the line you’ll be breaking the API for all your users.

                                                      Having been working with Clojure for around a decade now, I can tell you that this problem has yet to come up for me in practice.

                                                      1. 1

                                                        In fact, I would argue that it’s a far more common problem in OO since you return a graph of objects, and if you ever need to change any of them down the line you’ll be breaking the API for all your users.

                                                        One of the core tenets of OOP is to program to the interface, not the implementation. If you change the implementation but keep the interface unchanged, you are guaranteed to not break the downstream consumers.

                                                        1. 1

                                                          Interfaces only address the problem partially, because if the interface ever changes that will still create a breaking change. My experience is that changes to interfaces in OO happen far more regularly than changes to the shape of the data in functional APIs.

                                                          1. 1

                                                            As per the Java Language Spec,[1]

                                                            …here is a list of some important binary compatible changes that the Java programming language supports: … Adding new fields, methods, or constructors to an existing class or interface.

                                                            (Among others)

                                                            This is similar to making an additive change to the shape of data, e.g. adding a new field to a map which is consumed by a function that doesn’t use the field.
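
                                                            A rough illustration of that kind of additive change (my own Python sketch):

                                                            def shipping_cost(order):
                                                                # reads only the key it needs; extra keys are ignored
                                                                return 5 if order["weight_kg"] < 1 else 10

                                                            shipping_cost({"weight_kg": 0.5})                     # works
                                                            shipping_cost({"weight_kg": 0.5, "gift": True})       # still works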

                                                            [1] https://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html

                                                            1. 1

                                                              Having worked with Java for around a decade, I’m well aware of how interfaces work. Yet, my experience is that breaking changes in a language like Java happen far more frequently than they do in a language like Clojure. So, while there are mitigating factors in theory, I don’t find they translate to having stable APIs in practice. That’s just my experience though.

                                                        2. 1

                                                            How does your choosing what data you return from the API prevent you from exposing implementation details? Or are you saying you just don’t care, because you can change the API interface whenever you feel like it?

                                                          1. 1

                                                              I don’t really follow your question. When I write a function, I explicitly choose what data it returns, and the shape of that data. The shape of the data is explicitly part of the API contract.

                                                            1. 1

                                                              The shape of the data is explicitly part of the API contract.

                                                              Because the approach forces you to do that even when the shape of the data is an implementation detail that you don’t want to expose.

                                                              1. 1

                                                                That statement doesn’t make sense. The shape of the data is explicitly part of the contract provided by the API. It is not an implementation detail.

                                                                1. 1

                                                                  Every implementation choice that is not relevant to the API’s core functionality is an implementation detail. For example, it’s not relevant for an iterator to know the data structure of a collection. All the iterator cares about is that it can iterate over the elements. The concrete shape of the collection (list, set, map, queue, stack, whatever) is an implementation detail from the point of view of the iterator.

                                                                  Just because you decide to make an implementation detail a part of the API’s contract, that doesn’t mean it’s not an implementation detail anymore. That’s essentially the problem in the article that I linked.

                                                                  1. 1

                                                                      In Clojure, you have a seq interface, so any function can iterate over any collection, the same way it works with Java interfaces. So, no, you’re not leaking any more implementation details than you would with an OO API. You’re just arguing a straw man here.

                                                                    1. 1

                                                                      Just because iteration in Clojure does not depend on the implementation details of the data structure, that doesn’t mean the API’s clients can’t write code that does depend on such details. So, that does not contradict my point at all.

                                                                      I don’t know Clojure that well, so I can’t give you an example in Clojure. However, I’m pretty confident that even Clojure does not magically solve the problem of leaking implementation details (otherwise, why would it have interfaces in the first place).

                                                                      1. 1

                                                                        Again, there’s no difference between FP and OO here. Both Clojure and Java have collection interfaces. If you have an OO API that says it returns a collection, that collection is backed by a concrete implementation, such as a list, internally. That’s an implementation detail. OO doesn’t magically do away with collections. And this is why I find your whole line of argument completely absurd.

                                                                          Also, having actually used Clojure for around a decade now professionally, I can tell you that what you’re describing has never ever come up in practice. So, forgive me if I’m not convinced by your armchair analysis here.

                                                                        1. 1

                                                                          Again, there’s no difference between FP and OO here. If you have an OO API that says it returns a collection, and that collection is backed by a list internally, that’s an implementation detail. OO doesn’t magically do away with collections.

                                                                          I don’t follow this argument at all. Yeah, in the OO case there is an implementation detail. However, as you pointed out with the word internally: that implementation detail is not exposed to the client. Meaning, the API owner can change that list to a map or a tree or any other data structure at any time without having to worry about the client’s code at all.

                                                                          It’s also a little bit of a straw man because this discussion was not about OO vs FP. It was about your statement that APIs should produce/consume raw data rather than objects.

                                                                            Having actually used Clojure for around a decade now professionally, I can tell you that what you’re describing has never ever come up in practice. So, forgive me if I’m not convinced by your armchair analysis here.

                                                                          So, you’re telling me that, in a whole decade, you’ve never had to make changes to an API because a data structure turned out to be wrong? Given that I could pretty easily find an article that showed people having exactly that problem, forgive me if I conclude that either you’re not remembering correctly, or you’re an extraordinary programmer.

                                                                          1. 1

                                                                            The implementation detail is exposed to the client in exactly the same way. In both cases you have an interface that abstracts the type of collection used from the client. The API owner can change the implementation in EXACTLY the same way. If my API returned a list and then I change it to return a vector, the client does not need to make any changes because both of them support the seq interface.
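
                                                                              A rough Python analogue of what I mean (illustrative only):

                                                                              def total(xs):
                                                                                  # depends only on iterability, not the concrete collection
                                                                                  return sum(xs)

                                                                              total([1, 2, 3])   # a list today...
                                                                              total((1, 2, 3))   # ...a tuple tomorrow; no client change needed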

                                                                            The discussion is about OO vs FP because my whole point is that in OO you end up interacting with object graphs in your API, while in FP you end up interacting with plain data. I’m simply telling you that your argument is incorrect in the context of Clojure because collections conform to interfaces. I’m starting to get the feeling that you don’t understand how interfaces work.

                                                                            So, you’re telling me that, in a whole decade, you’ve never had to make changes to an API because a data structure turned out to be wrong?

                                                                            Correct, I’ve never had to make a change to an API because I changed the implementation of the backing data structure that conformed to the same interface.

                                                                            And please do link me an article of this happening in Clojure, since you claim you can easily find one. Perhaps what you failed to consider is that the article you found deals with a problem in a specific language, as opposed to a general FP problem that you extrapolated from it.

                                                                            It frankly amazes me that somebody who openly admits to knowing nothing about the subject can have such strong opinions on it, based on whatever article they happen to find.

                                                                            1. 1

                                                                              The implementation detail is exposed to the client in exactly the same way. In both cases you have an interface that abstracts the type of collection used from the client. The API owner can change the implementation in EXACTLY the same way. If my API returned a list and then I change it to return a vector, the client does not need to make any changes because both of them support the seq interface.

                                                                              As I said, only because Clojure happens to have a seq interface. This reply tells me that you didn’t really get the point I made, and I don’t know how to explain it to you at this point. It seems you just don’t want to see it.

                                                                              Correct, I’ve never had to make a change to an API because I changed the implementation of the backing data structure that conformed to the same interface.

                                                                              Then your API is not producing/consuming raw data, it’s producing/consuming interfaces. It seems you’re contradicting your own position.

                                                                              And please do link me an article of this happening in Clojure, since you claim you can easily find one. Perhaps what you failed to consider is that the article you found deals with a problem in a specific language, as opposed to a general FP problem that you extrapolated from it.

                                                                              So, I searched “Clojure information hiding”, and the first hit literally repeats my whole point. Maybe you understand this explanation then:

                                                                              One of the goals of information-hiding is to hide implementation details, so that programmers cannot write programs that depend on those details. (If they do so and those implementation details change, then those programs must also be changed, driving up software maintenance costs.) Clojure thus does not give us any good way to hide implementation details.

                                                                              https://cs.calvin.edu/courses/cs/214/adams/labs/08/clojure/

                                                                              1. 1

                                                                                Clojure having a seq interface clearly disproves your point. I’m not sure what else there is to say here.

                                                                                Then your API is not producing/consuming raw data, it’s producing/consuming interfaces. It seems you’re contradicting your own position.

                                                                                If you think that then you didn’t understand my position. My original point was that data is immutable, transparent, and it doesn’t have behaviors associated with it. Having interfaces doesn’t invalidate any of that. Your whole argument is just a straw man.

                                                                                So, I searched “Clojure information hiding”, and the first hit literally repeats my whole point. Maybe you understand this explanation then:

                                                                                There is no information hiding in Clojure because data is part of the API. That’s my whole point of the difference between OO and FP. The claim that this is at odds with hiding implementation details is incorrect however, and I’ve already explained repeatedly why it’s incorrect.

                                                                                Seeing how this discussion just keeps going in circles, I’m going to leave it here. You can feel free to continue believing whatever it is you want to believe; I’ve explained all I can.

                                                                                Have a good day.

                                                                                1. 1

                                                                                  My original point was that data is immutable, transparent, and it doesn’t have behaviors associated with it. Having interfaces doesn’t invalidate any of that.

                                                                                  Okay, that changes things…

                                                                                  I guess I didn’t understand that at all from reading your comments. There is a paragraph in your initial post where you talk about “classes being ad hoc DSLs” and a bunch of other stuff that applies equally to interfaces. You know, in C++ you make interfaces as pure virtual classes. Then, in the next paragraph you make a contrast between that situation and just data. So, I thought you were talking about concrete data structures not hidden behind any interface.

                                                                                  Also, in your first counterargument, you’re countering with this graph of objects that’s hard to change. However, if that was a graph of interfaces, that problem would still be there (as you pointed out in another comment). So, this reinforced the understanding in me (and also others, it seems) that you were talking about concrete “interface-less” data structures.

                                                                                  Anyway, if what you’re arguing for includes the ability to return only interfaces from APIs, then I agree the problem that I brought up doesn’t really apply.

                                                                                  1. 1

                                                                                    The problem with graphs of objects is primarily with objects being opaque and having behaviors. This is completely tangential to the discussion around interfaces.

                                                                                    When you have a data-driven API backed by immutable data, then what you see is what you get. You can see the data returned by the API, and it doesn’t have any behaviors associated with it. All the interfaces achieve here is abstracting over the concrete implementation, so you don’t have to worry about the specific backing data structure affecting the API semantics.

                                                                                    1. 1

                                                                                      Alright, that’s great! I actually share the view that state can turn into a nasty thing. It’s just that you seemed to be arguing against the idea of data abstraction. This kind of triggered me because my experience tells me the code I work with professionally would never survive without it.

                                                                                      I guess it’s words like transparent and opaque that cause some confusion for me. They are very generic words so they can be misinterpreted easily. For example, an interface is also opaque in the sense that you can’t see the implementation details.

                                                                                      1. 2

                                                                                        Glad we’re on the same page. Unfortunately, this kind of thing happens quite often. We all use the same words, but we attach subtly different meanings to them in our heads based on our existing knowledge and experience. So, it’s really easy to end up talking past each other and not even realize it. :)

                                                      2. 2

                                                        I basically agree with everything you said about what makes a program good or bad, but I disagree with your conclusion: that functional programming leads to good programs and OOP leads to bad programs (in some general sense; let’s not nit-pick or talk about absolutes).

                                                        Here’s why: in almost all languages, you are perfectly allowed to mutate inputs to functions. This includes basically all (quasi-popular) functional languages not named Haskell or Clojure. You are also allowed to cause side effects in your functions in all functional languages not named Haskell. This means I can write input-mutating, side-effecting functional code in OCaml, Lisp, Scheme, Rust, etc. Some languages just discourage it more than others.

                                                        My point is: I agree that making 100 opaque state machines to interact with is a bad idea for a program. But that ad-hoc, crappy DSL is also perfectly easy to write in a functional way.

                                                        I have little doubt that a very strict OOP language that does not allow input arg mutation and side-effects in methods is possible to create and would probably work just as well as good functional languages. The only difference would be a coupling of the data to the functions. That is the only actual difference between FP and OOP in any strict sense, IMO.

                                                        1. 5

                                                          Both major ML dialects (Standard ML and OCaml) keep a typed distinction between immutable and mutable data. I find this to be good enough to tell which operations can mutate data that I care about at any given moment. Moreover, modules allow you to easily implement ostensibly immutable abstract data types that internally use mutation. (The operating word here is “easily”. It is possible in Haskell too, but it is a lot more painful.)
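                                                          (For concreteness, the Haskell route usually goes through the ST monad – a minimal sketch, names mine:)

                                                          import Control.Monad.ST (runST)
                                                          import Data.STRef (modifySTRef', newSTRef, readSTRef)

                                                          -- Internally mutable, externally pure: runST's type guarantees the
                                                          -- mutable reference cannot escape, so callers see an ordinary function.
                                                          sumST :: [Int] -> Int
                                                          sumST xs = runST $ do
                                                            acc <- newSTRef 0
                                                            mapM_ (\x -> modifySTRef' acc (+ x)) xs
                                                            readSTRef acc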

                                                          I would not call Rust a “functional language”, but, for similar reasons, its ability to track what data can be safely mutated at any given moment is good enough to get most of the advantages of functional programming. And then, some of the advantages of non-functional programming.

                                                      3. 3

                                                        Hopefully on-topic: My experience in functional programming has led to heavy use of composition.

                                                        One thing that’s always frustrated me about Python is that instance methods do not return “self” by default, but instead return “None”. I once hacked up a metaclass to make that happen, and suddenly Python felt much more functional! Smalltalk and some of the other heavily OO languages do return “self” or “this” by default; I find that fits the Haskell part of my brain better.

                                                        What’s the zen koan? Objects are a poor man’s closures, and closures are a poor man’s objects? In Haskell I use partial application to give me roughly what object instances give me. It’s neat, try it out!
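                                                        A toy sketch of what I mean (names mine):

                                                        -- The first argument plays the role of instance state; partially
                                                        -- applying it yields something like a configured object with one method.
                                                        greet :: String -> String -> String
                                                        greet greeting name = greeting ++ ", " ++ name ++ "!"

                                                        helloBot :: String -> String
                                                        helloBot = greet "Hello"

                                                        main :: IO ()
                                                        main = putStrLn (helloBot "world")   -- prints "Hello, world!"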

                                                        1. 4

                                                          One thing that’s always frustrated me about Python is that instance methods do not return “self” by default, but instead return “None”.

                                                          Yes! It makes it much harder to do simple (list|dict|generator) comprehensions when mutations return None.

                                                          In Haskell I use partial application to give me roughly what object instances give me. It’s neat, try it out!

                                                          I (ab)use functools.partial for this in Python. Very helpful when you have a defined interface and you want to stick extra parameters in as well.

                                                        2. 3

                                                          Even principled object-oriented code hides race conditions. If you connect two systems in an OO language, the combination may produce an incorrect result even though each system on its own would run just fine.

                                                          Another problem is that partial evaluation for an OO language is not obvious. If you intend to write abstract code the way people do with FP, it introduces structure that may need to be contracted away.

                                                          1. 2

                                                            You don’t get a guarantee from the language, but you can absolutely structure your OO code so composition is thread-safe, and unsafe combinations are obvious.

                                                            When I say dependency inversion of I/O and state, I mean that they are all instantiated in main(). And all other code “modules” are also instantiated in main (as classes), and receive state and I/O as parameters.

                                                            If you pass the same state to two different modules, then you have to be careful that they are not run concurrently.

                                                            If they don’t accept the same state as parameters, then they are safe to run concurrently.

                                                            There are zero mutable globals. That is how the Oil interpreter is written.

                                                            It helps, as I mentioned, to have some classes that are like functions, and some classes that are like data. Data and functions are both usefully and easily expressed as classes. (In Oil I use ASDL to make data-like classes, for example.)


                                                            tl;dr You can make state and I/O parameters in an OO language, and then you get a lot of the reasoning benefits of functional programs, along with some other flexibility (like using a mutable style inside referentially transparent functions, mentioned in my comments and in the article)

                                                            1. 1

                                                              Could you expand on your first point? What kind of systems and connection between them do you have in mind?

                                                              1. 3

                                                                An example, a simple one:

                                                                class A {
                                                                  int x;

                                                                  // Mutator that the receiver (or another thread) might call.
                                                                  void mutate_x(int v) { x = v; }

                                                                  // If receiver.receive() can reach back and call mutate_x() while
                                                                  // this loop runs, the number of iterations changes under our feet.
                                                                  void sendto(B receiver) {
                                                                    int y = x + 10;
                                                                    while (x < y) { receiver.receive(x); x += 1; }
                                                                  }
                                                                }
                                                                

                                                                Depending on whether the receiver here has separate access to A and can call mutate_x while sendto is running, this code is either fine or it isn’t.

                                                                1. 1

                                                                  That makes sense. Thanks for elaborating!

                                                            2. -1

                                                              Can you please paste your replies here, so I don’t have to make another click?

                                                            1. 6

                                                              I’m afraid Haskell wins this one:

                                                              • Type declarations and value definitions go on separate lines, reducing the amount of noise that ever lands on a single line.
                                                              • Fancy polymorphic types can be inferred to a much larger extent than in Scala, further reducing the noise.
                                                              • Pattern matching (with branching!) on the left-hand side of a function definition is actually possible and idiomatic (see the sketch below).
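                                                              A minimal example of all three (mine, for illustration):

                                                              -- Signature on its own line; the definition branches by pattern
                                                              -- matching directly on the left-hand side.
                                                              safeDiv :: Int -> Int -> Maybe Int
                                                              safeDiv _ 0 = Nothing
                                                              safeDiv x y = Just (x `div` y)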
                                                              1. 8

                                                                Not a fan of Haskell in this regard – I think the split of params from their types comes at a cost, just as fancier type inference does.

                                                                But yeah, Haskell (or rather Idris with its single :) may occupy a different local optimum from the one I described.

                                                                1. 6

                                                                  Haskell is moving even more in this direction. There was the quirk that, while type signatures could go on separate lines, kind signatures (the “types of types”, useful in places like datatype and type family declarations) could not. But there’s a new extension called StandaloneKindSignatures that allows exactly that. For example:

                                                                  type MyEither :: Type -> Type -> Type
                                                                  data MyEither a b = MyLeft a | MyRight b
                                                                  
                                                                  1. 3

                                                                    I definitely do not like Haskell (hence “I’m afraid”), but credit is given where credit is due. Using polymorphism in Haskell is much more pleasant than using it in a language that requires you to explicitly spell out every single type variable.

                                                                    What is the cost in separating function signatures from function argument lists?

                                                                    1. 3

                                                                      I don’t see how separating the signature from the definition would change the properties of type inference, nor how using a single : would improve anything in this regard.

                                                                      Another aspect that would be more in line with the content of your article would be how class constraints are defined:

                                                                      fromIntegral :: (Num b, Integral a) => a -> b

                                                                      Here we can see that Haskell took kind of both roads at the same time :)

                                                                  1. 4

                                                                    So…. is this different from structural typing / static duck typing?

                                                                    1. 12

                                                                      Good question. I guess it adds compile-time quack-checking?

                                                                      1. 9

                                                                        structural typing

                                                                        You could have structurally typed records without row polymorphism. You might say that { x: int, y: float } is the same type as { y: float, x: int }, but it’s not the same type as { x: int, y: float, z: string }.

                                                                        That’s how SML works. You can have two different record types with some fields in common:

                                                                        val a = {x = 1, y = 2.0};
                                                                        val b = {x = 3, y = 4.0, z = "hello"};
                                                                        

                                                                        and then you can use the same field selector on both:

                                                                        val ax = #x a;
                                                                        val bx = #x b;
                                                                        

                                                                        But you can’t take #x and package it up into a function, because the type system has no concept of “any record type with an x field”:

                                                                        fun getX r = #x r;  (* Error: unresolved flex record (can't tell what fields there are besides #x) *)
                                                                        

                                                                        static duck typing

                                                                        Are you thinking of Go interfaces, or C++ templates? (Or something else? :D) I think either one could plausibly be called compile-time duck typing.

                                                                        In Go you can use interfaces to have a function accept any struct that meets some requirements. But I don’t think you could express a function like this:

                                                                        // addColor has type { ...fields } -> { color: string, ...fields }
                                                                        let addColor (c : string) r = { color = c, ...r };
                                                                        let bluePoint2d = addColor "blue" { x = 1, y = 2 };
                                                                        let redPoint3d = addColor "red" { x = 3, y = 4, z = 5 };
                                                                        

                                                                        addColor can add a “color” field to any record type, without knowing which concrete type it was given. But the caller knows which record type it’s using, so it doesn’t need to cast the result.

                                                                        In C++ you can write getX as a templated function:

                                                                        template<class T>
                                                                        auto getX(T r) { return r.x; }
                                                                        

                                                                        But now the compiler doesn’t typecheck the definition of getX: it just waits for you to use it, and typechecks each use. That’s like duck typing (just pass it in, and see if it works), but at compile time. Just like with run-time duck typing, the problem is you can’t look at the signature of getX and understand what its requirements are.

                                                                        1. 1

                                                                          Are you thinking of Go interfaces, or C++ templates?

                                                                          Mostly TypeScript actually :) I’m not sure where the line is because TS calls what they do “structural typing” but I’m having a hard time seeing where the differences lie.

                                                                          1. 1

                                                                            Type inference for SML code using field selectors is hilariously bad, precisely because the natural and obvious principal types (having row variables in them) are inexpressible in SML.

                                                                            Out of curiosity, how would you expect your addColor function to work on records that already contain a color field? The purist’s answer would be to use something like Ur/Web’s disjointness witnesses, but, gawd, the syntax is super awful.

                                                                            1. 2

                                                                              Here’s an example in purescript that will only add color to records that don’t already have a color key:

                                                                              module Rows where
                                                                              
                                                                              import Prelude
                                                                              import Effect (Effect)
                                                                              import Effect.Class.Console as Console
                                                                              import Prim.Row (class Lacks)
                                                                              import Record as Record
                                                                              
                                                                              addColor ::
                                                                                forall r.
                                                                                Lacks "color" r =>
                                                                                String ->
                                                                                { | r } ->
                                                                                { color :: String | r }
                                                                              addColor color record = Record.union { color } record
                                                                              
                                                                              main :: Effect Unit
                                                                              main = do
                                                                                Console.logShow $ addColor "red" { foo: 1 }
                                                                               {- this won't compile
                                                                                  Console.logShow $ addColor "red" { color: "blue" }
                                                                               -}
                                                                              
                                                                              1. 1

                                                                                Fascinating. How is this Lacks type class implemented? Compiler magic?

                                                                                1. 2

                                                                                  It’s part of the Prim.Row builtin.

                                                                          2. 4

                                                                            It’s a kind of structural typing, so, I suppose it’s a kind of static duck typing.

                                                                            In general, with all of these, the more flexible the system the harder it is to work with and the less capable the inference is. Row typing is one way to weaken general structural subtyping by limiting the subtyping relationship to that which can be generated by the record/row type. So Row types are a bit easier to infer.

                                                                            1. 3

                                                                              I wrote a post a long time ago about differences between row polymorphism and (structural) subtyping: https://brianmckenna.org/blog/row_polymorphism_isnt_subtyping

                                                                              1. 1

                                                                                Oh I just realised my post was mentioned at the bottom of the posted one. Awesome!

                                                                            1. 3

                                                                              I solved this problem a while ago and wrote some notes on it.

                                                                              “Equality & Identity”: Overview, Problems, Solution, Fixing Haskell, Implementation in Dora


                                                                              One mistake in the article though:

                                                                              You want the developer to see === and think “reference equality,” not “more equals signs is better.”

                                                                              No, you don’t want this. You want === to stand for identity, such that it can have a definition that works consistently across all types, reference and value. Both sane handling of floating point values as well as the ability to freely change types from reference types to value types (and back) depend on it.
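                                                                              For Double, such an identity can be as simple as comparing bit patterns – a sketch in Haskell (my construction, to illustrate the idea):

                                                                              import GHC.Float (castDoubleToWord64)

                                                                              -- Bit-pattern identity: identical x x is True even when x is NaN,
                                                                              -- and identical 0.0 (-0.0) is False – in both cases the opposite
                                                                              -- of what IEEE (==) answers.
                                                                              identical :: Double -> Double -> Bool
                                                                              identical x y = castDoubleToWord64 x == castDoubleToWord64 y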

                                                                              Some minor nitpicks:

                                                                              programming languages should make it simple to create types where equality comparison is disabled …

                                                                              It’s probably easier to pick sane defaults and stop the runtime from implementing random methods “for convenience”.

                                                                              … they should use this feature in their standard library where needed, such as on floating point types

                                                                              Not really seeing the reason for this.


                                                                              Overall a nice article with an entertaining description of an interesting problem!

                                                                              1. 2

                                                                                Your “fixing Haskell” article does not fix anything. The actual fix is to make Float and Double not implement the Eq type class, period.

                                                                                One thing Standard ML does right is using different functions for comparing eqtypes (e.g. int, bool, tuples of eqtypes, etc.) and floating points (i.e. real). When you use the former, you can expect all the usual mathematical properties of equality to hold. When you use the latter, you know what you are getting into.
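                                                                                The difference is easy to demonstrate (a small sketch; Data.Map’s lookup goes through Ord Double, which inherits IEEE comparison):

                                                                                import qualified Data.Map.Strict as Map

                                                                                nan :: Double
                                                                                nan = 0 / 0

                                                                                main :: IO ()
                                                                                main = do
                                                                                  print (nan == nan)                              -- False, per IEEE
                                                                                  -- The map demonstrably holds a NaN key, yet the Ord-based
                                                                                  -- search can never find it again:
                                                                                  print (Map.lookup nan (Map.singleton nan "hi")) -- Nothing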

                                                                                1. 1

                                                                                  Your “fixing Haskell” article does not fix anything.

                                                                                  I think the article does a good job to explain why you’re wrong.

                                                                                  The actual fix is to make Float and Double not implement the Eq type class, period.

                                                                                  Why not just drop floating point values altogether, “language design is hard, let’s go shopping”-style?

                                                                                  When you use the former, you can expect all the usual mathematical properties of equality to hold. When you use the latter, you know what you are getting into.

                                                                                  That’s exactly what happens – you specify the properties you need:

                                                                                  If you require identity and the guarantees it provides, e. g. for keys of maps, simply demand that the type implements it.

                                                                                  class HashMap[K: Identity + Hash, V] { ...}
                                                                                  // Double implements Identity
                                                                                  let map = HashMap[Double, String]()
                                                                                  map.put(Double.NaN, "found me")
                                                                                  // Lookup works, even for NaN:
                                                                                  assert(map.get(Double.NaN) == Some("found me"))
                                                                                  
                                                                                  1. 1

                                                                                    I think the article does a good job to explain why you’re wrong.

                                                                                    Your article literally asks for the Eq class to expose two distinct equality testing functions. What can be more broken than that? I am having a lot of trouble imagining it.

                                                                                    If you require identity and the guarantees it provides, e. g. for keys of maps, simply demand that the type implements it.

                                                                                    Using a type class to constrain the key type of a map is in itself broken. Suppose, for the sake of argument, that you have a type that can be totally ordered in two different ways, both useful. (I could have also said “hashed in two different ways, both useful”, but that is less realistic.) It is perfectly reasonable to want to construct two distinct ordered map types, one for each total order. With a type class constraint on the map’s key type, you cannot do this, because type classes must have canonical instances. (Or you could destroy canonicity at your own peril, as Scala does.)

                                                                                    The correct solution is to do exactly what ML does:

                                                                                    signature ORDERED =
                                                                                    sig
                                                                                        type key
                                                                                        val compare : key * key -> order
                                                                                    end
                                                                                    
                                                                                    functor TreeMap (Key : ORDERED) :> MAP
                                                                                        where type key = Key.key =
                                                                                    struct
                                                                                        (* ... *)
                                                                                    end
                                                                                    

                                                                                    The map abstraction is not parameterized just by the key type but rather by the whole ordered type structure, of which the naked key type is but one component. This is exactly how things should be.

                                                                                    Applying this lesson to your hash map example, we have

                                                                                    signature HASHABLE =
                                                                                    sig
                                                                                        type key
                                                                                        val equal : key * key -> bool           (* not necessarily ML's built-in equality! *)
                                                                                        val hash : key -> int
                                                                                    end
                                                                                    
                                                                                    functor HashMap (Key : HASHABLE) :> MAP
                                                                                        where type key = Key.key =
                                                                                    struct
                                                                                        (* ... *)
                                                                                    end
                                                                                    


                                                                                    1. 1

                                                                                      Your article literally asks for the Eq class to expose two distinct equality testing functions. What can be more broken than that? I am having a lot of trouble imagining it.

                                                                                      As mentioned in the article, you could also introduce a new trait for it.

                                                                                      I think the SML approach is well-known to those who thought about non-canonical instances in Haskell and Scala. (Rust seems to enforce uniqueness of instances, so that’s another interesting data point.)

                                                                                      The core question is whether what this feature buys you is worth the price – I wouldn’t say so; that’s why I’m approaching this by splitting things into better-defined typeclasses, to relieve the pressure on needing multiple instances of the same typeclass even for trivial things.

                                                                                      For instance, with numbers you no longer need to engage in crazy contortions (like in Haskell) to swap behavior between floats that compare nicely in the domain (NaN != NaN) and floats that can be put into and retrieved from data structures.

                                                                                      1. 2

                                                                                        I think the SML approach is well-known to those who thought about non-canonical instances in Haskell and Scala.

                                                                                        Haskell’s designers know very well the importance of having canonical instances for type classes to work correctly, as well as the inherent conflict between canonicity of instances and modularity.

                                                                                        And yet Haskell has the -XIncoherentInstances extension.

                                                                                        The core question is whether the price one would pay for this feature is worth the cost

                                                                                        The payoff is huge. The type checker will stop you immediately if you conflate two structures constructed from different instances on the same type. For example, suppose you have two ordered sets and want to merge them. The obvious O(n) algorithm only works correctly if both ordered sets are ordered using the same total order on the element type. But Scala will silently allow you to use the wrong order on one of the sets, due to the way implicits work when used as type class instances.

                                                                                        that’s why I’m approaching this by splitting things into better-defined typeclasses

                                                                                        The problem with your “better-defined” type classes is that many of them will support similar operations. In fact, similar enough that you will want to define the same algorithms on them. But, since your “better-defined” type classes are not literally the same as each other as far as the type checker cares, you still need to reimplement the same algorithms over and over, which defeats the point of using type classes in the first place.

                                                                                        Another way to work around this issue is to use newtype wrappers to mediate the conversion between instances. But that is disgusting in its own way. For example, before Applicative became a superclass of Monad, Haskell used to have a WrappedMonad newtype as a contrived way to get an Applicative instance from a Monad one.
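                                                                                        (Concretely, the newtype route for the ordered-map example looks something like this sketch, using Down from Data.Ord:)

                                                                                        import Data.Ord (Down(..))
                                                                                        import qualified Data.Map.Strict as Map

                                                                                        -- Same key type, two orders: the second map smuggles in the
                                                                                        -- reversed order through the Ord instance of the Down newtype.
                                                                                        ascending :: Map.Map Int String
                                                                                        ascending = Map.fromList [(1, "a"), (2, "b")]

                                                                                        descending :: Map.Map (Down Int) String
                                                                                        descending = Map.fromList [(Down 1, "a"), (Down 2, "b")]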

                                                                                        1. 1

                                                                                          This comment was pitched at exactly my level of type knowledge - thanks!

                                                                                          1. 1

                                                                                            And yet Haskell has the -XIncoherentInstances extension.

                                                                                            … which you don’t even need. GHC allows defining incoherent instances without it.

                                                                                            For example, suppose you have two ordered sets and want to merge them. The obvious O(n) algorithm only works correctly if both ordered sets are ordered using the same total order on the element type.

                                                                                            Scala supports this with path-dependent types, but the syntactical and mental overhead never felt worth it.

                                                                                            Anyway, I wasn’t aware that SML supported applicative functors (which I think are a requirement to make this work) – when did this change?

                                                                                            A conservative approximation could also be done at the value-level – if typeclass instances were first-class values (but I don’t think that’s worth it).

                                                                                            But, since your “better-defined” type classes are not literally the same as each other, as far as the type checker cares, you still need to reimplement the same algorithms over and over, which defeats the point to using type classes in the first place.

                                                                                            Agreed. I just think that’s vastly preferable to people using one (or a few) typeclasses as random dumping grounds until those typeclasses have no laws left (see Eq, Ord in Haskell).

                                                                                            Another way to work around this issue is to use newtype wrappers to mediate the conversion between instances. But that is disgusting in its own way.

                                                                                            Ick, that’s far uglier than the “recommended” approach in Haskell, which is wrapping the data type and reimplementing the typeclass instances (e.g. TotalDouble wrapping Double to implement Eq and Ord differently from the raw Double).

                                                                                            1. 1

                                                                                              Scala supports this with path-dependent types, but the syntactical and mental overhead never felt worth it.

                                                                                              Scala’s troubles arise from the desire to make everything first-class. “Objects with type members are basically first class modules, right?” It sounds good until you realize it does not interact well with type inference. Knowing what to make second-class is an art.

                                                                                              Anyway, I wasn’t aware that SML supported applicative functors (which I think are a requirement to make this work) – when did this change?

                                                                                              Standard ML does not have applicative functors. Making applicative functors work correctly requires a notion of “pure functor body”, which no ML dialect has as far as I can tell. (OCaml’s applicative functors create broken abstract data types when their bodies are impure.)

                                                                                              In any case, using applicative functors is not the only way to make this work. Another way is to arrange your code so that you do not need to apply the same functor to the same module twice. Unfortunately, this requires a way of thinking that is not very compatible with how either object-oriented or functional programmers design their libraries.

                                                                                              I have two cardinal rules for library design.

                                                                                              1. Do not expose higher-order interfaces when first-order ones will do.

                                                                                              As a counterexample, consider the Haskell API for maps:

                                                                                              empty :: Map k a
                                                                                              lookup :: Ord k => k -> Map k a -> Maybe a
                                                                                              insert :: Ord k => k -> a -> Map k a -> Map k a
                                                                                              delete :: Ord k => k -> Map k a -> Map k a
                                                                                              {- more functions -}
                                                                                              

                                                                                              This is higher-order than necessary. Essentially, lookup, insert and delete are functors that are applied to an Ord structure every time they are called. Only from the canonicity of Ord instances can you argue that the effect of these multiple applications is equivalent to a single application of a functor that produces a map type whose key type is monomorphic.

                                                                                              Have you gained anything? Strictly speaking, yes. You have gained the ability to tell your users that the implementation code for inserting into a Map Int a and into a Map String a is really the same.

                                                                                              Is this something the user cares about? IMO, no. If I am working with maps with 64-bit integer keys, I do not care whether the underlying implementation is a generic red-black tree (which would work just as well with any other key type) or a radix tree (which requires lexicographically ordered bit vectors as keys).

                                                                                              The right interface, as previously stated, has an abstract but monomorphic key type:

                                                                                              type Key
                                                                                              type Map a
                                                                                              
                                                                                              empty :: Map a
                                                                                              lookup :: Key -> Map a -> Maybe a
                                                                                              insert :: Key -> a -> Map a -> Map a
                                                                                              delete :: Key -> Map a -> Map a
                                                                                              

                                                                                              2. Do not enforce two unrelated sets of invariants in the same module.

                                                                                              According to most standard libraries, the obvious way to implement ordered maps is something like:

                                                                                              functor TreeMap (Key : ORDERED) :> MAP
                                                                                                  where type key = Key.key =
                                                                                              struct
                                                                                                  (* hard-coded implementation of red-black trees or what have you *)
                                                                                              end
                                                                                              

                                                                                              Never mind that you have to reimplement red-black trees from scratch if you want to implement sets as well. This design is bad for a more fundamental reason: you cannot debug the red-black tree implementation without tearing down the map abstraction!

                                                                                              The problem here is that TreeMap enforces two unrelated sets of invariants:

                                                                                              1. Trees are correctly balanced.
                                                                                              2. The in-order traversal of a tree produces a strictly ascending sequence of keys.

                                                                                              The first invariant actually does not depend on the key type, so it can be factored out into a separate module:

                                                                                              signature SEARCH_TREE =
                                                                                              sig
                                                                                                  type 'a tree
                                                                                                  (* Manipulate self-balancing search trees using zippers.
                                                                                                   * The user must know how to look for a given element. *)
                                                                                              end
                                                                                              
                                                                                              structure RedBlackTree :> SEARCH_TREE = (* ... *)
                                                                                              

                                                                                              Once we have a working search tree implementation, we can finally implement maps:

                                                                                              functor TreeMap
                                                                                                  ( structure Key : ORDERED
                                                                                                    structure Tree : SEARCH_TREE ) :> MAP where type key = Key.key =
                                                                                              struct
                                                                                                  type key = Key.key
                                                                                                  type 'a map = (key * 'a) Tree.tree
                                                                                                  (* ... *)
                                                                                              end
                                                                                              

                                                                                              The same lesson can be applied to hash maps:

                                                                                              signature TABLE =
                                                                                              sig
                                                                                                  type 'a table
                                                                                                  (* Operations on a fixed size table, including collision resolution.
                                                                                                   * The user must know where to start looking for a given element. *)
                                                                                              end
                                                                                              
                                                                                              structure LinearProbing :> TABLE = (* ... *)
                                                                                              structure QuadraticProbing :> TABLE = (* ... *)
                                                                                              
                                                                                              functor HashMap
                                                                                                  ( structure Key : HASHABLE
                                                                                                    structure Table : TABLE ) :> MUTABLE_MAP where type key = Key.key =
                                                                                              struct
                                                                                                  type key = Key.key
                                                                                                  type 'a map = (key * 'a ref) Table.table ref
                                                                                                  (* ... *)
                                                                                              end
                                                                                              
                                                                                  2. 1

I’m getting a bit confused by the way you contrast ‘equality’ and ‘identity’. To me they mean the same thing, and your article is suggesting === for equality and == for equivalence. This feels in line with your point, because there are many different ways to define equivalence for any particular type.

                                                                                    1. 2

Yes – the naming is largely driven by the desire not to put two things whose differences matter behind two names that are quite similar.

                                                                                    2. 1

                                                                                      I really like your series; I’ll update my post to have a reference to it.

It’s not clear to me that having === mean two different things for value types and reference types is an improvement (it’s not clear to me that it isn’t an improvement either; I’m not sure). It would be interesting to study this in a human-factors evaluation of a programming language (like the Quorum language) to see what makes the most intuitive sense to developers. Certainly what you propose is at least lawful, which is an improvement over the current state of things!

                                                                                      My reasoning for not wanting equality on floats is the same as SML’s for not having it, which I cite in the article. A reasonable comparison on a float requires an epsilon value, which doesn’t fit into the == operator usage.

                                                                                      FYI, the Implementation in Dora post seems to be empty.

                                                                                      Thanks!

                                                                                      1. 2

                                                                                        I really like your series

                                                                                        Thanks, I appreciate it! :-)

It’s not clear to me that having === mean two different things for value types and reference types is an improvement

My argument is that it stands for the exact same thing in both reference and value types: “is this thing here identical to that thing over there?” Sure, we know this as “reference equality” on reference types, but it’s just as crucial on value types. Some may call it “ability to find your floats in your collections again” equality.

                                                                                        Here is another article on how == vs. === relates to the various orderings the IEEE754 spec defines: Comparing & Sorting

                                                                                        A reasonable comparison on a float requires an epsilon value, which doesn’t fit into the == operator usage.

I disagree on that. 80% of the comparisons on floating-point values are with “magic values” like 0.0, 1.0, NaN etc. that were not exposed to any kind of arithmetic computation beforehand, and therefore cannot foil the comparison.
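
To make the distinction concrete, here is a minimal Rust sketch (my illustration, not the proposed language; the identical helper stands in for === by comparing raw bit patterns):

fn main() {
    // IEEE `==` works fine for "magic values" never touched by arithmetic...
    assert!(0.0_f64 == 0.0);
    // ...but NaN is never `==` to anything, including itself:
    assert!(f64::NAN != f64::NAN);

    // An identity-style comparison (standing in for `===`), modeled here
    // by comparing raw bit patterns, does find your NaN again:
    fn identical(a: f64, b: f64) -> bool {
        a.to_bits() == b.to_bits()
    }
    assert!(identical(f64::NAN, f64::NAN));

    // It also distinguishes 0.0 from -0.0, which `==` conflates:
    assert!(0.0_f64 == -0.0);
    assert!(!identical(0.0, -0.0));
}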

                                                                                        FYI, the Implementation in Dora post seems to be empty.

                                                                                        Yes, I left out the link, because I haven’t written it yet. :-)

Here is a unit test that might demonstrate the semantics.

                                                                                        1. 2

                                                                                          I would argue that if you have a comparison which is only good for “magic values” then what you really have is a bunch of useful functions: isNaN, isZero, etc. I will note that == NaN is going to fail anyway!

                                                                                          1. 1

                                                                                            I will note that == NaN is going to fail anyway!

                                                                                            Which is exactly where === comes in. :-)

                                                                                    1. 3

                                                                                      How do you express the following Haskell 98 type in Reason?

                                                                                      data Free f a = Pure a | Join (f (Free f a))
                                                                                      
                                                                                      1. 2

                                                                                        Equality is not hard. What is actually hard is convincing PL designers not to use the word “equality” for something that does not behave like mathematical equality. This is a sociological problem, not a technical one.

                                                                                        1. 3

                                                                                          The criticism against currying resonates with me. Partially applied functions are objects, very much in the OOP sense: opaque bundles of state and behavior. This state might be idiomatically immutable, but it is still harder to debug than transparent data structures.

                                                                                          On an unrelated matter, there should also be an article titled “On the lack of functors in F#”, providing practical workarounds for the lack of higher-kinded abstractions.

                                                                                          1. 7

                                                                                            Before you know it, you’ve wound up explaining the difference between a reference type and a value type.

This is timely. I’m helping someone in my family make the jump from some light Python scripting to C#, and they ask hard questions whose answers are actually insanely complicated. What is the difference between a reference and a value type? Why does basically every language give you a copy of a number when you call a function, but a reference to an object?

As far as I can tell, the answer boils down to how registers and RAM work. Essentially, if all numbers were references, then computers would be a lot slower. But most of the time an object isn’t going to fit in a register; and while we know how to make pointers into RAM that fit in a register, I don’t know what a pointer to another register would even look like.

So yeah, it turns out pedagogy is important, because it’s pretty damn hard to give a straight answer to the simplest question that doesn’t boil down to “languages are a human endeavor and we’ve sort of been building on previous historical accidents for several decades.”

                                                                                            1. 6

Is that really insanely complicated? The way I was taught to program - it was after I already knew, so it’s not how I learnt, but it seemed to work well for everyone else in the course - was through very explicit imagery of the heap and the stack, with values and pointers drawn as arrows. Values are copied, pointers are copied, everything is copied.

I think that approach worked a lot better and made it all very simple. Pointers are just arrows pointing at memory locations, and their numerical value is the address of the location they point at. Everyone knows that your computer’s memory is billions of individual bytes, and it’s not really a stretch to recognise that you can number them 1,2,3,…

                                                                                              The key thing was that python was taught this way initially, then when C was taught it was the same lecturer using the same technique, the same ideas, and they could be compared directly. “Recall that in Python it works this way, well now in C you can have values stored directly in variables, and we call the variables that point into the heap ‘pointers’.” etc.

                                                                                              I think it’s a lot better to teach people how things actually work than it is to hope they can abstract an accurate mental model just from seeing a whole lot of examples.

                                                                                              1. 4

                                                                                                As far as I can tell the answer boils down to how registers and ram work.

                                                                                                This is not a very good excuse. High-level languages can (and often should) hide any indirection used in the representation of compound values. For instance, as far as a Haskell or Prolog programmer cares, the tail of a list is simply “a list”, not “a pointer to a list”, even though the latter is a more accurate description of its internal representation.

                                                                                                Popular languages are not very good at data abstraction. As a result, they only have values that can be represented in the naïvest possible ways.

                                                                                                1. 4

                                                                                                  What is the difference between a reference and a value type?

                                                                                                  I’ve often found success interpreting questions like this as “when/what information do I use to choose between a reference or a value type?”

This is often much easier to answer for the beginner, at least superficially, than the question they actually asked. If they’re really prepared for a lecture, they can lead the conversation into that.

                                                                                                  Why does basically every language give you a copy of a number when you call a function but a reference to an object?

Be honest: explain that it’s basically theology, and you might as well be asking why Catholics eat fish on Friday. A lot of programming is superstition we call “best practices” because they seem to work, and that’s ok.

                                                                                                  Just remember the student is asking a question because they think it will help them grow, not because they need to know the answer to this question. You, as the teacher, are responsible for recognising that and telling them what they need to hear to get better, instead of blindly answering their questions.

                                                                                                  1. 2

                                                                                                    In many languages, a good answer is that values are shared.

                                                                                                    https://en.m.wikipedia.org/wiki/Evaluation_strategy#Call_by_sharing

                                                                                                    This is true for Python, JavaScript, Julia, etc.

                                                                                                    Here’s the Julia manual’s description

                                                                                                    Julia function arguments follow a convention sometimes called “pass-by-sharing”, which means that values are not copied when they are passed to functions. Function arguments themselves act as new variable bindings (new locations that can refer to values), but the values they refer to are identical to the passed values. Modifications to mutable values (such as Arrays) made within a function will be visible to the caller. This is the same behavior found in Scheme, most Lisps, Python, Ruby and Perl, among other dynamic languages.

                                                                                                    This is also semantically how Java works (or pretty close), so long as you think of primitive types as immutable (which you should ;))

                                                                                                    1. 3

                                                                                                      Julia function arguments follow a convention sometimes called “pass-by-sharing”, which means that values are not copied when they are passed to functions.

I cannot comment on Julia, because I do not know it. However, Python, JavaScript and Java are all pass-by-value languages. When you call a function (or method), you always pass a copy of the arguments’ values, which the callee can modify privately. The effects of such modifications are never reflected in the caller’s original copies of the arguments’ values. It just so happens that the vast majority of values in these languages[0] are references to mutable objects[1].

                                                                                                      [0] Python goes even further. It has literally no values other than object references.

                                                                                                      [1] References to objects are not the same thing as objects themselves.
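
To illustrate the distinction, here is a Rust sketch that models these semantics (Rc/RefCell stand in for the implicit object references of Python or Java; all the names are mine):

use std::cell::RefCell;
use std::rc::Rc;

// The callee receives a copy of the caller's value; that value is a
// reference to a shared, mutable object.
fn callee(mut list: Rc<RefCell<Vec<i32>>>) {
    list.borrow_mut().push(4); // mutation: visible to the caller
    list = Rc::new(RefCell::new(Vec::new())); // reassignment: local only
    assert_eq!(list.borrow().len(), 0);
}

fn main() {
    let list = Rc::new(RefCell::new(vec![1, 2, 3]));
    callee(Rc::clone(&list)); // the reference is copied, not the object
    assert_eq!(*list.borrow(), vec![1, 2, 3, 4]);
}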

                                                                                                      1. 1

My argument is that it is semantically equivalent, because all the value types are immutable and small.

                                                                                                        This lets you give a semantically correct explanation while leaving the implementation details (why are some values references?) till later or never.

                                                                                                        Optimising compilers don’t always pass by value anyway because of inlining.

                                                                                                        1. 1

                                                                                                          If your objects are immutable and you do not provide access to their physical identities, then you can indeed pretend they are values. In fact, this is how compound values are usually implemented in ML, Haskell and Prolog.

However, the examples you cited (other than Julia, on which I cannot comment due to my ignorance) do not work like this at all. Objects are mutable and their physical identity is exposed. Yes, even in Scheme.

                                                                                                          1. 1

                                                                                                            You misunderstand me. I’m only saying that the primitive values can be treated as immutable. Which they can.

                                                                                                            1. 1

                                                                                                              When you say “primitive value”, I can only suspect that you are implying the existence of “non-primitive values”.

                                                                                                              My entire point is that the languages you mentioned (again, with the possible exception of Julia) have nothing but primitive values anyway. The identity of an object is a primitive value: you cannot decompose it into smaller parts. And the state of an object is not something you can assign to a variable, so it is not a value to begin with.

                                                                                                              And, yes, values are intrinsically immutable. You can reassign an imperative variable, after which this variable holds another value. But the old value has not changed. What did change is the variable that has been reassigned.

                                                                                                              1. 1

Java and some other languages have built-in datatypes that they call primitive types. In Java these are especially notable because they’re not a normal part of the object system.

                                                                                                                Anyway, we’re talking past each other and it’s a bit annoying being told how to suck eggs, so I think I’ll end this here :)

                                                                                                                No hard feelings, communication is hard.

                                                                                                  1. 32

                                                                                                    To me the big deal is that Rust can credibly replace C, and offers enough benefits to make it worthwhile.

                                                                                                    There are many natively-compiled languages with garbage collection. They’re safer than C and easier to use than Rust, but by adding GC they’ve exited the C niche. 99% of programs may work just fine with a GC, but for the rest the only practical options were C and C++ until Rust showed up.

                                                                                                    There were a few esoteric systems languages or C extensions that fixed some warts of C, but leaving the C ecosystem has real costs, and I could never justify use of a “weird” language just for a small improvement. Rust offered major safety, usability and productivity improvements, and managed to break out of obscurity.

                                                                                                    1. 38

Ada provided everything except ADTs and linear types, including seamless interoperability with C, 20 years before Rust. Cyclone was Rust before Rust, and it was abandoned in a state similar to the one Rust was in when it took off. Cyclone is dead, but Ada got a built-in formal verification toolkit in its latest revision; for some, that alone can be a reason to pick it over anything else for a new project.

I have nothing against Rust, but the reason it’s popular is that it came at the right time, in the right place, from a sufficiently big-name organization. It’s one of the many languages based on those ideas that, fortunately, happened to succeed. And no, when it first got popular it wasn’t really practical. None of these points makes Rust bad. One should just always see the bigger picture, especially when it comes to heavily hyped things. You need to know the other options to decide for yourself.

                                                                                                      Other statically-typed languages allow whole-program type inference. While convenient during initial development, this reduces the ability of the compiler to provide useful error information when types no longer match.

Only in languages that cannot unambiguously infer the principal type. Whether to make a tradeoff between that and support for ad hoc polymorphism is subjective.

                                                                                                      1. 15

I saw Cyclone when it came out, but at the time I dismissed it as “it’s C, but weird”. It had the same basic syntax as C, but added lots of pointer sigils. It still had the same C preprocessor and the same stdlib.

Now I see it had a feature set much closer to Rust’s (tagged unions, patterns, generics), but Rust “sold” them better. Rust used these features for Result, which is a simple yet powerful construct. Cyclone could have done the same, but didn’t: it kept nullable pointers and added Null_Exception.
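
For readers who haven’t seen it, a minimal sketch of the Result construct in question, using only the standard library:

use std::num::ParseIntError;

// Failure is an ordinary value, not a nullable pointer or an exception;
// the caller must pattern-match it or propagate it with `?`.
fn double(input: &str) -> Result<i32, ParseIntError> {
    let n: i32 = input.trim().parse()?; // early-returns the Err case
    Ok(n * 2)
}

fn main() {
    match double("21") {
        Ok(n) => println!("ok: {}", n),
        Err(e) => println!("parse failed: {}", e),
    }
}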

                                                                                                        1. 12

                                                                                                          Ada provided everything except ADTs and linear types

                                                                                                          Unfortunately for this argument, ADTs, substructural types and lifetimes are more exciting than that “everything except”. Finally the stuff that is supposed to be easy in theory is actually easy in practice, like not using resources you have already cleaned up.

                                                                                                          Ada got a built-in formal verification toolkit in its latest revision

                                                                                                          How much of a usability improvement is using these tools compared to verifying things manually? What makes types attractive to many programmers is not that they are logically very powerful (they are usually not!), but rather that they give a super gigantic bang for the buck in terms of reduction of verification effort.

                                                                                                          1. 17

                                                                                                            I would personally not compare Ada and Rust directly as they don’t even remotely fulfill the same use-cases.

Sure, there have been languages that did X, Y, Z before Rust (the project itself lays no false claim to inventing the parts of the language found elsewhere in the past), but the actual distinguishing factor for Rust, the one that places it into an entirely different category from Ada, is how accessible and enjoyable it is to interact with while providing those features.

If you’re in health or aeronautics, you should probably be reaching for the serious, deep toolkit provided by Ada, and I’d probably side with you in saying those people should have been doing that for the last decade. But Ada is really not for the average engineer. It’s an amazing, albeit complex, language that represents not only a long history of incredible engineering but also a very real barrier to entry, one that’s simply incomparable to Rust’s.

                                                                                                            If, for example, I wanted today to start writing from scratch a consumer operating system, a web browser, or a video game as a business venture, I would guarantee you Ada would not even be mentioned as an option to solve any of those problems, unless I wanted to sink my own ship by limiting myself to pick from ex-government contractors as engineers, whose salaries I’d likely be incapable of matching. Rust on the other hand actually provides a real contender to C/C++/D for people in these problem spaces, who don’t always need (or in some cases, even want) formal verification, but just a nice practical language with a systematic safety net from the memory footguns of C/C++/D. On top of that, it opens up these features, projects, and their problem spaces to many new engineers with a clear, enjoyable language free of confusing historical baggage.

                                                                                                            1. 6

                                                                                                              Have you ever used Ada? Which implementation?

                                                                                                              1. 15

                                                                                                                I’ve never published production Ada of any sort and am definitely not an Ada regular (let alone pro) but I studied and had a fondness for Spark around the time I was reading “Type-Driven Development with Idris” and started getting interested in software proofs.

In my honest opinion, the way the base Ada language is written (simple, and plain-operator heavy) lends itself really well to extension languages, but it can also make it difficult for beginners to distinguish the class of concept in use at times, whereas Rust’s syntax has a clear and immediate distinction between blocks (the land of namespaces), types (the land of names), and values (the land of data). In terms of cognitive load, then, it feels as though these two languages communicate at different levels: Rust communicates in the mode of raw values and their manipulation through borrows, while the lineage of Ada languages communicates at a level that, in my amateur Ada-er view, centers on expressing properties of your program (and I don’t just mean the Spark stuff, obviously). I wasn’t even born when Ada was created, and so I can’t say for sure without becoming an Ada historian (not a bad idea…), but this seems like a product of Ada’s heritage (just as Rust is so obviously written to look like C++).

To try to clarify this ramble of mine: in my schooling experience, many young programmers of my age are almost exclusively taught to program at an elementary level of abstract instructions with the details of those instructions removed, and then, after being taught a couple of type-level incantations, get a series of algorithms and their explanations thrown in their face. Learning to consider their programs specifically in terms of expressing properties of those programs’ operations becomes a huge step out of that starting box (one that some don’t leave long after graduation). I think something Rust’s syntax does well (if possibly by mistake) is fool the amateur user into expressing properties of their programs by accident, while that expression feels like just part of the routine needed to get to the meat of a program’s procedures. It feels to me that expressing those properties is intrinsic to the language of speaking Ada, and thus presents a barrier intrinsic to the programmer’s understanding of their work, one which, given a different popular curriculum, could probably be rendered as weak as paper to break through.

Excuse me if these thoughts are messy (and edited many times to improve that), but beyond the more popular issue of familiarity, they’re sort of how I view my own honest experience of feeling more quickly “at home” moving from writing Rust to understanding Rust, compared to moving from writing some form of Ada to understanding the program I get.

                                                                                                            2. 5

                                                                                                              Other statically-typed languages allow whole-program type inference. While convenient during initial development, this reduces the ability of the compiler to provide useful error information when types no longer match.

Only in languages that cannot unambiguously infer the principal type. Whether to make a tradeoff between that and support for ad hoc polymorphism is subjective.

                                                                                                              OCaml can unambiguously infer the principal type, and I still find myself writing the type of top level functions explicitly quite often. More than once have I been guided by a type error that only happened because I wrote the type of the function I was writing in advance.

                                                                                                              At the very least, I check that the type of my functions match my expectations, by running the type inference in the REPL. More than once have I been surprised. More than once that surprise was caused by a bug in my code. Had I not checked the type of my function, I would catch the bug only later, when using the function, and the error message would have made less sense to me.

                                                                                                              1. 2

                                                                                                                At the very least, I check that the type of my functions match my expectations, by running the type inference in the REPL

                                                                                                                Why not use Merlin instead? Saves quite a bit of time.

That’s a tooling issue too, of course. Tracking down typing surprises in OCaml is easy because the compiler outputs type annotations in a machine-readable format, and there are tools and editor integrations that let me see the type of every expression with a keystroke.

                                                                                                                1. 2

                                                                                                                  Why not use Merlin instead? Saves quite a bit of time.

I’m a dinosaur who didn’t even take the time to learn of Merlin’s existence. I’m kinda stuck in Emacs’ Tuareg mode. Works for me for small projects (all my OCaml projects are small).

                                                                                                                  That said, my recent experience with C++ and QtCreator showed me that having warnings at edit time is even more powerful than a REPL (at least as long as I don’t have to check actual values). That makes Merlin look very attractive all of a sudden. I’ll take a look, thanks.

                                                                                                            3. 5

                                                                                                              Rust can definitely credibly replace C++. I don’t really see how it can credibly replace C. It’s just such a fundamentally different way of approaching programming that it doesn’t appeal to C programmers. Why would a C programmer switch to Rust if they hadn’t already switched to C++?

                                                                                                              1. 43

                                                                                                                I’ve been a C programmer for over a decade. I’ve tried switching to C++ a couple of times, and couldn’t stand it. I’ve switched to Rust and love it.

                                                                                                                My reasons are:

                                                                                                                • Robust, automatic memory management. I have the same amount of control over memory, but I don’t need goto cleanup.
                                                                                                                • Fearless multi-core support: if it compiles, it’s thread-safe! rayon is much nicer than OpenMP.
                                                                                                                • Slices are awesome: no array to pointer decay. Work great with substrings.
                                                                                                                • Safety is not just about CVEs. I don’t need to investigate memory murder mysteries in GDB or Valgrind.
                                                                                                                • Dependencies aren’t painful.
                                                                                                                • Everything builds without fuss, even when supporting Windows and cross-compiling to iOS.
• I can add two signed numbers without UB, and checking if they overflow isn’t a party trick (see the sketch after this list).
                                                                                                                • I get some good parts of C++ such as type-optimized sort and hash maps, but without the baggage C++ is infamous for.
                                                                                                                • Rust is much easier than C++. Iterators are so much cleaner (just a next() method). I/O is a Read/Write trait, not a hierarchy of iostream classes.
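
As a concrete illustration of the overflow and slice bullets (a minimal sketch of my own, nothing project-specific):

fn main() {
    // Checked signed addition: overflow is an ordinary Option, not UB.
    assert_eq!(i32::MAX.checked_add(1), None);
    // Or opt into explicit two's-complement wrapping:
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);

    // Slices carry their length, so there is no array-to-pointer decay,
    // and substrings are cheap views:
    let s = "hello world";
    let hello = &s[..5];
    assert_eq!(hello, "hello");
    assert_eq!(s.len(), 11);
}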
                                                                                                                1. 6

                                                                                                                  I also like Rust and I agree with most of your points, but this one bit seems not entirely accurate:

                                                                                                                  Fearless multi-core support: if it compiles, it’s thread-safe! rayon is much nicer than OpenMP.

                                                                                                                  AFAIK Rust:

                                                                                                                  • doesn’t guarantee thread-safety — it guarantees the lack of data races, but doesn’t guarantee the lack of e.g. deadlocks;
                                                                                                                  • guarantees the lack of data races, but only if you didn’t write any unsafe code.
                                                                                                                  1. 20

                                                                                                                    That is correct, but this is still an incredible improvement. If I get a deadlock I’ll definitely notice it, and can dissect it in a debugger. That’s easy-peasy compared to data races.

                                                                                                                    Even unsafe code is subject to thread-safety checks, because “breaking” of Send/Sync guarantees needs separate opt-in. In practice I can reuse well-tested concurrency primitives (e.g. WebKit’s parking_lot) so I don’t need to write that unsafe code myself.

Here’s an anecdote: I wrote some single-threaded batch-processing spaghetti code. Since each item was processed separately, I decided to parallelize it. I changed iter() to par_iter() and the compiler immediately warned me that in one of my functions I had used a 3rd-party library, which used an HTTP client library, which used an event loop library, which stored some event loop data in a struct without synchronization. It pointed out exactly where and why the code was unsafe, and after fixing it I had an assurance that the fix worked.
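
Roughly this kind of change, as a minimal sketch (assuming the rayon crate as a dependency; the items and the per-item work are hypothetical stand-ins):

use rayon::prelude::*;

fn process(item: &str) -> usize {
    item.len() // stand-in for the real per-item work
}

fn main() {
    let items = vec!["a", "bb", "ccc"];
    // Before: items.iter().map(|s| process(s)).sum::<usize>()
    // After swapping in par_iter(), the compiler checks that everything
    // `process` touches may safely cross thread boundaries (Send/Sync):
    let total: usize = items.par_iter().map(|s| process(s)).sum();
    assert_eq!(total, 6);
}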

                                                                                                                    1. 6

                                                                                                                      I share your enthusiasm. Just wanted to prevent a common misconception from spreading.

Here’s an anecdote: I wrote some single-threaded batch-processing spaghetti code. Since each item was processed separately, I decided to parallelize it. I changed iter() to par_iter() and the compiler immediately warned me that in one of my functions I had used a 3rd-party library, which used an HTTP client library, which used an event loop library, which stored some event loop data in a struct without synchronization. It pointed out exactly where and why the code was unsafe, and after fixing it I had an assurance that the fix worked.

                                                                                                                      I did not know it could do that. That’s fantastic.

                                                                                                                    2. 9

                                                                                                                      Data races in multi-threaded code are about 100x harder to debug than deadlocks in my experience, so I am happy to have an imperfect guarantee.

                                                                                                                      guarantees the lack of data races, but only if you didn’t write any unsafe code.

                                                                                                                      Rust application code generally avoids unsafe.

                                                                                                                      1. 4

                                                                                                                        Data races in multi-threaded code are about 100x harder to debug than deadlocks in my experience, so I am happy to have an imperfect guarantee.

                                                                                                                        My comment was not a criticism of Rust. Just wanted to prevent a common misconception from spreading.

                                                                                                                        Rust application code generally avoids unsafe.

                                                                                                                        That depends on who wrote the code. And unsafe blocks can cause problems that show in places far from the unsafe code. Meanwhile, “written in Rust” is treated as a badge of quality.

                                                                                                                        Mind that I am a Rust enthusiast as well. I just think we shouldn’t oversell it.

                                                                                                                      2. 7

                                                                                                                        guarantees the lack of data races, but only if you didn’t write any unsafe code.

                                                                                                                        As long as your unsafe code is sound it still provides the guarantee. That’s the whole point, to limit the amount of code that needs to be carefully audited for correctness.

                                                                                                                        1. 2

                                                                                                                          I know what the point is. But proving things about code is generally not something that programmers are used to or good at. I’m not saying that the language is bad, only that we should understand its limitations.

                                                                                                                        2. 1

                                                                                                                          I find it funny that any critique of Rust needs to be prefixed with a disclaimer like “I also like Rust”, to fend off the Rust mob.

                                                                                                                      3. 11

This doesn’t really match what we see in our experience: a lot of organisations are investigating replacements for C, and Rust is on the table.

                                                                                                                        One advantage that Rust has is that it actually lands between C and C++. It’s pretty easy to move towards a more C-like programming style without having to ignore half of the language (this comes from the lack of classes, etc.).

                                                                                                                        Rust is much more “C with Generics” than C++ is.

                                                                                                                        We currently see a high interest in the embedded world, even in places that skipped adopting C++.

                                                                                                                        I don’t think the fundamental difference in approach is as large as you make it (sorry for the weak rebuttal, but that’s hard to quantify). But also: approaches are changing, so that’s less of a problem for us, as long as we are effective at arguing for our approach.

                                                                                                                        1. 2

                                                                                                                          It’s just such a fundamentally different way of approaching programming that it doesn’t appeal to C programmers. Why would a C programmer switch to Rust if they hadn’t already switched to C++?

                                                                                                                          Human minds are sometimes less flexible than rocks.

That’s why we still have that stupid Qwerty layout: popular once for mechanical (and historical) reasons, used forever since. As soon as the mechanical problems were fixed, Sholes himself devised a better layout, which went unused. Much later, Dvorak devised another better layout, and it is barely used today. People thinking in Qwerty simply can’t bring themselves to take the time to learn the superior layout. (I know: I’m in a similar situation, though my current layout is not Qwerty.)

I mean, you make a good point here. And that’s precisely what makes me sad. I just hope this lack of flexibility won’t prevent C programmers from learning superior tools.

(By the way, I would choose C over C++ in many cases; I think C++ is crazy. But I also know ML (OCaml), a bit of Haskell, a bit of Lua… and that gives me perspective. Rust as I see it is a blend of C and ML, and though I have yet to write Rust code, the code I have read so far was very easy to understand. I believe I can pick up the language pretty much instantly. In my opinion, C programmers who only know C, awk and Bash are unreasonably specialised.)

                                                                                                                          1. 1

                                                                                                                            I tried to switch to DVORAK twice. Both times I started to get pretty quick after a couple of days but I cheated: if I needed to type something I’d switch back to QWERTY, so it never stuck.

                                                                                                                            The same is true of Rust, incidentally. Tried it out a few times, was fun, but then if I want to get anything useful done quickly it’s just been too much of a hassle for me personally. YMMV of course. I fully intend to try to build something that’s kind of ‘C with lifetimes’, a much simpler Rust (which I think of as ‘C++ with lifetimes’ analogously), in the future. Just have to, y’know, design it. :D

                                                                                                                            1. 3

                                                                                                                              I too was tempted at some point to design a “better C”. I need:

                                                                                                                              • Generics
                                                                                                                              • Algebraic data types
                                                                                                                              • Type classes
• Coroutines (for I/O and network code; I need a way out of raw poll(2))
                                                                                                                              • Memory safety

                                                                                                                              With the possible exception of lifetimes, I’d end up designing Rust, mostly.
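
For what it’s worth, the first three items map directly onto Rust’s enums, generics and traits; a tiny illustrative sketch (the Shape example is mine):

// An algebraic data type with generics.
enum Shape<T> {
    Circle { radius: T },
    Rect { w: T, h: T },
}

// A type class is roughly a trait.
trait Area {
    fn area(&self) -> f64;
}

impl Area for Shape<f64> {
    fn area(&self) -> f64 {
        match self {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }
}

fn main() {
    let shapes = vec![
        Shape::Circle { radius: 1.0 },
        Shape::Rect { w: 2.0, h: 3.0 },
    ];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    println!("total area: {}", total);
}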

                                                                                                                              1. 2

                                                                                                                                I agree that you need some way of handling async code, but I don’t think coroutines are it, at least not in the async/await form. I still feel like the ‘what colour is your function?’ stuff hasn’t been solved properly. Any function with a callback (sort with a key/cmp function, filter, map, etc.) needs an async_ version that takes a callback and calls it with await. Writing twice as much code that’s trivially different by adding await in some places sucks, but I do not have any clue what the solution is. Maybe it’s syntactic. Maybe everything should be async implicitly and you let the compiler figure out when it can optimise things down to ‘raw’ calls.

                                                                                                                                shrug

                                                                                                                                Worth thinking about at least.

                                                                                                                                1. 4

                                                                                                                                  Function colors are effects. There are two ways to solve this problem:

                                                                                                                                  1. To use polymorphism over effects. This is what Haskell does, but IMO it is too complex.
                                                                                                                                  2. To split large async functions into smaller non-async ones, and dispatch them using an event loop.

                                                                                                                                  The second approach got a bad reputation due to its association with “callback hell”, but IMO this reputation is undeserved. You do not need to represent the continuation as a callback. Instead, you can

                                                                                                                                  1. Define a gigantic sum type of all possible intermediate states of asynchronous processes.
                                                                                                                                  2. Implement each non-async step as an ordinary small function that maps intermediate states (not necessarily just one) to intermediate states (not necessarily just one).
                                                                                                                                  3. Implement the event loop as a function that, iteratively,
                                                                                                                                    • Takes states from an event queue.
                                                                                                                                    • Dispatches an appropriate non-async step.
                                                                                                                                    • Pushes the results, which are again states, back into the event queue.

                                                                                                                                  Forking can be implemented by returning multiple states from a single non-async step. Joining can be implemented by taking multiple states as inputs in a single non-async step. You are not restricted to joining processes that were forked from a common parent.

                                                                                                                                  In this approach, you must write the event loop yourself, rather than delegate it to a framework. For starters, no framework can anticipate your data type of intermediate states, let alone the data type of the whole event queue. But, most importantly, the logic for dispatching the next non-async step is very specific to your application.
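
A minimal sketch of this shape (the State variants and the “work” inside each step are hypothetical placeholders):

use std::collections::VecDeque;

// The sum type of all intermediate states of the asynchronous processes.
enum State {
    Fetch(String),
    Fetched(usize),
    Done(usize),
}

// A non-async step: maps one state to zero or more successor states.
// Returning several states here is how forking would be modeled.
fn step(state: State) -> Vec<State> {
    match state {
        State::Fetch(url) => vec![State::Fetched(url.len())], // stand-in for real I/O
        State::Fetched(n) => vec![State::Done(n * 2)],
        State::Done(_) => vec![], // terminal: nothing left to do
    }
}

fn main() {
    // The hand-written event loop: pop a state, dispatch a step, push results.
    let mut queue = VecDeque::new();
    queue.push_back(State::Fetch("https://example.com".into()));
    while let Some(state) = queue.pop_front() {
        match state {
            State::Done(n) => println!("done: {}", n),
            other => queue.extend(step(other)),
        }
    }
}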

                                                                                                                                  Benefits:

                                                                                                                                  1. Because the data type of intermediate states is fixed, and the event loop is implemented in a single centralized place, it is easier to verify that your code works “in all cases”, either manually or using tools that explicitly model concurrent processes using state machines (e.g., TLA+).

                                                                                                                                  2. Because intermediate states are first-order values, rather than first-class functions, the program is much easier to debug. Just stop the event loop at an early time and pretty-print the event queue. (ML can automatically pretty-print first-order values in full detail. Haskell requires you to define a Show instance first, but this definition can be generated automatically.)

                                                                                                                                  Drawbacks:

                                                                                                                                  1. If your implementation language does not provide sum types and/or pattern matching, you will have a hard time checking that every case has been covered, simply because there are so many cases.

                                                                                                                                  2. The resulting code is very much non-extensible. To add new asynchronous processes, you need to add constructors to the sum type of intermediate states. This will make the event loop fail to type check until you modify it accordingly. (IMO, this is not completely a drawback, because it forces you to think about how the new asynchronous processes interact with the old ones. This is something that you eventually have to do anyway, but some people might prefer to postpone it.)

                                                                                                                                  1. 3

                                                                                                                                    I agree that you need some way of handling async code, but I don’t think coroutines are it

                                                                                                                                    Possibly. I actually don’t know. I’d take whatever lets me write code that looks like I’m dispatching an unlimited number of threads, but dispatches the computation over a reasonable number of threads, possibly just one. Hell, my ideal world is green threads, actually. Perhaps I should have led with that…

                                                                                                                                    Then again, I don’t know the details of the tradeoffs involved. Whatever lets me solve the 1M-connections problem cleanly and efficiently works for me.

                                                                                                                          2. 5

                                                                                                                            I agree with @milesrout. I don’t think Rust is a good replacement for C. This article goes into some of the details of why - https://drewdevault.com/2019/03/25/Rust-is-not-a-good-C-replacement.html

                                                                                                                            1. 17

                                                                                                                              Drew has some very good points. It’s a shame he ruins them with all the other ones.

                                                                                                                              1. 25

                                                                                                                                Drew has a rusty axe to grind: “Concurrency is generally a bad thing” (come on!), “Yes, Rust is more safe. I don’t really care.”

                                                                                                                                Here’s a rebuttal of that awful article: https://telegra.ph/Replacing-of-C-with-Rust-has-been-a-great-success-03-27 (edit: it’s a tongue-in-cheek response. Please don’t take it too seriously: the original exaggerated negatives, so the response exaggerates positives).

                                                                                                                                1. 11

                                                                                                                                  So many bad points from this post.

                                                                                                                                  • We can safely ignore the “features per year” comparison, since the documentation the counts are based on doesn’t follow the same conventions. I’ll also note that, while a Rust program written last year may look outdated (I personally don’t know Rust well enough to make such an assessment), it will still work (I’ve been told breaking changes are extremely rare).

                                                                                                                                  • C is not really the most portable language. Yes, C and C++ compilers, thanks to having decades of work behind them, target more devices than everything else put together. But no, those platforms do not share the same flavour of C and C++. There are simply too many implementation defined behaviours, starting with integer sizes. Did you know that some platforms had 32-bit chars? I worked with someone who worked on one.

                                                                                                                                    I wrote a C crypto library, and went out of my way to ensure the code was very portable. And it is: embedded developers love it. There was no way, however, to ensure my code was fully portable. I right-shift negative integers (implementation-defined behaviour), and I use fixed-width integers like uint8_t (not supported on the DSP with 32-bit chars I mentioned above).

                                                                                                                                  • C does have a spec, but it’s an incomplete one. In addition to implementation defined behaviour, C and C++ also have a staggering amount of undefined and unspecified behaviour. Rust has no spec, but it still tries to minimise undefined behaviour. I expect this point will go away when Rust stabilises and we get an actual spec. I’m sure formal verification folks will want to have a verified compiler for Rust, like we currently have for C.

                                                                                                                                  • C has many implementations… and that’s actually a good point.

                                                                                                                                  • C has a consistent & stable ABI… and so does Rust, somewhat? OK, it’s opt-in, and it’s contrived. My point is, Rust does have an FFI which allows it to talk to the outside world. It doesn’t have to be at the top level of a program. On the other hand, I’m not sure what would be the point of a stable ABI between Rust modules. C++ at least seems to be doing fine without that.

                                                                                                                                  • Rust compiler flags aren’t stable… and that’s a good point. They should probably stabilise at some point. On the other hand, having one true way to manage builds and dependencies is a godsend. Whatever we’d use stable compiler flags for, we probably don’t want to depart from that.

                                                                                                                                  • Parallelism and concurrency are unavoidable. They’re not a bad thing; they’re the only thing that lets us cheat the speed of light, which ultimately caps single-threaded performance. The ideal modern computer is more likely a large number of in-order cores, each with a small amount of memory and an explicit cache hierarchy (exposed to the programmer). Assuming, that is, that performance and energy consumption matter more than compatibility with existing C (and C++) programs. Never forget that current computers are optimised to run C and C++ programs.

                                                                                                                                  • Not caring about safety is stupid. Or selfish. Security vulnerabilities are often mere externalities, which you can ignore as long as they don’t damage your reputation to the point of affecting your bottom line. Yay Capitalism. More seriously, safety is a subset of correctness, and correctness is the main point of Rust’s strong type system and borrow checker. C doesn’t just make it difficult to write safe programs, it makes it difficult to write correct programs. You wouldn’t believe how hard that is. My crypto library had to resort to Valgrind, sanitisers, and the freaking TIS interpreter to root out undefined behaviour. And I’m talking about “constant time” code with fixed memory access patterns. It’s pathologically easy to test, yet writing the tests took as long as writing the code, possibly longer. Part of the difficulty comes from C itself, not just the problem domain.

                                                                                                                                  Also, Drew DeVault mentions Go as a possible replacement for C? For some domains, sure. But the thing has a garbage collector, making it instantly unsuitable for some constrained environments (either because the machine is small, or because you need crazy performance). Such constrained environments are basically the remaining niche for C (and C++). For the rest, the only things that keep people hooked on C (and C++) are existing code and existing skills.

                                                                                                                                  1. 4

                                                                                                                                    Rust compiler flags aren’t stable… and that’s a good point. They should probably stabilise at some point. On the other hand, having one true way to manage builds and dependencies is a godsend. Whatever we’d use stable compiler flags for, we probably don’t want to depart from that.

                                                                                                                                    This is wrong, though. rustc compiler flags are stable, except those behind the -Z flag, which intentionally separates stable flags from unstable ones.

                                                                                                                                    1. 2

                                                                                                                                      Okay, I stand corrected, thanks.

                                                                                                                                    2. 0

                                                                                                                                      But the thing has a garbage collector, making it instantly unsuitable for some constrained environments (either because the machine is small, or because you need crazy performance).

                                                                                                                                      The Go garbage collector can be turned off with debug.SetGCPercent(-1) and triggered manually with runtime.GC(). It is also possible to allocate memory at the start of the program and use that.

                                                                                                                                      Go has several compilers available. gc is the official Go compiler, GCC has built-in support for Go and there is also TinyGo, which targets microcontrollers and WASM: https://tinygo.org/

                                                                                                                                      1. 5

                                                                                                                                        Can you realistically control allocations? If we have ways to make sure all allocations are either explicit or on the stack, that could work. I wonder how contrived that would be, though. The GC is on by default, that’s got to affect idiomatic code in a major way. To the point where disabling it probably means you don’t have the same language any more.

                                                                                                                                        Personally, to replace C, I’d rather have a language that disables GC by default. If I am allowed to have a GC, I strongly suspect there are better alternatives than Go. (My biggest objection being “lol no generics”. And if the designers made that error, that kind of error casts doubt on their ability to properly design the rest of the language, and I lose all interest instantly. Though if I were writing network code, I would also say “lol no coroutines” at anything designed after 2015 or so.)

                                                                                                                                        1. 1

                                                                                                                                          I feel like GC by default vs no GC is one of the biggest decision points when designing a language. It affects so much of how the rest of a language has to be designed. GC makes writing code soooo much easier, but you can’t easily put non-GC’d things into a GC’d language. Or maybe you can? Rust was originally going to have syntax for GC’d pointers. People are building GC’d pointers into Rust now, as libraries - GC manages a particular region of memory. People are designing the same stuff for C++. So maybe we will finally be able to mix them in a few years.

                                                                                                                                          1. 1

                                                                                                                                            Go is unrealistic not only because of GC, but also segmented stacks, thick runtime that wants to talk to the kernel directly, implicit allocations, and dynamism of interface{}. They’re all fine if you’re replacing Java, but not C.

                                                                                                                                            D lang’s -betterC is much closer, but D’s experience shows that once you have a GC, it influences the standard library, programming patterns, 3rd party dependencies, and it’s really hard to avoid it later.

                                                                                                                                            1. 1

                                                                                                                                              Can you realistically control allocations? If we have ways to make sure all allocations are either explicit or on the stack, that could work.

                                                                                                                                              IIRC you can programmatically identify all heap allocations in a given go compilation, so you can wrap the build in a shim that checks for them and fails.

                                                                                                                                              The GC is on by default, that’s got to affect idiomatic code in a major way.

                                                                                                                                              Somewhat, yes, but the stdlib is written by people who have always cared about wasted allocations and many of the idioms were copied from that, so not quite as much as you might imagine.

                                                                                                                                              That said - if I needed to care about allocations that much, I don’t think it’d be the best choice. The language was designed and optimized to let large groups (including many clever-but-inexperienced programmers) write reliable network services.

                                                                                                                                      2. 1

                                                                                                                                        I don’t think replacing C is a good usecase for Rust though. C is relatively easy to learn, read, and write to the level where you can write something simple. In Rust this is decidedly not the case. Rust is much more like a safe C++ in this respect.

                                                                                                                                        I’d really like to see a safe C some day.

                                                                                                                                        1. 6

                                                                                                                                          Have a look at Cyclone mentioned earlier. It is very much a “safe C”. It has ownership and regions which look very much like Rust’s lifetimes. It has fat pointers like Rust slices. It has generics, because you can’t realistically build safe collections without them. It looks like this complexity is inherent to the problem of memory safety without a GC.

                                                                                                                                          As for learning C, it’s easy to get a compiler to accept a program, but I don’t think it’s easier to learn to write good C programs. The language may seem small, but the actual language you need to master includes lots of practices for safe memory management and for playing 3D chess with an optimizer that exploits undefined behavior.

                                                                                                                                      1. 2

                                                                                                                                        The main innovations in Rust (substructural types[0] and lifetime analysis) would be just as useful in a language with automatic memory management. I am still astonished that most programmers seem to only appreciate these innovations when they want to avoid automatic garbage collection.

                                                                                                                                        My main interest is in making common data structures and algorithms impossible to use incorrectly. In my experience, any run-of-the-mill Hindley-Milner variant suffices to implement data structures that advertise a purely functional interface. These data structures only use mutation internally to switch between different in-memory representations of the same abstract value. Thus, unsynchronized reads and writes from mutable stores are benign as long as they are atomic, which is not too much to ask of a managed language.

                                                                                                                                        However, some data structures and algorithms are imperative in a way that cannot be hidden behind a purely functional interface. For example, consider a directed graph represented as an array of nodes. Each node is represented as a list of integers. Each integer is the index in the array of a forward neighbor of the original node.

                                                                                                                                        Suppose you want to implement a graph traversal algorithm, e.g., depth-first search. To make the implementation as flexible as possible for the user, you want to traverse the graph lazily, i.e., you want a stream (rather than a list) of integers, corresponding to the indices of the nodes in the order in which they are visited. The user is free to consume only a prefix of the stream if they so wish, in which case the unconsumed suffix must not be computed.

                                                                                                                                        We want graph objects to be mutable, because constructing immutable graphs is slower, both in theory (asymptotically) and in practice (in benchmarks). However, when the graph is being traversed, it should be temporarily frozen, lest the traversal produce nonsensical results.

                                                                                                                                        How do we do this without a borrow checker? Well, you can’t.
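
                                                                                                                                        To fix the setting, here is the immutable version of that traversal in Haskell (a sketch; the names are made up). The result is lazy, so consuming a prefix of the stream forces only that much of the traversal. The gap is exactly the one I describe: make the graph mutable (say, an IOArray) and nothing in an ordinary type system stops a write to it while the stream is still being consumed.

                                                                                                                                          import Data.Array (Array, listArray, (!))
                                                                                                                                          import qualified Data.Set as Set
                                                                                                                                          
                                                                                                                                          -- A directed graph: node i maps to the indices of its forward neighbours.
                                                                                                                                          type Graph = Array Int [Int]
                                                                                                                                          
                                                                                                                                          -- Lazy depth-first search: a stream of node indices in visit order.
                                                                                                                                          dfs :: Graph -> Int -> [Int]
                                                                                                                                          dfs g start = go Set.empty [start]
                                                                                                                                            where
                                                                                                                                              go _    []         = []
                                                                                                                                              go seen (n : rest)
                                                                                                                                                | n `Set.member` seen = go seen rest
                                                                                                                                                | otherwise           = n : go (Set.insert n seen) ((g ! n) ++ rest)
                                                                                                                                          
                                                                                                                                          -- take 2 (dfs example 0) == [0,1]; the rest of the traversal never runs.
                                                                                                                                          example :: Graph
                                                                                                                                          example = listArray (0, 3) [[1, 2], [3], [3], []]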

                                                                                                                                        [0] Of course, substructural types existed long before Rust, but Rust made them popular.

                                                                                                                                        1. 3

                                                                                                                                          Have you heard of http://protz.github.io/mezzo/ ? :)

                                                                                                                                          1. 2

                                                                                                                                            I have not, until now. I am reading the papers right now. Thanks for the reference.

                                                                                                                                        1. 41

                                                                                                                                          The claim is simple: in a static type system, you must declare the shape of data ahead of time, but in a dynamic type system, the type can be, well, dynamic! It sounds self-evident, so much so that Rich Hickey has practically built a speaking career upon its emotional appeal. The only problem is it isn’t true.

                                                                                                                                          Immediate standing ovation from me.

                                                                                                                                          I can only assume that oft-made claim is perpetuated from a position of ignorance. Have those people actually tried doing the thing in a statically typed language that they claim a statically typed language cannot do? Here’s an approach that appears all over my Haskell projects:

                                                                                                                                            req <- requireCheckJsonBody :: Handler Value
                                                                                                                                            let chargeId = req ^. key "data" . key "object" . key "id" . _String
                                                                                                                                          

                                                                                                                                          I don’t know (or care) what the exact structure of the JSON coming over the network will look like. I just know it will contain this one field that I care about, and here I pull it out and read it as a string.

                                                                                                                                          Do I need the entire JSON string to conform to some specific protocol (more specific than JSON itself)? No. I am just parsing it as some JSON (which is represented with the Value type).

                                                                                                                                          Do I need to parse it into some complex data type? No. I’m just building a string. I am doing — in Haskell — exactly the kind of thing that Clojurists do, but without being smug about it.


                                                                                                                                          If we keep the datatype’s constructor private (that is, we don’t export it from the module that defines this type), then the only way to produce a UserId will be to go through its FromJSON parser.

                                                                                                                                          I’m glad I read this article even for this design tip alone. I had never thought to do it this way; I thought a “smart constructor” was always necessary, even when that seemed like overkill.
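
                                                                                                                                          For anyone else who wants to try it, the pattern looks roughly like this (a sketch: the module layout and the non-empty check are invented details, not the article’s):

                                                                                                                                            -- Export the type but not its constructor.
                                                                                                                                            module UserId (UserId) where
                                                                                                                                            
                                                                                                                                            import Data.Aeson (FromJSON (..), withText)
                                                                                                                                            import Data.Text (Text)
                                                                                                                                            import qualified Data.Text as T
                                                                                                                                            
                                                                                                                                            newtype UserId = UserId Text
                                                                                                                                              deriving (Show, Eq)
                                                                                                                                            
                                                                                                                                            -- The only way to obtain a UserId outside this module is to run this
                                                                                                                                            -- parser, so every UserId in the program has passed the check below.
                                                                                                                                            instance FromJSON UserId where
                                                                                                                                              parseJSON = withText "UserId" $ \t ->
                                                                                                                                                if T.null t
                                                                                                                                                  then fail "UserId must be non-empty"
                                                                                                                                                  else pure (UserId t)

                                                                                                                                          Outside this module you can name the type and consume UserId values, but the only way to make one is to go through the parser.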

                                                                                                                                          1. 5
                                                                                                                                              let chargeId = req ^. key "data" . key "object" . key "id" . _String
                                                                                                                                            

                                                                                                                                            So what does this piece of code actually do? Get the value under data->object->id as a String? _String is there to prevent name clashes with actual String? Is the magic here that the JSON payload isn’t parsed any more than it needs to be?

                                                                                                                                            Stylistically, do you know why Haskell people often seem to decide to use weird operators? Are all alternatives somehow worse?

                                                                                                                                            1. 8

                                                                                                                                              So what does this piece of code actually do? Get the value under data->object->id as a String?

                                                                                                                                              Yeah, exactly. This is how the Stripe API structures their responses. I could have picked a simpler hypothetical example, but I think even this real-world case is simple enough.

                                                                                                                                              _String is there to prevent name clashes with actual String?

                                                                                                                                              I believe so, yes. This is just a thing in the lens library.

                                                                                                                                              Is the magic here that the JSON payload isn’t parsed any more than it needs to be?

                                                                                                                                              I believe it is parsed only as much as necessary, yes. I’m not sure there’s any magic happening.

                                                                                                                                              Stylistically, do you know why Haskell people often seem to decide to use weird operators? Are all alternatives somehow worse?

                                                                                                                                              There are plenty of alternative syntaxes and approaches you could opt for. I happen to find this easy enough to read (and I think you do too, since you worked out exactly what it does), but that is of course subjective.
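
                                                                                                                                              For comparison, here is one operator-free way to write the same lookup with plain aeson (a sketch of the same data -> object -> id walk, not code lifted from my projects):

                                                                                                                                                {-# LANGUAGE OverloadedStrings #-}
                                                                                                                                                
                                                                                                                                                import Data.Aeson (Value, withObject, (.:))
                                                                                                                                                import Data.Aeson.Types (parseMaybe)
                                                                                                                                                import Data.Text (Text)
                                                                                                                                                
                                                                                                                                                -- data -> object -> id, returning Nothing if any step is missing
                                                                                                                                                -- or has the wrong shape.
                                                                                                                                                chargeId :: Value -> Maybe Text
                                                                                                                                                chargeId = parseMaybe $ withObject "event" $ \o -> do
                                                                                                                                                  dat <- o .: "data"       -- dat :: Object
                                                                                                                                                  obj <- dat .: "object"   -- obj :: Object
                                                                                                                                                  obj .: "id"

                                                                                                                                              Same behaviour, no custom operators; which one reads better is, as I said, subjective.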

                                                                                                                                              1. 3

                                                                                                                                                the syntactic weirdness is mostly due to the fact that the base grammar is very simple, so you end up basically relying on making infix operators to build symbol-ful DSLs.

                                                                                                                                                This is very powerful for making certain kinds of libraries, but means that lots of Haskell looks a bit “out there” if you haven’t looked at code using a specific library before. This tends to be at its worst when doing stuff like JSON parsing (where you have very variably-shaped data)

                                                                                                                                                1. 6

                                                                                                                                                  Although conversely, I think more typical parsing with Aeson (especially the monadic form) is usually very tidy, and easy to read even by people not familiar with Haskell. It’s much less noisy than my lens example.

                                                                                                                                                  Here’s an example: https://artyom.me/aeson#recordwildcards

                                                                                                                                                  I think you probably know this, but I am posting here mostly so that other curious onlookers don’t get the wrong idea and think that Haskell is super weird and difficult.

                                                                                                                                              2. -5

                                                                                                                                                Lol what - you’re describing a benefit of a dynamically typed language with your example. The JSON in your case IS a dynamic object.

                                                                                                                                                1. 7

                                                                                                                                                  I think you are quite confused about what we’re discussing.

                                                                                                                                                  The discussion is around type systems in programming languages. JSON is just a protocol. The JSON that my example parses is not a “dynamic object”. There is no such thing as a JSON object. JSON is only ever a string. Some data structure can be serialised as a JSON string. A JSON string can potentially be parsed by a programming language into some data structure.

                                                                                                                                                  The JSON protocol can be parsed by programming languages with dynamic type systems, e.g., Clojure, and the protocol can also be parsed by programming languages with static type systems, e.g., Haskell.

                                                                                                                                                  My example is taken verbatim from some Haskell systems I’ve written, so it is not “defining a benefit of dynamically typed language”.

                                                                                                                                                  You’re going to have to go and do a bit more reading, but luckily there is plenty of material online that explains these things. I think your comment is a good example of the kind of confusion the article’s author is trying to address.

                                                                                                                                                  1. 2

                                                                                                                                                    I read the article, and I agree somewhat with the parent commenter. It really seems that the author – and perhaps you as well – was comfortable with the idea of potentially declaring parts of the program as just handling lots of values all of a single generic/opaque/relatively-underspecified type, rather than of a variety of richer/more-specified types.

                                                                                                                                                    That position is not all that far from being comfortable with all values being of a single generic/opaque/relatively-underspecified type. Which is, generally, the most charitable description the really hardcore static-typing folks are willing to give to dynamically-typed languages (i.e., “in Python, all values are of type object”, and that’s only if someone is willing to step up their politeness level a bit from the usual descriptions given).

                                                                                                                                                    In other words, a cynical reading would be that this feels less like a triumphant declaration of “see, static types can do this!” and more an admission of “yeah, we can do it but only by making parts of our programs effectively dynamically typed”.

                                                                                                                                                    1. 3

                                                                                                                                                      I don’t know how you’ve come to this conclusion. Moreover, I don’t understand how your conclusion is related to the argument in the article.

                                                                                                                                                      In other words, a cynical reading would be that this feels less like a triumphant declaration of “see, static types can do this!” and more an admission of “yeah, we can do it but only by making parts of our programs effectively dynamically typed”.

                                                                                                                                                      What does this even mean? How did you come up with this idea? When you want to parse some arbitrary JSON into a more concrete type, you can just do that. How does parsing make a program no longer statically typed?

                                                                                                                                                      1. 2

                                                                                                                                                        What is the difference between:

                                                                                                                                                        1. “Everything in this part of the program is of type JSON. We don’t know what the detailed structure of a value of that type is; it might contain a huge variety of things, or not, and we have no way of being sure in advance what they will be”.
                                                                                                                                                        2. “Everything in this part of the program is of type object. We don’t know what the detailed structure of a value of that type is; it might contain a huge variety of things, or not, and we have no way of being sure in advance what they will be”.

                                                                                                                                                        The first is what the article did. The second is, well, dynamic typing.

                                                                                                                                                        I mean, sure, you can argue that you could parse a JSON into some type of equally-generic data structure – a hash table, say – but to usefully work with that you’d want to know things like what keys it’s likely to have, what types the values of those keys will have, and so on, and from the type declaration of JSON you receive absolutely none of that information.

                                                                                                                                                        In much the same way you can reflect on an object to produce some type of equally-generic data structure – a hash table, say – but to usefully work with that you’d want to know things like… hey, this is sounding familiar!

                                                                                                                                                        Now do you see what I mean? That’s why I said the cynical view here is the author has just introduced a dynamically-typed section into the program.

                                                                                                                                                        1. 3

                                                                                                                                                          Any program which reads some JSON and parses it will be making some assumptions about its structure.

                                                                                                                                                          This is true of a program written in a dynamically-typed language.

                                                                                                                                                          This is true of a program written in a statically-typed language.

                                                                                                                                                          Usually, you will want to parse a string of JSON into some detailed structure, and then use that throughout your system instead of some generic Value type. But you don’t necessarily need to do that. Nothing about writing in a statically-typed programming language forces you to do that. And no, Haskell programmers don’t generally intentionally try to make their programs worse by passing Value types, or generic Map types, or just anything encoded as a String, throughout their program. That would be stupid.

                                                                                                                                                          1. 3

                                                                                                                                                            OK, I’ll do the long explanation.

                                                                                                                                                            Many programmers whose primary or sole familiarity is with statically-typed languages assume that in dynamically-typed languages all code must be littered with runtime type checks and assertions. For example, I’ve run into many people who seem to think that all Python code is, or should be, full of:

                                                                                                                                            if isinstance(thing, some_type): ...
                                                                                                                                            elif isinstance(thing, some_other_type): ...
                                                                                                                                                            

                                                                                                                                                            checks in order to avoid ever accidentally performing an operation on a value of the wrong type.

                                                                                                                                                            While it is true that you can parse a JSON into a data structure you can then pass around and work with, the only way to meaningfully do so is using your language’s equivalent idiom of

                                                                                                                                            if has_key(parsed_json, some_key) and isinstance(parsed_json.get(some_key), some_type): ...
                                                                                                                                            elif has_key(parsed_json, some_other_key) and isinstance(parsed_json.get(some_other_key), some_other_type): ...
                                                                                                                                                            

                                                                                                                                                            since you do not know from the original type declaration whether any particular key will be present nor, if it is present, what type the value of that key will have (other than some sort of suitably-generic JSONMember or equivalent).

                                                                                                                                                            Which is to say: the only way to effectively work with a value of type JSON is to check it, at runtime, in the same way the stereotypical static-typing advocate thinks all dynamically-typed programmers write all their code. Thus, there is no observable difference, for such a person, between working with a value of type JSON and writing dynamically-typed code.

                                                                                                                                                            Now, sure, there are languages which have idioms that make the obsessive checking for members/types etc. shorter and less tedious to write, but the programmer will still be required, at some point, either to write such code or to use a library which provides such code.

                                                                                                                                                            Thus, the use of JSON as a catch-all “I don’t know what might be in there” type is not distinguishable from dynamically-typed code, and is effectively introducing a section of dynamically-typed code into the program.
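
                                                                                                                                            Concretely, in Haskell the equivalent of that isinstance chain is a pattern match on the generic Value type; the runtime checks are all still there, just spelled differently (a sketch assuming aeson 2.x, where JSON objects are KeyMaps):

                                                                                                                                              {-# LANGUAGE OverloadedStrings #-}
                                                                                                                                              
                                                                                                                                              import Data.Aeson (Value (..))
                                                                                                                                              import qualified Data.Aeson.KeyMap as KM
                                                                                                                                              import Data.Text (Text)
                                                                                                                                              
                                                                                                                                              -- Is there a key "id", and is its value a string? Checked at runtime,
                                                                                                                                              -- because the JSON type says nothing about either question.
                                                                                                                                              lookupId :: Value -> Maybe Text
                                                                                                                                              lookupId (Object o) =
                                                                                                                                                case KM.lookup "id" o of
                                                                                                                                                  Just (String t) -> Just t
                                                                                                                                                  _               -> Nothing
                                                                                                                                              lookupId _ = Nothing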

                                                                                                                                                            1. 3

                                                                                                                                                              I still don’t get what point you’re trying to make. Sorry.

                                                                                                                                                              Thus, the use of JSON as a catch-all “I don’t know what might be in there” type is not distinguishable from dynamically-typed code, and is effectively introducing a section of dynamically-typed code into the program.

                                                                                                                                                This now sounds like you’re making an argument about parsing versus validation, and misrepresenting it as static vs dynamic.

                                                                                                                                                              1. 2

                                                                                                                                                    This now sounds like you’re making an argument about parsing versus validation, and misrepresenting it as static vs dynamic.

                                                                                                                                                    For an alternative formulation, consider that people often claim, or want to claim, that in a statically-typed language most of the information about the program’s behavior is encoded in the types. Some people clearly would like a future where all such information is encoded in the types (so that, for example, an add function would not merely have a signature of add(int, int) -> int, but a signature of add(int, int) -> sum of arguments, which could be statically verified).

                                                                                                                                                                I have complicated thoughts on that – the short hot-take version is those people should read up on what happened to logical positivism – but the point here is a reminder that this article, which was meant to show a way to have nice statically-typed handling of unknown data structures, was able to do so only by dramatically reducing the information being encoded in the types.

                                                                                                                                                                1. 3

                                                                                                                                                                  the point here is a reminder that this article, which was meant to show a way to have nice statically-typed handling of unknown data structures, was able to do so only by dramatically reducing the information being encoded in the types.

                                                                                                                                                                  …How else would a program know what type the program’s author intends for the arbitrary data to be parsed into? Telepathy?

                                                                                                                                                                  1. 1

                                                                                                                                                                    I think at this point it’s pretty clear that there’s nothing I can say or do that will get you to understand the point I’m trying to make, so I’m going to bow out.

                                                                                                                                                          2. 3

                                                                                                                                                            What is the difference between:

                                                                                                                                            1. “Everything in this part of the program is of type JSON. We don’t know what the detailed structure of a value of that type is; it might contain a huge variety of things, or not, and we have no way of being sure in advance what they will be”.
                                                                                                                                                            2. “Everything in this part of the program is of type object. We don’t know what the detailed structure of a value of that type is; it might contain a huge variety of things, or not, and we have no way of being sure in advance what they will be”.

                                                                                                                                                            The first is what the article did. The second is, well, dynamic typing.

                                                                                                                                                            The difference is that in a statically-typed language, you can have other parts of the program where proposition 1. is not the case, but in a dynamically-typed language proposition 2. is true all the time and you can’t do anything about it. No matter what style of typing your language uses, you do have to inspect the parsed JSON at runtime to see if it has the values you expect. But in a statically-typed language, you can do this once, then transform that parsed JSON into another type that you can be sure about the contents of; and then you don’t have to care that this type originally came from JSON in any other part of your program that uses it.

                                                                                                                                                            Whereas in a dynamically-typed language you have to remember at all times that one value of type Object happens to represent generic JSON and another value of type Object happens to represent a more specific piece of structured data parsed from that JSON, and if you ever forget which is which the program will just blow up at runtime because you called a function that made incorrect assumptions about the interface its arguments conformed to.

                                                                                                                                            Anyway, even introducing a “generic JSON” type already encodes more useful information than a dynamically-typed language lets you. If you have a JSON type, you might expect it to have some methods like isArray or isObject that you can call on it, and you know that you can’t call methods that pertain to completely different types, like getCenterPoint or getBankAccountRecordsFromBankAccountId. Being able to say that a value is definitely JSON, even if you don’t know anything about that JSON, at least tells you that it’s not a BankAccount or GLSLShaderHandle or any other thing in the vast universe of computing that isn’t JSON.

                                                                                                                                                            1. 2

                                                                                                                                                              Whereas in a dynamically-typed language you have to remember at all times that one value of type Object happens to represent generic JSON and another value of type Object happens to represent a more specific piece of structured data parsed from that JSON, and if you ever forget which is which the program will just blow up at runtime because you called a function that made incorrect assumptions about the interface its arguments conformed to.

                                                                                                                                                              This is where the discussion often veers off into strawman territory, though. Because I’ve written code in both dynamically and statically typed languages (and hybrid-ish stuff like dynamically-typed languages with optional type hints), and all the things people say about inevitable imminent doom from someone passing the wrong types of things into functions are, in my experience, just things people say. They don’t correspond to what I’ve actually seen in real programming.

                                                                                                                                                              That’s why in one of my comments further down I pointed out that the generic JSON approach used in the article forces the programmer to do what people seem to think all dynamically-typed language programmers do on a daily basis: write incredibly defensive and careful code with tons of runtime checks. My experience is that people who prefer and mostly only know statically-typed languages often write code this way when they’re forced to use a dynamically-typed language or sections of code that are effectively dynamically-typed due to using only very very generic types, but nobody who’s actually comfortable in dynamic typing does that.

                                                                                                                                                              And the best literature review I know of on the topic struggled to find any meaningful results for impact of static versus dynamic typing on defect rates. So the horror stories of how things will blow up from someone forgetting what they were supposed to pass into a function are just that: stories, not data, let alone useful data.

                                                                                                                                                              Anyway, cards on the table time here.

                                                                                                                                                              My personal stance is that I prefer to write code in dynamically-typed languages, and add type hints later on as a belt-and-suspenders approach to go with meaningful tests (though I have a lot of criticism for how Python’s type-hinting and checking tools have evolved, so I don’t use them as much as I otherwise might). I’ve seen too much statically-typed code fall over and burn the instant someone pointed a fuzzer at it to have much faith in the “if it passes type checks, it’s correct” mantra. And while I do enjoy writing the occasional small thing in an ML-ish language and find some of the idioms and patterns of that language family pleasingly elegant, mostly I personally see static typing as a diminishing-returns technique, where beyond a very basic up-front pass or two, the problems that can be prevented by static typing quickly become significantly smaller and/or less likely as the effort required to use the type system to prevent them increases seemingly without bound.

                                                                                                                                                              1. 3

                                                                                                                                                                This is where the discussion often veers off into strawman territory, though. Because I’ve written code in both dynamically and statically typed languages (and hybrid-ish stuff like dynamically-typed languages with optional type hints), and all the things people say about inevitable imminent doom from someone passing the wrong types of things into functions are, in my experience, just things people say. They don’t correspond to what I’ve actually seen in real programming.

                                                                                                                                                                I disagree - passing the wrong types of things into functions is definitely a phenomenon I’ve personally seen (and debugged) in production Ruby, JavaScript, and Python systems I’ve personally worked on.

                                                                                                                                                                For instance, I’ve worked on rich frontend JavaScript systems where I was tasked with figuring out why a line of code a.b.c was throwing TypeError, but only sometimes. After spending a bunch of time tracing where a ultimately came from, I might find some function many frames away from the error in the call stack that set a from the result of an xhr which wasn’t actually guaranteed to set a key b on a. That code wasn’t conceptually related to the code where the error happened, so no one thought it was unusual that a.b wasn’t guaranteed, which is how the bug happened.
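
                                                                                                                                                                A hypothetical reconstruction of that failure mode (the endpoint and handler names here are invented; only a, b, and c come from the story above) looks something like this:

                                                                                                                                                                    async function loadA(): Promise<any> {
                                                                                                                                                                        const resp = await fetch("/api/a"); // invented endpoint
                                                                                                                                                                        // Some responses happen to omit the `b` key entirely.
                                                                                                                                                                        return resp.json();
                                                                                                                                                                    }

                                                                                                                                                                    function render(a: any) {
                                                                                                                                                                        // Throws TypeError, but only for responses where `b` was missing,
                                                                                                                                                                        // and the code that produced `a` is many frames away by now.
                                                                                                                                                                        console.log(a.b.c);
                                                                                                                                                                    }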

                                                                                                                                                                In a statically typed language, I could convert the JSON that will eventually become a into a specifically-typed value, then pass that down through 10 function calls to where it’s needed, without worrying that I’ll find 10 frames deep that SpecificType randomly doesn’t have a necessary field, because the conversion from the generic to the specific would’ve failed at the conversion site.
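
                                                                                                                                                                Sketched in TypeScript (the toSpecificType converter and the field names are made up; only the SpecificType name comes from my example above), the conversion-at-the-boundary approach might look like:

                                                                                                                                                                    interface SpecificType {
                                                                                                                                                                        b: { c: string };
                                                                                                                                                                    }

                                                                                                                                                                    // Validate once, at the site where the generic JSON arrives.
                                                                                                                                                                    function toSpecificType(raw: unknown): SpecificType {
                                                                                                                                                                        const obj = raw as any;
                                                                                                                                                                        if (typeof obj?.b?.c !== "string") {
                                                                                                                                                                            throw new Error("payload is missing required field b.c");
                                                                                                                                                                        }
                                                                                                                                                                        return { b: { c: obj.b.c } };
                                                                                                                                                                    }

                                                                                                                                                                Everything downstream can then take a SpecificType and never re-check it.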

                                                                                                                                                                I am a fan of statically typed languages, and a huge reason is that I’ve debugged large codebases in dynamically-typed languages where I didn’t write the original code and had to figure it out by inspection. Static typing definitely makes my experience as a debugger better.

                                                                                                                                                                1. 1

                                                                                                                                                                  definitely a phenomenon I’ve personally seen (and debugged)

                                                                                                                                                                  Notice I didn’t say “your stories are false”.

                                                                                                                                                                  Nor did you refute my claims that I’ve seen statically-typed code which passed static type checks fall over and crash when fuzzed.

                                                                                                                                                                  We each can point to instances where our particular bogeyman has in fact happened. Can either of us generalize usefully from that, though? Could I just use my story to dismiss all static typing approaches as meaningless, because obviously they’re not catching all these huge numbers of bugs that must, by my generalization, be present in absolutely every program every person has ever written in any statically-typed language?

                                                                                                                                                                  The answer, of course, is no. And so, although you did write a lot of words there, you didn’t write anything that was a useful rebuttal to what I actually said.

                                                                                                                                                      2. 0

                                                                                                                                                        I appreciate your condescending tone, but really you should work on your ability to argue. The original post claims that this statement is not true:

                                                                                                                                                        in a static type system, you must declare the shape of data ahead of time, but in a dynamic type system, the type can be, well, dynamic!

                                                                                                                                                        You argue, however, that this is indeed not true because you can have “dynamic” data and “static” types, when that’s just a silly loophole. Surely you want data as an object, right? A string of characters without the meta-structure is completely useless in the context of programming.

                                                                                                                                                        Just because you can have a static type that doesn’t have a strict, full protocol implementation doesn’t mean you don’t need to declare it beforehand, which renders the original statement absolutely correct: you must declare the static shape of data that matches what your type expects. The claim that types can be “loose” doesn’t invalidate this statement.

                                                                                                                                                        1. 4

                                                                                                                                                          I appreciate your condescending tone, but really you should work on your ability to argue.

                                                                                                                                                          I’m sorry you feel that way. I genuinely did my best to be kind, and to present some absolute truths to you that I had hoped would clear up your confusion. Unfortunately, it looks like you’ve decided to dig your heels in.

                                                                                                                                                          You argue, however, that this is indeed not true because you can have “dynamic” data and “static” types, when that’s just a silly loophole.

                                                                                                                                                          I don’t know what you are talking about. Dynamic data? What does this mean? And what silly loophole?

                                                                                                                                                          In the context of this argument: the JSON being parsed is a string. It’s not static. It’s not dynamic. It’s a string.

                                                                                                                                                          which renders the original statement absolutely correct: you must declare the static shape of data that matches what your type expects.

                                                                                                                                                          No, you don’t. Again, you have misunderstood me, you have misunderstood the article, and you have misunderstood some pretty basic concepts that are fundamental to constructing a cohesive argument in this debate.

                                                                                                                                                          The argument is whether or not — in a statically-typed programming language — the JSON string you are parsing needs to conform 1:1 to the structure of some data type you are trying to parse it into.

                                                                                                                                                          The answer is: No. Both statically-typed and dynamically-typed programming languages can parse arbitrary data.
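
                                                                                                                                                          For example, in a TypeScript sketch (standing in for any statically-typed setting; the payload is made up):

                                                                                                                                                              const value: unknown = JSON.parse('{"anything": ["goes", 42, null]}');

                                                                                                                                                              // No shape was declared ahead of time; the value is inspected at runtime.
                                                                                                                                                              if (typeof value === "object" && value !== null && "anything" in value) {
                                                                                                                                                                  const anything = (value as { anything: unknown[] }).anything;
                                                                                                                                                                  console.log(anything.length); // 3
                                                                                                                                                              }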

                                                                                                                                                          1. 0

                                                                                                                                                            The answer is: No. Both statically-typed and dynamically-typed programming languages can parse arbitrary data.

                                                                                                                                                            That was never the topic; you can parse arbitrary data with a pen and a piece of toilet paper…

                                                                                                                                                            1. 2

                                                                                                                                                              That was never the topic

                                                                                                                                                              Yes it was. Perhaps you should have actually read the article.

                                                                                                                                                              you can parse arbitrary data with a pen and a piece of toilet paper…

                                                                                                                                                              At this point, it is clear you are not even trying to add anything constructive. I suggest we leave this discussion here.

                                                                                                                                                              1. 0

                                                                                                                                                                Oof, I guess there had to be a first failed discussion experience here on lobsters. I’m sorry, but you are absolutely inept at discussing this. Maybe it’s better if we don’t continue. Cheers.

                                                                                                                                                      3. 2

                                                                                                                                                        The grandparent’s code example uses way more than one type.