1. 8

    I’d vote for Haskell, because it really forces you to think functionally. With other languages it’s too easy to fall back to using side effects, and sometimes you might not even realize you’re doing it.

    One can debate whether going all-in on purity is the right way to structure programs, but Haskell will at least force you to learn that style of programming, so you can make the most informed judgment.

    Compared to the other languages you listed, Haskell also has the benefit of a strong type system, which is another avenue for learning about functional programming. For example, in Haskell you can directly express the concept of “functor”, whereas those other languages don’t provide a real way to do that. I think one of the main reasons monads are so difficult for people to understand is that most people don’t have a language in which it’s even possible to express “monad” as a concept. But in Haskell it’s 3 short lines of code.
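
    For the record, here’s a sketch of what those few lines look like. I’ve renamed the class and its methods (Monad', unit, bind) to avoid clashing with the Prelude’s built-in Monad:

```haskell
-- A sketch of the Monad interface; renamed Monad' so it doesn't
-- collide with the Prelude's class.
class Monad' m where
  unit :: a -> m a                  -- wrap a plain value
  bind :: m a -> (a -> m b) -> m b  -- sequence two computations

-- Maybe is the textbook instance: bind short-circuits on Nothing.
instance Monad' Maybe where
  unit            = Just
  bind Nothing  _ = Nothing
  bind (Just x) f = f x

main :: IO ()
main = print (bind (Just 2) (\x -> unit (x + 1)))  -- prints Just 3
```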

    1. 37

      Wow this is a bad take.

      Experience tells us that it’s the boring menial jobs that get automated first.

      Most “boring menial” jobs are still around. There are still clerks, truck drivers, etc.

      If you find yourself doing copy and paste programming, watch out: your buddy computer can do it too. The work on generating UI has already started, see for instance, pix2code.

      GUI generators have been around for decades.

      So I’m sorry to say that, but those programmers who specialize in HTML and JavaScript will have to retrain themselves.

      I’m guessing he’s never done HTML or JavaScript.

      You might think that programmers are expensive–the salaries of programmers are quite respectable in comparison to other industries. But if this were true, a lot more effort would go into improving programmers’ productivity, in particular in creating better tools.

      A lot of effort does go into creating better tools.

      I’m sorry to say that, but C and C++ programmers will have to go. These are the languages whose only raison d’être is to squeeze maximum performance from hardware. We’ll probably always be interested in performance, but there are other ways of improving it. We are familiar with optimizing compilers that virtually eliminated the job of an assembly language programmer. They use optimizers that are based on algorithmic principles–that is methods which are understandable to humans. But there is a whole new generation of AI waiting in the aisles, which can be trained to optimize code written in higher level languages.

      I, too, believe that AIs are actual magic.

      (This argument is so played out that Sufficiently Smart Compiler was an injoke… on the C2 wiki.)

      Imagine a system, which would take this definition of quicksort written in Haskell:

      That “quicksort” is notoriously inefficient. I guess our Magic AI Smart Compiler will somehow make it faster than C, but then why wouldn’t the MASC make the C even better?
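
      For context, the definition being praised is essentially the classic two-liner (reproduced here from memory as a sketch):

```haskell
-- The famous Haskell "quicksort": elegant, but it allocates fresh lists
-- at every level of recursion instead of partitioning in place, which is
-- exactly where real quicksort gets its speed.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]

main :: IO ()
main = print (qsort [3, 1, 4, 1, 5, 9, 2, 6])  -- prints [1,1,2,3,4,5,6,9]
```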

      (And for that matter, why use quicksort at all? Just say sort(list), done.)

      [Explanation of quicksort] You can’t get any simpler than that.

      Nonsense. I can get much simpler than that:

      Sort(list) ≜ ε x ∈ Permutations(list):
         ∀i, j ∈ 1..Len(list): i ≤ j ⇒ x[i] ≤ x[j]
      

      That’s the formal definition of a sort. I didn’t even need to specify the algorithm! Now the MASC can use quicksort, mergesort, radix sort, whatever’s most efficient!
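
      The specification above can even be transcribed into an executable (if hopelessly slow) predicate; here’s a Haskell sketch, with isSortOf as a made-up name:

```haskell
import Data.List (permutations)

-- Direct transcription of the spec: ys is a sort of xs iff ys is a
-- permutation of xs and ys is ordered. Membership in permutations is
-- O(n!), so this is a specification, not an algorithm.
isSortOf :: Ord a => [a] -> [a] -> Bool
isSortOf xs ys = ys `elem` permutations xs
              && and (zipWith (<=) ys (drop 1 ys))

main :: IO ()
main = print (isSortOf [3, 1, 2] [1, 2, 3], isSortOf [3, 1, 2] [1, 3, 2])
-- prints (True,False)
```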

      This is what a human optimizer would do. But how much harder is this task than, say, playing a game of go against a grandmaster?

      Much?

      I can’t help but see parallels with Ancient Greece. The Ancient Greeks made tremendous breakthroughs in philosophy and mathematics–just think about Plato, Socrates, Euclid, or Pythagoras–but they had no technology to speak of. Hero of Alexandria invented a steam engine, but it was never put to work.

      They had tons of technology! They made major advances in mechanics, metallurgy, architecture, city planning, medicine… technology doesn’t start with the steam engine.

      As long as the dumb AI is unable to guess our wishes, there will be a need to specify them using a precise language. We already have such language, it’s called math. The advantage of math is that it was invented for humans, not for machines. It solves the basic problem of formalizing our thought process, so it can be reliably transmitted and verified.

      Modern math is utterly inadequate to express most of our requirements. Try to formally express the requirement “this sort function should be parallelizable and minimize memory use for the average case”.

      The definition of quicksort in Haskell is very mathematical. It can be easily verified using induction, because it’s recursive, and it operates on a recursive data structure: a list.

      Again, his version of sorting was much lower-level than “math”: he actually wrote an implementation of sorting rather than specifying what sorting is.

      In mathematics, a monad is defined as a monoid in the category of endofunctors.

      For the record, this definition comes from A Brief, Incomplete, and Mostly Wrong History of Programming Languages, which was written to make fun of Haskell. It’s not a great argument for how clear and precise mathematics is.

      The recursive function tells us a linear, one-dimensional, story. It appeals to our story-telling ability. The [11 line] functor-driven approach appeals to our visual cortex. There is an up and down, and left and right in the tree. Not only that, we can think of the algorithm in terms of movement, or animation. We are first “growing” the tree from the seed and then “traversing” it to gather the fruit from the branches. These are some powerful metaphors.

      What? Those metaphors aren’t mathematical at all and don’t give us the precise understanding that he spent the blog post arguing for.

      I’m often asked by programmers: How is learning category theory going to help me in my everyday programming? […] Category theory is the insurance policy against the drying out of your current watering hole.

      There is more to math than category theory.

      Programmers think category theory is way more important to math than it actually is. All of the mathematicians I’ve talked to think it’s a useful tool that has a lot of applicability in some domains, but they’re not nearly as zealous about it as programmers are. There’s lots of other important math, too. Like probability! I bet that for 95% of programmers out there probability will be more directly useful to know than category theory would be.

      1. 9

        In mathematics, a monad is defined as a monoid in the category of endofunctors.

        For the record, this definition comes from A Brief, Incomplete, and Mostly Wrong History of Programming Languages, which was written to make fun of Haskell. It’s not a great argument for how clear and precise mathematics is.

        Actually, it comes from Categories for the Working Mathematician by Saunders Mac Lane, published in 1971.

        1. 4

          It’s still mostly known from the satirical essay (well worth reading by the way: http://james-iry.blogspot.com/2009/05/brief-incomplete-and-mostly-wrong.html).

          My personal take is that if someone is using that quote they’re being ironic.

        2. 8

          I think what many people miss is that programming is a new type of math. Algorithms have existed for thousands of years, we just had a terrible way of defining them until now. People are creating new mathematical objects every day when they program, they just don’t realize it.

          In other words, mathematicians and programmers play in a mental world of their own making. What the software world lacks, and mathematics is really good at, is a social system to gather all the good ideas and teach them. Instead, in the software world, we have people bumbling about reinventing the same ideas over and over again.

          1. 5

            I agree with most of what you wrote, but not with everything:

            Modern math is utterly inadequate to express most of our requirements. Try to formally express the requirement “this sort function should be parallelizable and minimize memory use for the average case”.

            That’s just because ‘parallelizable’ and ‘memory usage’ and ‘average case’ are not formally defined.

            For the record, this definition comes from A Brief, Incomplete, and Mostly Wrong History of Programming Languages, which was written to make fun of Haskell. It’s not a great argument for how clear and precise mathematics is.

            “A monoid in the category of endofunctors” is as clear and precise as statements come, mathematically speaking. It’s not clear for most people because most people (including me) are not familiar with monoids, categories, and endofunctors.

          1. 25

            Hm, a language that uses product types instead of sum types to represent “result or error” is certainly not doing exceptions right.

            1. 6

              Exactly

              If a function returns a value and an error, then you can’t assume anything about the value until you’ve inspected the error.

              You can still use the returned value and ignore the error. I’m not calling that “right”.

              1. 2

                That is intentional and useful. If the same error has different consequences when called by different callers, the error value should be ignored. However, when writing idiomatic Go, you generally return a zero value result for most error scenarios.

                Assigning _ to an error (or ignoring the return values altogether) is definitely something that could be trivially caught with a linter.

                1. 1

                  I still think it’s a mistake to allow both to coexist. In this case, you need an additional tool to catch something. Whereas with proper sum types, it is simply not possible to access a return value if you get an error. What you do with that is up to you.

                  val, err := foo()
                  if err != nil { /* log and carry on, or ignore it altogether */ }
                  bar(val) // <- nothing prevents you from doing this, and it's very likely a mistake
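
                  For contrast, a sketch of the sum-type version in Haskell, where foo stands in for some fallible operation: the value is simply unreachable until the error case has been matched away.

```haskell
-- foo is a hypothetical fallible operation; Either is the sum type.
foo :: Either String Int
foo = Right 42

main :: IO ()
main = case foo of
  Left err  -> putStrLn ("failed: " ++ err)  -- no value in scope here
  Right val -> print (val + 1)               -- no error in scope here
```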
                  
              2. 4

                Thought the same. It gets this right:

                Go solves the exception problem by not having exceptions.

                And here it goes wrong:

                Instead Go allows functions to return an error type in addition to a result via its support for multiple return values.

                1. 4

                  Another case of “Go should have been SML + channels”, really. That would have been just as simple, cleaner, and less error-prone. But what can you expect from people who think the only flaw of C is that it doesn’t have a GC…

                  1. 1

                    Not just GC but also bounds checking.

                    1. 1

                      That’s so obvious that I forgot about it. I don’t know of any modern language that doesn’t have a (bounds-checked) array type. On the other hand they decided to keep null nil…

                      1. 1

                        You have to keep in mind that C is most certainly not a modern language & they were intentionally starting with C & changing only what they felt would add a lot of value without adding complexity.

                        1. 1

                          For sure, I just don’t get why anyone would want that. 🤷 I guess.

                          1. 2

                            Primarily linguistic simplicity in the service of making code easier to read.

                1. 1

                  Most of these definitions seem pretty good, but I have some critique of a few of them.

                  An abstraction is a precise description of the ways in which different data types share common structure.

                  Hm, that definition captures only a very narrow sense of the word. Functional programming is based on lambda calculus, and in lambda calculus the word abstraction means lambda abstraction (i.e., what most people would informally call a function). Of course, the word abstraction can also mean many other things, even within functional programming.

                  An algebra is a set of objects together with a set of operations on those objects.

                  I think it’s more common for object, in a mathematical sense, to refer to an entire group (which is more consistent with its usage in category theory), not the elements of a group. The word elements is more appropriate for a group because a group is a set with some structure, and a set contains elements.

                  Functional Effect

                  Interesting. I’ve never heard this term outside of this blog post. In papers about monads and algebraic effects, computational effect is the term I usually encounter.

                  To map over a polymorphic data type is to change one type parameter into another type, by specifying a way to transform values of that type. For example, a list of integers can be mapped into a list of strings, by specifying a way to transform an integer into a string. To be well-defined, mapping over any polymorphic data type with an identity function must yield something that is equal to the original value.

                  I’d really expect to see at least a passing mention of the word functor here, if nothing else to give the reader a keyword to Google. It’s interesting that only one of the functor laws is mentioned here (the identity law, but not the composition law). Seems arbitrary?
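
                  For completeness, both laws are one-liners; here’s a quick spot-check on lists (a sanity check, not a proof):

```haskell
main :: IO ()
main = do
  let xs = [1, 2, 3] :: [Int]
  -- identity law: fmap id == id
  print (fmap id xs == xs)
  -- composition law: fmap (g . f) == fmap g . fmap f
  print (fmap ((* 2) . (+ 1)) xs == (fmap (* 2) . fmap (+ 1)) xs)
```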

                  Sometimes called generics, parametric polymorphism is a feature of some programming languages that allows universally quantifying a function or a data type over one or more type parameters. Such polymorphic functions and polymorphic data types are said to be parameterized by those type parameters. Parametric polymorphism allows the creation of generic code, which works across many different data types; and also the creation of generic data types (like collections).

                  This is good, but it misses the most important point about parametric polymorphism: parametricity. This property distinguishes parametric polymorphism from ad hoc polymorphism.

                  Universal quantification asserts that a statement or type holds for all possible values of a given variable.

                  It’s a bit awkward to talk about a type “holding” for values. This makes a little more sense in the context of Curry–Howard, but even then it’s still fairly awkward exposition. I would expect this entry to make at least a passing mention of parametric polymorphism, which also makes an appearance in this glossary.

                  A universal type is a type that is universally quantified over. In programming languages, all type parameters of functions and data types are universal types.

                  Hm, I’m still getting this feeling that the glossary contains many related (or even synonymous) terms without linking them together in a helpful way.

                  1. 1

                    Thank you for your feedback, I incorporated much of it into a revision!

                    I didn’t change abstraction since I am deliberately using a more narrow definition, a functional analogue of how the word is used in mainstream programming languages like Java. (Any definition that considers #define of the C preprocessor to be a form of abstraction is not useful in everyday parlance nor related at all to how mainstream programming language communities use the term.)

                  1. 45

                    Rust

                    TL;DR: It’s a very impressive language with a clear vision and priorities, but the user interface needs a lot of work. Also, the collection library is much better than Scala’s.

                    • Generics with <>. It’s 2017 by now, we know it’s a bad idea. One of the reasons why the language suffers from abominations like the “turbofish” operator ::<>.

                    • Strings don’t offer indexing, because it doesn’t make sense for UTF-8. Correct! But Strings offer slicing … WAT?

                    • Misuse of [] for indexed access. Having both () and [] doing roughly the same thing, especially since [] can be used to do arbitrary things, doesn’t make sense. Pick one, use the other for generics.

                    • Inconsistent naming. str and String, Path and PathBuf etc.

                    • :: vs. . is kind of unnecessary.

                    • Mandatory semicola, but with some exceptions in arbitrary places: struct Foo; vs. struct Foo {}

                    • Arbitrary abbreviations all over the place. It’s 2017, your computer won’t run out of memory just because your compiler’s symbol table stores Buffer instead of Buf.

                    • Can someone decide on a casing rule for types, please, instead of mixing lowercase and uppercase names? Some types being “primitive” is an incredibly poor excuse.

                    • Also, having both CamelCase and methods_with_underscores?

                    • Library stutter: std::option::Option, std::result::Result, std::default::Default

                    • iter(), iter_mut(), into_iter() … decide prefix or postfix style and stick with it.

                    • Coercions do too many things. For instance, they are the default way to convert i32 to i64, instead of just using methods.

                    • Also, converting numbers is still broken. For instance, f32 to i32 might result in either an undefined value or undefined behavior. (Forgotten which one it is.)

                    • Bitcasting integers to floats is unsafe, because the bits could be a signaling NaN, causing the CPU to raise an FP exception if not disabled.

                    • Forward and backward annotations: #[foo] struct Foo {} vs struct Foo { #![foo] }. Also /// for normal documentation, //! for module level documentation. Documentation already uses Markdown, so maybe just let people drop a markdown file in the module dir? That would make documentation much more accessible when browsing through GitHub repositories.

                    • Also, documentation can cause compiler errors … that’s especially fun if you just commented a piece of code for testing/prototyping.

                    • Type alias misuse: In e.g. io crate: type Result<T> = Result<T, io::Error> … just call it IoResult.

                    • Macros are not very good. They are over-used due to the fact that Rust lacks varargs and abused due to the fact that they require special syntax at call-site (some_macro!()).

                    • Pattern matching in macros is also weird. x binds some match to a name in “normal” pattern matching, but matches on a literal “x” in “macro pattern matching”.

                    • println! and format! are very disappointing given that they use macros.

                    • Compiler errors … ugh. So many things. Pet peeve: “Compilation failed due to 2 errors” … 87 compiler errors printed before that.

                    1. 8
                      • Library stutter: std::option::Option, std::result::Result, std::default::Default
                      • Type alias misuse: In e.g. io crate: type Result<T> = Result<T, io::Error> … just call it IoResult.

                      How ya gonna square that circle?

                      1. 2

                        I think std::io::IoResult would be fine – it would solve the issue of having vastly different Results flying around, while not having single-use namespaces that are only used by one type.

                        1. 2

                          The general pattern is to import io instead. When doing this, IoResult would be jarring.

                          use std::io;

                          fn my_fun() -> io::Result<String> {
                              Ok(String::from("ok"))
                          }
                          
                      2. 14

                        It’s 2017,

                        I have some news for you, @soc.

                        1. 3

                          Haha, good catch. Now you see how old this list is. :-)

                          The only thing I got to delete since then was “get rid of extern crate”.

                        2. 3

                          What’s your preferred alternative to generics with <>?

                          1. 6

                            [], as it was in Rust before it was changed for “familiarity”.

                            Unlike <>, [] has a track record of not being horribly broken in every language that tried to use it.

                            1. 5

                              How is <> broken?

                              1. 16

                                It complicates parsing due to shift and comparison operators.

                                1. 2

                                  Ah, yeah, that makes sense.

                                2. 19

                                  Pretty much no language has ever managed to parse <> without making the language worse. The flaws are inherent in its design; as a compiler author, you can only pick where you place the badness. Either:

                                  • Add additional syntax to disambiguate (like ::<> vs. <> in Rust).
                                  • Have weird syntax to disambiguate (like instance.<Foo>method(arg1, arg2) in Java).
                                  • Read a potentially unlimited amount of tokens during parsing, then go back and fix the parse tree (like in C#).
                                  • etc.

                                  In comparison, here are the issues with using [] for generics:

                                  • None.

                                  For newly created languages (unlike C++, which had to shoehorn templates/generics into the existing C syntax) it’s a completely unnecessary, self-inflicted wound to use <> for generics.

                                  More words here: Why is [] better than <> for generic types?

                                  1. 2

                                    Those are good reasons to not use <>, but as a Haskeller I personally find either style somewhat noisy. I’d rather just write something like Option Int. Parentheses can be used for grouping if needed, just like with ordinary expressions.
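
                                    In Haskell itself that example reads Maybe Int; application of type constructors is juxtaposition, and parentheses only group:

```haskell
-- No angle or square brackets: Maybe Int is Maybe applied to Int,
-- and parentheses group, exactly as at the value level.
simple :: Maybe Int
simple = Just 3

nested :: Maybe (Either String Int)  -- parentheses purely for grouping
nested = Just (Right 7)

main :: IO ()
main = print (simple, nested)
```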

                                    1. 2

                                      Haskell feels like it’s in the same category as D; both just kicked the can a tiny bit further down the road:

                                      Both need (), except for a limited special-case.

                                      1. -1

                                        I don’t see how Haskell kicked the can down the road. The unit type is useful in any language. Rust has a unit type () just like Haskell. Scala has it too.

                                        I’m not sure what “special case” you are referring to.

                                        1. 2

                                          The fact that you still need () for grouping types in generics as soon as you leave the realm of toy examples – just as it is in D.

                                          (Not sure what the comment on the unit type is about…)

                                          1. 4

                                            Ah, I understand what you’re saying now. But that’s already true for expressions at the value level in most programming languages, so personally I find it cleaner to use the same grouping mechanism for types (which are also a form of expressions). This is especially applicable in dependently typed languages where terms and types are actually part of the same language and can be freely mixed.

                                            However, I can also appreciate your argument for languages with a clear syntactic distinction between value expressions and type expressions.

                              2. 1

                                D’s use of !() works pretty well. It emphasizes that compile-time parameters aren’t all that crazy different than ordinary runtime parameters.

                                1. 1

                                  I prefer my type arguments to be cleanly separated from value arguments (languages that fuse them excepted).

                                  I find D’s approach slightly ugly, especially the special-cases added to it.

                                  1. 1

                                    I prefer my type arguments to be cleanly separated from value arguments

                                    Well, in D they aren’t type vs value arguments, since you can pass values (and symbol aliases) as compile-time arguments as well. That’s part of why I like it using such similar syntax, since it isn’t as restricted as typical type generics.

                                    I find D’s approach slightly ugly, especially the special-cases added to it.

                                    The one special case is that you can omit the parentheses for a single-token CT argument list. I actually thought I’d hate it when it was first proposed and voted against it… but now that it’s there and I’ve used it, I like it a lot.

                                    Sure does lead to a lot of first-timer questions on the help forums, though… it certainly isn’t like any other language I know of.

                              3. 2

                                Also, converting numbers is still broken. For instance, f32 to i32 might result in either an undefined value or undefined behavior. (Forgotten which one it is.)

                                Yeah, I kind of feel the same way. Even with try_from() dealing with number conversions is a pain in Rust.

                                1. 1

                                  You saved me a lot of typing. 100% agree.

                                  1. 1

                                    Thanks! I’d love to know the reason why someone else voted it down as “troll” – not because I’m salty, but because I’m genuinely interested.

                                  2. 1

                                    2 pains I have with Rust right now:

                                    • I would like to be able to connect to a database (Teradata specifically)
                                    • I want to launch a subprocess with other than the default 3 stdio descriptors (e.g. exec $CMD $FD<>$PIPE in sh)
                                    1. 2

                                      I know it’s technically unsafe and that might preclude it from your use, but does CommandExt::pre_exec not fit your bill?

                                      1. 2

                                        That could work. I’m still new to Rust so I haven’t fully explored the stdlib.

                                    2. 1

                                      Ahahaha, this is a great list. I’m curious about a couple things though, since you’ve obviously put a lot of thought into it…

                                      Bitcasting integers to floats is unsafe, because the bits could be a signaling NaN, causing the CPU to raise an FP exception if not disabled.

                                      The docs for f32::from_bits() and such talk about precisely this, but considering the misdesign of signaling NaNs, I really don’t see how it could possibly be made better. Any ideas?

                                      …They are over-used due to the fact that Rust lacks varargs…

                                      What little experience I have with programming language design makes me feel like varargs are a hard problem to deal with in a type-safe language, at least if you want to allow different types for the args (instead of, say, forcing them all to be what Rust would call &dyn Display or something). Do you know of any language which does it Right?

                                      1. 1

                                        The docs for f32::from_bits() and such talk about precisely this, but considering the misdesign of signaling NaNs, I really don’t see how it could possibly be made better. Any ideas?

                                        Rust could have disabled the trapping of signaling NaNs on startup, but I think Rust fell into the same design mistake as C:

                                        Scared of making the use case of the 0.01% (people who want signaling NaNs to trap) harder to achieve, they made life worse for the 99.99%.

                                        varargs are a hard problem to deal …

                                        Agreed, it’s safe to say that language designers hate them. :-)

                                        … at least if you want to allow different types for the args

                                        I think this is only partially the reason. You can still have only same-typed varargs at runtime, but allow recovering the individual types of the arguments in macro calls – which is exactly the case for format! and friends.

                                        Do you know of any language which does it Right?

                                        I think in the case of format strings, focusing on varargs is the wrong approach. If you imagine how you’d want an ideal API to look, you probably want to interpolate things directly inside the string, never going through the indirection of some vararg method.

                                        Instead of having the formatting parameters in one place, and the to-be-interpolated values in a different one, like in …

                                        let carl = "Carl";
                                        let num = 1.234567;
                                        format!("{}'s num is {:.*}, rounded a bit", carl, 2, num)
                                        // -> "Carl's num is 1.23, rounded a bit"

                                        … wouldn’t it be much nicer to write (this is Scala):

                                        val carl = "Carl"
                                        val num = 1.234567
                                        f"$carl's num is $num%.2f, rounded a bit"
                                        // -> "Carl's num is 1.23, rounded a bit"
                                        
                                        1. 1

                                          Julia has nice string interpolation too. I honestly don’t understand why more programming languages don’t have it. Does everyone just forget how useful it is in bash when they come to design their language?

                                    1. 3

                                      IMO some of the best empirical evidence in favor of static types is how many people are fans of mypy and typescript.

                                      1. 3

                                        Uhm… That just measures hype; it tells us absolutely nothing about whether it actually provides value. Programmers like new toys :-)

                                        1. 1

                                          And also… how many people are actually fans of mypy and using it? What percentage? Because I wouldn’t really say that it’s that many.

                                        2. 3

                                          To be more precise, I would call that empirical evidence in favor of gradual types. I typed the whole Oil frontend with MyPy and it worked quite well [1]. (Oil will eventually be a totally statically typed program, but it’s still in an intermediate state now.)

                                          But I still want to start with a dynamic language. Basically because I’m “reverse engineering” shell by observation – I would call it schema discovery. Writing types at first doesn’t make sense because I don’t know what they are. I have to write some code to figure that out :)

                                          Although Oil is an unusual project in some respects, I don’t think it’s unusual in this respect. I would say this situation is probably more common than people realize. “Observing the world before modelling it” seems to be the common thread. Data science is an easy example: you get a bunch of real-world data and you can’t make assumptions about it before writing code.

                                          Another really common situation is that your types live outside the process / language. For example, an SQL database has its own types, and common distributed systems tools like protobuf/thrift/capnproto have a language-independent notion of types.

                                          Big distributed systems are more like “observing the world”, not modelling it. That is, the stuff on the other side of the wire might have been written 15 years ago, and you don’t control it, etc.

                                          [1] http://www.oilshell.org/blog/2019/06/13.html
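                                          A minimal sketch of that gradual workflow (the function and its behavior are hypothetical, not taken from Oil): start dynamic while the shapes are still being discovered, then add annotations once the schema is known, and mypy checks the annotated parts while leaving the rest alone.

                                          ```python
                                          from typing import Optional

                                          # Phase 1: written during "schema discovery" -- no annotations,
                                          # so mypy treats this function dynamically.
                                          def parse_word(token):
                                              return token.strip() or None

                                          # Phase 2: once the type is known, annotations are added and
                                          # every call site can be checked statically.
                                          def parse_word_typed(token: str) -> Optional[str]:
                                              stripped = token.strip()
                                              return stripped if stripped else None

                                          print(parse_word_typed("  echo  "))  # -> echo
                                          print(parse_word_typed("   "))       # -> None
                                          ```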

                                          1. 0

                                            Writing types at first doesn’t make sense because I don’t know what they are. I have to write some code to figure that out :)

                                            Having a static type system and needing to write types are two different things. A language with type inference can check that your code type checks without requiring types to be written explicitly. People often equate static typing with manifest typing, and due to that misconception people often think that static typing just gets in the way.

                                            1. 2

                                              You equate having to write types out with having to know them. I write plenty of code without explicitly declaring types while still paying the cost of having to think about them. We’re not living in a utopia where structural typing is the default.

                                        1. 3

                                          As someone who very occasionally types a line or two of Haskell, but never committed any of the basic types to memory, I have to say that ironically I couldn’t remember what a functor actually was until this bit:

                                          The best (but still extremely bad) alternative name to Functor is Mappable.

                                          This is anecdotal and subjective and all, but the name Mappable is just so much more obvious to me. The point of “well what about cartography” also seems to be contradicted by a previous point:

                                          Reliability is also subjective. It all depends on context and familiarity.

                                          In context, there is no confusion between paper maps and the type. “Mappable is a type that can be mapped over” is much more consistent than “Functor is a type that can be mapped over”, and should be clear to most developers. Yes, beginners will still need to learn the meaning of “mapping over”, but once they do, why not give them a memorable (if imprecise) name instead of another conceptual pointer they need to keep in their heads?

                                          I understand the desire for precise, unambiguous language by Haskell folks, but it really just doesn’t resonate well with me. I’d much rather optimize for readability and ergonomics.

                                          1. 3

                                            why not give them a memorable (if imprecise) name instead of another conceptual pointer they need to keep in their heads?

                                            I don’t think the name should be changed, because functor is a perfectly good word with plenty of relevant educational material pointing to it. It’s more precise than “mappable”, because functors actually go further than the concept of mapping functions over containers.

                                            However, the analogy of mapping functions over containers is useful, because it allows the student to actually get to work with this algebra rather than navel-gaze on the meanings and origins of names.
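                                            One concrete illustration of “further than containers” (a sketch in Python, since this thread isn’t tied to one language): functions themselves form a functor, where “mapping” isn’t walking any container at all, it’s just composition.

                                            ```python
                                            # Mapping over a container: apply f to each element.
                                            assert list(map(lambda x: x + 1, [1, 2, 3])) == [2, 3, 4]

                                            # Mapping over a *function*: there is no container to walk;
                                            # fmap here is post-composition, yielding a new function.
                                            def fmap_function(f, g):
                                                return lambda x: f(g(x))

                                            double = lambda x: x * 2
                                            shifted = fmap_function(lambda x: x + 1, double)
                                            assert shifted(10) == 21  # (10 * 2) + 1
                                            ```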

                                            1. 3

                                              Mappable is good, but I think Functor is better still because we can recall the “functor laws” from category theory (functors preserve identity and composition). These laws aren’t just abstract curiosities—they are actually useful for reasoning about programs. Mappable doesn’t really convey the fact that these laws should hold.
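                                              The two laws can be spot-checked concretely; a small Python sketch using lists as the functor:

                                              ```python
                                              # Functor laws, spot-checked for the list functor (map):
                                              xs = [1, 2, 3]
                                              identity = lambda x: x
                                              f = lambda x: x + 1
                                              g = lambda x: x * 2
                                              compose = lambda f, g: lambda x: f(g(x))

                                              # Law 1: mapping the identity function changes nothing.
                                              assert list(map(identity, xs)) == xs

                                              # Law 2: mapping a composition equals composing two maps.
                                              assert list(map(compose(f, g), xs)) == list(map(f, map(g, xs)))
                                              ```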

                                              1. 3

                                                I’m not sure most programmers can recall the functor laws from category theory. I certainly had never heard of them.

                                                1. 4

                                                  To me, the key detail is: when you hear the word “functor” for the first time, you aren’t led into a false sense of familiarity. If it’s a new word to you, you can Google it and learn about the functor laws. But if we just called it “mappable”, you might think “ok I know what this is doing” and never learn the proper definition. And if you already know what functors are, you can immediately start using them without having to learn a “friendlier” neologism when learning a new language.

                                                  1. 3

                                                    Not most programmers, but most people who learn a language which explicitly uses those abstractions.

                                              1. 1

                                                This abstraction, monoids, then comes with the added benefit of being abstract enough that all of your programmers can spend their time explaining it to each other instead of writing programs that use the abstraction to do IO, and therefore deal with any actual users.

                                                Don’t you mean monads here?

                                                1. 1

                                                  It might be a typo, but it’s not wrong per se. Monads are monoids in a specific kind of monoidal category.

                                                1. 15

                                                  Continue learning formal methods with Coq and Isabelle. I started this year, so the plan is to keep going.

                                                  1. 13

                                                    DM me if you have any questions about Coq. I started learning it in 2013, and learning it has been the most rewarding technical investment I’ve ever made. I’ve proven a few nontrivial results including the soundness of STLC and System F, the Kleene fixpoint theorem, and some category theory. I’m not an expert by any means, but I’ve spent enough time begging Coq to accept my proofs that by now I’m pretty ok at it.

                                                    I also have some introductory material available here that might be helpful.

                                                    1. 1

                                                      Thanks. I appreciate the offer and the links.