1. 44
  1. 16
    let (|DivisibleBy|_|) by n = if n%by=0 then Some DivisibleBy else None
    let findMatch = function
      | DivisibleBy 3 & DivisibleBy 5 -> "FizzBuzz"
      | DivisibleBy 3 -> "Fizz"
      | DivisibleBy 5 -> "Buzz"
      | _ -> ""
    let fizzBuzz n  = n |> Seq.map findMatch |> Seq.iteri (printfn "%i %s")
    [<EntryPoint>]
    let main argv =
      fizzBuzz [1..100]
      0 // we're good. Return 0 to indicate success back to OS
    

    Look ma, no ifs or switches!

    I’m kinda confused how this qualifies as “no ifs or switches” when there’s an active pattern with an if expression right there on the first line that’s used multiple times.

    1. 3

      Seq.iteri

      It’s also buggy, as this appears to output the indices of the input sequence rather than their values
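      As an illustration of the bug (sketched in Python rather than F#; the helper names are mine): `Seq.iteri` hands the printer the 0-based position in the sequence, not the number that was classified.

```python
def find_match(n):
    # Mirrors the F# findMatch: classify n by divisibility
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return ""

# Buggy shape, like Seq.iteri: prints the enumeration index
buggy = [f"{i} {find_match(n)}" for i, n in enumerate(range(1, 101))]

# Fixed shape: prints the value that was actually classified
fixed = [f"{n} {find_match(n)}" for n in range(1, 101)]

print(buggy[14])  # index 14 holds the value 15, so the column is off by one
print(fixed[14])
```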

      1. 1

        It is a lie.

    2. 18

      Classes are required on JVM in order to hold code objects. Guard clauses on functions are usually compiled to switches. Monads are always present, even if they’re only implied. Common output devices like framebuffers are mutated in-place. Generic data structures are not just useful to build and share, but eventually get promoted to core types in languages.

      Just like with the referenced if/else/switch post, a lot of this advice is not just situational or contextual, but relies on a particular collection of beliefs about the nature of programming. In that article, the incomplete insight was that there exists a universal eliminator for any enumeration. What was missing was the understanding that the universal eliminator is isomorphic to the paramorphisms leading out from the enumeration. (Note that the more familiar katamorphisms are special cases of paramorphisms.)

      We should look for more of these insights. Ideally, our languages would simply not offer constructs which should be avoided. In Monte, I hope that I’ve avoided adding anything to the language which should always be avoided. This is a sort of meta-avoidance; there are coding constructs that I meta-avoid adding to languages! For example, I didn’t need any of the constructs which should be avoided in order to build an eliminator for enumerations. Examining a hand-rolled eliminator, we see that it has three methods: .walk() for paramorphisms, .run() for katamorphisms, and ._printOn() for pretty-printing.

      1. 10

        I think it would be useful if you could list what you specifically mean in this context by the terms ‘catamorphism’ and ‘paramorphism’. I understand that you’ve linked the given terms, and appreciate that; however, Wikipedia’s definitions are also of very little value when one does not remember much of the subject (I last did category theory, type theory, et al 6 years ago! most of these terms I have forgotten out of lack of use!), and they also lead you down rabbit-holes when chasing definitions. For example, catamorphism depends on the definition of endofunctor, which depends on the definition of a functor, and the understanding of the difference between an endofunctor and a functor. And that’s just one of the many, many different trees that you can end up going down when you wish to remember or learn the definition of a term.

        1. 3

          Honestly, I happen to have those definitions fresh in my mind and still have no idea in what sense they’re being applied here. It’s not clear to me how a paramorphism or a catamorphism even applies to an enumeration, which term I’ve never seen marshaled in a type-theoretical context except to mean a tagged union whose type constructors all have arity 0.

          1. 1

            I think that you understood everything. Maybe my point is not very deep or enlightening; I merely mean that every way to eliminate an enumeration like the one in the original article is going to give a fold over that enumeration’s type. You’re right that enumerations are like degenerate tagged unions with nullary constructors, and that’s precisely the connection which is required to make the entire idea work.
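            A rough sketch of what “eliminating an enumeration is a fold” could mean (Python, with names of my own invention): the eliminator takes one result per inhabitant, and any dispatch over the type factors through it.

```python
from enum import Enum

class Fizziness(Enum):
    NONE = 0
    FIZZ = 1
    BUZZ = 2
    FIZZBUZZ = 3

def eliminate(value, on_none, on_fizz, on_buzz, on_fizzbuzz):
    # The universal eliminator: one branch per nullary constructor.
    return {
        Fizziness.NONE: on_none,
        Fizziness.FIZZ: on_fizz,
        Fizziness.BUZZ: on_buzz,
        Fizziness.FIZZBUZZ: on_fizzbuzz,
    }[value]

# "Folding" the enumeration down to a label:
label = eliminate(Fizziness.FIZZBUZZ, "", "Fizz", "Buzz", "FizzBuzz")
```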

            1. 1

              Ah, so you mean a fold at the type level over the inhabitants of the enumeration? I take your point now, but I read that article as applying to a more concrete domain, namely making the type open for extension by encapsulating dispatch over its inhabitants, which I guess you acknowledged when you mentioned that these things are contextual. I think my confusion came from the suggestion that the article was missing an insight, when the insight in question is potentially valuable to a language designer trying to (like you said) avoid making problematic constructs expressible in the first place, but probably wouldn’t be of much use to the author in trying to solve the problem right in front of them.

          2. 1

            “Katamorphism” and “paramorphism” are merely a way to be more precise about the common idea of “folding” a list or other collection using a summarizing operation. They’re both folds, but slightly different from each other.
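            A loose illustration of the difference (a Python sketch over lists, far less general than the categorical definitions): the catamorphism’s step function sees only the element and the already-folded result, while the paramorphism’s step additionally sees the not-yet-folded tail.

```python
def cata(step, base, xs):
    # Catamorphism: an ordinary right fold; step(head, folded_tail)
    if not xs:
        return base
    return step(xs[0], cata(step, base, xs[1:]))

def para(step, base, xs):
    # Paramorphism: step also receives the untouched tail;
    # step(head, tail, folded_tail)
    if not xs:
        return base
    return step(xs[0], xs[1:], para(step, base, xs[1:]))

total = cata(lambda x, acc: x + acc, 0, [1, 2, 3])  # plain sum

# With para we can pair each element with what follows it,
# which cata alone cannot see:
pairs = para(lambda x, tail, acc: [(x, tail)] + acc, [], [1, 2, 3])
```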

            1. 4

              Right, and that was at least vaguely clear from the articles you listed. How they are different from each other, or how they apply in any way to this subject matter, is not. Using precise language is admirable, however it has a fundamental accessibility problem and it would perhaps be useful for you to address that.

              1. 1

                Agreed. It’s part of the nature of the top-level post; in order to be on-topic, I need to reply directly to what’s been posted. However, in this case, the original post is an assorted collection of opinions, and so unfortunately my response is not going to seem lucid.

                I think that a lot of folks struggled with this problem when replying to the post; we can’t simply say that it’s vague and incorrect, but are obligated to explain ourselves with a modicum of evidence and consideration.

        2. 8

          Fully agree about error handling and inheritance; glad to see it stated clearly.

          I don’t think I understand the objection about interfaces; they seem good but I haven’t used them enough to understand the problems mentioned. When I have to call Java code from Clojure, seeing that it accepts an interface is usually a relief, because it means I don’t have to inherit from some class.

          The bit that bugs me is the section where it talks about “mutability”. To me the term mutability refers to a property of data structures, but he’s using it to refer to the way languages bind values to variables. These two things are somewhat related, but using the same term to refer to both is misleading and confusing. The problem is that “mutable data structures” is a very clear and unambiguous term, but I don’t have a great term to refer to variable bindings which cannot be reassigned. Suggestions?

          1. 2

            Given that he gives an F# example, I think you can just assume he is referring to how it works in F#, with “mutable” and “ref”. He used “=”, but that is possibly only because people wouldn’t recognize the F# syntax which is using := and <-.

            Shadowing isn’t mutability at all, and it isn’t as confusing as mutability. You can’t share a shadowed variable by accident, but accidental shared mutable state is common.

            What makes you think the arguments aren’t concerning mutability?

            1. 3

              What makes you think the arguments aren’t concerning mutability?

              I’ve never used F#, but in my experience when people learn functional programming, they have a hard time understanding the difference between “I can’t change the contents of a data structure” and “I can’t change what value this variable points to”. I think the reason people confuse them is that some people use the term “immutability” to refer to both things.

              I am looking for better terminology so that we can have discussions about these different language features in a less confusing way. Outside the world of F#, I have found that “immutability” usually refers to a property of data structures rather than a property of variables, but if that’s not generally true then maybe it’s time to stop using that word in ambiguous contexts?
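              The two properties come apart in a small sketch (Python, just for illustration): a name can be rebound while the data it points to is frozen, and data can be mutated through a name that is never reassigned.

```python
# Immutable data, reassignable binding: the tuple can't change, the name can.
point = (1, 2)
point = (3, 4)       # rebinding; no structure was mutated

mutated = True
try:
    point[0] = 0     # in-place mutation of the tuple is rejected
except TypeError:
    mutated = False

# Mutable data, never-reassigned binding: the list's contents change
# even though the name stays bound to the same object.
shared = [1, 2]
shared.append(3)
```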

            2. 2

              I don’t have a great term to refer to variable bindings which cannot be reassigned. Suggestions?

              Is that SSA? Or linear / affine types?

              1. 1

                I call them mutable and immutable variables when I need to make the distinction.

                The word “constant” means different things in different language communities. I use the word “constant” to denote an expression that has the same value in all executions of a program. Like 42 or 2+2. Under my definition, an immutable variable that is bound to a constant is itself a constant. Like if you bind PI to 3.1416.

                1. 1

                  I think he’s talking about something even simpler than that. I think it’s more akin to Java’s final keyword + no shadowing.

                  Linear and affine types have to do with how the bindings are consumed/used. He uses the term “mutation” here, which leads me to believe he’s talking about “write” not “read”.

                  1. 1
                    1. 1

                      That probably works well enough, but I did C++ for a long time and const is overloaded there, so it might still not be clear enough…

                      Then again, final is also overloaded in Java.

                      So, I don’t know!

              2. 6

                Please be conservative about using sarcasm in writing things like this? I’m having trouble working out which sentences are ironic and which are sincere.

                1. 3

                  I agree. There were a few points where I was confused. I can imagine it would be quite difficult for someone for whom English isn’t their first language.

                  On the other hand, I kind of speak like he writes, so I think I get what he’s saying. Like when he says he “loves” all the interesting ways we’ve come up with to handle errors, I think he’s BOTH being genuine and snarky. Error handling mechanisms are genuinely interesting and fun to think about, but he’s also criticizing the fact that we often write software in ways that feel like we require complex error handling schemes and conventions.

                2. 5

                  If you’ve got too many things you’re sticking names on, it becomes difficult to reason about. Life becomes sad. Don’t do that. See Code Cognitive Load essay.

                  Related: Miller’s Law.

                  As I have watched myself and many other people in the technology sector work, I have come to realize that at a deep level, most of us, most of the time, should think more and work less. Extremely so. There are entire battalions of tech people who should stay home, take the day off, go fishing …

                  This is true in a lot more places than just technology, and a lot of “work” is actively harmful to the wellbeing of others or oneself. See David Graeber’s excellent book, Bullshit Jobs. UBI would solve this and so many other things.

                  1. 4

                    It’s good to see a kindred spirit when it comes to approach to writing code.

                    I don’t know if it’s possible or even desirable to teach this stuff. Maybe it’s better to just allow others to learn it through experience.

                    The problem I’ve noticed with talking to programmers about such things is that they will often start to talk about ‘best practices’, because they value rules around practice; or say ‘but actually’ and list the ways in which you’re actually using ‘if’ and ‘else’ or ‘switch’, it just looks different (missing the point); or point out that mutation exists in an entirely different type of software development for performance reasons.

                    I do like seeing this kind of advice from experience popping up, though, as I’m sure reading opinions of more experienced programmers helped me eventually, even if I didn’t necessarily understand it on first or thirteenth read.

                    1. 10

                      I don’t know if it’s possible or even desirable to teach this stuff. Maybe it’s better to just allow others to learn it through experience.

                      It’s possible to push experience “down into” teaching with good lessons and exercises. We don’t do it too much in programming, but it’s really successful in other fields, like military strategy.

                    2. 3

                      Thank you for reminding me of FizzBuzz, that brings me back. Here’s my fave of the ones I’ve made so far:

                      (import srfi-1 srfi-42)
                      (do-ec (: i 1 101)
                      	 (if (any identity (map (lambda (str n) (if (zero? (modulo i n)) (display str) #f))
                      				'(Fizz Buzz) '(3 5)))
                      	     (newline) (print i)))
                      

                      Why I am so fascinated by the absolute basics of level one programming is beyond me… but I am. Sorta like a simple poem.

                      I didn’t deliberately avoid mutables and such. I was like “if #t flags or vector-set! or whatever is what it takes then it’s fine”, but it just ended up mostly functional except for the display stuff.

                      To me, a class is like a record or a struct kind of. “Data with its own behavior”, as (I think?) Norvig put it. A set of vars that belong together. I think of one of the early OOP languages and it was named Simula. And I’m like: am I simulating something? A set of objects with classes? Then I wanna use OOP. Otherwise no. And that means most often no. But I love it when it’s yes.

                      1. 3

                        How do you create libraries and frameworks while avoiding interfaces? I suspect that the limitations of F# are preventing the author from embracing interfaces like they are embraced in Haskell.

                        The clue is that F# nudges you towards classes if you want to use interfaces. I have worked with F# code where “everything is data” in the same way it is in untyped Python. You’ll be passing records around everywhere, sometimes maybe containing functions if you feel fancy. When interfaces are obvious from the code, you like them; you may even document them. When they aren’t, the lesson is not that you need static typing; the lesson is that “it is just too complicated”. Yes, F# is typed, but it doesn’t have expressive types. The result in Python and F# is the same: you blame yourself for being “too smart” and you retreat, inlining stuff and writing a 100th implementation of sequenceA with a “descriptive” new name.

                        Because it doesn’t compose: you can’t have a union type of records. Apparently the Great Pumpkin doesn’t want you to have interfaces or libraries.

                        Just another example of how F# is half-assed…

                        1. 2

                          I have worked with F# code where “everything is data” in the same way it is in untyped Python. You’ll be passing records around everywhere, sometimes maybe containing functions if you feel fancy.

                          I’m not really sure what you mean by this, do you have a clearer way of explaining it? Passing records around to functions is also how most Haskell libraries are written, so I’m not really sure what you’re getting at here.

                          1. 2

                            I find that Haskell typically uses type classes heavily. Sure, they can get compiled into records, but you don’t write them, you don’t have to construct them, and the inference algorithm will select instances for you. This results in a totally different style of programming, since it becomes all about the interfaces and their laws. Nobody would write F# like that since it would be way too verbose.

                            Since you don’t write it, I wouldn’t call it data. In the “programs are proofs” POV, you’d say that the compiler finds proof. In F#, inference is not used to the same degree, and your proofs turn into explicit data that isn’t separated the way it is in Haskell.

                          2. 1

                            Apparently he doesn’t create libraries and frameworks anymore. (?)

                            I’m doing a C-like self-hosting compiler with a module system, where I want no loops in the import graph and, if possible, no type loops at all. I have no idea how I’d do this without inheritance. A Statement is either an IfStatement, an ExpressionStatement, or a SequenceStatement, where a SequenceStatement contains an array of Statements? Bzz, type loop. And an import loop, if you have your classes nicely separated. Sure, you might say, “then all of that should really be in one module.” That’s just saying the entire compiler should be in one module. No, thank you. I’m not doing that anymore…

                            How do you do entity abstraction without inheritance? Maybe the answer is just “you don’t” - well, then modus ponens, modus tollens; I do, hence inheritance.

                            This just sounds to me like the goto backlash, where people were abusing it so much that half the industry considered it toxic for a few decades. while (true) { if (condition) continue; break; } It’s a goto! You’re just writing it without saying it, like censoring swears. I’m worried that we’ll end up with a range of bad OOP reinventions. Structs filled with function pointers…

                            But hey, thanks for pointing out those C# patterns. I might steal those. :)

                            1. 2

                              Apparently he doesn’t create libraries and frameworks anymore.

                              Right, but he also states that he avoids interfaces. My post is speculating on whether F#’s bad support for interfaces is making him also need to avoid libraries, since they become pointless if interfaces don’t work well.

                          3. 3

                            This is largely how I prefer (not) to code as well. I write Elixir code mostly which lends itself well to avoiding these patterns for the most part.

                            In the avoiding try/catch/rescue section the idea of keeping validation and error handling at the edges and having a pure functional core especially resonated with how I like to design applications.

                            To add to the if/else/switch section, I think it simply comes down to using the best control structure for the context. I’m not going to use a case block or create new functions to match on instead of using an if/else block if there are only two possible branch outcomes. If later on it gets more complex, then I’ll refactor into one of these approaches and (preferably) match on types.

                            1. 1

                              In the avoiding try/catch/rescue section the idea of keeping validation and error handling at the edges and having a pure functional core especially resonated with how I like to design applications.

                              Where I sometimes struggle is when a decision needs to be made deep down in your business logic that would then require you to do some IO (e.g., get more records out of the database).

                              Do you just throw an exception and let it bubble all the way up? This feels wrong in the context of “functional core” and “make invalid states unrepresentable”, etc.

                              Do you break apart your business logic so your top-level loop is threading business logic and IO? That can be pretty awkward: are you going to do half of an operation that’s supposed to be conceptually atomic/transactional, then hope that someone “up there” does IO and calls the function that does the second half?

                              Do you just prefetch every piece of data you could conceivably need even if you might not use some of it? That’s wasteful and potentially not even feasible.

                              1. 1

                                This is an excellent area for discussion. In my view, the term “business logic” is likely to be a major cause of confusion. I’ve heard people use it frequently, sure – but are we on the same page? I tend to think not – at least, not until I understand the context. So, to make it concrete, I think it is essential to get specific:

                                • Can you pick a few concrete examples of the kind of business logic you are talking about?
                                • Can you define “business logic”? This is hard – but at least can you define what it is not?
                                • What is the rate of change of business logic?
                                • Does the business logic need to be versioned?
                                • How much of the code is business logic and how much is “plumbing” (i.e. moving data around) and concerned with IO, threading, and other low-level details?

                                Some businesses (e.g. financial organizations) have complex, detailed business logic that varies across sales regions, geographies, regulatory boundaries, and so on. In my opinion, in such situations, representing the logic with data structures can be one great way to drive out bugs and version the logic. For example, it allows certain promotions to be offered at certain times and audited.

                                To take another example: if “business logic” means (in context), the rules for detecting and escalating possible account fraud, there may be an elaborate set of low-level algorithms that come into play to assess the probabilities of fraud. In these cases, I think modular software design is particularly important.

                                1. 1

                                  When I think of “business logic” or “high level” logic, I often think in terms of some basic abstractions of “time”. Transactions, like you mentioned, are very useful. Many systems offer some variation of them. Without them, there is only a shaky foundation for higher-level logic.

                                  1. 1

                                    I definitely sympathize with your questions on how intermediate I/O breaks the idea of a functional core. The concept makes the most sense by far in stateless applications as there can be one well defined “core” that represents a contiguous block of CPU time.

                                    In my experience, in cases where intermediate I/O is required, you have to make the decision to either have it no longer be a single “core”, or consider expected errors to be legal states. Those are both solutions you point out, and neither is pretty. I usually opt for the first. I’ve tried the second, but you often end up having to treat error states so differently from happy-path states in your functions that it’s largely pointless.

                                    To rele’s point, the idea of “business logic” is very vague. I think that often “business logic” is actually the I/O glue, not the pure functions (oh it’s this type of user? Then we need to hydrate this extra piece of data… which part is the “business” logic here? The type branching or the data hydration? Both?).

                                    All this said, pulling out as much of your inner application logic as possible into small and modular pure functions is invaluable regardless of if your application can have one single “functional core” or not.

                                2. 3

                                  “I don’t use if statements” is some new form of “I don’t even own a TV”, I’m guessing? Weird flex, but OK.

                                  1. 5

                                    It’s not really new; lots of schools of thought have been trying to be really careful about branching for a long time.

                                    1. 1

                                      Branching is relatively expensive, and kind of ugly.

                                      1. 2

                                        Not as ugly as all the alternatives I’ve ever seen proposed, unless you have non-local return.

                                    2. 2

                                      So, then, how does one handle errors internal to a program if a given error indicates that the program should abort, or that a core assumption of the program has been violated?

                                      Or are the sorts of conditions that indicate those sorts of problems built to be impossible to reach, by ensuring that all possible parsing/data-ingest errors happen at the edge of the program?

                                      1. 4

                                        So, then, how does one handle errors internal to a program

                                        Yeah, in a functional program written with the “functional core, imperative shell” model, there’s no such thing as errors “inside” the codebase, other than bugs. Everything that could fail for reasons that could be predicted ahead of time is handled by the imperative shell at the outside.
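                                        A minimal sketch of that split (Python; all names are hypothetical): the core is a total function over already-validated data, and every predictable failure is absorbed by the shell before the core ever runs.

```python
def summarize(amounts):
    # Functional core: pure and total over its input; nothing to catch.
    return {"count": len(amounts), "total": sum(amounts)}

def shell(raw_lines):
    # Imperative shell: bad input is an expected condition handled
    # at the edge, not an exception bubbling out of the core.
    amounts, rejected = [], []
    for line in raw_lines:
        try:
            amounts.append(int(line))
        except ValueError:
            rejected.append(line)
    return summarize(amounts), rejected

result, rejected = shell(["10", "oops", "32"])
```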

                                        1. 1

                                          How would you handle a divide-by-zero error in a calculator?

                                          1. 5

                                            It is not an error, it is a correct state that has to be encoded somehow. Dividing by zero is a valid operation, not a bug.

                                            1. 4

                                              As @Leonidas said, dividing by zero is an expected operation for a calculator app, so it isn’t considered an error by the definition used here.

                                              That can feel a little disingenuous because you can kind of go “reductio ad absurdum” on it and say that nothing in your program is an error: you should always expect the network might be down, the file disappeared because the hard drive caught fire, you ran out of memory, etc.

                                              I don’t have a good response to that because it’s probably correct.

                                              I feel like there are three different things that we collectively call “errors” for shorthand, but that might be doing some harm:

                                              0. Legitimate programmer mistakes. Indexing an array beyond its bounds, etc.
                                              1. Invalid state caused by inputs. This is the user typing “100/0” into the calculator. This isn’t an “error”: the programmer is completely able to predict that this will happen (and WHEN it may happen!).
                                              2. Invalid state caused by the universe. OOM, network died mid-request, bad file permissions, memory corruption from cosmic rays, etc.

                                              I believe all three should be handled differently in the vast majority of programs.

                                              IMO, case 0 should just cause a crash.

                                              Case 1 should be handled by your type system if you’re using one. When doing an arithmetic operation, be prepared to return NaN or a DivByZero variant of an ADT or something.

                                              Case 2 should “throw an exception” (or equivalent) to be handled at the top loop of the program. Show a message to the user about what went wrong, try to clean up if possible, and then shutdown or try again or whatever.
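                                              The “DivByZero variant of an ADT” idea might look like this (a Python sketch; in an ML-family language this would be a proper sum type):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    value: float

@dataclass
class DivByZero:
    pass

def divide(a: float, b: float) -> Union[Ok, DivByZero]:
    # The predictable bad input is encoded in the return type,
    # not raised as an exception.
    if b == 0:
        return DivByZero()
    return Ok(a / b)
```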

                                        2. 2

                                          What are classes for?

                                          Nygaard would likely argue that they provide a programming construct that supports the modeling of phenomena, real or imagined. Kay might argue that they are a programming construct to support messaging. An academic or functional programmer would likely argue that they are just a programming construct to organize your data and code.

                                          1. 1

                                            This is just a guide on how to write idiomatic functional programming.

                                            1. 1

                                              If/then/else/switch. Strict pattern-matching guarantees that all code paths are exhausted, and if I’m ever going to need those code paths, I define them as soon as possible and never worry about them again. This is Bob’s polymorphic argument stated differently. If you have to think about it once, code it up and never think about it again. Where we part ways is that my argument is that for most work, I don’t want nor care. Instead of assuming I might need it somewhere else, I only code chunks of code that have specific business meaning in only one context. My scope can’t be all of existence. I don’t know the future the universe holds for me or this code. Trying to cover all of the bases once and forever looks like a fool’s game.

                                              I don’t understand what the author is saying here. Is the FizzBuzz thing supposed to be a concrete example or counterexample of their style?

                                              1. 1

                                                Anyone else notice code blocks in this article being truncated by several characters on the left if the device is in portrait orientation?

                                                Might just be me, but it makes the article unnecessarily hard to read, at least on an iPhone 12 Mini.

                                                1. 1

                                                  No, I noticed the same on my iPhone SE.