1. 4

    Does it count as ‘boring Haskell’ if it starts by enabling 39 language extensions?

    1. 3

      Did you look at the list? It’s almost entirely syntax sugar extensions.

      1. 1

        GADTs, existential quantification, multiparameter typeclasses, PolyKinds, RankNTypes, ScopedTypeVariables, type families and DataKinds are hardly syntactic sugar.

        I think enabling stuff like ViewPatterns is fine. That is syntactic sugar. RankNTypes is not.
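
        For concreteness, here’s a minimal sketch of the distinction (made-up functions, not from the proposed library):

        {-# LANGUAGE ViewPatterns, RankNTypes #-}

        import Data.Char (toLower)

        -- ViewPatterns is pure sugar: the view pattern below desugars into a case
        -- expression you could already write in Haskell 2010.
        greet :: String -> String
        greet (map toLower -> "hi") = "hello!"
        greet _                     = "who are you?"

        greet' :: String -> String          -- the desugared equivalent
        greet' s = case map toLower s of
          "hi" -> "hello!"
          _    -> "who are you?"

        -- RankNTypes is not sugar: this signature cannot be written without the
        -- extension, because the forall on the argument changes what the type
        -- system itself accepts.
        applyToBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
        applyToBoth f (n, b) = (f n, f b)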

      2. 1

        They start with 39 extensions in their proposed standard library, by my hand count.

        “Boring” seems like a carefully-chosen word to distract from the obvious commercialization being attempted by the author.

        1. 4

          Distract? We’re trying to get paid to move more companies into using Haskell. That’s my (I’m Chris) job and the whole thesis of the article is that there is a highly productive subset of Haskell that is extremely useful in software projects operating under commercial constraints.

          The first line is:

          Goal: how to get Haskell into your organization, and how to make your organization more productive and profitable with better engineering.

          Your reply isn’t constructive and casts aspersions by claiming the explicit point of the article is somehow an act of subterfuge. We just want people to start using better tools. For us programmers at FP Complete, the reasons for that are selfish but straightforward: we’re programmers who want to use the tools we like because they make our work less tedious.

          I want to get paid to write software in nice programming languages. I want to create more jobs where people get to get paid to write code using nice programming languages.

          1. 1

            The ‘highly productive subset of Haskell that is extremely useful in software projects operating under commercial constraints’ does not involve starting with 39 language extensions and a huge pile of extremely complex type system nonsense that results in awful error messages. If people want to use a language with cryptic type errors and high performance they should use C++.

            Haskell’s highly productive subset is Haskell with at most about 3 language extensions, all of which are totally optional, along with a set of well-built libraries. I’d avoid typeclasses entirely, and even if you don’t go that far, certainly I’d avoid GADTs, lenses of any kind, and anything to do with the word ‘monad’. No MTL or anything of that nature.

          2. 3

            I’m confused, is commercialization a good thing here or a bad thing?

            1. 1

              39 extensions in their proposed standard library

              That’s what I meant, whoops.

              1. 3

                A lot of language extensions are innocuous and useful. It’s not our fault those extensions haven’t been folded into the main language. What’s your point?

                1. 1

                  The extensions haven’t been folded into the main language because they’re completely unnecessary. You do not need GADTs to write Haskell. Advanced type system trickery is exactly the sort of unnecessary crap that a ‘Boring Haskell’ movement should be not using.

                  1. 1

                    Many of those extensions are actually less well-thought-out than Haskellers think.

                    It is widely acknowledged that, as a general rule, orphan instances are bad. One problem with them is that they allow third-party modules to introduce distinctions between types that were originally meant to be treated as isomorphic. (At least assuming you are a well-meaning Haskeller. If you wanted to allow third parties to subvert the meaning of your code, you would use Lisp or Smalltalk, not Haskell.)
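
                    For anyone who hasn’t hit this, a minimal sketch of what an orphan instance looks like (hypothetical module; an instance is an orphan when the defining module owns neither the class nor the type, and GHC warns about them with -Worphans):

                    module MyOrphans () where

                    -- Bool and Semigroup both live in base, not in this module, so this
                    -- instance is an orphan.
                    instance Semigroup Bool where
                      (<>) = (&&)

                    -- A different package could just as well ship the conflicting orphan
                    --   instance Semigroup Bool where (<>) = (||)
                    -- A client that (transitively) imports only one of them silently gets
                    -- that meaning; importing both turns the overlap into an error at the
                    -- use site.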

                    Then GADTs and type families are dangerous for exactly the same reason.

            1. 2

              I always thought that was pretty clever.

              That was actually pretty stupid, on so many levels I won’t even go into the details.

              1. 23

                This isn’t Hacker News, if there are reasons it’s stupid, we go into why it’s stupid :P

                1. 0

                  I did that immediately in my second comment.

                  1. -1

                    This morning, I thought of saying the same thing about trivial dismissive comments in response to your comment here.

                    1. 3

                      I’m sorry, what? How is saying “I found a part of it confusing” in any way similar to “this is stupid on so many levels I won’t even go into the details”?

                      1. -5

                        I was following it up until the linear logic part; I think the first principles of LL could be explained better here.

                        This isn’t Hacker News, if there are reasons the first principles can be explained better, we go into how it can be explained better :p

                  2. 4

                    Oh yes, I think the same. So much that can go wrong with physical devices. Imagine someone lightly touching the arm and getting it out of alignment.

                    Note that you can just attach a USB mouse to an iOS device and click buttons with that. USB mice are easy to emulate in software.

                    1. 1

                      That’s one thing. Second, you could just run the whole app in an emulator. Or decompile it and just cut the approving routine from it. The whole

                      There was no programmatic way around this.

                      is just a failure of imagination. Of course, the best solution would be to run this through management and force them to remove this bullshit altogether.

                      1. 11

                        Or decompile it and just cut the approving routine from it.

                        Yes just reverse engineer a banking application and hope you don’t mess anything up. It’s just money, nothing serious could happen if you introduce bugs.

                        Also, from the post:

                        we couldn’t just run it in a simulator

                        Maybe don’t make so many assumptions about the author or their problems. For all you know, they exhausted all the reasonable options before resorting to this tomfoolery. Or this technique was used as a stopgap while coming up with a more robust solution. Or they simply didn’t have the time to do better. A business runs on money, not perfectly elegant solutions.

                        1. 6

                          You’re imagining this was a technical constraint on their side. It might have been a legal or contractual constraint.

                          1. 5

                            The approval was software-based, but in a closed system. The approval mechanism ran in a third-party iOS application, and we couldn’t just run it in a simulator.

                            I’m assuming there were considerably more constraints going on that made the “obvious” solutions inapplicable.

                            1. 4

                              Those are some pretty big “just”s!

                              Personally, since it’s presumably a capacitive touch screen, I’d attach a wire to the correct area of the screen, and hook it up via a relay to a wet sausage. It would require weekly maintenance, maybe more in warm weather.

                              1. 3

                                This kind of armchair quarterbacking isn’t helpful. You could have rephrased all of your points as questions to find out the situation: “I wonder why they didn’t run this through management…” Not only does this make your comment less hostile, it opens you up to learn (as @saturn said, it might have been a legal requirement). Assuming you haven’t changed your mind after learning the full context, you might be able to persuade people that an assumption they held isn’t actually true – that’s how you provide technical advice on something you didn’t build without pissing off the creators.

                          1. 9

                            Here’s an introduction and demo given at Strange Loop.

                            1. 6

                              I’m implementing an applicative validation library. I know there are many others but lately I’ve been enjoying implementing stuff that already exists just to understand it. Wish I had more original ideas but this is fun too.
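
                              The core of the idea is tiny. Here’s my own toy version (not any particular library’s API): it’s Either, except that <*> accumulates errors instead of stopping at the first one:

                              data Validation e a = Failure e | Success a
                                deriving Show

                              instance Functor (Validation e) where
                                fmap _ (Failure e) = Failure e
                                fmap f (Success a) = Success (f a)

                              instance Semigroup e => Applicative (Validation e) where
                                pure = Success
                                Failure e1 <*> Failure e2 = Failure (e1 <> e2)   -- both errors are kept
                                Failure e1 <*> Success _  = Failure e1
                                Success _  <*> Failure e2 = Failure e2
                                Success f  <*> Success a  = Success (f a)

                              data User = User { name :: String, age :: Int } deriving Show

                              validName :: String -> Validation [String] String
                              validName n | null n    = Failure ["name is empty"]
                                          | otherwise = Success n

                              validAge :: Int -> Validation [String] Int
                              validAge a | a < 0     = Failure ["age is negative"]
                                         | otherwise = Success a

                              -- User <$> validName "" <*> validAge (-1)
                              --   ==> Failure ["name is empty","age is negative"]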

                              1. 5

                                My experience with my own data is that almost all of it already has its own compressed file format optimised for the kind of data it is. E.g. JPEG for photos, MP4 for video, etc. Adding another layer of compression to that is not only a waste of time but often makes the dataset slightly bigger. Text, program binaries, and VM images could be compressed for archival storage but consider the fact that storage has gotten ridiculously cheap while your time has not. But if you really want to archive something just pick a format that’s been around for decades (gz, bzip2, xz) and call it a day.

                                1. 4

                                  “consider the fact that storage has gotten ridiculously cheap while your time has not.”

                                  This is true for middle class folks and up. Maybe working class folks without lots of bills. Anyone below them might be unable to afford extra storage or need to spend that money on necessities. The poverty rate in 2017 put that at 39.7 million Americans. Tricks like in the article might benefit them if they’re stretching out their existing storage assets.

                                  1. 4

                                    Consider that a 1TB hard drive costs $40 - $50. That’s $0.04 per gigabyte. Now say you value your time at $10 an hour. Even one minute spent archiving costs more than 1GB of extra space, and the space saved is unlikely to be that much. If you don’t have $40 - $50, then of course, you can’t buy more space. That doesn’t mean space isn’t cheaper than time. It’s just another example of how it’s expensive to be poor.
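
                                    The back-of-the-envelope version, with made-up round numbers in the same ballpark as above:

                                    costPerGB, costPerMinute :: Double
                                    costPerGB     = 45 / 1000   -- ~$45 for a 1 TB drive ≈ $0.045 per GB
                                    costPerMinute = 10 / 60     -- $10/hour ≈ $0.17 per minute

                                    -- costPerMinute / costPerGB ≈ 3.7, so one minute of effort has to save
                                    -- roughly 4 GB before the archiving time pays for itself.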

                                    1. 1

                                      One other thing to add to the analysis is that one can burn DVD’s while doing other things. Each one only accounts for the time to put one in, click some buttons, take one out, label it, and store it. That’s under a minute. Just noting this in case anyone is guessing about how much work it is.

                                      The time vs space cost still supports your point, though. Anyone that can easily drop $40-100 on something is way better off.

                                  2. 3

                                    Adding another layer of compression, especially if it’s the same algorithm, often won’t shrink the file size that much. However, it does make it very convenient to zip up hundreds of files from old projects or freelance work and have a single file to reason about.

                                    I would not be so cavalier with the archive file format. For me, it is far more important to ensure reliability and survivability of my data. Zipping up the files is just for convenience.

                                    1. 4

                                      That’s why there is tar, which, by itself, doesn’t do any compression.

                                      1. 1

                                        I was thinking that tar suffers from the same file concatenation issue that affects other solid container formats. But it looks like cpio can be used to extract a tarball, skipping any damaged sections.

                                      2. 1

                                        A benefit of zipping together files is that it makes transferring the zipped archive between machines/disks much easier and faster. Your computer will crawl at writing out a hundred small files, and one equally-sized file will be much faster.

                                    1. 6

                                      A month ago I switched to a remote job after working at a busy and social company for three years. There are many benefits in working remotely, but by far the most important drawback is that not having other people around for most of every working day is simply incredibly lonely.

                                      “Go out more”, as advertised in this article, seems to miss this point. Meeting people outside of work hours doesn’t solve the unnatural state during the default day. If you see a chimpanzee spending most of its days in an isolated part of the forest, something is probably wrong with it. I think the same holds for humans.

                                      I would say that the only really healthy approach to remote work is to rent a space with a number of other remote workers (not necessarily from the same company) that you can get to know on a personal level. Which is what I’m lucky enough to be doing now.

                                      1. 21

                                        Comments like this always remind me of something I read a while ago. Someone said: “Introverts think extroverts are different. Extroverts think introverts are broken.”

                                        Most people I’ve met are like you, but some of us aren’t. And I’ve been working mostly remotely, with short interludes, for more than a decade without ever experiencing loneliness.

                                        Totally agree with you that after work socializing is a poor substitute for those who need this social contact.

                                        1. 6

                                          I’m the same way. I’ve never been lonely, and I’ve worked remote for almost 4 years now. If anything, I probably know my coworkers better because there’s a bit more incentive to talk on slack since we are (mostly) all remote.

                                          1. 4

                                            I consider myself mostly an introvert. Still, being around people, even without interacting with them, almost seems like a necessity of life. I feel that especially for introverts, who are less likely to spontaneously catch up with people outside of work, the company of people during work is quite essential.

                                            Perhaps it is just a matter of getting used to a new situation, but so far it strikes me as odd that in discussions about remote work, this issue is usually brushed off with “get out more often!”.

                                            1. 6

                                              I feel similarly; I have always considered myself an introvert, and after 8 years of working in offices, I was really happy to accept a remote position.

                                              After two and a half years I started paying for a permanent desk in a coworking space in the city, because sitting in my home office in the suburbs all day was driving me crazy. I’d go out for a walk every day, usually have lunch out just to be out and about more, but it was still not enough. It’s been a year since I started working in the coworking space and every now and then I’ll have a reason to stay home (waiting for a package or getting something at home fixed), and it reminds me how lonely it is.

                                              (I’m glad not everyone has this experience! But I don’t think the divide is as clean cut as extrovert/introvert.)

                                              (edit: reading @technomancy’s comment below makes me realise the part where I’m working remotely across timezones — and indeed, in an “unpopular” timezone (UTC+10/11) — probably makes the biggest difference of all. I think it’d be much different if my team was in my timezone.)

                                          2. 7

                                            Why aren’t you interacting with your team? I’m on slack and have multiple video calls with teammates per day. It’s actually slightly better than being in the office, because the office creates the sense that if someone is not there (but working elsewhere) they’re inaccessible.

                                            1. 3

                                              I take issue with the assumption that multiple video calls a day is an appropriate level of interaction for all remote teams. It may work if you’re all working collaboratively on a small set of projects, but that definitely isn’t a given, and without the need for that level of collaboration and communication, it’d be hard to justify interrupting the work of the rest of your team to satisfy your personal social requirements.

                                              1. 2

                                                Sometimes that’s not possible. I remoted from China for about three months when I was visiting my partner out there. The team was distributed in Europe and the Americas. The language barrier and the huge timezone difference were very isolating. Definitely started to go a bit crazy spending so much time alone, and I’m definitely on the more introverted side.

                                                Now we’re in MXCD and it’s significantly better. Loads of (10+) co-working spaces within walking distance, whereas Guangzhou only had 2-3, with about an hour commute each way, and people are awake/online, so lots more slack based interaction.

                                                We also have an apartment dog, which I would recommend to anyone if you’re animal inclined. It’s really nice having a doggy friend when working from the apt, and she provides a good incentive to take regular breaks instead of powering through the day.

                                                1. 9

                                                  I remoted from China for about three months when I was visiting my partner out there. The team was distributed in Europe and the Americas.

                                                  Working with a team across a time zone difference is usually done remotely, but it’s such a vastly different thing from working remotely with people in the same or adjacent time zones that it’s misleading to draw conclusions about one from experience in the other.

                                                  1. 2

                                                    There are multiple additive factors. Radical timezone differences, difficulty communicating with locals, and being physically isolated (working from a home office) all significantly contribute to a feeling of general isolation. It’s not just about being physically separate from your team.

                                                    The move from Guangzhou to Mexico City highlighted for me how each of those factors contributed to that feeling of loneliness, and how, as each one of them was addressed, remote work needn’t be so isolating.

                                                    My experience mirrors @kivikakk, and @thev in that I’ve found co-working spaces can help significantly. They help not just with the feeling of physical isolation, but because they tend to attract people in a similar industry, and if you are overseas, because they tend to attract a higher percentage of expats who generally have similar in-country experiences, and who may speak your native language.

                                              2. 2

                                                Speak for yourself, pal. I work for a mostly-remote company, and we’re not all basement-dwelling losers, and as far as I know nobody’s secretly dying of loneliness. Back when I was commuting to a cube farm, I wasn’t there to hang out with friends, I was there to pay the rent.

                                                1. 1

                                                  My friends started a company together and they’re not all in one place so they spend their time on a Discord call together, which I always thought was kind of clever.

                                                  1. 1

                                                    I love spending days at home alone. I feel much more peaceful and balanced. Be careful not to generalize from your own experience.

                                                  1. 1

                                                      I recently paired with someone who had shot themselves in the foot with React Hooks. What I observed is this: hooks are much harder to form a mental model around. They seem like magic, so this person was using them in ways that don’t make sense, expecting this to work out magically. Once we started reasoning about a hook-owning component as a class component, this confusion cleared up. Classes are perhaps easier to reason about because there are many analogues beyond React. That isn’t true of hooks. Most people, including myself, are surprised to see that something like that exists. They’re of dubious necessity and they seem counterproductive to clear thinking.

                                                    1. 1

                                                      Firstly, I agree with the fundamental point here that using a familiar word for a new concept may be misleading. It privileges one analogy over others. It may even force you to unlearn an immediate misconception before you can truly learn the concept, and even then you may continue to mentally trip over it afterwards. This phenomenon is frequently used on purpose in advertising or propaganda.

                                                      However, with respect to Functor vs. Mappable:

                                                      class Functor f where
                                                          fmap :: (a -> b) -> f a -> f b
                                                      

                                                      I mean, enough said, right?

                                                      1. 4

                                                        For people coming from more mainstream languages, “map” has been learned as an iteration over a collection. That is not the correct base intuition for Functor, though incidentally map does iterate over instances that are collections. Many common instances are not collections, and the name Mappable may lead to confusion with iteration.
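
                                                        To illustrate with a few toy examples of mine (not from the article): plenty of everyday Functors aren’t containers at all, and there fmap means “apply the function inside the structure”, not “loop over elements”:

                                                        import Data.Char (toUpper)

                                                        addTax :: Maybe Double -> Maybe Double             -- zero-or-one value, not a list
                                                        addTax = fmap (* 1.2)

                                                        shout :: IO String                                 -- an action: nothing to iterate over
                                                        shout = fmap (map toUpper) getLine

                                                        double :: Either String Int -> Either String Int   -- an error-or-result value
                                                        double = fmap (* 2)

                                                        describe :: Int -> String                          -- even functions are Functors;
                                                        describe = fmap show (+ 1)                         -- fmap here is just composition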

                                                        1. 2

                                                          My point is you can’t logically make that argument and then call the function fmap. I mean, pick a lane!

                                                      1. 10

                                                        Interestingly enough, FPers used “functor” because it was a familiar word from math for a similarish concept. But mathematicians borrowed it from “function words” in linguistics, which was a similarish concept, too! It’s only a “pointer” name if you don’t have that background, and otherwise it leverages your intuition as much as “class” does.

                                                        If we want to avoid familiarity, though, why “functor” at all? “Flurp” has even fewer collisions, and is shorter to boot!

                                                        1. 2

                                                          The point is that a movement to rename Functor to Mappable is misled because these connections lead to more confusion than clarity. That the author doesn’t make this argument against the name Functor doesn’t invalidate the argument. I can’t tell if Flurp is throw-away sarcasm, but why not? Many symbols in APL seem arbitrary. I would guess that some are not. Most of the time, I can’t tell the difference. But (seemingly) arbitrary symbols are no harder to learn. So whether it’s Flurp or a name like Functor, which is just as inscrutable to many people, the name “doesn’t matter.”

                                                          1. 2

                                                            Names don’t matter but consistency matters, so please no Flurp!

                                                            1. 1

                                                              Many symbols in APL seem arbitrary.

                                                              The symbols perhaps (though even there: many of them are intentionally suggestive) but not the names of the symbols. J’s vocab page:

                                                              https://www.jsoftware.com/help/dictionary/vocabul.htm

                                                              “Tally”, “Ravel”, “Stitch”, “Cut”, etc….

                                                              That human beings can attach meaning to arbitrary symbols in tight contexts (as happens everywhere in math) is not an argument that arbitrary names are just as good as thoughtfully chosen ones. These are basically unrelated phenomena.

                                                          1. 2

                                                            I love this blog, the interactive animations make so many concepts so much easier to understand. I remember reading the post that explains how each post is made, and I was just in awe at the amount of work that goes into each of those. In comparison, I’m having a hard time writing up a couple hundred words every few weeks.

                                                            1. 1

                                                              Yeah it’s a really high quality blog!

                                                            1. 5

                                                              People who “just ignore comments because they get outdated” never seem to see that they’re the reason comments get outdated.

                                                              I try to comment the shit out of my code, because otherwise how will I be able to read it back after 12 months of bitrot and working on other things?

                                                              1. 4

                                                                how will I be able to read it back after 12 months of bitrot and working on other things?

                                                                By…

                                                                • Using meaningful variable names and function names. (Yup, suffers from the same problem as comments, in that what the function does may change. But it tends to slap you in the face every time you look at it.)

                                                                • Using short simple functions whose intent and functionality is simple and obvious. (Use Humble Function / Humble Class pattern at the higher levels)

                                                                • Use pre-condition, post-condition and invariant (especially class invariant) asserts. Think of them as “executable comments” (see the sketch below).

                                                                • Use unit testing where each test case sets up and checks and tears down a single behaviour. Think of it as executable documentation.

                                                                ie. Do good design.
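
                                                                On the “executable comments” point, a toy sketch of what I mean (in Haskell, say, using GHC’s Control.Exception.assert; the function and its contract are made up, and the asserts can be compiled out with -fignore-asserts):

                                                                import Control.Exception (assert)

                                                                -- Pre- and post-conditions written as asserts: they state the contract
                                                                -- and actually fire during development if it stops being true.
                                                                withdraw :: Int -> Int -> Int
                                                                withdraw balance amount =
                                                                  assert (amount > 0 && amount <= balance) $   -- pre-condition
                                                                    let balance' = balance - amount
                                                                    in  assert (balance' >= 0) balance'        -- post-condition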

                                                                1. 5

                                                                  This is good advice, and it works in a lot of cases, but there are definitely cases where it just totally breaks down. The most common such case, in my experience, is in code whose goal is to be very fast. It is very often the case that better performance comes at the cost of readable code. (In some circumstances, if you’re especially clever, you can write code that is both fast and readable. But I personally fail regularly at this, despite my efforts.) In cases like these, I compensate by writing lots of comments explaining the technique.

                                                                  1. 2

                                                                    The most common such case, in my experience, is in code whose goal is to be very fast.

                                                                    This story went by recently… https://lobste.rs/s/drl28y/john_carmack_on_parallel I wish I could up vote it twice….

                                                                    I note Walter Bright the D language guy retweeted it as well.

                                                                    It’s a discipline I want to get better at.

                                                                    Because the hard rule under fast code is “no code is faster than no code”…. so odds on if it’s more complicated… it’s slower.

                                                                    You really need to prove it is worth the complexity.

                                                                    I have always loved this tale…

                                                                    From http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.7173&rep=rep1&type=pdf

                                                                    2 The Oberon Compiler

                                                                    Niklaus Wirth is widely known as the creator of several programming languages. What is less well known is that he also personally wrote several compilers, including the first single-pass compiler for Modula-2 that later evolved into the initial compiler for Oberon 5. These compilers distinguished themselves by their particularly simple design – they didn’t aspire to tickle the last possible bit of achievable performance out of a piece of code, but aimed to provide adequate code quality at a price-point of reasonable compilation speed and compiler size. At the time, this was in stark contrast to almost all other research in compilers, which generally had been characterized by an enormous and ever-increasing complexity of optimizations to the detriment of compilation speed and overall compiler size.

                                                                    In order to find the optimal cost/benefit ratio, Wirth used a highly intuitive metric, the origin of which is unknown to me but that may very well be Wirth’s own invention. He used the compiler’s self-compilation speed as a measure of the compiler’s quality. Considering that Wirth’s compilers were written in the languages they compiled, and that compilers are substantial and non-trivial pieces of software in their own right, this introduced a highly practical benchmark that directly contested a compiler’s complexity against its performance. Under the self-compilation speed benchmark, only those optimizations were allowed to be incorporated into a compiler that accelerated it by so much that the intrinsic cost of the new code addition was fully compensated.

                                                                    And true to his quest for simplicity, Wirth continuously kept improving his compilers according to this metric, even if this meant throwing away a perfectly workable, albeit more complex solution. I still vividly remember the day that Wirth decided to replace the elegant data structure used in the compiler’s symbol table handler by a mundane linear list. In the original compiler, the objects in the symbol table had been sorted in a tree data structure (in identifier lexical order) for fast access, with a separate linear list representing their declaration order. One day Wirth decided that there really weren’t enough objects in a typical scope to make the sorted tree cost-effective. All of us Ph.D. students were horrified: it had taken time to implement the sorted tree, the solution was elegant, and it worked well – so why would one want to throw it away and replace it by something simpler, and even worse, something as prosaic as a linear list? But of course, Wirth was right, and the simplified compiler was both smaller and faster than its predecessor.

                                                                    All that said, if you want to see well commented code… it’s a lost art.

                                                                    Look at assembler language code from the glory days of assembler programming.

                                                                    There it’s plain obvious that say an “add” was happening… the comments were like a blow by blow essay about the why.

                                                                    1. 2

                                                                      Because the hard rule under fast code is “no code is faster than no code”…. so odds on if it’s more complicated… it’s slower.

                                                                      You really need to prove it is worth the complexity.

                                                                      Yes, that is the premise of my comment. :-) In my experience, though, it’s the reverse: faster code tends to be more complicated. See my other comment where I compared the “simple” version of memchr with the fast version. The size difference between them is approximately three orders of magnitude. But oh boy, it is worth it. (Note that the graphs are in log scale.)

                                                                      Now, the benefit here is that using the optimized implementation has roughly the same complexity as implementing the naive version. So the complexity is almost entirely contained. But it’s paid somewhere.

                                                                      1. 1

                                                                        Fortunately for sanity that is a rare case…. and is damn close to assembler code even in Rust since it is mostly asm primitives.

                                                                        In fact, I almost wonder whether adding a rust compiler into the mix is worth it? Or would a pure asm implementation (probably copy pasted out of the Intel reference docs) do better in simplicity / speed?

                                                                        1. 2

                                                                          I find the Rust code easier to read than asm personally, but that’s my own personal bias. glibc’s memchr is written in asm, and its performance is roughly comparable (see the red and purple bars in the linked graph). It also simplifies compilation: all I need is a Rust compiler, instead of also needing an assembler.

                                                                          But in my work, these cases actually aren’t rare:

                                                                          I’m not particularly proud of the last three. I think they could all probably be simpler. But they have a crap ton of unit tests. (N.B. I am not claiming that these things have irreducible complexity. I’m just saying that I haven’t been smart enough to make them simpler, despite trying.)

                                                                          1. 2

                                                                            You’re clearly on that peculiar and difficult (and hopefully not thankless) end of the spectrum… writing the core libraries for the rest of us mortals to piggyback on…

                                                                            I do note a couple of things browsing about that code…

                                                                            • SIMD/AVX primitives were devised by Satan. On the Rusty Russell scale of API Goodness they are (if you’re lucky) a low 3 out of 10, where the documentation for each one has a large bucket of fine, fine print. Yup, that stuff is gnarly.
                                                                            • Some of it, like the ripgrep stuff, quietly has a huge number of function points… (before context, after context, …) that just add up and up. The only guy I have seen who has a really good handle on combinatorial requirements explosions is Andrei Alexandrescu. I really think he is on to something with his Design by Introspection; I’d be curious to see his approach used in Rust and see how well Rust copes with it.
                                                                            • Some of the algorithmic stuff like the NFA/DFA stuff is just plain hairy at an algorithmic level.

                                                                            In terms of relatively low comment to code ratio doing very hairy stuff, Andrei Alexandrescu’s Checkedint library is an interesting example…https://github.com/dlang/phobos/blob/master/std/experimental/checkedint.d

                                                                            The core idea is the compiler knows everything about the types and variables you’re using…. You should be able to ask it at compile time (in simple and understandable ways) and handle variation at compile time.

                                                                            1. 1

                                                                              Thanks for the links. I’ll check out Andrei’s talk and see if I can steal some good ideas. :-)

                                                                    2. 1

                                                                      This problem regularly happened in high-assurance when writing low-level code. Their trick was to write an English description of what it did, a formal spec that made it more precise, and equivalent implementation. In parts of Karger’s VMM, they were even using assembly. They closely scrutinized and tested it against the higher-level spec, though.

                                                                      So, translating that to mainstream development, you might have two versions of the code: one that makes the algorithm very clear with pre-/post-conditions and invariants, and the highly-optimized version. Property-based testing and fuzzing can then establish equivalence while catching errors in either. That should fit the string processing libraries I’ve seen you post.

                                                                      Now, you do have an extra copy to update. Updating the simpler code should only take a tiny fraction of time of optimization, though.

                                                                      1. 2

                                                                        Now, you do have an extra copy to update. Updating the simpler code should only take a tiny fraction of time of optimization, though.

                                                                        Hah, yes. My favorite example of this is memchr. In Rust, it’s:

                                                                        haystack.iter().position(|&b| b == needle)
                                                                        

                                                                        But the SSE2 accelerated version is this big pile of goop — At least it’s written in a high level language though. glibc implements this in Assembly.

                                                                        And then double it for the AVX version. This could be reduced somewhat by using a platform independent abstraction over SIMD vectors, since the SIMD operations used here are fairly simple, but you still end up with at least one copy of goop.

                                                                        But the simple/naive version is just one line of code, eminently readable, pretty fast on its own and portable across every supported architecture.

                                                                        The things we sacrifice for performance…

                                                                        1. 1

                                                                          Yeah, those are good examples. The sequential version can be used for the parallel versions since they’re supposed to have equivalent output (albeit maybe not same order). If you wanted, might also use a parallel language or pseudocode formalism for such things.

                                                                          1. 1

                                                                            The outputs should be equivalent, including order.

                                                                            But yeah, if I had a deeper intrinsic interest in formalism, I might pursue that, but I also have a strong interest in shipping and a TODO list a few years long already. :-) Thorough unit testing is good enough for now.

                                                                            1. 1

                                                                            Yeah, that should be enough for your situation. I’d push the other stuff harder on the commercial side.

                                                                            As far as ordering goes, I worded it that way because of the difference between parallelism and concurrency. Some languages/notations with strengths in one have weaknesses in the other, such as ordering. Just trying to be accurate.

                                                                    3. 1

                                                                  So, if I were a true scotsman (er, did good design), it’d be sweet, right?

                                                                      Of course, all the code other people write and I have to edit, and I wish they’d documented better, is that because they’re not “doing good design” too? Because I find the only time I can jump quickly in to other people’s code is when it’s both documented and well-designed.

                                                                      And if the explanation of the what/how/why of a thing is longer than can be compressed into a singleCamelCasedName of sensible length?

                                                                      You shouldn’t have to pollute whatever namespace in which you’re focused by breaking things up into functions that are never used elsewhere just to make the steps of a process clearer, when you could just throw in some documentation / explanation instead.

                                                                      1. 2

                                                                    I’m not saying good documentation isn’t a nice-to-have…. I’m saying given the (admittedly unpleasant) choice of good design xor good documentation, I’ll choose good design every time.

                                                                        Why? The compiler and the CPU keeps good design honest, nothing forces the commenter to be honest.

                                                                        Yes, it’s partly a “no true scotsman” argument as I have come to define “Good Design” as “How little I need to read and understand to make a beneficial change”.

                                                                    In the realm of multi-megaline code bases this is a fairly compelling criterion, as I have met no one who can keep such a monster in their head.

                                                                        This sums up my relative priorities… http://sweng.the-davies.net/Home/rustys-api-design-manifesto

                                                                    4. 3

                                                                      I think the counterpoint is that if you can ignore a comment, then why have the comment to begin with? When a comment is being ignored, it means that the information was not needed or that the same information can be had from a different source, presumably by reading the code. The latter would be an example of self-documenting code.

                                                                      1. 2

                                                                        Code can’t say ain’t, to misquote Sol Worth.

                                                                        Code can only say what the authors decided to do, not what they chose to not do, and not why they chose to not do what they didn’t do. Maybe there’s no real reason one path was chosen above another path. Maybe the other path contains dragons. Maybe the other path used to contain dragons, but those dragons have been slain, or have moved on, or are worth fighting now because this choice blocks some other progress.

                                                                        You can advocate for code documentation, but documentation is just comments in a different file, as far as the idea that comments can go out of date is concerned.

                                                                    1. 3

                                                                      I’m not sure this is the best example but it’s one I’ve been debating with myself lately: Edward Kmett’s recursion-schemes library. There are very few comments (even doc comments) in the source. And I think the source is clearly written (to the extent that I can judge this as an advanced beginner, junior intermediate Haskell programmer). But the clarity depends on a familiarity with the recursion schemes literature and a working knowledge of Haskell. So maybe the “comments” are externalized as papers. I wouldn’t be able to use the library without that context. But, once I have that context, the code seems like the clearest expression of the concepts. Comments explaining what the code does would be less expressive than the code or would start to resemble one of the papers.
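
                                                                      To give a flavour of what I mean, here’s a stripped-down version of the central idea (my own toy definitions, not the library’s actual code):

                                                                      {-# LANGUAGE DeriveFunctor #-}

                                                                      newtype Fix f = Fix { unFix :: f (Fix f) }

                                                                      -- A catamorphism: fold a recursive structure given an algebra for one layer.
                                                                      cata :: Functor f => (f a -> a) -> Fix f -> a
                                                                      cata alg = alg . fmap (cata alg) . unFix

                                                                      -- Example: tiny arithmetic expressions as the fixed point of a base functor.
                                                                      data ExprF r = Lit Int | Add r r deriving Functor
                                                                      type Expr = Fix ExprF

                                                                      eval :: Expr -> Int
                                                                      eval = cata $ \e -> case e of
                                                                        Lit n   -> n
                                                                        Add a b -> a + b

                                                                      -- eval (Fix (Add (Fix (Lit 1)) (Fix (Lit 2))))  ==>  3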

                                                                      I’m not sure if this satisfies as “real world” - it is a public codebase and programmers do use this library to do things. Mostly I’m skeptical that there can be any consensus about what is and isn’t self-documenting. In my career (which isn’t so long yet), I’ve mostly worked on maintaining and extending other people’s code. Much of it wasn’t very clearly written. But even when there are comments, I’ve found that reading the code is the most straightforward way to understand what’s going on. There are occasional exceptions, where there is an essential context that the code doesn’t express. And it’s very frustrating when this isn’t noted in some way. But the self-documenting argument always seemed to me to be a preference for/against a certain style of reading code more than a quality of the code itself.

                                                                      1. 26

                                                                        While spreading yourself overly thin is definitely a bad idea, this article is basically about the generalist vs specialist debate, and goes all-in on the specialist side.

                                                                        While there is value in being very knowledgeable in a certain domain or technology, it also makes you less flexible with regards to possible employers or different projects within a single company. I’ve worked with plenty of developers who refuse to learn a new technology stack, because they think they’ll start as beginners again and it will hamper their career growth (more specifically, they fear that they will not grow their income at the same rate they would if they continued working with the same thing).

                                                                        However, the technology landscape changes, perhaps not overnight but definitely over the course of an entire career. Sticking to what you’re experienced at may lead to a dead end some years in the future.

                                                                        Additionally, I believe that many of the important skills that make a developer more valuable are not related to using specific technologies or tools, but rather in generic skills that are transferable across languages, tools and frameworks.

                                                                        Using and learning new technologies might also prevent you from forming tunnel vision. If all you have is classes and inheritance, everything starts looking like it should be an object, for example.

                                                                        You probably shouldn’t use a different language for every project, the same way you shouldn’t use a different Javascript framework for every project just because it’s the hot new thing. But I also think you shouldn’t bet the house on a single language and/or framework just because you consider yourself an expert in that niche.

                                                                        1. 10

                                                                          While spreading yourself overly thin is definitely a bad idea, this article is basically about the generalist vs specialist debate, and goes all-in on the specialist side.

                                                                          I don’t agree. I think the article is trying to signpost a danger for new folks coming to our field - that is the temptation to feel like I MUST LEARN ALL THE THINGS! rather than realizing that there is tremendous depth at play here and that while having a broad skillset is good, you MUST be able to go deep on some small subset of things, whether or not you’re a specialist or a generalist.

                                                                          I’d also argue that being a generalist can make it damn hard to stay employed, because the entire recruiting machine wants to plug discrete shapes into discretely shaped holes, and if you don’t fit that model you’re gonna have to work 10 times as hard to make it past said machine rather than working with it.

                                                                          That’s my experience, anyway.

                                                                          1. 3

                                                                            Yes, recruiting machines want developers with specific boxes checked on their resume. But I generally find it pretty easy to write a resume for a specific position highlighting the stuff I’ve done they want to hear while only mentioning others in passing.

                                                                            There is also a shortage of qualified workers in our field, at least for now, so as soon as you can match a few of the qualities they’re looking for, you can land an interview, and from there it’s easier to convince a hiring manager that you’re a capable worker.

                                                                            1. 2

                                                                              I think it’s possible we’re talking about two different things here.

                                                                              Landing a job is one set of skills - thriving in a job is quite another.

                                                                              I’m talking about the latter.

                                                                              1. 2

                                                                                 I’m replying specifically to the remark in your parent comment about recruitment looking for specific profile features to fill certain positions.

                                                                            2. 3

                                                                              Breadth & depth are not in contradiction, once you get beyond a very beginner level – particularly in terms of the kind of tech that gets used in our industry, since programming language design for ‘general purpose’ languages & best practices for ‘enterprise’ systems are extremely intellectually incestuous. If you know two C-like languages, you can read and write fifty more without actually learning them, or learn them as well in an afternoon as a beginner could in two months.

                                                                              It’s possible to go deep into a domain that doesn’t have broad applicability, but you have to look hard for such a thing. Extremely deep and seemingly-disconnected subjects like cryptography & type / category theory are having a big impact, and weird and previously-obscure languages like haskell are starting to influence how normies write their java.

                                                                              The strangest statement here is where OP says that “even experienced developers don’t know that much”, for a list of technologies that looks like what somebody would put down as the minimum requirements for applying for a junior web dev position. It’s one thing to go all-in on specialist knowledge, but it’s another thing entirely to define the domain so narrowly that everybody with a bachelor’s degree and a vague interest counts as expert.

                                                                              Since this article is aimed at junior devs & folks who are still in college, let me be clear: at this stage of development, you lack the background to determine what is and is not likely to be relevant to future tasks and projects, so your job (first and foremost) is to learn everything. Doing ‘professional’ work with the kind of attenuated understanding that comes from only studying things that are obviously applicable to outsiders is the source of many problems, ranging from merely wasteful and stupid to actually dangerous.

                                                                              1. 1

                                                                                Since this article is aimed at junior devs & folks who are still in college, let me be clear: at this stage of development, you lack the background to determine what is and is not likely to be relevant to future tasks and projects, so your job (first and foremost) is to learn everything. Doing ‘professional’ work with the kind of attenuated understanding that comes from only studying things that are obviously applicable to outsiders is the source of many problems, ranging from merely wasteful and stupid to actually dangerous.

                                                                                I don’t disagree, but I also think it’s super important to learn how to learn the necessary skills to accomplish your goals.

                                                                                Maybe that’s key - create goals for yourself. Build things. Meaningful things, not just variations of Hello World or examples of the particular pony tricks your new pet language can do well.

                                                                                In learning what you need to actually build projects that you can be proud of, you’ll achieve many goals people have stated here - learning the important stuff that’s not language or tech driven, and learning how to learn in a focused way to achieve a particular task / goal, and also getting used to the idea of having projects for yourself that you design, build, test and “ship” for whatever values of “ship” you’re comfortable with - maybe putting them on your github page for example.

                                                                                1. 2

                                                                                  not just variations of Hello World or examples of the particular pony tricks your new pet language can do well.

                                                                                  This is a good point. You don’t learn much by catering to a toolset’s strengths.

                                                                                  In college (and, to a lesser extent, as a junior dev – and to a greater extent, before college, if you are lucky enough to get coding that early) you’ve got time to do things the hard way, and there are a lot of lessons that are best learned by doing things the hard way. So, this is the time to intentionally use the wrong tool for the job, be perverse, and jump into projects that are way above your skill level and way outside your comfort zone. The more you do this, the better you’ll become at not being discouraged by technical and social hurdles, & the bigger your comfort zone will become.

                                                                                  A pattern I see with colleagues who learned to code in college is that they’re very precious about sticking to their preferred tools & idioms. They did all their exploration in four years, and ever since, they’ve been under pressure to perform, so new tools and techniques are not just alien but represent a (only mostly imaginary) threat to their livelihoods. They become hyper-specialists. They have never learned the lessons that can only be gained from doing the maximally-wrong thing, because they have never been secure enough to be willing to waste their time, and as a result they’re stuck in a lower grade of expertise than they could attain.

                                                                                  The case I typically make for being a generalist, which all these things feed back into, is that the utility of knowledge is weighted by rarity, and rarity is affected by expected utility. Utility and expected utility have little to do with each other outside of situations where nothing is being discovered – if you’re a code monkey writing trivial mashups of big existing libraries, insulated from users, it’s possible to do your work for years and never be surprised or need to learn a new concept, but if you’re doing non-trivial work then you will frequently run into problems that you couldn’t have predicted, and those problems will be ones whose best solutions rely upon knowledge that you couldn’t have previously predicted you would have needed. General knowledge (like a background in common data structures and algorithms) is broad enough to solve a lot of problems, but it’s also standardized – everybody with a CS degree has at least vague familiarity with a bunch of sorting algorithms & their time complexity, can implement a linked list or a binary tree, knows that tree rebalancing is a thing and can look up how to do it, has some basic graph theory under their belt & can write a greedy graph traversal algorithm, etc. So, your value as a developer (whether you are somebody’s employee or just writing your own projects) is based around how you’ve deviated from the norm: if you know MUMPS or J or Idris, or you can write a bootloader or a prover or a compiler, or you know category theory or the fast Fourier transform or the fast inverse square root, or you can write professional-quality prose or can translate between Russian and Korean.

                                                                                  It’s a high bar to set, for every programmer to know enough things that nobody else knows as to be sure, statistically, that at least a couple of them will unexpectedly come in handy. But, we don’t need as many professional programmers as we have, and folks who pass this bar are going to be a lot more useful (not in the ‘10x programmer’ sense of straight linear productivity but in the sense that some obvious-but-ultimately-bad plans will never be attempted). Anyway, such folks have more opportunities & are less replaceable, so I recommend everybody endeavour to become such a person.

                                                                                  1. 1

                                                                                    The case I typically make for being a generalist, which all these things feed back into, is that the utility of knowledge is weighted by rarity, and rarity is affected by expected utility. Utility and expected utility have little to do with each other outside of situations where nothing is being discovered – if you’re a code monkey writing trivial mashups of big existing libraries, insulated from users, it’s possible to do your work for years and never be surprised or need to learn a new concept

                                                                                    This is the danger of over-specialization. You don’t learn how to learn quickly and effectively, and never gain that intellectual suppleness which will allow you to adapt to adversity and new situations.

                                                                                    It’s a high bar to set, for every programmer to know enough things that nobody else knows as to be sure, statistically, that at least a couple of them will unexpectedly come in handy. But, we don’t need as many professional programmers as we have, and folks who pass this bar are going to be a lot more useful (not in the ‘10x programmer’ sense of straight linear productivity but in the sense that some obvious-but-ultimately-bad plans will never be attempted). Anyway, such folks have more opportunities & are less replaceable, so I recommend everybody endeavour to become such a person.

                                                                                    So there’s how you build your skill set and how you market yourself. These do not necessarily need to correlate closely :)

                                                                                    I am definitely in favor of priming oneself to be a generalist, but I also think it’s important to be able to market yourself to at least a particular broad area of our industry.

                                                                                    For instance, I tend towards “Devops” work which is a CRAPPY designation for anything involving infrastructure and not generally solving hard computer science problems, but still code-centric.

                                                                                    So, yes build a generalist’s skill set, but be prepared to sail your career ship in a particular direction or you may find that there are no winds to propel you.

                                                                                    1. 2

                                                                                      Sure. Few companies will give roles to junior devs that entail broad responsibilities. I don’t think that means you need to hide those skills. The broader your skillset, the more likely it is that any given narrow specialization will fall within your wheelhouse.

                                                                                      That said, I’ve been at the same place since my internship, so maybe the market for candidates is allergic to the ‘overqualified’ & I’m just unaware.

                                                                              2. 2

                                                                                you MUST be able to go deep on some small subset of things, whether or not you’re a specialist or a generalist.

                                                                                Where do you find this “MUST”? As far as I can tell, speaking at such a level of generality, the only must is what is needed to do a job, solve a problem, achieve an aim, etc. One needs to be as specialized as the circumstance requires. But I’m struggling to make the leap from that to some categorical imperative of depth, your MUST.

                                                                                1. 1

                                                                                  And how might one attain the level of depth needed to do a given job if one spends one’s time chasing bright shiny new languages and tools?

                                                                                  You’re right, we’re speaking in generalities, and I’m sorry my use of the word MUST seems to have triggered you, but my general point still holds, even if you downgrade the word in question to, say, a lowercase ‘need to’?

                                                                                  1. 1

                                                                                    Haha, I’m not triggered. I think you’re setting up a false dilemma. If I have a job that requires a superficial understanding of a bunch of tools, then I would not be “thriving” in my job if I was trying to cultivate specialization in a few of those tools. I think the real distinction is between stuff that is necessary to do the job and stuff that isn’t. It’s not between depth and breadth. For example, I was recently talking to a friend who just founded a consulting company. In the past few months, he’s worked with several new languages, none of which he plans to specialize in. Working with those languages is what he needed to do to get the job done. Beyond getting the job done, learning more about those languages has diminishing returns. Specializing in anything is only a good idea if the specialization has utility. I think a better principle is to invest time in things proportional to their utility. That really depends on the context and could result in specialization or generalization.

                                                                              3. 2

                                                                                many of the important skills that make a developer more valuable are not related to using specific technologies or tools, but rather in generic skills that are transferable across languages, tools and frameworks.

                                                                                This ^.

                                                                                Or, as they say, “learn weightier shit”.

                                                                                1. 1

                                                                                  This is something different again - you can’t learn “the weightier shit” unless you choose a language or two and stick there. Humans, even the most intelligent ones, can only learn so many things at once.

                                                                                  So, my point stands. Choose a subset of tools and go deep. Go deep means, in addition to mastery of that particular tool or language, that you learn “the weighty shit” :)

                                                                                  1. 2

                                                                                    Oh no, no, no, it’s the other “weightier” stuff. It’s things that make sense in engineering in general. Like, a function that does one thing and doesn’t affect surroundings in unexpected ways is as good a thing in Java as it is in Scheme. Or, say, the idea that you need a queue in between producers and consumers (sketched below). Or various implications of forks vs. threads vs. polling loops. Or understanding why you can’t parse HTML with a regex.

                                                                                    Knowledge like this weighs more than knowing how to sort an array in your current language/framework.
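
                                                                                    As a minimal illustration of the producer/consumer point (a sketch only, assuming the standard stm package’s TBQueue): a bounded queue sits between the two sides, decoupling their speeds and giving backpressure when the consumer falls behind.

                                                                                        import Control.Concurrent (forkIO, threadDelay)
                                                                                        import Control.Concurrent.STM (atomically)
                                                                                        import Control.Concurrent.STM.TBQueue (newTBQueueIO, readTBQueue, writeTBQueue)
                                                                                        import Control.Monad (forM_, forever)

                                                                                        main :: IO ()
                                                                                        main = do
                                                                                          -- Bounded queue: the producer blocks once it gets 16 items ahead.
                                                                                          queue <- newTBQueueIO 16
                                                                                          -- Producer: pushes an unbounded stream of numbers.
                                                                                          _ <- forkIO $ forM_ [1 :: Int ..] $ \n -> atomically (writeTBQueue queue n)
                                                                                          -- Consumer: drains the queue more slowly; the bound applies backpressure.
                                                                                          forever $ do
                                                                                            n <- atomically (readTBQueue queue)
                                                                                            print n
                                                                                            threadDelay 100000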

                                                                              1. 2

                                                                                I’m not sure that focusing on a few items on the list is any better than attempting to address them all. The preface to How to Design Programs discusses an idea of “transferable skills”, problem solving strategies which aren’t domain specific. Tools come and go, but these skills remain applicable. That’s the kind of thing I’m interested in. The quantity of things I’m learning about is no indication of how useful those things are, how much more effective they make me. If I’m learning a lot of things that make me much more effective, it would be counterproductive to cut back in observation of a rule about quantity. I’m not sure there’s anything inherently better about focusing on one language for a year or focusing on 8 languages for a year. The outcome can be shallow in both cases. The important question is how fundamental, how transferable are the skills I’m building using the tools I’m working with.

                                                                                1. 11

                                                                                  The first chapter of Structure and Interpretation of Computer Programs, especially section 1.2, contains a great discussion of these ideas. The authors distinguish between recursive procedures and recursive processes. “Recursive procedure” describes how a function is defined (in terms of itself). “Recursive process” describes how the definition expands when it’s evaluated. Recursive processes are often slow because they expand into tree structures where the same sub-expression is computed more than once. Why I’m mentioning it here is that SICP shows that a more performant implementation does not depend on imperative constructs like for-loops. You can define recursive procedures which generate what the authors call “iterative processes”, similar in resource consumption to for-loops. They make it very clear why some implementations are faster than others and that it’s more nuanced than recursion vs. for-loops.
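
                                                                                  A minimal Haskell sketch of the distinction (the names are illustrative, not from SICP): both definitions below are recursive procedures, but only the first generates a tree-recursive process.

                                                                                      -- Recursive procedure, (tree-)recursive process: the same sub-expressions
                                                                                      -- (e.g. fibTree (n - 2)) are recomputed many times, so this is exponential.
                                                                                      fibTree :: Integer -> Integer
                                                                                      fibTree 0 = 0
                                                                                      fibTree 1 = 1
                                                                                      fibTree n = fibTree (n - 1) + fibTree (n - 2)

                                                                                      -- Still a recursive procedure, but it generates an iterative process:
                                                                                      -- the two accumulators carry all the state, so each step does constant
                                                                                      -- work, much like a for-loop.
                                                                                      fibIter :: Integer -> Integer
                                                                                      fibIter n = go n 0 1
                                                                                        where
                                                                                          go 0 a _ = a
                                                                                          go k a b = go (k - 1) b (a + b)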

                                                                                  1. 11

                                                                                    Pretty standard “why Haskell?” post.

                                                                                    That opening sentence is dense enough that you could spend many hours researching the history and merits of things like “purely functional”, “non-strict semantics”, and “strong static typing”.

                                                                                    We get it, Haskell programmers are smarter than everyone else.

                                                                                    Reading between the lines, it sounds like the team actually has more experience on infrastructure projects than PL projects. They hand wave away writing infrastructure glue as trivial, but foresee themselves drowning in tech debt implementing the language interpretation in C++, Java, Swift, etc. That tells me they’re confident in writing infrastructure glue, but less confident writing PL code.

                                                                                    I may be totally off base, but that’s the vibe I got from reading the post.

                                                                                    1. 17

                                                                                      We get it, Haskell programmers are smarter than everyone else.

                                                                                      The odd thing is, I’ve quite often heard people say they use Haskell because they’re not smart enough to ship things reliably in most other languages. I empathise.

                                                                                      1. 19

                                                                                        I, for one, consider myself too stupid to not use Haskell.

                                                                                        1. 5

                                                                                          I, for one, consider myself too stupid to use Haskell.

                                                                                          An abstraction too hard to understand is just magic.

                                                                                          1. 8

                                                                                            You aren’t forced to use abstractions that are too hard to understand. You can go a very long way — all the way, in fact — just using simple things in Haskell.

                                                                                        2. 11

                                                                                          I prefer OCaml rather than Haskell but the point is the same. I feel much more relaxed when I write OCaml code compared to C (or C++) code. I don’t have the same anxiety of crashes and corruption issues.

                                                                                              In the same vein, I’m currently on a mixed codebase where I’m not able to test the code in actual setups, and I’m not at work all 5 days of the week at the moment. I have some bugfixes to do, so I write the code, push to a branch, and hope for the best. I’m much more confident in my fixes for the OCaml code, and they usually do just work. The fixes to the C and C++ codebases, however, will likely be partial or trigger issues elsewhere instead.

                                                                                          1. 5

                                                                                                I have a similar experience with a mixed C#/F# codebase. It is not that the F# code does not have bugs – I create plenty of bugs – but in F# there’s a greater proportion of interesting bugs.

                                                                                          2. 9

                                                                                            I thought the same of myself, but (to add another perspective) it turns out I’m also not smart enough to use many of the libraries on Hackage, or to reliably manage memory usage, so I moved away. I still appreciate and emulate the discipline of its type system when I have to use a less strictly-typed language, though.

                                                                                            1. 5

                                                                                              I think that’s a pithy (and therefore somewhat imprecise) way of saying that Haskell’s type system lets them encode more correctness guarantees than mainstream languages with less sophisticated type systems, so they feel they can offload more correctness-checking work to the compiler, and therefore don’t have to be as “smart” about reasoning about edge cases to write equally-reliable code. I think that more complicated statement is true enough, but I wouldn’t say that people who are smarter than those Haskell programmers are deliberately choosing to write code in languages with less-strict type systems because they can get away with it.
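
                                                                                                As a rough, hypothetical illustration of what “encoding a correctness guarantee in the type system” can mean in practice (a sketch only, not taken from the thread): a smart constructor is the only way to build the value, so code that receives one never has to re-check it.

                                                                                                    module Email (Email, mkEmail, emailText) where

                                                                                                    -- The constructor is not exported, so the only way to obtain an
                                                                                                    -- Email is through mkEmail; downstream code can rely on the
                                                                                                    -- validation having already run.
                                                                                                    newtype Email = Email { emailText :: String }

                                                                                                    mkEmail :: String -> Maybe Email
                                                                                                    mkEmail s
                                                                                                      | '@' `elem` s = Just (Email s)
                                                                                                      | otherwise    = Nothing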

                                                                                              1. 3

                                                                                                I agree with you. Full disclosure: writing Haskell is 100% of my income and I say the same pithy line above. I just didn’t want to seem like I’m saying “look at me, I write Haskell, I M smrt.”

                                                                                                1. 3

                                                                                                  I wouldn’t say that people who are smarter than those Haskell programmers are deliberately choosing to write code in languages with less-strict type systems because they can get away with it.

                                                                                                  I think what you are saying is completely true: the aspirin must follow the headache. They don’t desire an escape because they either aren’t pushing themselves to actually build at their ability level or they are naturally gifted and don’t feel any need to offload cognitive weight. It’s not that they are flexing, it’s that they don’t feel the weight. Perhaps some of them don’t really know that things would be easier if they didn’t bear it. I notice people with good working memory don’t write notes, and then I’ll go to my notes and be able to “recall” something that they couldn’t. Sometimes a lack of ability leads us to use tools that expand our performance far beyond what any natural ability could accomplish. The palest ink is better than the most powerful memory. Or as I often say, “Paper don’t forget”.

                                                                                                2. 4

                                                                                                  I have trouble articulating why, but I find this cliche really condescending.

                                                                                                  1. 4

                                                                                                    It might be because the subtle implication is that other programmers are too stupid to realise they aren’t smart enough to have come to the same conclusion.

                                                                                                    I’ve said essentially the same, but that’s the tone of general discourse. It isn’t reasonable to hold Haskell developers to a different social standard than any other kind of programmer.

                                                                                                    1. 3

                                                                                                      I think it’s because you can easily read it as saying “not only am I smart, I am also exceedingly humble.”

                                                                                                      An attempt at a better formulation:

                                                                                                        It’s not that I use Haskell because I’m abnormally smart—I’m not—but I was able to take the time to learn it, and I love it because, although there is of course some complexity and novelty to the language, it is also very helpful and nice to have such a strong and expressive type system, so I find myself able to code with more ease and confidence. Other languages may be easier to learn, but there’s a big difference between how fast you can learn a language and how much that language actually helps you in the work of coding, especially over time, and there I think the effort of learning Haskell is well worth it. I feel this clearly almost every time I do some bigger refactoring and the compiler helps me step by step until it’s done.

                                                                                                      1. 2

                                                                                                        Would you also feel this way if someone said “I use JavaScript because I’m not smart enough to use other languages?”

                                                                                                        I think within this feeling that the cliche is condescending is an implicit assumption that Haskell is a language for smart people and not the rest of us. The same statement about JavaScript doesn’t feel condescending because few feel like JavaScript is reserved for exceptionally smart people. So when a Haskell programmer (who must be smart because they’re a Haskell programmer) says they do something seen as reserved for exceptionally smart people (Haskell programming) because they aren’t that smart, where does that leave the rest of us chumps? That’s the condescension, as I understand it.

                                                                                                        I think what Haskell programmers are trying to say is that they’re not exceptionally smart and Haskell isn’t Mensa for programmers. I saw a tweet recently to the effect of “I am a plumber and when I work with Haskell, I have less leaky pipes”. That’s the sentiment I find in the cliche.

                                                                                                        1. 3

                                                                                                          Would you also feel this way if someone said “I use JavaScript because I’m not smart enough to use other languages?”

                                                                                                          I know you didn’t ask me, but to be fair, JavaScript doesn’t have the same elitist/academic reputation that Haskell does. People who don’t know any better think that Haskellers are all geniuses, so when a Haskeller says “I’m not very smart”, it’s interpreted as insincere. If a JavaScript user says “I’m not smart enough to use anything else”, you’d probably not try to call their bluff.

                                                                                                          1. 4

                                                                                                            That’s exactly my point. That assumption should be addressed explicitly, not emotions downstream from that assumption.

                                                                                                            1. 2

                                                                                                                The common perception is more or less completely backwards. The person who can write good software in a language that is essentially out to murder them is likely more naturally gifted than the one who needs the language to assist them. Sure, some people use languages with a lot of compile-time checks because they are hoping to eke out every last bit of effectiveness, but most of us are trying to make the thing work when it runs. In JavaScript, running gives absolutely no guarantees of workingness; that makes things harder, and you have to be better to compensate. A person from my meetup group said it best: “F# feels like bowling with bumpers”, and Haskell is no different. If anything, Haskell has more bumpers than F#.

                                                                                                              1. 2

                                                                                                                To clarify: I already totally agree with you.

                                                                                                                I’m just also trying to empathise with the people who have this weird negative and emotional reaction to Haskell.

                                                                                                                1. 2

                                                                                                                    Oh yeah, I wasn’t particularly fighting what you were saying, just saying there are real JavaScript wizards out there.

                                                                                                        2. 3

                                                                                                            Yes, this is part of why I champion F# so much in the .NET space. I don’t have to consider so many things at once, and it makes managing my ADHD wrt programming much easier. Less burden on my working memory, and much easier to focus.

                                                                                                          1. 2

                                                                                                            Yes! When I write Haskell I only have to consider function parameters. I have been diagnosed with ADHD, and I really do have less temporary storage compared to most of my coworkers. That reduction in state space means I can write code with less effort.

                                                                                                          2. 3

                                                                                                            Yes, that’s the main take-away I got from the post, though I don’t think I fully articulated that.

                                                                                                            In my experience, using Haskell forces you to think about your design a lot more, and the language gives you tools to express and validate your design. If you’re not that confident in your design, like if you’re writing something outside of your area of expertise, that would be quite valuable. Especially if you’re going to tweak core elements of your design frequently.

                                                                                                            Writing infrastructure glue code is harder than it sounds. It’s difficult to handle all the edge cases, both for correctness and performance. Yet the team seems to have done that with little effort. This further leads me to believe they’re mostly infrastructure people, using Haskell because it’s well suited to the problem they personally find most difficult.

                                                                                                            But I stand by my sarcastic comment about the post’s opening paragraph. Common discourse about Haskell is full of humble brags like that. The “not smart enough for other languages” thing feels similarly smug to me.

                                                                                                            1. 8

                                                                                                              But I stand by my sarcastic comment about the post’s opening paragraph. Common discourse about Haskell is full of humble brags like that. The “not smart enough for other languages” thing feels similarly smug to me.

                                                                                                              Damned if we do, damned if we don’t.

                                                                                                              I feel there’s not really any approach Haskellers can take when describing the language that won’t be met with the kind of ridicule you find on r/programmingcirclejerk.

                                                                                                              We could lie and say it’s just another language, and the differences are no big deal. In that case, there’s no point learning it because we already know JS/Ruby/whatever.

                                                                                                              We can say there are truly great ideas in there, and that this makes a significant reduction to the cost of software development over time, to which we are called academics, elitists, and liars.

                                                                                                              We can say “yeah, whatever. I like it, and it’s easier for me to do this than most other languages”, and then we get called “smug”.

                                                                                                              1. 1

                                                                                                                They could have left out the opening paragraph, which added nothing to the post at all.

                                                                                                                We can say “yeah, whatever. I like it, and it’s easier for me to do this than most other languages”,

                                                                                                                That phrasing is fine with me, I don’t find it smug at all. Is the difference between that and “I’m not smart enough for other programming languages” not clear?

                                                                                                                1. 3

                                                                                                                  That phrasing is fine with me, I don’t find it smug at all.

                                                                                                                  Do you not hear how arrogant you sound? Like other programmers should care about what phrasing is fine with you?

                                                                                                                  Is the difference between that and “I’m not smart enough for other programming languages” not clear?

                                                                                                                  Not really, no. Haskellers constantly receive pithy dismissals, so you should expect to receive pithy responses in return.

                                                                                                                  1. -1

                                                                                                                    Like other programmers should care about what phrasing is fine with you?

                                                                                                                    Sorry, when you replied to my comment I assumed you were talking to me. My mistake, I will let you soliloquize about how you’re so tragically misunderstood on your own.

                                                                                                                    I hope we’re in agreement on this. After all, if you provide me nothing but pithy dismissals, you should expect to receive pithy responses in return, correct?

                                                                                                                    1. 3

                                                                                                                      I want to break the cycle! Less pith more content!

                                                                                                                      Haskell is really cool and worth learning.

                                                                                                            2. 1

                                                                                                              Would you think that Scala is in the same space as Haskell, reliability-wise? I’m seeing rather poor performance & reliability (on many levels) when looking at existing Scala projects, whereas comparable Python projects are doing fine.

                                                                                                              1. 2

                                                                                                                I worked on a Scala project at my last job and it was hot garbage, mostly due to bad tooling. I feel at least 30% of my development time was wasted untangling dependency hell and swearing at sbt.

                                                                                                                1. 1

                                                                                                                  I haven’t even looked at Scala in the last four years, so I honestly couldn’t tell you.

                                                                                                              2. 16

                                                                                                                Pretty standard “why Haskell?” post.

                                                                                                                I didn’t think so. The section on control flow, especially in the context of an interpreter, is one of the things I like most about writing Haskell. I don’t see it mentioned so much in “why Haskell?” posts. In fact, they explicitly avoid a discussion about the usual suspects: purely functional, non-strict semantics, and strong static typing.

                                                                                                                We get it, Haskell programmers are smarter than everyone else.

                                                                                                                I interpreted the sentence you are referring to differently. In my experience, Haskell programmers tend to avoid the larger community of programmers because talking about things they care about elicits angry and dismissive reactions like yours. The reference to the density of the opening sentence is just, in my interpretation, an admission that the sentence doesn’t mean much to someone who isn’t already familiar with those terms. I have found Haskell programmers to be, in general, humble and curious people who are quick to admit what they don’t know and eager to help others understand.

                                                                                                                They hand wave away writing infrastructure glue as trivial, but foresee themselves drowning in tech debt implementing the language interpretation in C++, Java, Swift, etc. That tells me they’re confident in writing infrastructure glue, but less confident writing PL code.

                                                                                                                This sounds like the No True Scotsman fallacy. The real PL hacker doesn’t need the power of Haskell? So if you’re using Haskell, you’re not a real PL hacker? What I read was a list of trade-offs they made. Part of that trade-off was writing more code you might find in a library in other languages. We don’t even know their level of confidence for writing that code, as you suggest. We just know what they valued most.

                                                                                                                These reactions to Haskell always make me sad. There seems to be such a disconnect between the perception of Haskell programmers and the reality of Haskell programmers. As @jgt said, Haskell programmers tend to be people who choose Haskell because they doubt they are sufficiently intelligent to program well in a less powerful language. It seems more about humility than ego.

                                                                                                                1. 14

                                                                                                                  We get it, Haskell programmers are smarter than everyone else.

                                                                                                                  While this line is particularly shitty, your entire comment is dripping with condescension and dismissiveness. It’s not up to the standards of the lobste.rs community as far as I’m concerned. I’ve marked your comment as a troll but I also want to be explicit:

                                                                                                                  If anyone does actually start learning Haskell exclusively to feel smarter than other people, unless they are already geniuses they’ll probably come to appreciate how humbling an experience it is. And I can tell you from experience there are plenty of people who act as though they are smarter than everyone else outside of the Haskell community (generally a higher percentage in most other language communities I’ve participated in, actually). In contrast, the feeling I’ve gotten most often from experienced Haskell programmers is one of thoughtfulness, enthusiasm for learning, and a desire to share their knowledge and help others learn as well.

                                                                                                                  But the reality is, if you’re misreading this piece so egregiously that your takeaway is that anyone using Haskell is a selfish egoist only intent on making other people feel small or satisfying their own desires at the expense of all else, then I’m going to say that yes, most Haskell programmers are probably smarter than you at least, just not for the reasons you are imagining.

                                                                                                                  I may be totally off base, but that’s the vibe I got from reading your comment.

                                                                                                                  1. 2

                                                                                                                    your takeaway is that anyone using Haskell is a selfish egoist only intent on making other people feel small or satisfying their own desires at the expense of all else

                                                                                                                    The opening paragraph of the post is fluff, it serves no purpose towards the topic of the article. But it does name drop a bunch of concepts, and then note that the reader may not have heard of them.

                                                                                                                    Just because I commented on the self-congratulating opening of this post, you’ve extrapolated that I’m some kind of crazed anti-Haskell fanatic. I’m not, I just rolled my eyes at that opening. If you’re looking for crazed fanatics, you should probably reread your own comment.

                                                                                                                    So yeah, you’re so far off base it’s absurd.

                                                                                                                    1. 4

                                                                                                                      I’d just like to note that I got the same impression as @ddellacosta, maybe to a somewhat milder degree. That quoted line in particular is hard to not read as coming from a scratched ego.

                                                                                                                      That said, as most things it’s not going to be one-sided, and the whole “no actually we’re too dumb to use other programming languages” does not come across as particularly sincere or convincing either.

                                                                                                                      1. 1

                                                                                                                        If you’re used to a statically typed language like C# and then you try to use JavaScript one day after years of C#, you will definitely feel dumb. Maybe it’s just different and I’m not used to it, but it’s entirely reasonable to think that it is indeed actually harder. If you add even more compiler checks on top of that, you can see how other languages might start appearing similarly difficult.

                                                                                                                      2. 1

                                                                                                                        The opening paragraph of the post is fluff, it serves no purpose towards the topic of the article. But it does name drop a bunch of concepts, and then note that the reader may not have heard of them.

                                                                                                                        The quote you chose was from the section “Why Haskell?” not the opening paragraph. Regardless, it is entirely reasonable of the authors to mention major features of Haskell that helped influence their decision, and it’s also entirely reasonable to suggest that interested readers unfamiliar with the language look elsewhere for topics already covered extensively by others. Along these lines, I also suggest taking a moment to consider the title of the piece, as well as the very first line of your original comment.

                                                                                                                        Just because I commented on the self-congratulating opening of this post, you’ve extrapolated that I’m kind of crazed anti-Haskell fanatic.

                                                                                                                        I was explicit in that I was talking about your entire comment, the line I singled out was just particularly shitty. More to the point, while I never stated that you are an anti-Haskell fanatic, it’s clear to me that you are not approaching the piece with an open mind, and it’s also clear you have a chip on your shoulder. I can’t speak to what or why that is.

                                                                                                                        I’m not, I just rolled my eyes at that opening. If you’re looking for crazed fanatics, you should probably reread your own comment.

                                                                                                                        I have. Since when does harshly criticizing a biased, poorly thought-out, and poorly written comment make me a fanatic?

                                                                                                                        So yeah, you’re so far off base it’s absurd.

                                                                                                                        I’m really not, but I am done discussing this with you.

                                                                                                                        1. -4

                                                                                                                          I’m really not [off base], but I am done discussing this with you.

                                                                                                                          If you believe you know my own thoughts and feelings better than I do myself, you were done before you started.

                                                                                                                          1. 5

                                                                                                                            I mean, you did come to a post about Haskell to give a broad condemnation of Haskell programmers. If you aren’t a crazed anti-Haskell fanatic, then you sure are rude.

                                                                                                                            1. 1

                                                                                                                              I wouldn’t call criticizing a comment I thought was annoyingly self-congratulating a “broad condemnation of Haskell programmers.”

                                                                                                                              1. 6

                                                                                                                                Whether or not you intended it, “We get it, Haskell programmers are smarter than everyone else.” is going to be interpreted as “I think Haskell programmers are very conceited and snobby.” I think it’s an entirely fair and reasonable interpretation of your statement, and while you’ve stated that it was in response to their article, you haven’t clarified that you don’t view Haskell programmers as conceited and snobby. A clear statement that you didn’t intend to condemn all Haskell programmers probably would have gone a long way. Next time, when critiquing a particular person in really any demographic, it’s probably a good idea to not refer to the whole demographic unless of course you intend to condemn the whole demographic. That’s kinda what that does.

                                                                                                                  1. 11

                                                                                                                    Summary: author’s expectations of a young language exceed the actual implementation, so they write a Medium article.

                                                                                                                    If you can’t tell: slightly triggering article for me, and I don’t use/advocate for Elm. I’d much prefer if the author either pitched in and helped, or shrugged and moved on to something else. Somehow, yelling into the void about it is worse to me, I think because there are one or two good points in there sandwiched between non-constructive criticisms.

                                                                                                                    1. 34

                                                                                                                      The article provides valuable information for people considering using Elm in production. The official line on whether Elm is production ready or not is not at all clear, and a lot of people suggest using it.

                                                                                                                      1. 6

                                                                                                                        I didn’t like that he makes unrelated and unsupported claims in the conclusion (“Elm is not the fastest or safest option”). That’s not helpful.

                                                                                                                        1. 5

                                                                                                                          I read “fastest” and “safest” as referring to “how fast can I can get work done” and “is this language a safe bet”, not fast and safe in the sense of performance. If that’s the right interpretation, then those conclusions flow naturally from the observations he makes in the article.

                                                                                                                          1. 1

                                                                                                                            Right, the author made the same clarification to me on Twitter, so that’s definitely what he meant. In that sense, the conclusion is fine. Those are very ambiguous words though (I took them to mean “fastest runtime performance” and “least amount of runtime errors”).

                                                                                                                            1. 1

                                                                                                                              Yeah definitely. I also was confused initially.

                                                                                                                        2. 3

                                                                                                                          TBF, I was a little too snarky in my take. I don’t want to shut down legitimate criticism.

                                                                                                                          The official line on whether Elm is production ready or not is not at all clear, and a lot of people suggest using it.

                                                                                                                          That ambiguity is a problem. There’s also a chicken/egg problem with regard to marketing when discussing whether something is production ready. I’m not sure what the answer is.

                                                                                                                          1. 4

                                                                                                                            It’s even more ambiguous for Elm. There are dozens of 100K+ line commercial code bases out there. How many should there be before the language is “production ready”? Clearly, for all those companies, it already is.

                                                                                                                            Perhaps the question is misguided and has reached “no true Scotsman” territory.

                                                                                                                            1. 3

                                                                                                                              That’s one reason why this topic is touchy to me: things are never ready until the Medium-esque blogosphere spontaneously decides they are ready, and then, without a single ounce of discontinuity, everyone pretends like they’ve always loved Elm, and they’re excited to pitch in and put forth the blood, sweat, and tears necessary to make a healthy, growing ecosystem. Social coding, indeed.

                                                                                                                              In a sense, everyone wants to bet on a winner, be early, and still bet with the crowd. You can’t have all those things.

                                                                                                                              1. 2

                                                                                                                                I like your last paragraph. When I think about it, I try to reach the same impossible balance when choosing technologies.

                                                                                                                                I even wrote a similar post about Cordova once (“is it good? is it bad?”). Hopefully it was a bit more considered as I’d used it for 4 years before posting.

                                                                                                                                The thing that bothers me with the developer crowd is somewhat different, I think. It’s the attempt to mix two other unmixable things. On one hand, there’s the consumerist attitude to choosing technologies (“Does it work for me right now? Is it better, faster, cheaper than the other options?”). On the other hand, there are demands for all the benefits of open source like total transparency, merging your PR, and getting your favourite features implemented. Would anyone demand this of proprietary software vendors?

                                                                                                                                I’m not even on the core Elm team, I’m only involved in popularising Elm and expanding the ecosystem a bit, but even for me this attitude is starting to get a bit annoying. I imagine it’s worse for the core team.

                                                                                                                                1. 2

                                                                                                                                  Hey, thanks for your work on Elm. I’m much less involved than you, but even I find the “walled garden” complaints a little irritating. I mean, if you don’t like this walled garden, there are plenty of haphazard dumping grounds out there to play in, and even more barren desert. Nobody’s forcing anybody to use Elm! For what it’s worth, I think Evan and the Elm core team is doing great work. I’m looking forward to Elm 1.0, and I hope they take their time and really nail it.

                                                                                                                                2. 2

                                                                                                                                  The author of this article isn’t pretending to be an authority on readiness, and claiming that they’ll bandwagon is unwarranted. This article is from someone who was burned by Elm and is sharing their pain in the hopes that other people don’t get in over their heads.

                                                                                                                                  Being tribal, vilifying the “Medium-esque blogosphere” for acts that the author didn’t even commit, and undermining their legitimate criticisms with “well, some people sure do love to complain!” is harmful.

                                                                                                                            2. 3

                                                                                                                              I’d like to push back on this. What is “production ready”, exactly? Like I said in another comment, there are dozens of 100K+ line commercial Elm code bases out there. Clearly, for all those companies, it already is.

                                                                                                                              I’ve used a lot of other technologies in production which could easily be considered “not production ready”: CoffeeScript, Cordova, jQuery Mobile, Mapbox. The list goes on. They all had shortcomings, and sometimes I even had to make compromises in terms of requirements because I just couldn’t make particular things work.

                                                                                                                              The point is, it either works in your particular situation, or it doesn’t. The question is meaningless.

                                                                                                                              1. 7

                                                                                                                                Here are my somewhat disjoint thoughts on the topic before the coffee has had a chance to kick in.

                                                                                                                                What is “production ready”, exactly?

At a minimum, the language shouldn’t make major changes between releases that require libraries and codebases to be reworked. If it’s not at a point where it can guarantee such a thing, then it should state that fact up front. Instead, its creator and its community heavily promote it as being the best thing since sliced bread (“a delightful language for reliable webapps”) without any mention of the problems described in this post. New folks take this to be true and start investing time into the language, often quite a lot of time since the time span between releases is so large. By the time a new release comes out and changes major parts of the language, some of those people will have invested so much time and effort into the language that the notion of upgrading (100K+ line codebases, as you put it) becomes downright depressing. Not to mention that most of those large codebases will have dependencies that themselves will need upgrading or, in some cases, will have to be deprecated (as elm-community has done for most of my libraries with the release of 0.19, for example).

                                                                                                                                By promoting the language without mentioning how unstable it really is, I think you are all doing it a disservice. Something that should be perceived as good, like a new release that improves the language, ends up being perceived as a bad thing by a large number of the community and so they leave with a bad taste in their mouth – OP made a blog post about it, but I would bet the vast majority of people just leave silently. You rarely see this effect in communities surrounding other young programming languages and I would posit that it’s exactly because of how they market themselves compared to Elm.

Of course, in some cases it can’t be helped. Some folks are incentivized to keep promoting the language. For instance, you have written a book titled “Practical Elm”, so you are incentivized to promote the language as such. The more new people who are interested in the language, the more potential buyers you have or the more famous you become. I believe your motivation for writing that book was pure, and no one’s going to get rich off of a book on Elm. But my point is that you are more bought into the language than others normally are.

                                                                                                                                sometimes I even had to make compromises in terms of requirements because I just couldn’t make particular things work.

                                                                                                                                That is the very definition of not-production-ready, isn’t it?

                                                                                                                                Disclaimer: I quit Elm around the release of 0.18 (or was it 0.17??) due to a distaste for Evan’s leadership style. I wrote a lot of Elm code (1 2 3 4 and others) and put some of it in production. The latter was a mistake and I regret having put that burden on my team at the time.

                                                                                                                                1. 1

                                                                                                                                  From what I’ve seen, many people reported good experiences with upgrading to Elm 0.19. Elm goes further than many languages by automating some of the upgrades with elm-upgrade.

FWIW, I would also prefer more transparency about Elm development. I had to scramble to update my book when Elm 0.19 came out. However, not for a second am I going to believe that I’m entitled to transparency, or that it was somehow promised to me.

                                                                                                                                  To your other point about marketing, if people are making decisions about putting Elm into production based on its tagline, well… that’s just bizarre. For example, I remember looking at React Native in its early stages, and I don’t recall any extensive disclaimers about its capabilities or lack thereof. It was my responsibility to do that research - again, because limitations for one project are a complete non-issue for another project. There’s just no one-size-fits-all.

                                                                                                                                  Finally, calling Elm “unstable” is simply baseless and just as unhelpful as the misleading marketing you allege. I get that you’re upset by how things turned out, but can’t we all have a discussion without exaggerated rhetoric?

                                                                                                                                  That is the very definition of not-production-ready, isn’t it?

                                                                                                                                  Exactly my point: there is no such definition. All those technologies I mentioned were widely used at the time. I put them into production too, and it was a good choice despite the limitations.

                                                                                                                                  1. 6

                                                                                                                                    From what I’ve seen, many people reported good experiences with upgrading to Elm 0.19. Elm goes further than many languages by automating some of the upgrades with elm-upgrade.

And that’s great! The issue is the things that cannot be upgraded. Let’s take elm-combine (or parser-combinators, as it was renamed), for example. If you depended on the library in 0.18, then, barring the invention of AGI, there’s no automated tool that can help you upgrade: your code will have to be rewritten against a different library, since elm-combine cannot be ported to 0.19 (not strictly true, because it can be ported, but only by the core team, and my point still stands because it won’t be). Language churn causes ecosystem churn which, in turn, causes pain for application developers, so I don’t think it’s a surprise that folks get angry and leave the community when this happens, given that they may not have had any prior warning before they invested their time and effort.

                                                                                                                                    Finally, calling Elm “unstable” is simply baseless and just as unhelpful as the misleading marketing you allege. I get that you’re upset by how things turned out, but can’t we all have a discussion without exaggerated rhetoric?

I don’t think it’s an exaggeration to call a language with breaking changes between releases unstable. To be completely honest, I can’t think of a better word to use in this case. Fluctuating? In flux? Under development? Subject to change? All of those fit and are basically synonymous with “unstable”. None of them is highlighted anywhere in the language’s marketing, nor by its proponents. I’m not making a judgement on the quality of the language when I say this. I’m making a judgement on how likely it is to be a good choice in a production environment, which brings me to…

                                                                                                                                    Exactly my point: there is no such definition. All those technologies I mentioned were widely used at the time. I put them into production too, and it was a good choice despite the limitations.

                                                                                                                                    They were not good choices, because, by your own admission, you were unable to meet your requirements by using them. Hence, they were not production-ready. Had you been able to meet your requirements and then been forced to make changes to keep up with them, then that would also mean they were not production-ready. From this we have a pretty good definition: production-readiness is inversely proportional to the likelihood that you will “have a bad time” after putting the thing into production. The more that likelihood approaches 0, the more production-ready a thing is. Being forced to spend time to keep up with changes to the language and its ecosystem is “having a bad time” in my book.

I understand that our line of work essentially entails us constantly fighting entropy and that, as things progress, it becomes harder and harder for them to maintain backwards compatibility, but that doesn’t mean that nothing means anything anymore or that we can’t reason about the likelihood that something is going to bite us in the butt later on. From a business perspective, the more likely something is to change after you use it, the larger the risk it poses. The more risks you take on, the more likely you are to fail.

                                                                                                                                    1. 1

                                                                                                                                      I think your definition is totally unworkable. You’re claiming that technologies used in thousands upon thousands of projects were not production ready. Good luck with finding anything production ready then!

                                                                                                                                      1. 7

                                                                                                                                        I’ve been working with Clojure for almost a decade now, and I’ve never had to rewrite a line of my code in production when upgrading to newer versions because Cognitect takes backwards compatibility seriously. I worked with Java for about a decade before that, and it’s exact same story. There are plenty of languages that provide a stable foundation that’s not going to keep changing from under you.

                                                                                                                                        1. 4

I am stating that being able to put something in production is different from said thing being production ready. You claim that there is no such thing as “production ready” because you can deploy anything, which reduces the situation to absurdity. Putting something into production and being successful with it does not necessarily make it production ready. It’s how repeatable that success is that does.

                                                                                                                                          It doesn’t look like we’re going to get anywhere past this point so I’m going to leave it at that. Thank you for engaging and discussing this with me!

                                                                                                                                          1. 1

                                                                                                                                            Thank you as well. As I said in another comment, this is the first time I tried having an extended discussion in the comments in here, and it hasn’t been very useful. Somehow we all end up talking past each other. It’s unfortunate. In a weird way, maybe it’s because we can’t interrupt each other mid-sentence and go “Hang on, but what about?…”. I don’t know.

                                                                                                                                          2. 4

                                                                                                                                            This doesn’t respond to bogdan’s definition in good faith.

                                                                                                                                            production-readiness is inversely proportional to the likelihood that you will “have a bad time” after putting the thing into production. The more that likelihood approaches 0, the more production-ready a thing is.

In response to your criticisms, bogdan proposed a scale of production-readiness. This means that there is no binary distinction between “production-ready” and not “production-ready”. Elm is lower on this scale than most advocates imply, and the article in question provides supporting evidence for elm being fairly low on this scale.

                                                                                                                                            1. 1

                                                                                                                                              What kind of discussion do you expect to have when the first thing you say to me is that I’m responding in bad faith? Way to go, my friend.

                                                                                                                                              1. 4

                                                                                                                                                Frankly, I don’t really want to have a discussion with you. I’m calling you out because you were responding in bad faith. You didn’t address any of his actual points, and you dismissed his argument condescendingly. The one point you did address is one that wasn’t made, and wasn’t even consistent with bogdan’s stance.

                                                                                                                                                1. 1

                                                                                                                                                  In my experience, the crusader for truth and justice is one of the worst types of participants in a forum.

                                                                                                                                                  We may not have agreed, but bogdan departed from the discussion without histrionics, and we thanked each other.

                                                                                                                                                  But you still feel you have to defend his honour? Or are you trying to prove that I defiled the Truth? A little disproportionate, don’t you think?

                                                                                                                                                  (Also: don’t assign tone to three-sentence comments.)

                                                                                                                                    2. 5

I disagree that the question is meaningless just because it has a subjective aspect to it. A technology stack is a long-term investment, and it’s important to have an idea of how volatile it’s going to be. For example, changes like the removal of the ability to do interop with Js even in your own projects clearly came as a surprise to a lot of users. To me a language being production ready means that it’s at the point where things have mostly settled down, and there won’t be frequent breaking changes going forward.

                                                                                                                                      1. 1

                                                                                                                                        By this definition, Python wasn’t production ready long after the release of Python 3. What is “frequent” for breaking changes? For some people it’s 3 months, for others it’s 10 years. It’s not a practical criterion.

                                                                                                                                        Even more interestingly, Elm has been a lot less volatile than most projects, so it’s production ready by your definition. Most people complain that it’s changing too slowly!

                                                                                                                                        (Also, many people have a different perspective about the interop issue; it wasn’t a surprise. I don’t want to rehash all that though.)

                                                                                                                                        1. 5

                                                                                                                                          Python wasn’t production ready long after the release of Python 3.

Python 3 was indeed not production-ready by many people’s standards (including mine and the core team’s, based on the changes made around 3.2 and 3.3) after its release up until about version 3.4.

                                                                                                                                          Even more interestingly, Elm has been a lot less volatile than most projects, so it’s production ready by your definition. Most people complain that it’s changing too slowly!

                                                                                                                                          “it’s improving too slowly” is not the same as “it’s changing too slowly”.

                                                                                                                                          1. 1

                                                                                                                                            Sorry, this doesn’t make any sense.

                                                                                                                                            By @Yogthos’s definition, neither Python 2 nor Python 3 were “production ready”. But if we’re going to write off a hugely popular language like that, we might as well write off the whole tech industry (granted, on many days that’s exactly how I feel).

                                                                                                                                            Re Elm: again, by @Yogthos’s definition it’s perfectly production ready because it doesn’t make “frequent breaking changes”.

                                                                                                                                            1. 5

                                                                                                                                              By @Yogthos’s definition, neither Python 2 nor Python 3 were “production ready”.

                                                                                                                                              Python 2 and 3 became different languages at the split as evidenced by the fact that they were developed in parallel. Python 2 was production ready. Python 3 was not. The fact that we’re using numbers to qualify which language we’re talking about proves my point.

                                                                                                                                              It took five years for Django to get ported to Python 3. (1 2)

                                                                                                                                              Re Elm: again, by @Yogthos’s definition it’s perfectly production ready because it doesn’t make “frequent breaking changes”.

You’re hung up on the wording here; “frequent” is not as important to Yogthos’ argument as “breaking changes” is.

                                                                                                                                              1. 1

                                                                                                                                                I don’t think we’re going to get anywhere with this discussion by shifting goalposts.

                                                                                                                                          2. 2

                                                                                                                                            I think most people agree that Python 3 was quite problematic. Your whole argument seems to be that just because other languages have problems, you should just accept random breaking changes as a fact of life. I strongly disagree with that.

                                                                                                                                            The changes around ecosystem access are a HUGE breaking change. Basically any company that invested in Elm and was doing Js interop is now in a really bad position. They either have to stay on 0.18, re-implement everything they’re using in Elm, or move to a different stack.

                                                                                                                                            Again, as I noted there is subjectivity involved here. My standards for what constitutes something being production ready are different than yours apparently. That’s fine, but the information the article provides is precisely what I’d want to know about when making a decision of whether I’d want to invest into a particular piece of technology or not.

                                                                                                                                            1. 1

I don’t think you are really aware of the changes to Elm because you’re seriously overstating how bad they were (“re-implement everything” was never the case).

                                                                                                                                              I agree that there is useful information in the article – in fact, I try to read critical articles first and foremost when choosing technologies so it’s useful to have them. I never said that we should accept “random breaking changes” either (and it isn’t fair to apply that to Elm).

                                                                                                                                              I still don’t see that you have a working definition of “production ready” – your definition seems to consist of a set with a single occupant (Clojure).

                                                                                                                                              As an aside, this is the first time I’ve had an extended discussion in the comments here on Lobsters, and it hasn’t been very useful. These things somehow always end up looking like everyone’s defending their entrenched position. I don’t even have an entrenched position – and I suspect you may not either. Yet here we are.

                                                                                                                                              1. 3

Perhaps I misunderstand the situation here. If a company has an Elm project in production that uses Js interop, what is the upgrade path to 0.19? Would you not have to rewrite any libraries from the NPM ecosystem in Elm?

I worked with Java for around a decade before Clojure, and it’s always been rock solid. The biggest change that’s happened was the introduction of modules in Java 9. I think that’s a pretty good track record. Erlang is another great example of a stack that’s rock solid, and I can name plenty of others. Frankly, it really surprises me how cavalier some developer communities are regarding breaking changes and regressions.

                                                                                                                                                Forum discussions are always tricky because we tend to use the same words, but we assign different meanings to them in our heads. A lot of the discussion tends to be around figuring out what each person understands when they say something.

In this case it sounds like we have different expectations of production-ready technology. I’m used to working with technologies where regressions are rare, and this necessarily colors my expectations. My views on technology adoption are likely more conservative than those of the majority of developers.

                                                                                                                                                1. 2

                                                                                                                                                  Prior to the 0.19 release, there was a way to directly call JS functions from Elm by relying on a purely internal mechanism. Naturally, some people started doing this, despite repeated warnings that they really shouldn’t. It wasn’t widespread, to my knowledge.

All the way back in 2017, a full 17 months before the 0.19 release, it was announced that this mechanism would be removed. It was announced again 5 months before the release.

                                                                                                                                                  Of course, a few people got upset and, instead of finding a migration path, complained everywhere they could. I think one guy wrote a whole UI framework based on the hack, so predictably he stomped out of the community.

                                                                                                                                                  There is an actual JS interop mechanism in Elm called ports. Anybody who used this in 0.18 (as they should have) could continue using it unchanged in 0.19. You can use ports to integrate the vast majority of JS libraries with Elm. There is no need to rewrite all JavaScript in Elm. However, ports are asynchronous and require marshalling data, which is why some people chose to use the internal shortcut (aka hack) instead.
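To make the asynchronous nature concrete, here is a minimal sketch of what port-based interop looks like. The module and port names (Main, toJs, fromJs) and the “Elm app started” string are purely illustrative, not from any real codebase; Browser.element and the Cmd/Sub machinery are the standard Elm 0.19 pieces.

    port module Main exposing (main)

    import Browser
    import Html exposing (Html, text)

    -- Outgoing port: Elm hands a String to JS as a fire-and-forget Cmd.
    port toJs : String -> Cmd msg

    -- Incoming port: JS pushes Strings into Elm as a subscription.
    port fromJs : (String -> msg) -> Sub msg

    type alias Model =
        { lastMessage : String }

    type Msg
        = GotFromJs String

    init : () -> ( Model, Cmd Msg )
    init _ =
        ( { lastMessage = "" }, toJs "Elm app started" )

    update : Msg -> Model -> ( Model, Cmd Msg )
    update msg model =
        case msg of
            GotFromJs s ->
                ( { model | lastMessage = s }, Cmd.none )

    subscriptions : Model -> Sub Msg
    subscriptions _ =
        fromJs GotFromJs

    view : Model -> Html Msg
    view model =
        text model.lastMessage

    main : Program () Model Msg
    main =
        Browser.element
            { init = init, update = update, subscriptions = subscriptions, view = view }

On the JS side you wire this up with app.ports.toJs.subscribe(...) and app.ports.fromJs.send(...); every value crossing the boundary is marshalled and delivered asynchronously, which is exactly the cost mentioned above.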

                                                                                                                                                  So, if a company was using ports to interop with JS, there would be no change with 0.19. If it was using the hack, it would have to rewrite that portion of the code to use ports, or custom elements or whatever – but the rework would be limited to bindings, not whole JS libraries.

                                                                                                                                                  There were a few other breaking changes, like removing custom operators. However, Elm has a tool called elm-upgrade which helps to identify these and automatically update code where possible.

                                                                                                                                                  There were also fairly significant changes to the standard library, but I don’t think they were any more onerous than some of the Rails releases, for example.

                                                                                                                                                  Here are the full details, including links to previous warnings not to use this mechanism, if you’re interested: https://discourse.elm-lang.org/t/native-code-in-0-19/826

                                                                                                                                                  I hope this clarifies things for you.

Now, regarding your “rock solid” examples, by which I think you mean no breaking changes. If it’s achievable, that’s good – I’m all for it. However, as a counterexample, I’ll bring up C++, which tied itself into knots by never breaking backward compatibility. It’s a mess.

                                                                                                                                                  I place less value on backward compatibility than you do. I generally think that backward compatibility ultimately brings software projects down. Therefore, de-prioritising it is a safer bet for ensuring the longevity of the technology.

                                                                                                                                                  Is it possible that there are technologies which start out on such a solid foundation that they don’t get bogged down? Perhaps – you bring up Clojure and Erlang. I think Elm’s core team is also trying to find that kind of foundation.

But whether Elm is still building up towards maturity or its core team simply has a different philosophy regarding backward compatibility, I think the situation is at least very clear if you spend any time researching it. So my view is that anybody who complains about it now has failed to do their research before putting it into production.

                                                                                                                                                  1. 2

I feel like you’re glossing over the changes from native modules to using ports. For example, native modules allowed exposing external functions as Tasks, allowing them to be composed. Creating Tasks also allowed for making synchronous calls that return a Task Never a, which is obviously useful.

On the other hand, ports can’t be composed like Tasks, and as you note can’t be used to call synchronous code, which is quite the limitation in my opinion. If you’re working with a math library, then having to convert the API to async pub/sub calls is just a mess even if it is technically possible to do.
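For readers who haven’t touched Elm’s Task API, here is a rough sketch of the kind of composition being described. The sqrtTask wrapper and the TaskSketch module are purely illustrative stand-ins: with native modules a binding like this could wrap a synchronous JS call, whereas with ports the same call has to round-trip through an asynchronous Cmd/Sub pair. Task.succeed and Task.andThen are the real elm/core functions.

    module TaskSketch exposing (pipeline)

    import Task exposing (Task)

    -- Illustrative stand-in for what a native-module binding used to provide:
    -- a synchronous computation exposed as a Task that can never fail.
    sqrtTask : Float -> Task Never Float
    sqrtTask x =
        Task.succeed (sqrt x)

    -- Tasks compose with andThen; ports offer no equivalent, since each port
    -- call is a separate round trip through the runtime.
    pipeline : Task Never Float
    pipeline =
        sqrtTask 16
            |> Task.andThen (\y -> sqrtTask (y + 5))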

To sum up, people weren’t using native modules because they were just completely irresponsible and looking to shoot themselves in the foot, as you seem to be implying. Being able to easily leverage the existing ecosystem obviously saves development time, so it’s not exactly surprising that people started using native modules. Once you have a big project in production it’s not trivial to go and rewrite all your interop in 5 months because you have actual business requirements to work on. I’ve certainly never been in a situation where I could just stop all development and go refactor my code for as long as I wanted.

                                                                                                                                                    This is precisely the kind of thing I mean when I talk about languages being production ready. How much time can I expect to be spending chasing changes in the language as opposed to solving business problems. The more breaking changes there are the bigger the cost to the business is.

                                                                                                                                                    I’m also really struggling to follow your argument regarding things like Rails or C++ to be honest. I don’t see these as justifying unreliable tools, but rather as examples of languages with high maintenance overhead. These are technologies that I would not personally work with.

                                                                                                                                                    I strongly disagree with the notion that backwards compatibility is something that is not desirable in tooling that’s meant to be used in production, and I’ve certainly never seen it bring any software projects down. I have however seen plenty of projects being brought down by brittle tooling and regressions.

                                                                                                                                                    I view such tools as being high risk because you end up spending time chasing changes in the tooling as opposed to solving business problems. I think that there needs to be a very strong justification for using these kinds of tools over ones that are stable.

                                                                                                                                                    1. 3

                                                                                                                                                      I think we’re talking past each other again, so I’m going to wrap this up. Thank you for the discussion.

                                                                                                                                        2. 4

                                                                                                                                          The question isn’t even close to meaningless… Classifying something as “production ready” means that it is either stable enough to rely on, or is easily swapped out in the event of breakage or deprecation. The article does a good enough job of covering aspects of elm that preclude it from satisfying those conditions, and it rightly warns people who may have been swept up by the hype around elm.

Elm has poor interop, and is (intentionally) a distinct ecosystem from JS. This means that if Elm removes features you use, you’re screwed. So, for a technology like Elm (which is a replacement for JS rather than an enhancement) to be “production ready” it has to have a very high degree of stability, or at least long-term support for deprecated features. Elm clearly doesn’t have this, which is fine, but early adopters should be warned of the risks and drawbacks in great detail.

                                                                                                                                          1. 0

                                                                                                                                            What is “production ready”, exactly?

Let’s keep it really simple: to me, ‘production-ready’ is when the project version gets bumped to 1.0+. This is a pretty well-established norm in the software industry and usually a pretty good rule of thumb to judge by. In fact, Elm packages enforce semantic versioning, so if you extrapolate that to Elm itself you inevitably come to the conclusion that it hasn’t reached production-release readiness yet.

                                                                                                                                          2. 3

                                                                                                                                            The term “production ready” is itself not at all clear. Some Elm projects are doing just fine in production and have been for years now. Some others flounder or fail. Like many things, it’s a good fit for some devs and some projects, and not for some others – sometimes for reasons that have little to do with the language or its ecosystem per se. In my (quite enjoyable!) experience with Elm, both official and unofficial marketing/docs/advocates have been pretty clear on that; but developers who can’t or won’t perceive nuance and make their own assessments for their own needs are likely to be frustrated, and not just with Elm.

I agree that there’s valuable information in this article. I just wish it were a bit less FUDdy and had more technical detail.

                                                                                                                                          3. 9

                                                                                                                                            I think there’s an angle to Elm’s marketing that justifies these kinds of responses: Those “author’s expectations” are very much encouraged by the way the Elm team presents their language.

                                                                                                                                            Which criticisms do you find unfair, which are the good points?

                                                                                                                                            1. 5

I think there’s an angle to Elm’s marketing that justifies these kinds of responses

                                                                                                                                              I’m sympathetic to both Elm and the author here. I understand Elm’s marketing stance because they ask devs to give up freely mixing pure/impure code everywhere in their codebase on top of a new language and ecosystem. (In general, OSS’s perceived need for marketing is pretty out of hand at this point and a bit antithetical to what attracts me to it in the first place). OTOH it shouldn’t be possible to cause a runtime error in the way the author described, so that’s a problem. I’d have wanted to see more technical details on how that occurred, because it sounded like something that type safety should have protected him from.

                                                                                                                                              Fair criticisms:

• Centralized ecosystem (though this is by design right now, as I understand it)
                                                                                                                                              • Centralized package repo
                                                                                                                                              • Official docs out of date and incomplete

                                                                                                                                              Unfair criticisms:

                                                                                                                                              • PRs being open after 2 years: one example alone is not compelling
• Tutorials being out of date: unfortunate, but the “Cambrian explosion” meme from JS-land was an implicit acknowledgement that bitrot was okay as long as it was fueled by megacorps’ shiny new OSS libs, so this point is incongruous to me (even if he agrees with me on this)
                                                                                                                                              • “Less-popular thing isn’t popular, therefore it’s not as good”: I understand this but also get triggered by this; if you want safe, established platforms that have a big ecosystem then a pre-1.0 language is probably not the place to be investing time

                                                                                                                                              The conclusion gets a little too emotional for my taste.

                                                                                                                                              1. 2

                                                                                                                                                Thanks for the detailed reply; the criticism of the article seems valid.

                                                                                                                                                (As a minor point, the “PRs being open” criticism didn’t strike me as unsubstantiated because I’ve had enough similar experiences myself, but I can see how the article doesn’t argue that well. Certainly I’ve felt that it would be more honest/helpful for elm to not accept github issues/prs, or put a heavy disclaimer there that they’re unlikely to react promptly, and usually prefer to fix things their own way eventually.)

                                                                                                                                            2. 6

A lot of what’s listed in the article has been done explicitly to make contributions harder. Elm’s development has deliberately made these choices; the friction is not merely incidental.

This isn’t “the language is young” (well, except for the debug point); a lot of this is “the language’s values go against things useful for people deploying to production”.

                                                                                                                                              1. 2

I don’t know, other than the point about the inability to write native modules and the longstanding open PRs, all of the rest of the issues very much seem symptomatic of a young language.

                                                                                                                                                The native module point sounds very concerning, but I don’t think I understand enough about elm or the ecosystem to know how concerning it is.

                                                                                                                                                1. 4

I’ve been vaguely following along with Elm, and the thing that makes me err on the side of agreeing with this article is that the native module thing used to not be the case! It was removed! There was a semi-elegant way to handle interactions with existing code and it was removed.

There are “reasons”, but as someone who has a couple of ugly hacks keeping a hybrid frontend + backend stack running nicely, I believe having those kinds of tricks is essential for bringing it into existing code bases. So seeing it get removed is a bit of a red flag for me.

                                                                                                                                                  Elm still has a lot of cool stuff, of course

                                                                                                                                                  1. 2

                                                                                                                                                    I never relied on native modules, so I didn’t really miss them. But we now have ports, which I think is a much more principled (and interesting) solution. I felt that they worked pretty well for my own JS interop needs.

                                                                                                                                                    Stepping back a bit, if you require the ability do ugly hacks, Elm is probably not the right tool for the job. There are plenty of other options out there! I don’t expect Elm to be the best choice for every web front-end, but I do appreciate its thoughtful and coherent design. I’m happy to trade backward compatibility for that.

                                                                                                                                                  2. 2

                                                                                                                                                    If you spend any amount of time in the Elm community you will find that contributions to the core projects are implicitly and explicitly discouraged in lots of different ways. Even criticisms of the core language and paradigms or core team decisions are heavily moderated on the official forums and subreddit.

                                                                                                                                                    Also how are we using the term “young”? In terms of calendar years and attention Elm is roughly on par with a language like Elixir. It’s probably younger in terms of developer time invested, but again this is a direct result of turning away eager contributors.

                                                                                                                                                    I think it’s fine for Elm to be a small project not intended for general production usage, but Evan and the core team have continually failed to communicate that intent.

                                                                                                                                              1. 4

                                                                                                                                                But mathematicians had no formal notation for describing conditions when recursion terminates.

This is not true – mathematicians have long had ways of describing discontinuous functions, which is what this is. https://en.wikipedia.org/wiki/Piecewise

                                                                                                                                                1. 2

Not really – continuity is an analytic notion that doesn’t have much to do with piecewise notation. Sure, you can define discontinuous functions with it, but the main example given there (the definition of abs) is a continuous function (though nondifferentiable at x=0). Furthermore, he seems to actually be talking about halting conditions, and saying that mathematicians had no formal ‘notation’ for it seems misguided in the context of the lambda calculus, unless I’m misinterpreting something. If we’re talking about recursions in a non-CS context, then yeah, they’re sequences, which are by definition non-terminating. But I do agree that it seems wrong to say that McCarthy ‘invented’ definition by cases.

                                                                                                                                                  1. 1

                                                                                                                                                    From “Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part 1”:

Most of the ideas are well known, but the notion of conditional expression is believed to be new, and the use of conditional expressions permits functions to be defined recursively in a new and convenient way.

                                                                                                                                                    and

The dependence of truth values on the values of quantities of other kinds is expressed in mathematics by predicates, and the dependence of truth values on other truth values by logical connectives. However, the notations for expressing symbolically the dependence of quantities of other kinds on truth values is inadequate, so that English words and phrases are generally used for expressing these dependences in texts that describe other dependences symbolically. For example, the function |x| is usually defined in words. Conditional expressions are a device for expressing the dependence of quantities on propositional quantities.

I’d be interested to know if McCarthy was wrong. I think that mathematicians used ad-hoc ways of conveying the idea but there was no agreed-upon formalism. A piecewise function is, as I understand it, just the idea that one function can be defined as a set of one or more equations, depending on the input. Piecewise is not a formalism.
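For concreteness, here is the |x| example written both ways; the conditional-expression form on the right is, as best I recall, how the quoted paper renders it.

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % |x| in standard piecewise notation vs. McCarthy's conditional expression.
    \[
      |x| =
      \begin{cases}
        -x & \text{if } x < 0 \\
        x  & \text{otherwise}
      \end{cases}
      \qquad \text{vs.} \qquad
      |x| = (\, x < 0 \rightarrow -x,\; T \rightarrow x \,)
    \]
    \end{document}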

                                                                                                                                                    1. 3

It’s not like most math is actually formal in some absolute sense; there are many other examples where common concepts are expressed through natural language.

But I believe the formalism behind piecewise definitions is just function composition. It’s easy to see when the domain and the codomain are the same set (define equations f_k on the partitions, compose each with the identity on the complement of its partition, then chain-compose them all); it takes a few more steps to construct when they’re not.