1.  

    This is a great list with some wisdom only earned through experience.

    That being said, I’m puzzled about 77. “Mandatory code reviews do not automatically improve code quality nor reduce the frequency of incidents.”

    Of course code reviews don’t catch all issues, but the fact that they catch some seems like trivial proof that they reduce the frequency of incidents.

    1.  

      Of course code reviews don’t catch all issues, but the fact that they catch some seems like trivial proof that they reduce the frequency of incidents.

      I totally agree, but try doing a s/code reviews/types/ here and watch people lose their minds :)

    1. 19

      I immediately understand that I am dealing with a loser if I hear it.

      Man. Fuck this guy.

      Hey, programmers, if you ever get the feeling your boss thinks you’re a loser, then fire your boss. Walk away. There are plenty of great programming jobs out there, and nobody should surrender their dignity by remaining in a working relationship with someone who talks this way about their employees.


      From the comments…

      A good manager should create such an arrangement that no one would even think about asking such a question, right? So when soldiers ask “What should I do next?”, that is a sign of bad management. Am I wrong?

      You are absolutely right.

      …Is this a huge self-own?

      1. 4

        A hundred times this.

        I’ve had so many positions where “what should I do next?” was 100% clear: there was a backlog of stuff, the items were actionable, and I had the authority to get work done.

        I’ve also had positions where, every single time, whatever I felt was right to do was a) underspecified, b) officially not important enough (though not clearly so to anyone), or c) blocked by other resources, or I was just disillusioned anyway. Then you end up in a wait state where everyone around you is shouting WE HAVE SO MUCH TO DO, but nothing new can be started, nothing in progress can be finished, and surfing the web is the only sane choice.

      1. 6

        The post author does not mention it, but there is also Haskell Programming From First Principles, which she co-authored with Chris Allen. Many beginners say good things about the book. Some, like me, who appreciate concise writing found it lacking. In the end, whatever book you use, you won’t make many inroads into Haskell (or any radically new tech, for that matter) without actually trying to develop projects in it. That’s how we learn. Hands-on work.

        There is also Real World Haskell. Although it is a bit outdated (but being revived by volunteers), it contains great practical recipes.

        1. 9

          Totally agree with your point about actually getting your hands dirty. Haskell is no different from any other language in that regard. You’ll get nowhere simply flirting with the language and pondering poor monad analogies.

          The post author does not mention it

          I think there’s a reason for that, though I hope this thread doesn’t descend into an argument about that bit of history.

          1. 3

            Huh, did the two coauthors of that book have a falling out after it was published? I read it myself and liked it well enough, although I found it aimed at a level of Haskell understanding a little more basic than my own at the time I read it.

            1. 5

              Yes, though as I said I hope this thread doesn’t turn into all of us discussing it. There are statements from the authors and other discussions elsewhere, but let’s all leave it at that.

              Instead, we can talk about Julie’s subsequent work.


              I bought an early access copy of Finding Failure (and Success) in Haskell and I thought it was really good, especially for people new to the language. The exercises are practical, and help you understand the why behind the what. Motivating examples are so important. Otherwise, I think most people who see a tutorial like “Here’s how monad transformers work” would be like “Ok? But so what?”

              1. 2

                Chris Allen (the other co-author) has branched off on his own as well, looking to publish the “next in series” book, titled Haskell Almanac. Sadly, however, there has been no update on this book, just as there is none on the much-anticipated Intermediate Haskell. Luckily, though, there is Thinking with Types by the author of polysemy.

                As I see it, Haskell lacks intermediate-level books more than beginner books.

                1. 2

                  The final release of Haskell Programming from First Principles now has the OK. I’m releasing it by the end of this month. I’ll work on the print version after that. I have a printer that can do an offset print run ready to go. Just a matter of figuring out how many I should run and how to finance it. I have a climate controlled storage unit ready for the books. I never found a suitable solution for third party logistics so my wife and I will be shipping all the print books.

                  As I see it, Haskell lacks intermediate-level books more than beginner books.

                  You’re right that this is the more immediate problem now. Over 5 years ago, when I started on HPFFP, making sure no beginner was left behind was the more pressing issue.

                  I have work to do on https://lorepub.com before it’s ready to sell print books (~1-3 days of coding and ops work away from seeing the deployment for digital sales). Once the HPFFP print version is available for sale and that situation is stable, I’ll get back to the Almanac. After the Almanac, I’ll be seeing if I can be more productively employed as a publisher than as an author. I believe the process we hammered out for HPFFP can be applied well to other topics and educational goals.

          2. 4

            Speaking of getting your hands dirty… There is the https://github.com/qfpl/applied-fp-course/ where you actually build a small REST backend with Haskell. Sort of a fill in the blanks style, independent levels of increasing complexity thing. :)

            Disclaimer: I’m biased, I wrote it.

          1. 4

            This is a case of improper data modeling, but the static type system is not at fault—it has simply been misused.

            The static type system is never at fault; it behaves just as the programmer tells it to. But this is kind of handwaving over the very point the article attempts to address. This particular case of “improper data modeling” would never be a problem in dynamically-typed systems.

            Bad analogy time: it is pretty much like advocating the use of anabolic steroids, because they make you so much stronger, but when the undesired side effects kick in, you blame the hormonal system for not keeping things in balance.

            1. 9

              Bad analogy time: it is pretty much like advocating the use of anabolic steroids, because they make you so much stronger, but when the undesired side effects kick in, you blame the hormonal system for not keeping things in balance.

              To me it feels like that’s exactly what proponents of dynamic typing often do: “I can write all code super fast”, and then when people point out the issues that arise when it is accidentally misused (by another programmer, or by the same programmer in the future), it is always “you should’ve used more hammock time to think about your problem real hard” or “you should’ve written more tests to make sure it properly handles invalid inputs”.

              1. 5

                You are not wrong, and this is just proof that the debate around type systems is still too immature. There is certainly a component of dynamism in every computer system that programmers crave, and it usually lives outside the bounds of the language environment, at the operating system level. Dynamically typed languages claim to offer that dynamism inside their own environment, but most programs don’t take advantage of it.

                There is no known argument on either side that would definitively bury its contender. Programmers sometimes seem too afraid of some kind of Tower of Babel effect that would ruin the progress of Computer Science, and I believe the whole debate around static and dynamic type systems is just a reflection of that.

              2. 2

                This particular case of “improper data modeling” would never be a problem on dynamically-typed systems.

                I think this is addressed in the appendix about structural vs nominal typing. In particular, very dynamic languages like Python and Smalltalk still allow us to do such “improper data modelling”, e.g. by defining/using a bunch of classes which are inappropriate for the data. Even if we stick to dumb maps/arrays, we can still hit essentially the same issues once we get a few functions deep (e.g. if we’ve extracted something from our data into a list, and it turns out we need a map, which brings up questions about whether there’ll be duplicates and how to handle them).
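To make the list-versus-map hazard above concrete, here is a small Python sketch (with hypothetical data) showing how converting an extracted list into a map silently answers the duplicates question for you:

```python
# Extracted key/value pairs, with a duplicate key lurking in the data.
pairs = [("alice", 1), ("bob", 2), ("alice", 3)]

# Turning the list into a map silently resolves duplicates: later keys win.
index = dict(pairs)
print(index)  # → {'alice': 3, 'bob': 2}
```

No error is raised anywhere, which is exactly the kind of modelling decision that went unexamined.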

                Alternatively, given the examples referenced by the author (in the linked “parse, don’t validate” post), it’s reasonable to consider all data modelling in dynamically-typed systems to be improper. This sounds a bit inflammatory, but it’s based on a core principle of the way dynamically-typed languages frame things: they avoid type errors in principle by forcing all code to “work” for all values, and shoehorning most of those branches into a subset of “error” or “exceptional” values. In practice this doesn’t prevent developers from having to handle type errors; they just get handled with branching like any other value (with no compiler to guide us!). Likewise all dynamic code can model all data “properly”, but lots of code will model lots of data by producing error/exceptional values; that’s arguably “proper” since, after all, everything in a dynamic system might be an error/exception at any time.

                Side note: when comparing static and dynamic languages, it’s important to remember that using “exceptions” for errors is purely a convention; from a language/technology standpoint, they’re just normal values like anything else. We can assign exceptions to variables, make lists of exceptions, return exceptions from functions, etc.; it’s just quite uncommon to see. Likewise “throwing” and “catching” is just a control-flow mechanism for passing around values; it doesn’t have anything to do with exception values or error handling, except by convention. I notice that running raise 42 in Python gives me TypeError: exceptions must derive from BaseException, which doesn’t seem very dynamic/Pythonic/duck-typical; yet even this “error” is just another value I can assign to a variable and carry on computing with!
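A minimal Python sketch of the side note above (the exact TypeError wording may vary between Python versions):

```python
# Exceptions are ordinary values: they can be assigned, stored in lists,
# and passed around like anything else.
err = ValueError("bad input")
errors = [err, KeyError("missing")]

# "raise" is just control flow, and it only accepts BaseException subclasses:
try:
    raise 42
except TypeError as e:
    # Even this failure is just another value we can keep computing with.
    caught = e

print(type(caught).__name__)  # → TypeError
```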

                1. 1

                  The point I was trying to make is that, in the example mentioned in the article, the reason the type description was inaccurate at first is only that the programmer must provide the checker with information about UserId‘s subtype. On a dynamically-typed system, as long as the JSON type supports Eq, FromJSON and ToJSON, you are fine, and having to accurately determine UserId‘s subtype would never be a problem.

                  So I do understand the appeal of static typing for building units, but not systems, especially distributed ones, and this is why I believe the article is myopic. That said, dynamic-language advocates do a terrible job of defending themselves. Having to process JSON payloads is the least of your problems if you are dealing with distributed systems; how to accurately type-check across independent snippets of code in a geographically-distributed network over which you have no control is a much more interesting problem.

                  1. 1

                    On a dynamically-typed system, as long as the JSON type supports Eq, FromJSON and ToJSON, you are fine, and having to accurately determine UserId‘s subtype would never be a problem.

                    That’s not true. At some point your dynamically typed system will make an assumption about the type of value (the UserId in this case) that you’re applying some function to.

                    1. 1

                      For practical purposes, it is true. The system internals do need to resolve the dependency on that interface, either with fancy resolution mechanisms or by attempting to call the function in a shoot-from-the-hip fashion. But it is not common among dynamic language implementations for the programmer to have to specify the type, so it is not a problem in practice.

              1. 40

                The claim is simple: in a static type system, you must declare the shape of data ahead of time, but in a dynamic type system, the type can be, well, dynamic! It sounds self-evident, so much so that Rich Hickey has practically built a speaking career upon its emotional appeal. The only problem is it isn’t true.

                Immediate standing ovation from me.

                I can only assume that oft-made claim is perpetuated from a position of ignorance. Have those people actually tried doing the thing in a statically typed language that they claim a statically typed language cannot do? Here’s an approach that appears all over my Haskell projects:

                  -- requireCheckJsonBody (from Yesod) parses the request body as a
                  -- generic Aeson Value; the lens-aeson combinators then extract
                  -- one nested field as Text.
                  req <- requireCheckJsonBody :: Handler Value
                  let chargeId = req ^. key "data" . key "object" . key "id" . _String
                

                I don’t know (or care) what the exact structure of the JSON coming over the network will look like. I just know it will contain this one field that I care about, and here I pull it out and read it as a string.

                Do I need the entire JSON string to conform to some specific protocol (more specific than JSON itself)? No. I am just parsing it as some JSON (which is represented with the Value type).

                Do I need to parse it into some complex data type? No. I’m just building a string. I am doing — in Haskell — exactly the kind of thing that Clojurists do, but without being smug about it.
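For comparison, here is roughly what the same field extraction looks like in a dynamically-typed language; the payload below is a hypothetical stand-in for a Stripe-style event:

```python
import json

# Hypothetical Stripe-style payload; only the nested "id" field matters to us.
payload = '{"data": {"object": {"id": "ch_123", "amount": 500}}}'

req = json.loads(payload)
charge_id = req["data"]["object"]["id"]
print(charge_id)  # → ch_123
```

The shape of the code is essentially the same as the Haskell version: parse to a generic value, then dig out one field.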


                If we keep the datatype’s constructor private (that is, we don’t export it from the module that defines this type), then the only way to produce a UserId will be to go through its FromJSON parser.

                I’m glad I read this article even for this design tip alone. I had never thought to do it this way; I thought a “smart constructor” was always necessary, even when that seemed like overkill.

                1. 5
                    let chargeId = req ^. key "data" . key "object" . key "id" . _String
                  

                  So what does this piece of code actually do? Get the value under data->object->id as a String? _String is there to prevent name clashes with actual String? Is the magic here that the JSON payload isn’t parsed any more than it needs to be?

                  Stylistically, do you know why Haskell people often seem to decide to use weird operators? Are all alternatives somehow worse?

                  1. 8

                    So what does this piece of code actually do? Get the value under data->object->id as a String?

                    Yeah, exactly. This is how the Stripe API structures their responses. I could have picked a simpler hypothetical example, but I think even this real-world case is simple enough.

                    _String is there to prevent name clashes with actual String?

                    I believe so, yes. This is just a thing in the lens library.

                    Is the magic here that the JSON payload isn’t parsed any more than it needs to be?

                    I believe it is parsed only as much as necessary, yes. I’m not sure there’s any magic happening.

                    Stylistically, do you know why Haskell people often seem to decide to use weird operators? Are all alternatives somehow worse?

                    There are plenty of alternative syntaxes and approaches you could opt for. I happen to find this easy enough to read (and I think you do too, since you worked out exactly what it does), but that is of course subjective.

                    1. 3

                      The syntactic weirdness is mostly due to the fact that the base grammar is very simple, so you end up relying on infix operators to build symbol-heavy DSLs.

                      This is very powerful for making certain kinds of libraries, but it means that lots of Haskell looks a bit “out there” if you haven’t seen code using a specific library before. This tends to be at its worst when doing things like JSON parsing, where the data is very variably shaped.

                      1. 6

                        Although conversely, I think more typical parsing with Aeson (especially the monadic form) is usually very tidy, and easy to read even by people not familiar with Haskell. It’s much less noisy than my lens example.

                        Here’s an example: https://artyom.me/aeson#recordwildcards

                        I think you probably know this, but I am posting here mostly so that other curious onlookers don’t get the wrong idea and think that Haskell is super weird and difficult.

                    2. -5

                      Lol what - you’re describing a benefit of dynamically typed languages with your example. The JSON in your case IS a dynamic object.

                      1. 7

                        I think you are quite confused about what we’re discussing.

                        The discussion is around type systems in programming languages. JSON is just a protocol. The JSON that my example parses is not a “dynamic object”. There is no such thing as a JSON object. JSON is only ever a string. Some data structure can be serialised as a JSON string. A JSON string can potentially be parsed by a programming language into some data structure.

                        The JSON protocol can be parsed by programming languages with dynamic type systems, e.g., Clojure, and the protocol can also be parsed by programming languages with static type systems, e.g., Haskell.

                        My example is taken verbatim from some Haskell systems I’ve written, so it is not “defining a benefit of dynamically typed language”.

                        You’re going to have to go and do a bit more reading, but luckily there is plenty of material online that explains these things. I think your comment is a good example of the kind of confusion the article’s author is trying to address.

                        1. 2

                          I read the article, and I agree somewhat with the parent commenter. It really seems that the author – and perhaps you as well – was comfortable with the idea of potentially declaring parts of the program as just handling lots of values all of a single generic/opaque/relatively-underspecified type, rather than of a variety of richer/more-specified types.

                          That position is not all that far from being comfortable with all values being of a single generic/opaque/relatively-underspecified type. Which is, generally, the most charitable description the really hardcore static-typing folks are willing to give to dynamically-typed languages (i.e., “in Python, all values are of type object”, and that’s only if someone is willing to step up their politeness level a bit from the usual descriptions given).

                          In other words, a cynical reading would be that this feels less like a triumphant declaration of “see, static types can do this!” and more an admission of “yeah, we can do it but only by making parts of our programs effectively dynamically typed”.

                          1. 2

                            I don’t know how you’ve come to this conclusion. Moreover, I don’t understand how your conclusion is related to the argument in the article.

                            In other words, a cynical reading would be that this feels less like a triumphant declaration of “see, static types can do this!” and more an admission of “yeah, we can do it but only by making parts of our programs effectively dynamically typed”.

                            What does this even mean? How did you come up with this idea? When you want to parse some arbitrary JSON into a more concrete type, you can just do that. How does parsing make a program no longer statically typed?

                            1. 2

                              What is the difference between:

                              1. “Everything in this part of the program is of type JSON. We don’t know what the detailed structure of a value of that type is; it might contain a huge variety of things, or not, and we have no way of being sure in advance what they will be”.
                              2. “Everything in this part of the program is of type object. We don’t know what the detailed structure of a value of that type is; it might contain a huge variety of things, or not, and we have no way of being sure in advance what they will be”.

                              The first is what the article did. The second is, well, dynamic typing.

                              I mean, sure, you can argue that you could parse a JSON into some type of equally-generic data structure – a hash table, say – but to usefully work with that you’d want to know things like what keys it’s likely to have, what types the values of those keys will have, and so on, and from the type declaration of JSON you receive absolutely none of that information.

                              In much the same way you can reflect on an object to produce some type of equally-generic data structure – a hash table, say – but to usefully work with that you’d want to know things like… hey, this is sounding familiar!

                              Now do you see what I mean? That’s why I said the cynical view here is the author has just introduced a dynamically-typed section into the program.

                              1. 2

                                Any program which reads some JSON and parses it will be making some assumptions about its structure.

                                This is true of a program written in a dynamically-typed language.

                                This is true of a program written in a statically-typed language.

                                Usually, you will want to parse a string of JSON into some detailed structure, and then use that throughout your system instead of some generic Value type. But you don’t necessarily need to do that. Nothing about writing in a statically-typed programming language forces you to do that. And no, Haskell programmers don’t generally intentionally try to make their programs worse by passing Value types, or generic Map types, or just anything encoded as a String, throughout their program. That would be stupid.

                                1. 3

                                  OK, I’ll do the long explanation.

                                  Many programmers whose primary or sole familiarity is with statically-typed languages assume that in dynamically-typed languages all code must be littered with runtime type checks and assertions. For example, I’ve run into many people who seem to think that all Python code is, or should be, full of:

                                  if isinstance(thing, some_type):
                                      ...
                                  elif isinstance(thing, some_other_type):
                                      ...
                                  

                                  checks in order to avoid ever accidentally performing an operation on a value of the wrong type.

                                  While it is true that you can parse a JSON into a data structure you can then pass around and work with, the only way to meaningfully do so is using your language’s equivalent idiom of

                                  if some_key in parsed_json and isinstance(parsed_json[some_key], some_type):
                                      ...
                                  elif some_other_key in parsed_json and isinstance(parsed_json[some_other_key], some_other_type):
                                      ...
                                  

                                  since you do not know from the original type declaration whether any particular key will be present nor, if it is present, what type the value of that key will have (other than some sort of suitably-generic JSONMember or equivalent).

                                  Which is to say: the only way to effectively work with a value of type JSON is to check it, at runtime, in the same way the stereotypical static-typing advocate thinks all dynamically-typed programmers write all their code. Thus, there is no observable difference, for such a person, between working with a value of type JSON and writing dynamically-typed code.

                                  Now, sure, there are languages which have idioms that make the obsessive checking for members/types etc. shorter and less tedious to write, but the programmer will still be required, at some point, either to write such code or to use a library which provides such code.

                                  Thus, the use of JSON as a catch-all “I don’t know what might be in there” type is not distinguishable from dynamically-typed code, and is effectively introducing a section of dynamically-typed code into the program.

                                  1. 2

                                    I still don’t get what point you’re trying to make. Sorry.

                                    Thus, the use of JSON as a catch-all “I don’t know what might be in there” type is not distinguishable from dynamically-typed code, and is effectively introducing a section of dynamically-typed code into the program.

                                    This now sounds like you’re making an argument about parsing versus validation, and misrepresenting it as static vs dynamic.

                                    1. 2

                                      This now sounds like you’re making an argument about parsing versus validation, and misrepresenting it as static vs dynamic.

                                      For an alternative formulation, consider that people often claim, or want to claim, that in a statically-typed language most of the information about the program’s behavior is encoded in the types. Some people clearly would like a future where all such information is encoded in the types (so that, for example, an add function would not merely have a signature of add(int, int) -> int, but a signature of add(int, int) -> sum of arguments which could be statically verified).

                                      I have complicated thoughts on that – the short hot-take version is those people should read up on what happened to logical positivism – but the point here is a reminder that this article, which was meant to show a way to have nice statically-typed handling of unknown data structures, was able to do so only by dramatically reducing the information being encoded in the types.

                                      1. 2

                                        the point here is a reminder that this article, which was meant to show a way to have nice statically-typed handling of unknown data structures, was able to do so only by dramatically reducing the information being encoded in the types.

                                        …How else would a program know what type the program’s author intends for the arbitrary data to be parsed into? Telepathy?

                                        1. 1

                                          I think at this point it’s pretty clear that there’s nothing I can say or do that will get you to understand the point I’m trying to make, so I’m going to bow out.

                                2. 2

                                  What is the difference between:

                                  1. “Everything in this part of the program is of type JSON. We don’t know what the detailed structure of a value of that type is; it might contain a huge variety of things, or not, and we have no way of being sure in advance what they will be”.
                                  2. “Everything in this part of the program is of type object. We don’t know what the detailed structure of a value of that type is; it might contain a huge variety of things, or not, and we have no way of being sure in advance what they will be”.

                                  The first is what the article did. The second is, well, dynamic typing.

                                  The difference is that in a statically-typed language, you can have other parts of the program where proposition 1. is not the case, but in a dynamically-typed language proposition 2. is true all the time and you can’t do anything about it. No matter what style of typing your language uses, you do have to inspect the parsed JSON at runtime to see if it has the values you expect. But in a statically-typed language, you can do this once, then transform that parsed JSON into another type that you can be sure about the contents of; and then you don’t have to care that this type originally came from JSON in any other part of your program that uses it.

                                  Whereas in a dynamically-typed language you have to remember at all times that one value of type Object happens to represent generic JSON and another value of type Object happens to represent a more specific piece of structured data parsed from that JSON, and if you ever forget which is which the program will just blow up at runtime because you called a function that made incorrect assumptions about the interface its arguments conformed to.

                                  Anyway, even introducing a “generic JSON” type already encodes more useful information than a dynamically-typed language lets you. If you have a JSON type, you might expect it to have some methods like isArray or isObject that you can call on it, and you know that you can’t call methods that pertain to completely different types, like getCenterPoint or getBankAccountRecordsFromBankAccountId. Being able to say that a value is definitely JSON, even if you don’t know anything about that JSON, at least tells you that it’s not a BankAccount or GLSLShaderHandle or any other thing in the vast universe of computing that isn’t JSON.
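The “inspect once at the boundary, then pass around a trusted type” pattern described above can be sketched even in Python with type hints; ChargeEvent and the field names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChargeEvent:
    charge_id: str
    amount: int

def parse_charge(raw: dict) -> ChargeEvent:
    # Inspect the untyped JSON exactly once, at the boundary...
    obj = raw.get("data", {}).get("object", {})
    if not isinstance(obj.get("id"), str) or not isinstance(obj.get("amount"), int):
        raise ValueError("malformed charge payload")
    # ...then hand the rest of the program a value whose shape is known.
    return ChargeEvent(charge_id=obj["id"], amount=obj["amount"])

event = parse_charge({"data": {"object": {"id": "ch_123", "amount": 500}}})
print(event.charge_id, event.amount)  # → ch_123 500
```

Downstream code takes a ChargeEvent, not a dict, so the defensive checks live in one place instead of being scattered everywhere.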

                                  1. 2

                                    Whereas in a dynamically-typed language you have to remember at all times that one value of type Object happens to represent generic JSON and another value of type Object happens to represent a more specific piece of structured data parsed from that JSON, and if you ever forget which is which the program will just blow up at runtime because you called a function that made incorrect assumptions about the interface its arguments conformed to.

                                    This is where the discussion often veers off into strawman territory, though. Because I’ve written code in both dynamically and statically typed languages (and hybrid-ish stuff like dynamically-typed languages with optional type hints), and all the things people say about inevitable imminent doom from someone passing the wrong types of things into functions are, in my experience, just things people say. They don’t correspond to what I’ve actually seen in real programming.

                                    That’s why in one of my comments further down I pointed out that the generic JSON approach used in the article forces the programmer to do what people seem to think all dynamically-typed language programmers do on a daily basis: write incredibly defensive and careful code with tons of runtime checks. My experience is that people who prefer and mostly only know statically-typed languages often write code this way when they’re forced to use a dynamically-typed language or sections of code that are effectively dynamically-typed due to using only very very generic types, but nobody who’s actually comfortable in dynamic typing does that.

                                    And the best literature review I know of on the topic struggled to find any meaningful results for impact of static versus dynamic typing on defect rates. So the horror stories of how things will blow up from someone forgetting what they were supposed to pass into a function are just that: stories, not data, let alone useful data.

                                    Anyway, cards on the table time here.

                                    My personal stance is that I prefer to write code in dynamically-typed languages, and add type hints later on as a belt-and-suspenders approach to go with meaningful tests (though I have a lot of criticism for how Python’s type-hinting and checking tools have evolved, so I don’t use them as much as I otherwise might). I’ve seen too much statically-typed code fall over and burn the instant someone pointed a fuzzer at it to have much faith in the “if it passes type checks, it’s correct” mantra. And while I do enjoy writing the occasional small thing in an ML-ish language and find some of the idioms and patterns of that language family pleasingly elegant, mostly I personally see static typing as a diminishing-returns technique, where beyond a very basic up-front pass or two, the problems that can be prevented by static typing quickly become significantly smaller and/or less likely as the effort required to use the type system to prevent them increases seemingly without bound.

                                    1. 2

                                      This is where the discussion often veers off into strawman territory, though. Because I’ve written code in both dynamically and statically typed languages (and hybrid-ish stuff like dynamically-typed languages with optional type hints), and all the things people say about inevitable imminent doom from someone passing the wrong types of things into functions are, in my experience, just things people say. They don’t correspond to what I’ve actually seen in real programming.

                                      I disagree - passing the wrong types of things into functions is definitely a phenomenon I’ve personally seen (and debugged) in production Ruby, JavaScript, and Python systems I’ve personally worked on.

                                      For instance, I’ve worked on rich frontend JavaScript systems where I was tasked with figuring out why a line of code a.b.c was throwing TypeError but only sometimes. After spending a bunch of time checking back to see where a ultimately came from, I might find that there was some function many frames away from the error in the call stack that sets a from the result of an xhr that isn’t actually guaranteed to always set a key b on a, and that code was not conceptually related to the code where the error happened, so no one thought it was unusual that a.b wasn’t guaranteed, which is how the bug happened.

                                      In a statically typed language, I could convert the JSON that will eventually become a into a specifically-typed value, then pass that down through 10 function calls to where it’s needed, without worrying that I’ll find 10 frames deep that SpecificType randomly doesn’t have a necessary field, because the conversion from the generic to the specific would’ve failed at the conversion site.
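A runtime analog of that parse-at-the-boundary approach can be sketched even in Python, using an invented `Account` type: validate once at the conversion site, and everything downstream can rely on the shape.

```python
import json
from dataclasses import dataclass

@dataclass
class Account:
    id: int
    owner: str

    @classmethod
    def from_json(cls, text):
        raw = json.loads(text)
        # Fail here, at the conversion site, not ten frames deep.
        if not isinstance(raw.get("id"), int) or not isinstance(raw.get("owner"), str):
            raise ValueError("payload does not match Account")
        return cls(id=raw["id"], owner=raw["owner"])

acct = Account.from_json('{"id": 1, "owner": "alice"}')
assert acct.owner == "alice"

try:
    Account.from_json('{"id": 1}')  # missing "owner"
except ValueError:
    pass  # the missing field is caught at the boundary
else:
    raise AssertionError("expected ValueError")
```

In a statically-typed language the compiler additionally guarantees that nothing downstream can forget the conversion; here that discipline is only by convention.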

                                      I am a fan of statically typed languages, and a huge reason for this is because I’ve debugged large codebases in dynamically-typed languages where I didn’t write the original code and had to figure it out by inspection. Static typing definitely makes my experience as a debugger better.

                                      1. 1

                                        definitely a phenomenon I’ve personally seen (and debugged)

                                        Notice I didn’t say “your stories are false”.

                                        Nor did you refute my claims that I’ve seen statically-typed code which passed static type checks fall over and crash when fuzzed.

                                        We each can point to instances where our particular bogeyman has in fact happened. Can either of us generalize usefully from that, though? Could I just use my story to dismiss all static typing approaches as meaningless, because obviously they’re not catching all these huge numbers of bugs that must, by my generalization, be present in absolutely every program every person has ever written in any statically-typed language?

                                        The answer, of course, is no. And so, although you did write a lot of words there, you didn’t write anything that was a useful rebuttal to what I actually said.

                            2. 0

                              I appreciate your condescending tone, but really you should work on your ability to argue. The original post claims that this statement is not true:

                              in a static type system, you must declare the shape of data ahead of time, but in a dynamic type system, the type can be, well, dynamic!

                              You argue, however, that this is indeed not true because you can have “dynamic” data and “static” types, when that’s just a silly loophole. Surely you want data as an object, right? A string of characters without the meta-structure is completely useless in the context of programming.

                              Just because you can have a static type that doesn’t have a strict, full protocol implementation doesn’t mean that you don’t need to declare it beforehand, which renders the original statement absolutely correct: you must declare a static shape of data that matches what your type expects. The claim that types can be “loose” doesn’t invalidate this statement.

                              1. 3

                                I appreciate your condescending tone, but really you should work on your ability to argue.

                                I’m sorry you feel that way. I genuinely did my best to be kind, and to present some absolute truths to you that I had hoped would clear up your confusion. Unfortunately, it looks like you’ve decided to dig your heels in.

                                You argue however that this is indeed not true because you can have “dynamic” data and “static” types when that’s just a silly loophole.

                                I don’t know what you are talking about. Dynamic data? What does this mean? And what silly loophole?

                                In the context of this argument: the JSON being parsed is a string. It’s not static. It’s not dynamic. It’s a string.

                                which renders the original statement absolutely correct - you must declare static shape of data that matches what your type expects.

                                No, you don’t. Again, you have misunderstood me, you have misunderstood the article, and you have misunderstood some pretty basic concepts that are fundamental to constructing a cohesive argument in this debate.

                                The argument is whether or not — in a statically-typed programming language — the JSON string you are parsing needs to conform 1:1 to the structure of some data type you are trying to parse it into.

                                The answer is: No. Both statically-typed and dynamically-typed programming languages can parse arbitrary data.

                                1. 0

                                  The answer is: No. Both statically-typed and dynamically-typed programming languages can parse arbitrary data.

                                  That was never the topic; you can parse arbitrary data with a pen and a piece of toilet paper…

                                  1. 1

                                    That was never the topic

                                    Yes it was. Perhaps you should have actually read the article.

                                    you can parse arbitrary data with a pen and a piece of toilet paper…

                                    At this point, it is clear you are not even trying to add anything constructive. I suggest we leave this discussion here.

                                    1. 0

                                      Oof, I guess there had to be a first failed discussion experience here on Lobsters. I’m sorry, but you are absolutely inept at discussing this. Maybe it’s better if we don’t continue. Cheers.

                            3. 2

                              The grandparent’s code example uses way more than one type.

                          1. 2

                            I’m curious what others think. Is this article off-topic for this site? Not sure the ‘practices’ tag even applies to it. At least it seems to touch the grey line.

                            1. 0

                              It is about how we, as practicing technologists, get paid. We can use this information to judge job offers, negotiate better, and generally perform in the system more effectively.

                              I believe it to be interesting, actionable, and relevant.

                              1. 4

                                I don’t believe anything “general” can be derived from this post; it seems quite specific to the largest companies in places like the SF Bay Area. That’s one system among many.

                                The post also hints at identity politics, which are likewise location-specific. Extrapolating hypotheses and applying them to the rest of the world is a weird (and unfortunately common) case of American Exceptionalism.

                                I think it’s fine for this post to be on this website, but let’s be clear: it has little to do with the typical remuneration opportunities available in, say, Gdańsk.

                                1. 1

                                  I flagged it as ‘offtopic’ because I believe it is none of those (jgt did a great job summarizing it).

                              1. -3

                                This article is obviously wrong in its conclusion. To see how, first recall that while Haskell’s types don’t form a category, we can imagine a “platonic” Hask whose objects are types, whose arrows are functions, and where undefined and friends have been removed.

                                Now, consider that platonic Hask is but one object of Cat. From size issues, it is immediate that Cat cannot be a subcategory of Hask; that is, that Hask cannot describe all of Cat’s objects. It follows that Haskell typeclasses like Functor are not arrows in Cat, but endofunctors on Hask, and that Control.Category does not capture objects in Cat, but the internal category objects in Hask.

                                Finally, pick just about any 2-category, indeed say Cat, and then ask whether Hask can represent it faithfully: The answer is a clear, resounding, and obvious “no”. Going further, pick any ∞-category, say Tomb, and then ask whether Hask can even represent a portion of any object; an ∞-object is like a row of objects, one per level, but Haskell’s type system could only see one single level of types at a time. (This is not just theoretical; I have tried to embed Tomb into Haskell, Idris, and Coq, and each time I am limited by the relatively weak type system’s upper limits.)

                                I wonder why the author believes otherwise.

                                1. 16

                                  This article is obviously wrong in its conclusion.

                                  I think the word “obviously” is relative to the reader’s familiarity with category theory.

                                  For the purposes of the misconception she is addressing, the author’s conclusion — to me — is obviously correct.

                                  You appear to be refuting her argument in some different context. I’m interested to hear your argument (although it would probably be a long time before I learn the CT necessary to properly understand your argument), but switching out the context the argument was made in to refute the entire original argument makes your own argument (to me, at least) appear as an attack against a straw-man.

                                  1. -1

                                    My argument ought to follow readily for any ML, and we can see the scars it causes in the design of many MLs. Idris, for example, uses a hierarchy of universes to avoid universe-inconsistency paradoxes as it climbs this tower that I’m talking about. Haskell and Elm don’t bother trying to climb the tower at all. SML and OCaml have exactly one tier, adding on the module system, and strict rules governing the maps between modules and values.

                                    I’m not removing the word “obviously”. Cat obviously contains Hask, Set, and many other common type systems as objects; the size issues around Cat are usually one of the first things mentioned about it. (Third paragraph in WP and nCat, for example.) And Cat is one of the first categories taught to neophytes, too; for example, in the recent series of programmer-oriented lectures on category theory, Programming with Categories, Cat is the second category defined, after Set.

                                    My refutation is of the article’s title: Yes indeed, dynamic type systems are more open, simply because there are certain sorts of infinite objects that, when we represent them symbolically, still have infinite components. Haskell can represent any finite row of components with multi-parameter typeclasses but that is not sufficient for an ∞-category. By contrast, when we use dynamic type systems, especially object-based systems, our main concern is not about the representation of data, since that is pretty easy, but the representation of structures. For categories, for example, there are many different ways to give the data of a category, depending on what the category should do; we can emphasize the graph-theoretic parts, or the set-theoretic parts, or even transform the category into something like a Chu space.

                                    Finally, if static type systems are so great, why isn’t your metatheory, the one you use for metaphysics and navigating the world, a static type system? Probably because you have some sort of open-world assumption built into the logic that you use for day-to-day reasoning, I imagine. This assumption is the “open” that we are talking about when we talk about how “open” a type system is! Just like how we want a metatheory in our daily lives that is open, we all too often want to represent this same sort of open reasoning in our programming languages, and in order to do that, we have to have ways to either subvert and ignore, or entirely remove, limited static types.

                                    1. 5

                                      My argument ought to follow readily for any ML, and we can see the scars it causes in the design of many MLs. Idris, for example, uses a hierarchy of universes to avoid universe-inconsistency paradoxes as it climbs this tower that I’m talking about.

                                      Could you give examples of useful programs that are inexpressible in a typed way without a hierarchy of universes? Even when doing pure mathematics (which demands much stronger logical foundations than programming), most of the time I can fix a single universe and work with (a tiny part of) what lives in it.

                                      When programming in ML, the feature that I want the most badly is the ability to “carve out” subsets of existing types (e.g., to specify that a list must contain a given element). This would be actually useful for specifying preconditions and postconditions of algorithms (which is ultimately the point to programming, i.e., implementing algorithms). But it does not require hierarchical type universes.
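The usual workaround for carving out such subsets, in ML and elsewhere, is an abstract type with a smart constructor: the only way to build the type is through a function that checks the invariant. A rough Python sketch of the same idea (all names invented here):

```python
from typing import NewType

# A "carved-out" subset of list: non-empty lists. The invariant is
# enforced by the smart constructor, not by the type checker itself.
NonEmptyList = NewType("NonEmptyList", list)

def non_empty(xs):
    if not xs:
        raise ValueError("list must be non-empty")
    return NonEmptyList(xs)

def head(xs):
    # Safe: the constructor guaranteed at least one element.
    return xs[0]

assert head(non_empty([1, 2, 3])) == 1
```

This buys documentation and a single checkpoint, but unlike a true refinement type, nothing stops code from fabricating a `NonEmptyList` without going through `non_empty`.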

                                      Yes indeed, dynamic type systems are more open, simply because there are certain sorts of infinite objects that, when we represent them symbolically, still have infinite components.

                                      You seem to be confusing symbols with their denotation. Symbols are finite out of necessity, but you can use them to denote infinite objects just fine, whether you use a type system or not.

                                      Haskell can represent any finite row of components with multi-parameter typeclasses but that is not sufficient for an ∞-category.

                                      The arity of a multiparameter type class has absolutely nothing to do with n-categories. But, in any case, why is Haskell supposed to represent ∞-categories in its type system? It is a general-purpose programming language, not a foundation of mathematics.

                                      Finally, if static type systems are so great, why isn’t your metatheory, the one you use for metaphysics and navigating the world, a static type system? Probably because you have some sort of open-world assumption built into the logic that you use for day-to-day reasoning, I imagine.

                                      Every nominal type definition literally brings a new type of thing into existence. What exactly is this, if not dealing with an open world?

                                      And, by the way, my metatheory is ML.

                                      1. 3

                                        Can any programming language usefully represent these infinite objects? Is that ever useful?

                                        Surely you can just build something with opaque objects within Haskell if the type system is too restrictive?

                                    2. 9

                                      I wonder why the author believes otherwise.

                                      Probably because the author isn’t comparing Hask to all of category theory. They’re comparing it to the unitype, which cannot faithfully represent anything at all.

                                      1. -5

                                        As long as we are using “probably” to speak for the author, I think that they probably are not familiar enough with type theory to understand that there are size issues inherent to formalizing type systems.

                                        Please reread the original article; they do not talk about “unityping” or Bob Harper’s view on type theory of languages which don’t know the types of every value.

                                        1. 26

                                          The author is Alexis King, who is a PLT researcher, an expert in both Haskell and Racket and has discussed category theory in depth on Twitter. I’d be shocked if she didn’t understand the ramifications here and was intentionally simplifying things for her target audience.

                                          1. -1

                                            Sure, and I am just a musician. Obviously, therefore, the author is right.

                                            Anyway, they didn’t talk about size issues, nor did they talk about “unitype” ideas, in the article. I am not really fond of guessing what people are talking about. I am happy to throw my entire “probably” paragraph into the trash, as I do not particularly value it.

                                      2. 4

                                        I don’t know enough category theory to follow your argument precisely, but I’d argue that the category-theoretic perspective isn’t relevant in this discussion. How much of category theory you can model using Haskell’s type system is totally unrelated to how much you can model with a program written in Haskell. I guess I don’t even need to make this argument, but still: whatever code you were planning to write in JavaScript can be mechanically translated by a Haskell beginner, line by line, to a Haskell program that simply uses JSON.Value everywhere.
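That “one type everywhere” translation can be mimicked in Python, too, with a single catch-all alias playing the role of JSON.Value (a deliberately toy sketch; the names are made up):

```python
from typing import Any

# One universal type, standing in for JSON.Value: every function
# takes it and returns it, so the checker has nothing left to reject.
Value = Any

def get_name(v: Value) -> Value:
    return v["user"]["name"]  # type-checks trivially; safety is all at runtime

assert get_name({"user": {"name": "bob"}}) == "bob"
```

The point being that a statically-typed language can always opt down to this level; the question is only how much structure you choose to encode above it.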

                                        I believe the parts of category theory you can’t model in Haskell’s types corresponds to the kinds of relationships you can’t get the type checker to enforce for you. And you go into the language knowing you can’t model everything in types, so that’s no news. What’s relevant is how much you can model, and whether that stuff helps you write code that doesn’t ruin people’s lives and put bread on the table. As a full time Haskeller for a long time, my opinion is that the answer is “yes”.

                                        I think the friction comes from the desire to view the language as some sort of deity to which you can describe your most intricate thoughts, and which will then tell you the meaning of life. For me, once I stopped treating GHC (Haskell’s flagship compiler) as such and started viewing it as a toolbox for writing ad-hoc support structures to strengthen my architecture here and there, it all fell into place.

                                        1. 2

                                          I’m going to quote some folks anonymously from IRC, as I think that they are more eloquent than I am about this. I will say, in my own words, that your post could have “Haskell” replaced with basically any other language with a type system, and the same argument would go through. This suggests that the discussion is not at all about Haskell in particular, but about any language with a type system. I would encourage you to reconsider my argument with that framing.

                                          (All quoted are experts in both Python and Haskell. Lightly edited for readability.)

                                          Maybe another way of making the point is to say that the job of a type system is to reduce the number of programs you can write, and proponents of a type system will argue that enough of the reduction comes from losing stupid/useless/broken programs that it’s worth it.

                                          The problem with this argument and the statement [IRC user] just made is the same, I think. It depends. Specifically, it depends on whether one is trying to use the type system as a mathematical object, or as a practical programming tool. And further, on how good your particular group of programmers is with their practical programming tools on the particular day they write your particular program. With a mathematical system, you can produce something correct and prove it; with a practical programming tool, you can produce something correct and run it.

                                      1. 3

                                        The following is with the assumption that this is some side-project with potential commercial ambitions.

                                        I think going “deep in a technical rabbit hole” is a problem that almost all side-project programmers face, and I think it’s because fiddling with some technical challenge is within our comfort zone, whereas picking up the phone and asking a stranger to hand over their dollars to use your project (which you probably think is terrible and not worthy of demanding fees) is daunting, and can potentially be a huge blow to your ego.

                                        When a compiler says “whoops, I can’t compile this without the locks of hair from a few dozen yaks”, it makes you grumble.

                                        When a potential customer says “this looks nice, but I don’t care about it enough to pay you for it”, it is truly gut-wrenching.

                                        This was (and is) my challenge too, and I don’t have any handy secret to overcome it. You just need to push through that pain. Keep picking up the phone and hustling until it sucks less, and people say “yeah, that sounds great. Let’s do this.”

                                        Having external accountability (especially from people waiting to send me money) is a huge motivator for me to just ship stuff, regardless of how gross the code is.

                                        1. 7

                                          I find it curious that dotfiles are among the things listed by the author that don’t scale. I’ll quote Steve Losh because he says it better than I ever could:

                                          I can count on my balls how many times I’ve sat down to program at someone else’s computer in the last five years. It just never happens.

                                          1. 3

                                            I’ve done it. Either because it’s just Pair Programming, or because of a client’s Misplaced Paranoia.

                                            1. 2

                                              I’m equally surprised to see static blog generators there. Sure, some generators have slow build times for large sites… and many others don’t. If anything, they scale better than CMSes: you only build the blog every so often, it needs no maintenance, and it can be served to a large crowd of visitors from free or dirt-cheap hosting.

                                              I generally agree with the idea that we should be solving problems for everyone whenever possible, but some things just can’t have universal “good” defaults. Highly domain-specific example: MuseScore has no default shortcut for “toggle concert pitch”. For people writing for woodwinds, having one is a real time-saver; everyone else usually has no idea what on earth “concert pitch” is. People writing different kinds of music can benefit a lot from simpler shortcuts for their common tasks. I bet the same goes for many other applications: if the default shortcut for a thing you do every minute is Ctrl-Alt-Meta-Escape-Super-F14, you should rather change it and add it to your dotfiles than put up with it, or argue with people whose needs are different that they should cater to yours.

                                              1. 1

                                                I’m equally surprised to see static blog generators there.

                                                This surprised me too, until I read a comment where the author explained their rationale:

                                                If the problem you’re solving is “I want to have a website to post my articles on”, then I think the solution should probably not involve git, local builds from the terminal, or CNAME configs to get a custom domain.

                                              2. 1

                                                Agreed. I mean, it’s not that I’m a clueless fool when I work with other people’s computers. It may take a bit longer, but I don’t see the problem.

                                                This is not really like handing your hammer to another person on a construction site. This is more like having to put on their shoes and trousers, because the hammer is not the problem.

                                                1. 1

                                                  Ya, that seemed odd to me. I have a dotfiles directory where I store configurations for the software I use most often. There has never been a time during machine setup or server configuration when running setup-dotfiles.sh has not given me the exact environment I like, customizations and all. It’s not like Vim is software that introduces a lot of breaking changes.

                                                  1. 1

                                                    I’ve had to jump on a cow-orker’s workstation to help diagnose a problem and man, is it painful as nothing works like I expect it to. And the customizations I have aren’t that many (in fact, I tend to remove default settings in bash), but I’ve been using said settings for over 20 years now.

                                                    The problem I see with the author’s approach is either fighting for change (what if they reject it?) or just living with the ever-changing set of defaults (which in my experience destroys any hope of a good long-term workflow developing).

                                                  1. 1

                                                    I used to rely on stuff like history | grep, but now this problem is completely solved for me with fzf.

                                                    1. 1

                                                      Speaking of which, is there a way to make fzf’s Alt+c also search directories higher up the tree, or a reason not to do that?

                                                      1. 1

                                                        Not sure if there’s a grand philosophical reason, but I know I’d be annoyed if every search went through my entire file system. You could probably do this if you wanted to — perhaps with an alias and a subshell — but I like it the way it is.

                                                    1. 2

                                                      I don’t really buy the argument. That isn’t to say I disagree and think JSON (or some other representation) is better, but I don’t think the syntax matters all that much. I also don’t think the invented collection encodings or $type prefix notations are necessary in either case.

                                                      I think the two encodings are roughly equivalent, and I can’t see a technical reason why one “beats” the other. You might prefer one over the other for some aesthetic reason, but that’s not the argument made in the article. If a colleague were to present this argument to me, I’d suggest they just write [de]serialisers for both encodings.
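To illustrate, writing [de]serialisers for both encodings of the same record is only a few lines in Python (the record shape here is invented for the example, and string-valued fields are assumed for simplicity):

```python
import json
import xml.etree.ElementTree as ET

record = {"id": "42", "name": "widget"}

def to_json(rec):
    return json.dumps(rec)

def from_json(text):
    return json.loads(text)

def to_xml(rec):
    # Encode each field as a child element of <record>.
    root = ET.Element("record")
    for key, value in rec.items():
        ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")

def from_xml(text):
    return {child.tag: child.text for child in ET.fromstring(text)}

# Both encodings round-trip the same data.
assert from_json(to_json(record)) == record
assert from_xml(to_xml(record)) == record
```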

                                                      1. 1

                                                        XML, when done right, isn’t bad. The problem is people who just use some sort of ad-hoc XML (like in the example with the attributes) without a DTD to validate against.

                                                        And I suppose there’s JSON Schema but I’ve never seen anyone use it so far.

                                                        1. 1

                                                          I’ve found XML nicer, and also better to parse: you get SAX parsing, which gives you another tool to keep memory usage and latencies predictable. Also, with a little thought, you can feed your chunked HTTP stream into your zlib decompressor and populate your internal data structures directly.

                                                          Of course this really shines for large payloads.
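As a sketch of that streaming style in Python, the standard library’s `xml.etree.ElementTree.iterparse` processes elements as they are completed instead of materialising the whole document (the payload here is simulated in memory, but the same loop works on a file or socket stream):

```python
import io
import xml.etree.ElementTree as ET

# Simulate a large streamed payload.
payload = "<items>" + "".join(f"<item>{i}</item>" for i in range(1000)) + "</items>"

total = 0
# iterparse yields each element as its end tag arrives, keeping memory flat.
for event, elem in ET.iterparse(io.StringIO(payload), events=("end",)):
    if elem.tag == "item":
        total += int(elem.text)
        elem.clear()  # free the element once consumed

assert total == sum(range(1000))
```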

                                                      1. 5

                                                        The clickbait refers to “Macintosh HD”, an anachronistic default name for the internal drive.

                                                        1. 2

                                                          I think “clickbait” is an overly-harsh criticism here. What might you have otherwise entitled the article?

                                                        1. 7

                                                          This was my home office for most of last year.

                                                          However, I have now decided to return to a life of living out of a suitcase and slowly travelling the world, writing Haskell on a Macintosh Book Air. I’m looking forward to when Fruit Company releases the new machines with the old keyboards, as I spilled a glass of exquisite Georgian red wine in my current machine and typing has become somewhat less ergonomic.

                                                          1. 1

                                                            Living the dream. I’d love to hear more about how you do this, since doing the same thing someday is on my bucket list.

                                                            1. 8

                                                              I started working remotely six years ago, and was working partly remotely for about 18 months prior. I always knew I wanted to work from home, and initially it was something I requested as part of the salary negotiation process. Us programmers have plenty of leverage in that regard.

                                                              All through my freelance/consulting years, I had to convince companies to let me work from home, and would pitch it as them getting more documentation — because communication should be mostly asynchronous, in written form — and also buying my services at a lower rate, since I don’t have to pay the extortionate costs of living in, e.g. London. Everyone wins.

                                                              I could have happily continued living in Poland by the beach, but visa issues with my partner (Russian) meant we would potentially be separated for a couple of months. Rather than endure the bitter winter by the Baltic Sea alone, in February and March of 2018 we went and lived in Thailand. I then realised there’s no good reason for us not to continue floating around different countries that are more affordable and have a better climate. We went sailing in Greece in May, lived in Belgrade in June, Warsaw in July (not very original I guess; I’m half Polish), Ukraine for three months, and then a combination of London and Russia towards the end of the year for the holidays. At the end of 2018 I quit working for other people, and I have been focusing on my own projects since.

                                                              Travel plans for this year so far include Russia, Thailand, Sri Lanka, Armenia (or perhaps Georgia), and Ukraine. My partner is a junior web developer, and she is now looking for her first remote job. All of my employees also work from wherever they want. If you’re curious about writing Haskell specifically: I don’t think anyone was going to hire me to do this. I had to start my own businesses, get funding, and build my own team of Haskell people.

                                                              Happy to answer most other questions you might have.

                                                              1. 1

                                                                Sounds amazing, your own Haskell business.

                                                                Care to elaborate more on what application domain you use Haskell for, if possible?

                                                                1. 3

                                                                  All three businesses are web applications. One is in sales lead generation, one is in price comparison, and my primary focus is a marketplace product for the reinsurance industry. I’m using similar tech in all three: Yesod, PostgreSQL, Redis, NixOS, AWS.

                                                                  1. 2

                                                                    NixOS fits so well with Haskell ethos.

                                                                    Very interesting to hear all this, thanks.

                                                          1. 2

                                                            Flying to London today so I can get my visa to Russia, and also my international driving permit for my moped trip in Thailand.

                                                            Oh, also working at the Google for Startups Campus in Shoreditch for the next week or so.

                                                            1. 1

                                                              I’m prepping to do a sabbatical, most of which I’ll spend in London. I know you also write Elm, could I hit you up to ask you about what interesting stuff is going on in the city, which meetups are good, etc?

                                                              1. 1

                                                                To be honest, I don’t really know. I think the Haskell meetups are quite good, but otherwise although my businesses are based in London and legally I too am based here, I probably only spend a couple of weeks in London per year.

                                                            1. 7

                                                              I just read this page and the four linked blog posts. There isn’t very much consistency in the detail about what constitutes simple/boring Haskell, short of an admonition against so-called “fancy types” enabled by some GHC extensions and a bit of name-dropping about certain packages. The motivation seems to be aimed at making Haskell development in industry have less variance so that it can grow more quickly, in the name of “inclusion”. There’s also some odd animosity toward researchers’ tendency to use Haskell as a research tool, as though somehow this causes industry practitioners to reach for fancy types over simple/boring Haskell.

                                                              I agree with the idea of preferring simple code over complex code, but I find myself repelled by the arguments in all four posts and the “isn’t it obvious” attitude of the landing page.

                                                              The authors’ message might be better received if they framed this in terms of the tradeoffs between examples of simple/boring implementations and their fancy-types counterparts. Right now it reads like a crusade and smacks of more unnecessary division in the Haskell community.

                                                              1. 4

                                                                Right now it reads like a crusade

                                                                I think any of these kinds of software development “manifesto” documents read like a crusade. I’m all for shipping boring simple Haskell code, but I agree a better approach would just be a bunch of “cookbook” style articles demonstrating building simple things with simple Haskell.

                                                              1. 12

                                                                I haven’t worked with a quality focused team since ~2009, so it has nothing to do with weakness, and turning this into a moral choice that someone is making seems misplaced to me. I think it’s a capitalist choice, and yet again capitalism optimizing for nothing useful.

                                                                The worse-is-better theory winning is not some victory lap for C; I believe it’s just down to the fact that consumers/clients have no other choices, and when they do, the cost and effort of switching is an almost impossible hurdle. For me, switching to an iPhone, or for my wife, switching to Android, involves an almost insurmountable amount of unknown complexity.

                                                                1. 2

                                                                  I don’t think the article really states it as a moral choice, but rather as an emergent property of software development as it is practiced.

                                                                  1. 1

                                                                    I’m sure there’s a philosophical name for this. It’s a practice that results in morally problematic results, despite that practice not being a deliberate moral choice. Sort of like how capitalism as currently practiced fills the ocean with microplastic garbage despite nobody making a choice to do that.

                                                                    1. 5

                                                                      Hot take: most “morality” is just a matter of aesthetics. Billions of people would presumably rather be alive than not existing because a non-capitalist system is grossly inefficient at developing the supporting tech and markets for mass agriculture. Other people would prefer that those folks not exist if it meant prettier beachfront property, or that their favorite fish was still alive.

                                                                      Anyways, that’s well off-topic though I’m happy to continue the conversation in PMs. :)

                                                                      1. 8

                                                                        Just as “software development” is a pretty broad term, “capitalism” is a pretty broad term. I wouldn’t advocate eliminating capitalism any more than I would advocate eliminating software development. The “as currently practiced” is where the interesting discussion lies.

                                                                      2. 3

                                                                        There’s an economic name for it - externality - though economics is emphatically not philosophy.

                                                                        1. 1

                                                                          Sort of like how capitalism as currently practiced fills the ocean with microplastic garbage despite nobody making a choice to do that.

                                                                          This is a classic False Cause logical fallacy.

                                                                          Capitalism is not the cause of microplastic pollution. The production of microplastics and subsequent failure to safely dispose of microplastics is the cause of microplastic pollution.

                                                                          Microplastics produced in some centrally-planned wealth-redistribution economy would be just as harmful to the environment as microplastics produced in a Capitalist economy (although the slaves in the gulags producing those microplastics would be having less of a fun time).

                                                                          Further example:

                                                                          • Chlorofluorocarbons were produced in Capitalist economies.
                                                                          • Scientists discovered that chlorofluorocarbons are poking a hole in the ozone layer and giving a bunch of Australians skin cancer.
                                                                          • People in Capitalist economies then decided that we should not allow further use of chlorofluorocarbons.
                                                                          1. 3

                                                                            Again, the key phrase here is not “capitalism”, but “as currently practiced”. Capitalism doesn’t cause microplastics, but it doesn’t stop them either. In other words microplastics are “an emergent property of capitalism as it is practiced”. You could practice it differently and not produce microplastics, but apparently the feedback mechanism between the bad result (microplastics/bloated software) and the choices (using huge amounts of disposable plastics/using huge amounts of software abstractions) is not sufficient to produce a better result. (Of course assuming one thinks the result is bad to begin with.)

                                                                            1. 0

                                                                              Of course assuming one thinks the result is bad to begin with.

                                                                              That is really the heart of the matter, as far as I see it. In contemporary discourse, capitalism as a values system (versus capitalism as a set of observations about markets) does not have a peer, does not have a countervailing force.

                                                                              I’m sure there’s a philosophical name for this

                                                                              @leeg brought this up as well, but “negative externality” is in the ballpark of what you are looking for. An externality is simply some effect on a third party whose value is not accounted for within the system. Environmental pollution is a great example of a negative externality. Many current market structures do not penalize pollution at a level commensurate with the damage caused to other parties. Education is an example of a positive externality: the teachers and administrators in schools rarely achieve a monetary reward commensurate with the long-term societal and economic impact of the education they have provided.

                                                                              Societies attempt to counteract these externalities to some degree (regulations and fines for pollution, tax exemptions for education), and much ink is spilled in policy debates as to whether or not the magnitudes are appropriate.

                                                                              Bringing back my first statement: capitalism (that is, economic impact) is not only a values system, but the only values system that is assumed to be shared in contemporary discourse. This results in a lot of roundabout arguments, in pursuit of other values, being made in economic terms.

                                                                              What people really wish to convey, what really motivates people, may be something else. However, they cannot rely on those values being shared, and resort to squishy, centrist, technocratic studies and statistics that hide their actual values, in hopes other people will at least share in the appeal to this-or-that economic indicator (GDP, CPI, measures of inequality, home ownership rates, savings rates, debt levels, trade imbalances, unemployment, et cetera). This technocratic discussion fails to resolve the actual difference in values, and causes conflict-averse people to tune it out entirely, thus accepting the status quo (“capitalism”). I lament this, despite being very centrist and technocratically-inclined myself.

                                                                              Rambling further would eclipse the scope of what is appropriate for a post on Lobsters, so I will chuck it your way in a DM.

                                                                              1. -1

                                                                                Capitalism doesn’t cause microplastics, but it doesn’t stop them either.

                                                                                I’m not sure I understand what you’re trying to say here. How is Capitalism related to the production of microplastics? Are you saying that in a better form of Capitalism, the price of the externality of microplastic pollution would be costed into its production, thus making microplastics not financially viable?

                                                                                I’m also not sure microplastic pollution is strongly analogous to bloated software.

                                                                                1. 3

                                                                                  I apparently chose an explosive analogy here, and now I’m fascinated by all the stuff that’s coming back.

                                                                                  But let me just try again with something less loaded…how about transportation?

                                                                                  The bad effects in the essay (wasted resources, bugs, slowness, inelegance) are a result of how we do software development. Assume for argument that most people don’t choose waste, bugs, slowness, and inelegance deliberately. Nevertheless, that’s what we get. It’s an “emergent property” of all the little choices of how we do it.

                                                                                  Similarly, most people—I hope certainly the engineers involved—don’t choose to have the NOx pollution, two-hour commutes, suburban sprawl, unwalkable communities, and visual blight that result from how we do transportation. It just happens because of how we do it.

                                                                                  So we’re all actively participating in making choices that cause an outcome that a lot of participants don’t like.

                                                                                  My point was just that there are lots of things like this, not just software development. So I figure this sort of problem must have a name.

                                                                                  (And yes, this means writing an essay about how awful the result is doesn’t do anything to fix it, because the feedback from result to cause is very weak.)

                                                                                  1. 2

                                                                                    So I figure this sort of problem must have a name.

                                                                                    Engineering. Engineering is trading off short commutes for private land. Engineering is a system of cars that get every individual acting alone where they need to go, even though getting all people at the same destinations from the same origin really calls for mass transit. Engineering is families with kids making different living and thus commuting arrangements than single people. These are all tradeoffs.

                                                                                    The ideal keyboard takes no space and has a key for everything you want to type from letters to paragraphs. Everything else is engineering. The ideal city has zero school, work, leisure, and shopping commutes for everybody. What we have instead is engineering.

                                                                                    The ideal bus line goes to every possible destination and stops there. It also takes no time to complete a full circuit. We compromise, and instead have buses that work for some cities and really don’t for others.

                                                                        1. 1

                                                                          Overworked, underpaid (and proud of it!), and stacked almost exclusively with deeply-PC/‘woke’ folk. I’ll, uh, pass.

                                                                          1. 2

                                                                            stacked almost exclusively with deeply-PC/‘woke’ folk. I’ll, uh, pass.

                                                                            I’m curious; how do you know this? Is it just from their “Diversity & Inclusion” mission statement?

                                                                            1. 2

                                                                              That, casual conversation with some of their older Ops folk, and a chat with Syd himself from ‘back in the day’.

                                                                              1. 9

                                                                                Thanks. It’s definitely a red flag, which is unfortunate because at least superficially, “social justice” sounds like a good thing. Unfortunately, there’s a large overlap between that and hateful tribalism. For example, from this job ad

                                                                                with the goal to change the IT industry from a white, bearded clump to something that’s a little less monochrome and have a few more x-chromosomes

                                                                                Being genuinely inclusive is good and important. Casting aspersions on an entire group of people (their own employees, no less!) for their genitalia and/or skin colour is never ok. For some reason this is given a pass when it comes from proponents of the correct political ideology.

                                                                                1. 1

                                                                                  Wouldn’t “with the goal to make the IT industry more diverse” amount to the same? That’s what I understand from this quote, the only difference being that the quote clearly states the current state of affairs and what would make it more diverse.

                                                                                  1. 5

                                                                                    I find it totally offensive for myself or any of my peers to be described as a “white, bearded clump”.

                                                                                  2. 1

                                                                                    I am curious to understand why you immediately red-flagged this after law’s statement and rejected the massive evidence (https://www.glassdoor.co.uk/Overview/Working-at-GitLab-EI_IE1296544.11,17.htm) – at least compared to a one-line statement – that Gitlab is, at the very least, a nice place to work in.

                                                                                    1. 2

                                                                                      Good question. I think it’s because it’s far riskier for one’s own political capital or reputation to say something critical, and I think this is especially true of criticising political correctness. Nobody ever got fired for saying “oh yeah, it’s great. I am happy, everyone is happy.”

                                                                                      Or perhaps looking at it another way: a “woke” culture in a company is a good thing to some people. There are many people who are that flavour of political extremist, and would feel welcome among their own. The original observation was indeed “this is a woke company”, and not “this is a bad company.”

                                                                                      Glassdoor are not letting me read reviews without an account, but if the company were an echo chamber (likely, since I don’t believe the diversity movement is interested in diversity of opinion), then what’s to correct for all the positive reviews coming from people who 1. want to save their own skin, and/or 2. are quite comfortable with political correctness?

                                                                                      1. 2

                                                                                        How is law risking anything by saying what he said – or anything for that matter – under a nickname?

                                                                                        1. 2

                                                                                          I don’t know about this person specifically, but it’s not uncommon to be able to deduce who a person is by combing through their post history, and possibly cross-referencing it against content they’ve authored in other online communities.

                                                                                          1. 3

                                                                                            I don’t want to be impolite by insisting (sorry if I am), but you actually trusted this person’s single-line statement rather than publicly available, verified, anonymous feedback.

                                                                                            1. 3

                                                                                              Don’t worry, I don’t think you’ve been impolite. It’s totally fair to ask.

                                                                                              You are right, I drew a likely (in my mind) conclusion from a single source over an entire repository of reviews. I’ve presented my justification for this; perhaps it’s not entirely legitimate, and it’s doubtless shaped by some of my own experiences and biases.

                                                                                              I wouldn’t say I “trust” the above anecdote comprehensively, but it’s certainly a signal. I could see a motive for someone to say some company is “bad”, but I don’t understand why someone would describe a company’s culture as “woke” if it isn’t.

                                                                              2. 1

                                                                                Dodged a bullet, thanks.

                                                                                1. 1

                                                                                  I was shocked to see how much less I’d make at Gitlab - my pay would be literally half what it is right now. They index their remote pay to cost of living where you live, and in the United States it’s indexed for an entire state. In my home state, cost of living varies WIDELY based on what part of the state you are in, and this acted much to my detriment.

                                                                                  I understand and appreciate the difficulty of figuring out what to pay remote workers in a global workforce, but I definitely think Gitlab hasn’t solved it yet. I’m also grateful that their salary transparency after the introductory interview meant that we weren’t wasting each other’s time - I wish more companies did this.

                                                                              1. 2

                                                                                So far this weekend I’ve been stuck at the border between Poland and Ukraine for 9 hours. The Polish authorities didn’t properly register my girlfriend’s visa in the system, so now it looks like she’s overstayed. They separated us into two buildings. We’ve been here 9 hours with no food, water, or sleep. They made us get off the train (no refund of course) and walk through the snow with all our luggage at midnight.

                                                                                We need to be at Boryspil airport in just under 24 hours, otherwise we miss our trip to Turkey for NYE that I’ve already spent thousands on. The guards said we can take a bus from here to just across the border, in some middle of nowhere town called Yahodyn. From there I’m hoping we can get to Kyiv. Who fucking knows at this point though.

                                                                                EDIT: Looks like we’re getting through the border now. After 11 hours.

                                                                                1. 2

                                                                                  I’m leaving my apartment this week. Taking a train to Warsaw, and then an overnight train to Kyiv. Then I’ll fly to Kayseri in Turkey for a week of snowboarding on a volcano.

                                                                                  1. 8

                                                                                    I think instead of trying to shoehorn bash into make, I will allow make to continue being make and delegate bash scripting to bash scripts.

                                                                                    1. 0

                                                                                      I think instead of trying to shoehorn bash into make,

                                                                                      How is this shoehorning bash into make? Every recipe used is under 3 commands. I don’t see what you would gain by splitting out parts of this makefile into a bash script. If anything, it would be more difficult to read because the relevant code would no longer be integrated.

                                                                                      1. 2

                                                                                        How is this shoehorning bash into make?

                                                                                        The article recommends setting a specific shell, and some specific shell flags. This is just to make it easier to write Bash inside a Makefile.

                                                                                        I don’t see what you would gain by splitting out parts of this makefile into a bash script. If anything, it would be more difficult to read because the relevant code would no longer be integrated.

                                                                                        I think a Makefile is useful as an ad-hoc command runner with a bundled directed acyclic graph resolver. I also think Bash is an excellent tool, though it comes with a plethora of caveats (enough so that I generally don’t write shell scripts without ShellCheck [and how would this work in a Makefile?]), and it seems unwise to introduce further caveats by having to account for the Makefile’s exceptional parsing rules. You think they’re essentially the same? Think again. Unless you’re suggesting that three commands ought be enough for anybody, in which case, I’m not sure what anyone can learn from this article except that with enough fettling you can make a tool do something it’s not exactly designed to in some trivial cases.

                                                                                        My approach basically is to keep everything as boring and simple as possible, so I can focus on making money with my business. Adding complexity to a Makefile is unlikely to be a wise investment for my business, IMO.
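                                                                                        That split can be sketched like this (the script name and contents are hypothetical); the Makefile keeps a one-line recipe and the Bash lives in a file ShellCheck can lint directly:

```shell
# Hypothetical recipe in the Makefile -- just a dispatcher:
#
#   deploy:
#           ./scripts/deploy.sh "$(ENV)"
#
# The logic itself goes in a plain Bash file that shellcheck(1)
# can be pointed at as-is, with no Makefile parsing rules in the way.
mkdir -p scripts
printf '%s\n' \
  '#!/usr/bin/env bash' \
  'set -euo pipefail' \
  'env="${1:?usage: deploy.sh <environment>}"' \
  'echo "deploying to ${env}"' \
  > scripts/deploy.sh
chmod +x scripts/deploy.sh

./scripts/deploy.sh staging        # prints "deploying to staging"
# shellcheck scripts/deploy.sh     # lints the script on its own
```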

                                                                                        1. 1

                                                                                          Unless you’re suggesting that three commands ought be enough for anybody

                                                                                          But… they really are enough for anyone. In order to use the “bundled directed acyclic graph resolver,” one needs to add the graph information. Usually recipes only need a few commands to transform the inputs into outputs. More than this and I agree that it’s probably better to split it off into its own script. However, the stuff that OP is adding also makes it easier to do inline bash programs. For example, .ONESHELL allows you to do something like

                                                                                          foo: bar baz
                                                                                                  if [ -z "$(SOME_FLAG)" ]; then
                                                                                                          frobnicate $^ -o $@
                                                                                                  fi
                                                                                          

                                                                                          which would otherwise require backslashes at the end of each line, because without .ONESHELL make runs every line of a recipe in a separate shell. Personally, I have no problem with bash-isms. At some point I remember needing a bash feature in my Makefile, but I can’t remember the context at the moment.
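For contrast, here is a sketch of the same rule written without .ONESHELL (SOME_FLAG and frobnicate are stand-ins, as above). Since each recipe line runs in its own shell, the multi-line construct has to be stitched into one logical line with backslash-newline continuations:

```makefile
# Without .ONESHELL, every recipe line gets its own shell, so the
# if/fi must be joined into a single command with trailing backslashes
# (note the semicolon before `fi`, required once the lines are joined):
foo: bar baz
	if [ -z "$(SOME_FLAG)" ]; then \
		frobnicate $^ -o $@; \
	fi
```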

                                                                                          1. 1

                                                                                            In order to use the “bundled directed acyclic graph resolver,” one needs to add the graph information

                                                                                            Yeah. That comes from the part that make gives you, i.e., where you declare your targets and their dependencies. It doesn’t come from the recipe.

                                                                                            However, the stuff that OP is adding also makes it easier to do inline bash programs.

                                                                                            Yes. I know. This is the approach I have been disagreeing with the entire time. I do not think it’s a good idea to write Bash scripts inside a Makefile. I’ll ask you again: How do you run ShellCheck on a Makefile?

                                                                                            I have no problem with bash-isms

                                                                                            I also don’t have a problem with bashisms, but when people say “bashisms”, they don’t mean using Bash in a Makefile. They mean writing shell scripts specific to Bash when they ought to have written a more portable version, i.e., using features defined by POSIX.
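To illustrate (a hypothetical example, not from the thread): Bash’s `[[ $name == foo* ]]` pattern test is a bashism that fails under dash and other plain POSIX shells, while a `case` statement does the same job portably:

```shell
#!/bin/sh
# Hypothetical contrast: the Bash-only test
#   [[ $name == foo* ]]
# breaks under dash/ash, while a POSIX `case` pattern is portable.
name="foobar"
case $name in
    foo*) echo "matches" ;;   # taken for name=foobar
    *)    echo "no match" ;;
esac
```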

                                                                                            1. 1

                                                                                              I’ll ask you again: How do you run ShellCheck on a Makefile?

                                                                                              You don’t. But I don’t really see the problem with that. It’s not like there’s hostile user input. At worst, the directory make is running in has spaces in it (or some weird character).

                                                                                              1. 1

                                                                                                It’s not like there’s hostile user input

                                                                                                Spoiler alert: The hostile user is you.


                                                                                                ShellCheck is not a user-input validation library. It’s an invaluable static-analysis tool for shell scripts, and it will catch things you hadn’t thought of. I can only ascribe thinking otherwise to hubris.
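To make that concrete, here is a minimal sketch (variable names invented) of the classic word-splitting bug that ShellCheck flags as SC2086 in a script, but which nothing lints for inside a Makefile recipe:

```shell
#!/bin/sh
# An unquoted variable undergoes word splitting. ShellCheck reports
# this as SC2086 ("Double quote to prevent globbing and word
# splitting"); embedded in a Makefile recipe, no linter sees it.
path="my project"
set -- $path                 # unquoted: splits into two arguments
echo "unquoted: $# args"     # prints "unquoted: 2 args"
set -- "$path"               # quoted: stays one argument
echo "quoted: $# args"       # prints "quoted: 1 args"
```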