1. 3

    Follow the Unix philosophy

    Counter-argument: $ man ls

    1. 2

      How so?

      Do you mean as in $ man ls gives you a lot of options? In which case I’d say that ls doesn’t follow the Unix philosophy.

    1. 3

      This 1000x, but for all software ever

      1. 2

        From the top voted answer:

        I ignore almost all Best Practices when it doesn’t come with explanation on why it exists

        Also:

        “Whether your scepticism be as absolute and sincere as you pretend, we shall learn by and by, when the company breaks up: We shall then see, whether you go out at the door or the window…” ― David Hume

        1. 3

          For me, the main problem with SVG is that I can’t find a way to re-colour them through regular CSS. Is this possible?

          1. 4

            If you include them directly in the DOM, you can style them with CSS.
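
            For instance, a minimal sketch (the class name and colours here are made up):

            ```html
            <!-- Inline SVG is part of the DOM, so the page's CSS can re-colour it -->
            <style>
              .icon { fill: steelblue; }
              .icon:hover { fill: crimson; }
            </style>

            <svg class="icon" width="24" height="24" viewBox="0 0 24 24">
              <circle cx="12" cy="12" r="10"/>
            </svg>
            ```

            This doesn’t work for SVGs referenced via <img> or CSS background-image, since those aren’t in the DOM.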

            1. 3

              Interesting. It would be nice if you could style them without also having to inline them in your HTML

              1. 12

                You can inline them once and reuse them multiple times with the <use> tag. Or if you prefer, you can avoid inlining them altogether and just reference a remote file like this:

                <svg>
                  <use xlink:href="images/icons.svg#my-icon"></use>
                </svg>
                
                1. 1

                  THANK YOU! This is the solution I’ve been looking for. Note that if you are experimenting with this on CodePen or similar, you may have trouble getting it to work, since browsers don’t like cross-origin URIs in the <use> tag.

                2. 3

                  Yeah, I know, I’ve wished that many a time. I think there are JS libraries to load the SVG into the DOM for you, but I haven’t worked with SVG in a few months so I’m not sure.

            1. 4

              Overly simplified title is overly simplified. The article says refuse tests issued by recruiters, otherwise they’re fine and have a purpose.

              1. 4

                Well, actually no, it says don’t do a home tech test until you’ve talked to an employer first, for example if you’ve had a phone interview.

                But what is perhaps implied is that a phone interview tends to negate the need for a technical test in the first place (check interviewee basic competence without employer time commitment) so they’re probably a waste of time for everyone in general.

                IMHO better to have a phone interview to check basic competence, and then bring candidates in for pair programming exercise and face-to-face interview follow-up.

                1. [Comment removed by author]

                  1. 2

                    That can be a lot of pressure for an applicant. Introverts will probably do worse than they would on the job.

                    It’s also expensive and turns away people who can’t take a couple days off their current job without arousing suspicion.

                  2. 4

                    bring candidates in for pair programming exercise

                    Unless the job (a) specifically requires a significant amount of pair programming and (b) expects candidates to already be skilled at pair programming before starting, you’re probably significantly increasing the rate of false negatives. Pair programming is a different skill than solo programming, and comparatively few developers have much experience with it.

                1. 2

                  No mention of readability or maintainability. Surely these are only slightly less important than execution?

                  Also, consistency makes adapting and improving practices hard. It’s nice to have but maybe not fundamentally important.

                  1. 1

                    duplication is far cheaper than the wrong abstraction

                    I’ve always had a problem with this, it seems to me there is no such thing as the “right” abstraction. That is, most abstractions start to fail when old code meets new requirements. It’s in the nature of coding abstractions to never quite match the messiness and ambiguity of the real world.

                    Obviously, we shouldn’t completely abandon abstraction or DRY, these are the most powerful tools a programmer has. I guess the author is suggesting that you retain some repetition until you discover a better abstraction, but then I’ve always found better abstractions are easier to find when you already have some kind of abstraction.

                    1. 2

                      Incoherent, author states this:

                      It’s time to admit it: the REST-haters are right. REST does not make for a great raw data-level API architecture, and efforts like GraphQL and RPC-like architectures are likely to end up producing something more usable for complicated data needs.

                      Then spends the rest of the article contradicting that statement.

                      Also who decided Hypermedia APIs were a bad idea?

                      1. 1

                        I’m not sure about this, but doesn’t the traffic between Cloudflare and your hosted systems travel in plaintext?

                        1. 2

                          HTTP is broken

                          Does the terrible “is broken” meme ever die? HTTP is clearly NOT broken.

                          1. 4

                            Quite a few even are whining about how any self-respecting developer should be using open-source tools

                            Few weasel words in the English language are more hostile, meaningless and distracting than “whining”. What a mean-minded article.

                            1. 5

                              After written language and money, software is only the third major soft technology to appear in human civilization.

                              What about numeracy, culture, art, religion? Just to name a few.

                              1. 5

                                I would call spoken language a technology, in fact. As well as the ones you’ve listed. :)

                                1. 3

                                  I agree with you about numeracy. “Culture” and “art” include all technologies, so they aren’t good candidates for inclusion in a list of technologies. “Religion” isn’t a technology; it’s an aspect of human nature, like bipedal locomotion or binocular vision.

                                  I am pleasantly surprised to learn that we have records of numeracy (in the form of tally sticks) extending back some thirty thousand years, much longer than written language. And maybe oral numeracy is older still.

                                  What other things would you name?

                                  1. 2

                                    The line between “hard” and “soft” technology, and between “soft technology” and “not a technology” both seem a bit nebulous. For example, are transportation networks a soft technology? Something like the Roman arch bridge seems like a hard technology, and the empire-wide trade network seems like a form of social organization rather than any kind of technology. But the road network seems like a soft technology.

                                    I might also propose something like “administration” as a soft technology—encompassing things like management, bureaucracy, accounting, organization, etc. These are the technologies on which the modern state, corporation, and economy are built. Arguably money is a special case of this, a form of accounting.

                                    1. 1

                                      The list was for “general purpose” soft technologies.

                                  1. 10

                                    I’m not a big fan of these “fire people” type of articles. I suppose this might be a bit tongue-in-cheek but it still implies employees are disposable and that there aren’t other solutions to problem members of staff like retraining or changing company policy.

                                    Also not a big fan of DHH’s contrarian, all-or-nothing bloviating that seems to pass as “thought leadership”.

                                    1. 4

                                      The title was a reaction to Jason Calacanis’ statement: “Fire people who are not workaholics”.

                                      1. 4

                                        I read “fire” as “don’t hire.”

                                        1. 4

                                          Alternative title: “Workaholics considered harmful”.

                                        1. 17

                                          I found the subjective and anecdotal evidence in this a little unconvincing. The bravado (“I worked at…”, “I had written…”) also doesn’t inspire much confidence in the author.

                                          1. 26

                                            Suffers from much the same problem as every rewrite story. They failed, I succeeded, therefore some random decision I made is responsible. I will tell you which decision you should think it was, but not provide sufficient information to verify my assessment. In particular, I will tell you about the stupid decisions made by those other morons, but not explain their reasoning.

                                            1. 3

                                              100 times this!

                                            2. 12

                                              Experience inspires more confidence in my heart than theorems do, though of course the author could be making everything up. It’d be nice to have more detail, though.

                                              For example, by “purely functional” Prolog, do they just mean that they didn’t use assert and retract (making it stateless), or did they additionally restrict their use of Prolog predicates to a functional pattern, where backtracking was banned and one of the arguments of each predicate was used as a “return value”, and the others were always bound? If backtracking wasn’t banned, was cut banned? How about negation?

                                              In its current form, this article simply says, “Wakalixes actually do matter. They actually do allow you to program better, faster and more cleanly.” Reading it will probably not help you to be a better programmer, unless you happen to be committing one of the beginner’s blunders the author calls out in the article, and even in that case it doesn’t tell you what you should be doing instead.


                                              A problem with the credibility of the author’s claims for the wakalixes is that it’s hard to separate claims for the effectiveness of a language, or even a programming paradigm, from the effectiveness of the programmers who were programming in it, and especially the effectiveness of the social environment they’re embedded in.

                                              Presumably if the 250 Java programmers working for the big-six firm couldn’t figure out how to reimplement a neural-network image classifier that one person implemented “on spec” after ten years, it’s not because they weren’t programming functionally — as the author pointed out, you can program functionally in Java; it’s because they weren’t making progress, probably because of mismanagement. (Or it might be because the author was just extremely lucky and chanced on such a great set of system parameters that 2500 person-years wasn’t sufficient to chance on it again, but I doubt it.) In ten years you can learn a lot about neural networks and image processing, and you can try a lot of different things. With a reasonable neural-network toolkit, which should take less than a year to build, you should be able to try about five or six million carefully-thought-out programming experiments.

                                              A more likely culprit there is that they tried to plan out the solution of a problem that they didn’t know how to solve, which is a mistake people often make even when they know better. As our own michaelochurch said, “Industrial management has a 200-year history of successfully adding value by reducing variance, because in a concave world, low variance and high output go together. In a convex world (as in software) it’s the opposite: the least variance is in the territory where you don’t want to be anyway. Convexity is a massive game-changer. It renders the old, control-based, management regime worse than useless; in fact, it becomes counterproductive.”

                                              What would that kind of mismanagement look like in this case? Mismanagement by variance reduction here would probably involve optimizing the process to improve the chance that any given attempt would succeed, by putting lots of programmers on it and giving them lots of time, with the consequence that maybe in ten years they investigated three or four things that didn’t work, instead of five million.

                                              Trying five million experiments isn’t enough, of course. You have to focus your efforts on things that might work, and learn as much as possible from each experiment. But speeding up the process of trial and error is a huge advantage, not just because you get more trials in, but because, due to hyperbolic discounting, the lessons of a quick experiment are much more memorable than the lessons of a slow one, in more or less direct proportion to their speed.

                                              Also, if most of your experiments are going to fail — as they should, to maximize variance and the chance of netting a unicorn — experiments that fail quickly are much less demoralizing than experiments that take a long time to fail. (And of course you want to minimize people’s incentive to whitewash the results. Especially people with high prestige. For example, failed experiments often drag on for years because managers are afraid of losing headcount.)

                                              As this process continues, how do you preferentially focus resources on the more promising exploration directions, while continuing to devote substantial effort to the dark horses? In a sense, the traditional CYA management approach errs in the direction of overfocusing on the most promising candidates. It turns out there is actually a bunch of applicable research: some of it, like multi-armed bandit algorithms, is actually being applied at some companies to the problem of managing R&D and has a robust management research literature, while other parts, like A* search, are overlooked, as far as I can tell.
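
                                              To make the bandit idea concrete, here’s a minimal epsilon-greedy sketch (all the numbers and names are invented for illustration): mostly exploit the best-looking direction, but keep a fixed fraction of trials for the dark horses.

                                              ```python
                                              import random

                                              def epsilon_greedy(reward_fn, n_arms, rounds, eps=0.1, seed=0):
                                                  """Allocate `rounds` trials across n_arms, mostly to the best-looking arm."""
                                                  rng = random.Random(seed)
                                                  counts = [0] * n_arms    # trials spent on each arm
                                                  means = [0.0] * n_arms   # running mean reward per arm
                                                  for _ in range(rounds):
                                                      if rng.random() < eps:                # explore: pick a dark horse
                                                          arm = rng.randrange(n_arms)
                                                      else:                                 # exploit: current best estimate
                                                          arm = max(range(n_arms), key=lambda a: means[a])
                                                      r = reward_fn(arm, rng)
                                                      counts[arm] += 1
                                                      means[arm] += (r - means[arm]) / counts[arm]  # incremental mean
                                                  return counts, means

                                              # Three hypothetical research directions with hidden success probabilities.
                                              probs = [0.2, 0.5, 0.8]
                                              counts, means = epsilon_greedy(
                                                  lambda arm, rng: 1.0 if rng.random() < probs[arm] else 0.0,
                                                  n_arms=3, rounds=2000)
                                              # The best direction (arm 2) should end up receiving most of the trials.
                                              ```

                                              The real management literature uses fancier variants (UCB, Thompson sampling), but the trade-off being negotiated is the same one as above.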

                                              And all of this has only a limited amount to do with your programming paradigm. You can iterate quickly in Java, you can iterate quickly in assembly, and you can iterate quickly in Haskell. The obstacles are different in each case, but you can do it.

                                              1. 8

                                                I agree with kragen: I’m fine with this sort of argument being based on experience; what else could it possibly be based on?

                                                I have another problem with it. The author claims to have been working in computing for… well, I don’t feel like trying to track down their LinkedIn as they suggest, especially since their name doesn’t appear to be anywhere on the site itself, but presumably “longer than you’ve been alive” is supposed to mean several decades at least. I resent both the assumption and the ageism there, but I suppose that’s irrelevant.

                                                But anyone with that much experience is going to be able to solve much harder problems than people with dramatically less. Even the author themself would have to do serious introspection to have any confidence that their efficacy is due to the choice of paradigm rather than to the experience. And, frankly, I don’t believe it - I’m confident that, all else being equal and apart from any difficulty caused by being annoyed about it, the author could do these dramatic rewrites in any paradigm.

                                                Also, of course, they don’t say anything about how maintainable people found their rewrites, after they’d moved on to the next one. The reduction in number of lines is suggestive, but it seems like we’re supposed to take it for granted that these were substantial improvements, when all that’s really being claimed is that they were successful replacements.

                                                I’m a big fan of functional programming, and actually for a lot of the reasons the author alludes to. They just haven’t demonstrated a connection.

                                                1. 10

                                                  Perhaps more interesting is that someone with several decades of experience never worked on a project that failed…

                                                  We learn a lot from failure. Perhaps most importantly, we learn what to learn from our successes. I worry that someone who has never failed doesn’t know why they succeed.

                                                  It’s unfortunate that mostly only successes get written up. There’s a lot of selection bias in the stories we read.

                                                  1. 6

                                                    It’s unfortunate that mostly only successes get written up.

                                                    In some engineering fields failures do get written up extensively, even more than successes, but in others I agree with your assessment. Failures in aerospace and civil engineering especially get a lot of study, partly because regulations require a detailed investigation, and partly because they’re spectacular enough to captivate public attention. Things like the Challenger explosion, the Tacoma Narrows bridge collapse, Apollo 13, the Titanic, etc. are probably as famous as any successes in those fields, and far more pored over by both scientists and popular documentaries. (Engineering curricula include a lot of this kind of historical postmortem study, too.)

                                                    Is there a list of canonical interesting failures in computing? The Intel division bug is probably the one I’ve seen mentioned most often in that regard.

                                                    1. 6

                                                      Is there a list of canonical interesting failures in computing? The Intel division bug is probably the one I’ve seen mentioned most often in that regard.

                                                      There are a few canonical examples of failed software projects I remember, probably from a software engineering course. The definition of failure varies, from just eventually being killed before releasing/being deployed to being obscenely late and over budget, to being deployed but having costly and/or dangerous bugs.

                                                      The ones I remember off the top of my head were the Ariane 5 rocket, the THERAC-25 radiation therapy machine, and the Denver airport baggage handling system. Those were all old enough to be in a textbook 15 years ago, though. I wonder if there’s a good collection of newer ones, in particular of the kind that cause failures in large distributed systems.

                                                      1. 3

                                                        The big fail that I recall is the Chrysler payroll system

                                                        http://c2.com/cgi/wiki?ChryslerComprehensiveCompensation

                                                          In that it was heralded as a crowning example of XP -> agile, whatever, and then it just dropped into nothingness without a good failure writeup; that wiki page sort of acted as a living document of folks asking “what happened?”

                                                        1. 2

                                                          I don’t think the CCC failure was particularly unusual; the old figure was that about ⅔ of big software projects like that fail. One unusual thing about it is that, due to their focus on incremental delivery, it had already been deployed and was doing a substantial fraction of Chrysler’s payroll before being canceled.

                                                    2. 9

                                                      what else could it possibly be based on?

                                                      I’d feel more comfortable with the article if:

                                                      1. It talked about the experiences of other developers.
                                                      2. Didn’t just focus on the positives with the author’s approach and explored some of the deficiencies.
                                                      3. There are studies that look into this specific area but I can’t see any mention in the article.
                                                      4. It talks in absolutes: “So, does functional programming stack up in the real world? Yes. Check my profile.”

                                                      You could read the exact same article from your average OO enterprise veteran technical architect.

                                                      I’d also be super interested to see how the author’s co-workers viewed his work. In my experience this type of humblebrag comes from your run-of-the-mill hero developer.

                                                      1. 2

                                                        That’s fair. I agree with all of these points.

                                                        1. 2

                                                          I agree.

                                                        2. 2

                                                          The author claims to have been working in computing for…

                                                          The email address on the contact page suggests the author is Douglas Michael Auclair. In a former life (?) he maintained Marlais, the Dylan interpreter.

                                                          He’s here on LinkedIn.

                                                          1. 1

                                                            Thanks. sigh To be clear, I do not doubt his anecdotes, as far as they go. It was definitely jarring to realize there was no “about the author” anywhere on the blog, and yet he was making that appeal. But it’s not the kind of thing someone would bother to make up.

                                                          2. 1

                                                            Agreed!

                                                          3. 3

                                                            I agree with your statement. Is this way of promoting functional programming really needed nowadays? Let’s all consider that no matter how many valid and provable arguments you provide to an established status quo (the toxicity of the imperative OO C++ community), the only way you are really going to sway them to your side is by producing code that outperforms their solutions, both in terms of developer scalability and actual execution of the resulting binaries. And it has been proven that this can happen, so why are we going over this again through personal viewpoints?

                                                            The problem we are having with modern functional programming languages is that because legacy companies base their success on legacy code written in legacy languages, they need to keep the counterproductive rhetoric around. This is a very twisted side-effect of inertia; we should not feel compelled to reply each time, anymore.

                                                            edit: typos, more clarity :)

                                                            1. 2

                                                              Unfortunately, I find that language/approach evangelism is hard to do in a principled, scientific way, because it fundamentally isn’t science in most cases. It’s business. And if you stick to conservative arguments supported by evidence (evidence from experiments that you’ll almost never be allowed the time necessary to perform, so you’ll have to use what’s already on the ground) then you’re often going to lose against an opposition inclined to dishonest arguments (e.g. “if we use Haskell instead of Java, we won’t be able to hire anyone!!!!111”) and phony existential risk.

                                                              The OP, at least, can convincingly tell a personal story of success that he owes to functional programming. Is it scientific proof of the “superiority” of FP? No, of course not. It’s still much more useful than a lot of what pops up in the discourse around PL choice.

                                                            1. 28

                                                              The obvious benefit to working quickly is that you’ll finish more stuff per unit time.

                                                              I think some people could benefit from doing less work at a higher quality.

                                                              1. 18

                                                                I think that the world could be a significantly better place if people were writing less software full stop.

                                                                1. 3

                                                                  Why do you think so? That sounds counterintuitive to me.

                                                                  1. 10

                                                                    Largely because the majority, maybe the vast majority, of software being written today is being written to solve problems introduced by previous software. Add to this Sturgeon’s Law, and in general the strong sense that what passes for thought in Silicon Valley is profoundly ignorant and ahistorical, and it feels to me like we’ve entered a self-reinforcing feedback loop where what stinks today is going to be cast into future generations’ immutable truth.

                                                                    I’m not some kind of nostalgia junkie; software has always been terrible. It’s just that there’s so much more of it now. And most software in the world isn’t being written in Silicon Valley; but the culture of the Valley is held as an exemplar, and other software cultures are being ignored and forgotten.

                                                                    1. 4

                                                                      I’m not some kind of nostalgia junkie; software has always been terrible.

                                                                      Agreed. The overall state of software is so very bad, it’s downright disgusting. I think almost every day of my life there’s some moment when some piece of software I am using, whether it’s on a PC, tablet, phone, or “other”, misbehaves in a way that makes me want to pull my last hair out and scream. The running joke around the office is how at least once a week, I throw a tantrum and insist I’m swearing off technology altogether and joining the Amish. And it’s not that far from being the truth!

                                                                      Seriously, it’s 2015 and VPNs still suck donkey balls, there isn’t a decent web browser in existence, networks are ridiculously unreliable… the list goes on and on.

                                                                      Probably the best thing I can say about software these days is this: almost every app I use on my laptop on a regular basis has some kind of auto-save / session resume feature, so I can just power my laptop off and go, and then resume everything later without much fuss. And that’s handy given how quirky the hibernate/resume feature is on my current laptop running Fedora Linux.

                                                                      1. 2

                                                                        How do we square this with “move fast and break stuff”?

                                                                        Given that software is going to be in a state of broken-ness most of the time, might as well get on with it rather than attempt to make it perfect before creating it, right?

                                                                        1. 8

                                                                          “Move fast and break stuff” is the problem. It’s an asset allocation decision masquerading as deep thought.

                                                                      2. 3

                                                                        I agree with the sentiment, perhaps for different reasons. Would like to hear what jfb would say.

                                                                        Software: a series of human-devised names that compile down to bitwise representations that can be interpreted as data or as instructions. It’s “names all the way down”. We’re essentially in a business not unlike law: a bunch of human-defined rules in an interpreted language, itself interpreted by humans, who in turn wrote the interpreter / compiler / API / DSL / … the list goes on. (Incidentally, think of how many languages one might need to learn for a Ruby on Rails deployment. I can count > 6.)

                                                                        More software, more complexity. More complexity, higher cost to solve problems.

                                                                        Ultimately we want to be governed by physical constraints in what’s possible. An average program uses the work of thousands of others; this is how it’s going to be, but strong designs get diluted. I like to think software is in its Cambrian explosion: lots of weird asymmetrical shit floating around, unsettled on any decent design, in lieu of deliberate, intentional design, waiting for physical constraints to weed out the unfit designs.

                                                                        1. 1

                                                                          <cynical>I’m working my way through the Breaking Smart essays, and one of the points brought up is that the progress the author is talking about is not zero-sum, but rather creates wealth. I think he is right, but I do wonder what percentage of that wealth is purely incestuous, in that it is fixing the junk created by the whole process? How many consultants are billing hours because the industry brute-forces itself into solutions so complex that they need specialists just to manage the complexity of their own creation?</cynical>

                                                                          1. 1

                                                                            “…the cost of everything and the value of nothing.”

                                                                    2. 4

                                                                      people might benefit from publishing less work at a higher quality, but i believe churning out the equivalent of piano finger exercises in private helps a lot. the more you do things, the better you get at doing them.

                                                                      1. 3

                                                                        I think some people could benefit from doing less work at a higher quality.

                                                                        Absolutely agreed, but I think the OP here has some good points. I especially agree with the bit about how a todo list whose items you don’t quickly complete becomes one you only ever add to. I think I may, er, have experienced that phenomenon myself. :-(

                                                                        But in the end, like everything in life, there are tradeoffs to be made to suit the moment and the task at hand. And it will always be a judgment call on whether to work as fast as possible, or focus on quality.

                                                                        1. 5

                                                                          This is confusing; at least in the first example, the author is still passing a mock function, just not a mock object.
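A minimal sketch of the distinction being debated here (hypothetical code, not the article’s actual example): if the function under test takes its collaborator as a parameter, the test can pass either a plain function standing in as a test double (“a mock function”) or a full mock object.

```python
# Hypothetical example: `send_greeting` and `deliver` are made up for
# illustration; only unittest.mock is a real library API here.
from unittest import mock

def send_greeting(name, deliver):
    """Greets `name` via whatever callable `deliver` is."""
    deliver(f"Hello, {name}!")

# 1. A plain function (here, a bound method) standing in for real delivery.
sent = []
send_greeting("Ada", sent.append)
assert sent == ["Hello, Ada!"]

# 2. A mock object doing the same job, with call recording built in.
fake = mock.Mock()
send_greeting("Ada", fake)
fake.assert_called_once_with("Hello, Ada!")
```

In both cases the test double just records calls; the mock object only adds ready-made assertion helpers.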

                                                                          1. 2

                                                                            What distinguishes “a mock function” from just “a function” when you do this?

                                                                            1. 3

                                                                              Same thing that distinguishes “a mock object” from just “an object”.

                                                                              1. -1

                                                                                I don’t know what that is.

                                                                                1. 0

                                                                                  You don’t know what a mock is?

                                                                          1. 3

                                                                            The problem with branching is really merging. So what’s merging? It’s the worst.

                                                                            The problem for me with the above is that branchless, trunk-based development still creates merge conflicts.
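This is easy to demonstrate with a toy setup (hypothetical repos and file names, assuming git is installed): two clones of the same trunk edit the same line, no feature branches anywhere, and the later pull still conflicts.

```shell
set -eu
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare origin.git
git clone -q origin.git alice 2>/dev/null
git clone -q origin.git bob   2>/dev/null

# Alice seeds trunk.
cd "$tmp/alice"
git checkout -q -b main
echo "red" > color.txt
git add color.txt
git -c user.name=a -c user.email=a@x commit -qm "red"
git push -q origin main

# Bob picks up trunk; both now edit the same line, directly on trunk.
cd "$tmp/bob"
git pull -q origin main
echo "blue" > color.txt
git -c user.name=b -c user.email=b@x commit -qam "blue"

cd "$tmp/alice"
echo "green" > color.txt
git -c user.name=a -c user.email=a@x commit -qam "green"
git push -q origin main

# Bob syncs with trunk and hits a merge conflict, no branches in sight.
cd "$tmp/bob"
if git pull -q --no-rebase origin main 2>/dev/null; then
  echo "no conflict"
else
  echo "conflict on trunk"
fi
```

Branches only change *when* the conflicting edits meet, not *whether* they do.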

                                                                            1. 2

                                                                              I’m wondering: what are the disadvantages of monolithic version control? Besides, potentially, the size of the repo (though that’s only a problem for really large projects), are there other possible issues?

                                                                              1. 6

                                                                                From experience, it tends to lead to monolithic code, because of how easy it is to cross module barriers when doing bug fixes during the maintenance phase of the life cycle.

                                                                                It can also lead to hard dependencies on customized libraries. Think of when you import a third-party library into your code and start applying patches to get going. I can even give you an open source example. I’m in the process of porting Dart to OpenBSD. The project has a custom fork of:

                                                                                • chrome
                                                                                • eclipse
                                                                                • nss/nspr from Mozilla (they are dropping it)

                                                                                Everything is pretty much hard-wired. It takes tremendous effort to make it use system-wide installed equivalents of its ‘third party’ dependencies. Not to mention that they fold the third parties into their own build system, essentially dropping any cross-platform work that upstream might have done so far for other platforms.

                                                                                When you have a clear separation between the code you can easily change and your own product, the clearly defined API means you will still have a working code base, assuming that you adhere to the rules agreed between the projects. When that barrier is brought down, everything becomes ‘fair game’, especially when stuff is on fire.

                                                                                1. 3

                                                                                  I imagine it slows down most git operations, especially git pull. As I understand it, developers are given laptops with the Facebook monorepo preinstalled because it takes so long to clone.
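One standard mitigation for slow clones is a shallow clone, which fetches only the latest commit instead of the full history. A small local sketch (hypothetical toy repo; assumes git is installed — real monorepos also lean on partial and sparse clones, not shown here):

```shell
set -eu
tmp=$(mktemp -d)
git init -q "$tmp/big"
cd "$tmp/big"
git checkout -q -b main
for i in 1 2 3 4 5; do
  echo "$i" > file.txt
  git add file.txt
  git -c user.name=x -c user.email=x@x commit -qm "commit $i"
done

# --depth is ignored for plain local paths, so clone via file://
git clone -q --depth 1 "file://$tmp/big" "$tmp/shallow" 2>/dev/null
echo "full history:    $(git -C "$tmp/big"     rev-list --count HEAD) commits"
echo "shallow history: $(git -C "$tmp/shallow" rev-list --count HEAD) commit"
```

The shallow copy is usable for day-to-day work, at the cost of history-walking operations like `git log` and `git blame` being truncated.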

                                                                                1. 5

                                                                                  Maybe needs some initial content so people can get an idea of what you want to publish.