1. 32

    I’m assuming the NIST guy and Munroe are assuming the passwords are not stored as SHA256 hashes…

    Four words from a dictionary provide plenty of entropy (even with your “how long they typed” caveat) to foil any brute-force approach against a password hashing algorithm implemented by a responsible engineer (bcrypt, scrypt, PBKDF2, etc).

    1. 10

      The thing that gets overlooked a lot, though, is that mass cracks of things like some website’s breached accounts table almost never use brute force, or even brute-force-with-dictionary, as their first tactic. They try big lists of common passwords and password patterns first, and enjoy a high degree of success from doing so.

      And if you got people to move en masse to the diceware/XKCD-style password scheme, every cracking tool would update to try stuff like


      etc. because that’s what people would actually choose as their passphrases. The only way to avoid this is to force people to use a tool that selects random passwords for them, and even then they’d fight against having to remember one of these for every site or service they use. At which point you need the tool to remember the passwords for them, and then you’ve arrived at “just use a password manager”.

      1. 7

        That’s not how these passwords work. There’s way too much entropy to pre-calculate tables (and more entropy from the salting). And it’s too much entropy to crack for a reasonable price if any sensible KDF is used.

        4 words randomly selected from a large dictionary (say, 200,000 words) yields about 70 bits of entropy. That on its own is way too costly to precompute tables for, and salting typically adds at least another 32 bits (often a lot more).
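        The arithmetic is easy to check: k words drawn uniformly from an n-word dictionary give k·log2(n) bits. A quick sketch (dictionary sizes are the ones discussed in this thread):

```python
import math

def passphrase_entropy_bits(dict_size: int, num_words: int) -> float:
    """Entropy in bits of num_words drawn uniformly (with replacement)
    from a dictionary of dict_size words."""
    return num_words * math.log2(dict_size)

print(passphrase_entropy_bits(200_000, 4))  # ~70.4 bits
print(passphrase_entropy_bits(2048, 4))     # XKCD's 2048-word list: 44.0 bits
```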

        See the working in my own password generation script here, which generates passwords with entropy of at least X and has another function for approximating the cost to crack a password:


        1. 1

          There’s way too much entropy

          My point is that “entropy” is a red herring.

          If you let the user choose their own diceware-style passphrase, you’re going to get things that are cracked within a fraction of a second, because they’ll be choosing things like “my-password-for-2019”, “my-password-for-ebay”, and so on. The alleged entropy of a passphrase of n dictionary words strung together is pointless in that situation, because nobody will be using a brute-force scan of every combination of n dictionary words as a way to crack these.

          Consider an analogy: it’s like saying you’ve developed a lock that’s unpickable because it has a million pins in it, and look how long it would take to pick a million pins! But somebody comes along with an under-door tool and yanks the handle from the other side without even trying to pick the lock. So sure, that was a million-pin lock, but it’s irrelevant how many pins it had because the door’s still open in a couple seconds via a simpler attack method.

          And since you presumably want to disallow reuse of a password across sites/services, if you’re not letting users choose their own, you haven’t really demonstrated an advantage over a password manager that just generates long random strings, because the only real thing the diceware system has going for it is memorability and users aren’t going to commit that many distinct passwords to memory (or recall them correctly later on even if they do try to memorize).

          1. 2

            Neither XKCD nor Diceware recommends creating passphrases like that. Of course if you don’t pick your words randomly then your entropy is lower, but that doesn’t make entropy a red herring. Entropy remains the key point.

            XKCD passwords are useful for passwords that you need to remember or transmit to other people. Like the passwords for your password managers or wifi networks or whatever.

      2. 1

        The whole idea behind these password hashers is that you aim to make each verification take a fixed amount of time (say 5ms). That leaves 200 verifications a second on a single core: more than enough for legitimate authentication, but a complete showstopper for brute forcing against a good password.
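        A back-of-envelope sketch of why the fixed per-hash cost works (parameters assumed from the comment above: 5 ms per hash, one core, attacker must try the whole keyspace in the worst case):

```python
# Assumed: 5 ms fixed cost per verification and a single core.
SECONDS_PER_HASH = 0.005

guesses_per_second = 1 / SECONDS_PER_HASH   # 200 verifications/second
keyspace = 2048 ** 4                        # 4 words from a 2048-word list (2^44)

worst_case_years = keyspace * SECONDS_PER_HASH / (365 * 24 * 3600)
print(guesses_per_second)        # 200.0
print(round(worst_case_years))   # roughly 2800 years, even for only 44 bits
```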

        1. 1

          For scrypt, we only have Litecoin to go by as far as estimates go, and it has a weak choice of parameters. With Litecoin, we have around ~300TH/sec, which means a known 4-word-structure password from XKCD’s 2048-word dictionary is cracked in roughly 0.06 seconds.
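          For what it’s worth, the arithmetic behind that estimate (figures are the ones quoted above, not fresh measurements):

```python
litecoin_hashrate = 300e12   # ~300 TH/s, quoted network-wide estimate
keyspace = 2048 ** 4         # 2^44 possible 4-word XKCD passphrases

seconds_to_exhaust = keyspace / litecoin_hashrate
print(round(seconds_to_exhaust, 3))   # ~0.059 s to try the entire keyspace
```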

        1. 4

          Can I mention that Bitcoin is an environmental disaster regardless of its other technical merits/flaws? https://www.economist.com/the-economist-explains/2018/07/09/why-bitcoin-uses-so-much-energy https://www.theguardian.com/technology/2018/jan/17/bitcoin-electricity-usage-huge-climate-cryptocurrency

          edit: updated the news articles

          1. 3

            These are articles from authors and publications with specific agendas: they have a dog in the fight with their livelihoods depending on existing economic systems not being taken over by some bitcoin-like arrangement. Here is an article from another perspective:


            1. 3

              Electricity is 90% of the cost to mine bitcoin. As such, bitcoin mining uses an exorbitant amount of power: an estimated 30 terawatt-hours in 2017 alone. That’s as much electricity as it takes to power the entire nation of Ireland for a year.

              Indeed, this is a lot, but not exorbitant. Banking consumes an estimated 100 terawatt-hours annually

              Bitcoin’s market cap is $154 billion.

              The market cap of the top five global banks is $1.6 trillion.

              Banking seems to have a hell of a better ROI than Bitcoin.
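              Put as market value secured per unit of energy, using the figures above (and granting the rough, apples-to-oranges nature of the comparison):

```python
btc_market_cap_bn = 154      # Bitcoin market cap, $ billions (figure above)
btc_twh = 30                 # estimated TWh consumed in 2017

banks_market_cap_bn = 1600   # top five global banks, $1.6 trillion
banking_twh = 100            # estimated annual banking consumption, TWh

print(btc_market_cap_bn / btc_twh)        # ~5.1 $bn of market cap per TWh
print(banks_market_cap_bn / banking_twh)  # 16.0 $bn per TWh
```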

            2. 1

              It is also a proof of concept coded by someone who has managed to stay anonymous despite global scrutiny; that exercise must have cost some cognitive bandwidth… What this tells me: We. Can. Do. Better. The sooner we do, the sooner we can put down the poisoned puppy called bitcoin.

            1. 34

              Build systems are hard because building software is complicated.

              Maybe it’s the first commit in a brand new repository and all you have is foo.c in there. Why am I telling the compiler what to build? What else would it build??

              Compilers should not be the build system; their job is to compile. We have abstractions, layers, and separation of concerns for a reason. Some of those reasons are explained in http://www.catb.org/~esr/writings/taoup/html/ch01s06.html. But the bottom line is if you ask a compiler to start doing build system things, you’re going to be frustrated later on when your project is complex and the build system/compiler mix doesn’t do something you need it to do.

              The good news is that for trivial projects, writing your own build system is likewise trivial. You could do it in a few lines of bash if you wanted. The author did it in 8 lines of Make but still thinks that’s too hard? I mean, this is like buying a bicycle to get you all around town and then complaining that you have to stop once a month and spend 5 minutes cleaning and greasing the chain. Everyone just looks at you and says, “Yes? And?”

              1. 5

                The author could have done it in two if he knew Make, and in no lines if he just has a single-file project. One of the more complex projects I have uses only 50 lines of Make, with 6 lines (one implicit rule and 5 targets) doing the actual build (the rest are various defines).

                1. 3

                  What are the two lines?

                  1. 4

                    I’m unsure what the two lines could be, but for no lines I think spc476 is talking about using implicit rules (http://www.delorie.com/gnu/docs/make/make_101.html) and just calling “make foo”

                    1. 2

                      I tried writing it with implicit rules. Unless I missed something, they only kick in if the source files and the object files are in the same directory. If I’m wrong, please enlighten me. I mentioned the build directory for a reason.

                      1. 2

                        Right, the no-lines situation only applies to the single-file project setup. I don’t know what the 2 lines would be for the example given in the post.

                    2. 3

                      First off, it would build the executable in the same location as the source files. Sadly, I eventually gave up on a separate build directory to simplify the makefile. So with that out of the way:

                      CFLAGS ?= -Iinclude -Wall -Wextra -Werror -g
                      src/foo: $(patsubst %.c,%.o,$(wildcard src/*.c))

                      If you want dependencies, then four lines would suffice—the two above plus these two (and I’m using GNU Make if that isn’t apparent):

                      .PHONY: depend
                      depend: ; makedepend -Y -- $(CFLAGS) -- $(wildcard src/*.c)

                      The target depend will modify the makefile with the proper dependencies for the source files. Okay, make that GNU Make and makedepend.

                    3. 1


                      ├── Makefile
                      ├── include
                      │   └── foo.h
                      └── src
                          ├── foo.c
                          └── prog.c


                      CFLAGS = -Iinclude
                      VPATH = src:include
                      prog: prog.c foo.o
                      foo.o: foo.c foo.h

                      Build it:

                      $ make
                      cc -Iinclude   -c -o foo.o src/foo.c
                      cc -Iinclude    src/prog.c foo.o   -o prog
                      1. 1

                        Could you please post said two lines? Thanks.

                        1. 4

                          make could totally handle this project with a single line actually:

                          foo: foo.c main.c foo.h

                          That’s more than enough to build the project (replace .c with .o if you want the object files to be generated). Having subdirectories would make it more complex indeed, but for building simple projects, we can use a simple organisation! Implicit rules are made for the case where source and include files are in the same directory as the Makefile. Now we could argue whether or not that’s good practice. Maybe make should have implicit rules hardcoded for src/, include/ and build/ directories. Maybe not.

                          In your post you say that Pony does it the good way by having the compiler be the build system, building projects in a simple way by default. Maybe ponyc is aware of directories like src/ and include/, and that could be an improvement over make here. But that doesn’t make its build system simple. When you go to the ponylang website, you find links to “real-life” pony projects. First surprise: 3 of them use a makefile (and what a makefile…): jylis, ponycheck, wallaroo + rules.mk. One of them doesn’t, but it looks like the author put some effort into his program organisation so ponyc can build it the simple way.

                          As @bityard said, building software is complex, and no build system is smart enough to build any kind of software. All you can do is learn your tools so you can make better use of them and make your work simpler.

                          Disclaimer: I never looked at pony before, so if there is something I misunderstood about how it works, please correct me.

                      2. 2

                        Build systems are hard because building software is complicated.

                        Some software? Yes. Most software? No. That’s literally the point of the first paragraph of the blog.

                        Compilers should not be the build system


                        We have abstractions, layers, and separation of concerns for a reason


                        But the bottom line is if you ask a compiler to start doing build system things, you’re going to be frustrated later on when your project is complex and the build system/compiler mix doesn’t do something you need it do.

                        Agreed, if “the compiler doing build system things” means the compiler’s default behaviour is the only option. Which would be silly, since the blog’s first paragraph argues that some projects need more than that.

                        The good news is that for trivial projects, writing your own build system is likewise trivial as well

                        I think I showed that’s not the case. Trivial is when I don’t have to tell the computer what it already knows.

                        The author did it in 8 lines of Make but still thinks that’s too hard?

                        8 lines is infinity times the ideal number, which is 0. So yes, I think it’s too hard. It’s infinitely harder. It sounds like a 6-year-old’s argument, but that doesn’t make it any less true.

                        1. 7

                          I have a few projects at work that embed Lua within the application. I also include all the modules required to run the Lua code within the executable, and that includes Lua modules written in Lua. With make I was able to add an implicit rule to generate .o files from .lua files so they could be linked in with the final executable. Had the compiler had the build system “built in” I doubt I would have been able to do that, or I still would have had to run make.

                          1. -1

                            Compilers should not be the build system


                            Please, do not ever write a compiler.

                            Your examples are ridiculous: using shell invocation and find is far, far from the simplest way to list your source, object and output files. As others pointed out, you could use implicit rules. Even without implicit rules, that could be 2 lines instead of those 8:

                            foo: foo.c main.c foo.h
                                    gcc foo.c main.c -o foo

                            Agree, if “the compiler’s default behaviour is the only option.

                            Ah, so you want the compiler to embed in its code a way to be configured for each and every possible build it could be used in? This is an insane proposition, when the current solution is either the team writing the project configuring the build system as well (which could be done in shell, for that matter), or thin wrappers like the ones Rust and Go use around their compilers: they foster best practices while leaving the flexibility needed by heavier projects.

                            You seem so arrogant and full of yourself. You should not.

                            1. 3

                              I’d like to respectfully disagree with you here.

                              Ah, then you want the compiler to embed in its code a way to be configured for every and all possible build that it could be used in?

                              That’s not at all what he’s asking for.

                              This is an insane proposition

                              I think this is probably true.

                              You seem so arrogant and full of yourself. You should not.

                              Disagree. He’s stated his opinion and provided examples demonstrating why he believes his point is valid. Finally, he has selectively defended said opinion. I don’t think that’s arrogance at all. This, for example, doesn’t read like arrogance to me.

                              I don’t appreciate the name calling and I don’t think it has a place here on lobste.rs.

                              1. -3

                                What is most arrogant is his dismissal of “dumb” tools: simple commands that will do only what they are asked to do and nothing else.

                                He wants his tools to presume his intentions. This is an arrogant design, which I find foolish, presumptuous, uselessly complex and inelegant. So I disagree on the technical aspects, certainly.

                                Now, the way he constructed his blog post and main argumentation is also extremely arrogant, or in bad faith, by presenting his own errors as normal ways of doing things and accusing other people of building bad tools because they would not do things his way. This is supremely arrogant and I find it distasteful.

                                Finally, his blog is named after himself and seems a monument to his opinion. He could write on technical matters without putting his persona and ego into it, which is why I consider him full of himself.

                                My criticism is that, besides his technical propositions, which I disagree with, the form he uses to present them does him a disservice by putting the people he interacts with on edge. He should not, if he wants his writings to be at all impactful, in my opinion.

                                1. 2

                                  the form he uses to present them does him a disservice by putting the people he interacts with on edge

                                  Pot, meet Kettle.

                                  Mirrors invite the strongest responses.

                          2. 1

                            Yeah. On the flip side, too much configuration makes for overcomplicated build systems. For me, there’s a sweet spot with cmake.

                          1. 16

                            Seriously, how do you prevent stuff like this from happening if you have a big team? For small companies it’s relatively easy to prevent, but let’s say you have 1000 engineers: how do you prevent one of them from making a mistake…

                            1. 38

                              One way to catch this sort of thing is sentinel data — in this case, you could use a unique value as a test account’s password and use that account for testing every service, then search everywhere you can think of for that value. If it shows up anywhere, the siren goes off. In enterprise storage similar things are done for “data loss prevention” to make sure people don’t move sensitive files to someplace they shouldn’t.
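                               A minimal sketch of the idea, assuming logs are plain text files on disk (the sentinel value and paths here are made up for illustration):

```python
from pathlib import Path

SENTINEL = "canary-hunter2-9f3a"   # unique test-account password, never a real one

def scan_for_sentinel(root: str, sentinel: str = SENTINEL):
    """Return every file under root that leaks the sentinel value."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                if sentinel in path.read_text(errors="ignore"):
                    hits.append(str(path))
            except OSError:
                continue   # unreadable file; skip it
    return hits
```

                               Run something like this over log directories, crash dumps, and analytics exports; any hit means a credential is being written somewhere it shouldn’t be.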

                              1. 4

                                I had never heard of those techniques, but they seem interesting! Do you have any recommendations of good reading on this subject?

                              2. 4

                                It can easily happen in a team of 10 people. I’m of the strong belief that someone on the team needs to be responsible as a security architect and that’s their main role.

                              1. 3

                                Okay, anyone ever had issues with large documents reflowing slowly in MS word? I can only imagine what happens now..

                                1. 14

                                  I feel sure that the SOLID principles belong in the “helps people who are already experts at doing SOLID things, harms everyone else” category.

                                  • SRP: what is a “responsibility”? As the author has found out, you can have multiple answers that are all correct. My WorkflowManagerBean has a single responsibility, managing workflows. On the other hand: your map function has two responsibilities: iterating over a sequence, and applying the passed function.
                                  • OCP: nobody since Bertrand Meyer has given a coherent explanation of OCP. Bob Martin’s version is related to avoiding the fragile base class problem in C++; most of us aren’t doing C++. If this were the orange site, people would be ‘helpfully’ replying with descriptions of the OCP, and no two of these descriptions would be congruent.
                                  • LSP: the thing that confuses subclasses with subtypes.
                                  • ISP: if my objects have a single responsibility, why are there different interfaces to segregate?
                                  • DIP: if I invert dependencies twice I get back to where I started.
                                  1. 3

                                    My SOLID principle has been to “avoid OOP”. People sometimes make fun of Haskell for requiring too much theory, but I find that OOP requires the same level of theory not to create a foot-gun. But at least with Haskell, I have something more like algebra than UML.

                                    For the most part, I prefer procedural-style C++ because it’s easy to design, easy to test, and fast to read and document. Too much abstraction can hurt quite a lot, and so many times OOP actually ends up complicating things more than it brings any advantage.

                                    1. 3

                                      What I’ve been slowly discovering is that OOP also requires too much theory, but internalising that theory lets me avoid all the incidental complexity that built up around OOP during the Software Engineering times. Don’t give me Java. Give me a vtable, a lookup primitive, a delegate primitive, a selector type and an object type, and I can build the OOP system I need without having to contort my design to fit the OOP system you/Sun/AT&T provided.

                                      1. 2

                                        “Give me a vtable, a lookup primitive, a delegate primitive, a selector type and an object type, and I can build the OOP system I need without having to contort my design to fit the OOP system you/Sun/AT&T provided.”

                                        I like the simplicity of your summary. Everything except a selector type looks familiar. What’s that?

                                        1. 3

                                          Some kind of interned symbol type, like a Ruby/Smalltalk/Lisp symbol or an Objective-C selector.
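                                          As a toy illustration of that primitive set (all names here are mine, not any real library): a vtable is just a mapping from selectors to functions, lookup walks a delegation chain, and an object pairs state with a vtable:

```python
class Obj:
    """Object = state + vtable + optional delegate (prototype)."""
    def __init__(self, vtable, delegate=None, **state):
        self.vtable, self.delegate, self.state = vtable, delegate, state

def lookup(obj, selector):
    """Walk the delegation chain until some vtable knows the selector."""
    while obj is not None:
        if selector in obj.vtable:
            return obj.vtable[selector]
        obj = obj.delegate
    raise AttributeError(selector)

def send(obj, selector, *args):
    """Message send: look the method up, then call it with the receiver."""
    return lookup(obj, selector)(obj, *args)

# Selectors are interned strings; methods are plain functions.
animal = Obj({"speak": lambda self: "..."}, name="animal")
dog = Obj({"speak": lambda self: "woof"}, delegate=animal)
cat = Obj({}, delegate=animal)   # inherits speak via delegation

print(send(dog, "speak"))   # woof
print(send(cat, "speak"))   # ...
```

                                          From these pieces you can build classes, prototypes, or mixins as needed, rather than taking whatever dispatch model the language vendor picked.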

                                  1. 5

                                    Very strong opinions here…

                                    As far as I’m concerned, strong scrum practices would defeat these issues.

                                    Bad tools are not scrum. Lack of ownership is not scrum.

                                    People who try to use scrum as a way to wrap a process around bad ideas will never benefit from it.

                                    Take the good ideas, apply scrum, and most importantly, adapt to what you learn.

                                    1. 38

                                      adapt to what you learn.

                                      Umm. Points 5 and 6 of TFA?

                                      I’ve learnt from seeing it in practice, both in my own experience and from speaking to many others… The article is pretty spot on.

                                      Ok. Warning. Incoming Rant. Not aimed at you personally, you’re just an innocent bystander, Not for sensitive stomachs.

                                      Yes, some teams do OK on Scrum (all such teams I have observed ignore largish chunks of it), i.e. they are not doing certified Scrum.

                                      No team I have observed, have done as well as they could have, if they had used a lighter weight process.

                                      Many teams have done astonishingly Badly, while doing perfect certified Scrum, hitting every Toxic stereotype the software industry holds.


                                      I remember the advent of “Agile” in the form of Extreme Programming.

                                      Apart from the name, XP was nearly spot on in terms of a light weight, highly productive process.

                                      Then Kanban came.

                                      And that was actually good.

                                      Then Scrum came.

                                      Oh my.

                                      What a great leap backwards that was.

                                      Scrum takes pretty much all the concepts that existed in XP…. and ignores all the bits that made it work (refactoring, pair programming, test driven development, …), and piles on stuff that slows everything down.

                                      The thing that really pisses me off about Scrum, is the amount of Pseudo Planning that goes on in many teams.

                                      Now planning is not magic. It’s simply a very data intensive exercise in probabilistic modelling.

                                      You can tell if someone is really serious about planning, they track leave schedules and team size changes and have probability distributions for everything and know how to combine them, and update their predictions daily.

                                      The output of a real plan is a regularly updated probability distribution, not a date.
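                                      To make that concrete, here’s a sketch of the simplest version of such a model: a Monte Carlo simulation that resamples historical task durations and reports percentiles instead of a single date (all the input numbers are invented):

```python
import random

# Invented inputs: observed durations (days) of past tasks, and work remaining.
historical_task_days = [2, 3, 5, 1, 8, 3, 4, 2, 6, 3]
remaining_tasks = 12
random.seed(42)   # reproducible for the example

def simulate_completion(trials=10_000):
    """Resample past durations to get a distribution of total days."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.choice(historical_task_days)
                          for _ in range(remaining_tasks)))
    totals.sort()
    return totals

totals = simulate_completion()
p50 = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
print(f"50% chance done within {p50} days, 90% within {p90} days")
```

                                      Feed each completed task back into the historical data and the forecast updates daily, which is exactly the loop a date-only “plan” never closes.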

                                      You can tell a work place bully by the fact their plans never change, even when a team member goes off sick.

                                      In some teams I have spoken to, Scrum planning is just plain unvarnished workplace bullying by powertripping scrum managers, who coerce “heroes” to work massive amounts of unpaid overtime, creating warm steaming mounds of, err, “technical debt”, to meet sprint deadlines that were pure fantasy to start with.

                                      Yes, if I sound angry I am.

                                      I have seen Pure Scrum Certified and Blessed Scrum used to hurt people I care about.

                                      I have seen Good ideas like Refactoring and clean code get strangled by fantasy deadlines.

                                      The very name “sprint” is a clue as to what is wrong.

                                      One of the core ideas of XP was “Sustainable Pace”…. which is exactly what a sprint isn’t.

                                      Seriously, the one and only point of Agile really is the following.

                                      If being able to change rapidly to meet new demands has high business value, then we need to adapt our processes, practices and designs to be able to change easily.

                                      Somehow that driving motivation has been buried under meetings.

                                      1. 8

                                        I 100% agree with you actually.

                                        I suppose my inexperience with “real certified scrum” is actually the issue.

                                        I think it’s perfectly fine and possible to take plays out of every playbook you’ve mentioned and keep the good, toss the bad.

                                        I also love the idea that every output of planning should be a probabilistic model.

                                        Anyone who gets married to the process they pick is going to suffer.

                                        Instead, use the definitions to create commonly shared language, and find the pieces that work. For some people, “sprint” works. For others, pair programming is a must have.

                                        I think adhering to any single ideology 100% is less like productivity and more like cultish religion.

                                        1. 5

                                          fantasy deadlines

                                          Haha. Deadlines suck so let’s have em every 2 weeks!

                                          1. 3

                                            As they say in the XP world: if it hurts, do it more often.

                                            1. 3

                                              True. It’s a good idea. One step build pipeline all the way to deployment. An excellent thing, all the pain is automated away.

                                              If you planned it soundly, then a miss is feedback to improve your planning. As I say, planning is a data intensive modelling exercise. If you don’t collect the data, don’t feed it back into your model… your plans will never improve.

                                              If it was pseudo planning and a fantasy deadline and the only thing you do is bitch at your team for missing the deadline… it’s workplace bullying and doing it more will hurt more and you get a learned helplessness response.

                                        2. 12

                                          Warning: plain talk ahead, skip this if you’re a sensitive type. Scrum can actually work pretty well with mediocre teams and mediocre organizations. Hint: we’re mostly all mediocre. This article reeks of entitlement: “I’m a special snowflake, let ME build the product with the features I want!” Another hint: no one wants this. Outside of really great teams and great developers, which by definition most of us aren’t, you are not capable.

                                          Because all product decision authority rests with the “Product Owner”, Scrum disallows engineers from making any product decisions and reduces them to grovelling to product management for any level of inclusion in product direction.

                                          This is the best thing about scrum/agile imo. Getting someone higher in the food chain to gatekeep what comes into development and prioritize what is actually needed is a huge benefit to every developer, whether you realize it or not. If you’ve never worked in a shop where Sales, Marketing and Support all call their pet developers to work on 10 hair-on-fire bullshit tasks a day, then you’ve been fortunate.

                                          1. 9

                                            Scrum can actually work pretty well with mediocre teams and mediocre organizations. Hint we’re mostly all mediocre.

                                            The problem is: Scrum also keeps people mediocre.

                                            Even brilliant people are mediocre, most of the time, when they start a new thing. Also, you don’t have to be a genius to excel at something. A work ethic and time do the trick.

                                            That said, Scrum, because it assumes engineers are disorganized, talentless children, tends to be a self-fulfilling prophecy. There’s no mentorship in a Scrum shop, no allowance for self-improvement, and no exit strategy. It isn’t “This is what you’ll do until you earn your wings” but “You have to do this because you’re only a developer, and if you were good for anything, you’d be a manager by now.”

                                            1. 3

                                              That said, Scrum, because it assumes engineers are disorganized, talentless children, tends to be a self-fulfilling prophecy.

                                              Inverting the cause and effect here is an equally valid argument: that most developers in fact are disorganized, talentless children, as you say and as the sibling comment highlights. We are hijacking the “Engineer” prestige and legal status with none of the related responsibility or authority.

                                              There’s no mentorship in a Scrum shop, no allowance for self-improvement, and no exit strategy.

                                              Is there mentoring, and are there clear career paths, in non-scrum shops? This isn’t a scrum-related issue. But regardless, anyone who is counting on the Company for self-actualization is misguided. At the end of the day, no matter how much we would all like to think that our contributions matter, they really don’t. To the Company, we’re all just cogs in the machine. Better to make peace with that and find fulfillment elsewhere.

                                              1. 3

                                                Scrum does not assume “engineers” at all. It assumes “developers”. Engineers are a highly trained group of legally and ethically responsible professionals. Agile takes the responsibility of being an engineer right out of our hands.

                                                1. 4

                                                  Engineers are a highly trained group of legally and ethically responsible professionals.

                                                  I love this definition. I have always said there’s no such thing as a software engineer, and here’s a fantastic reason why. Computer programmers may think of themselves as engineers, but we have no legal responsibilities nor any ethical code that I am aware of. Anyone can claim to be a “software engineer” with no definition of what that means and no legal recourse against liars. It requires no experience, no formal education, and no certification.

                                                  1. 1

                                                    True, but why?

                                                    IMHO, because our field is in its infancy.

                                                    1. 2

                                                      I dislike this reason constantly being thrown around. Software engineering has existed for half a century; name another discipline where unregulated work and constantly broken products have been allowed to exist for that long. Imagine if nuclear engineering were like us. I think the real reason we don’t get regulated is that the majority of our field doesn’t need rigor, and companies would like lower salaries for engineers, not higher. John Doe the web dev doesn’t need the equivalent of an engineering stamp each time he pushes to production, because his work is unlikely to be a critical system where lives are at stake.

                                                      1. 1

                                                        I’m pretty sure that most human disciplines date back thousands of years.

                                                        Nuclear engineering (though well rooted in chemistry and physics) is still in its infancy too, as both Chernobyl and Fukushima show pretty well.

                                                        But I’m pretty sure you will agree with me that good engineering takes a few generations if you compare these buildings with this one.

                                                        The total lack of historical perspective in modern “software engineers” is just another proof of the infancy of our discipline: we have to address our shortsighted arrogance as soon as possible.

                                                        1. 1

                                                          We’re talking about two different things. How mature a field is isn’t a major factor in regulation. Yes, I agree with your general attitude that things get better over time and we may not be at that point yet. But we’re talking about government regulating the supply of software engineers. That decision has more to do with public interests than with how good software can be.

                                                          1. 1

                                                            That decision has more to do with public interests than with how good software can be.

                                                            I’m not sure if I agree.

                                                            In my own opinion, current mainstream software is so primitive that anybody could successfully disrupt it.

                                                            So I agree that software engineers should feel much more politically responsible for their own work, but I’m not sure we can afford to disincentivize people from reinventing the wheel, because our current wheels are triangular.

                                                            And… I’m completely self-taught.

                                              2. 3

                                                This is the best thing about Scrum/Agile, IMO. Getting someone higher in the food chain to gatekeep what comes into development and prioritize what is actually needed is a huge benefit to every developer, whether you realize it or not.

                                                While I agree with the idea of this, you did point out that this works well with mediocre teams, and, IME, this gatekeeping is destructive when you have a mediocre gatekeeper. I’ve been on multiple teams where priorities shift every week because whoever is supposed to have a vision has none, etc. I’m not saying Scrum is bad (I am not a big fan of it), just that if you’re explicitly targeting mediocre groups, partitioning responsibility like this requires someone up top who is not mediocre. Again, IME.

                                                1. 2

                                                  Absolutely, and the main benefit for development is the shift of blame and responsibility to that higher level, again, if done right. I.e., there has to be a “paper trail” to reflect the churn. This is where Jira (or whatever ticketing system) helps, showing/proving scope change to anyone who cares to look.

                                                  Any organization that requires this level of CYA (cover your ass) is not worth contributing to. Leeching off of, sure :)

                                                  1. 2

                                                    So are you saying that scrum is good or that scrum is good in an organization that you want to leech off of?

                                                    1. 1

                                                      I was referring to the case the GP proposed, where the gatekeepers themselves are mediocre and/or incompetent; in the case where scapegoats are sought, the agile artifacts can be used to effectively shield development, IF they’re available. In this type of pathological organization, leeching may be the best tactic, IMO. Sorry that wasn’t clear.

                                                2. 3

                                                  I’m in favour of having a product owner.

                                                  XP went one step better with the “Onsite Customer”, i.e. you could get up from your desk and go ask the guy with the gold what he’d pay more gold for, and how much.

                                                  A product owner is a proxy for that (and prone to all the ills proxies are prone to).

                                                  Where I note things go very wrong is when the product owner’s ego inflates to thinking he is perhaps project manager, and then team lead as well, and then technical lead, rolled into one god-like package… trouble is brewing.

                                                  Where a product owner can be very useful is in deciding on trade offs.

                                                  All engineering is about trade offs. I can always spec a larger machine, a more expensive part, invest in decades of algorithm research… make this bigger or that smaller…

                                                  • But what will a real live gold paying customer pay for?
                                                  • What will they pay more for? This or That? And why? And how much confidence do you have? Educated guess? Or hard figures? (ps: I don’t sneer at educated guesses, they can be the best one has available… but it gives a clue to the level of risk to know it’s one.)
                                                  • What will create the most recurring revenue soonest?
                                                  • What do the customers in the field care about?
                                                  • How are they using this system?
                                                  • What is the roadmap we’re on? Some trade offs I will make in delivering today, will block some forks in the road tomorrow.

                                                  Then there is a sadly misguided notion, technical debt.

                                                  If he is wearing a project manager hat, there is no tomorrow, there is only The Deadline, a project never has to pay back debt to be declared a success.

                                                  If he is wearing a customers hat, there is no technical debt, if it works, ship it!

                                                  Since he never looks at the code… he never sees what monsters he is spawning.

                                                  The other blind spot a Product Owner has is about what is possible. He can only see what the customers ask for, and what the competition has, or the odd gap in our current offering.

                                                  He cannot know what is now technologically feasible. He cannot know what is technologically desirable. So engineers need wriggle room to show him what can or should be done.

                                                  But given all that, a good product owner is probably worth his weight in gold. A bad one will sink the project without a trace, beyond a slick of burnt-out and broken engineers.

                                              1. 4

                                                The distribution of programming talent is likely normal, but what about their output?

                                                The ‘10X programmer’ is relatively common, maybe 1 standard deviation from the median? And you don’t have to get very far to the left of the curve to find people who are 0.1X or -1.0X programmers.

                                                Still a good article! I think this confusion is the smallest part of what he’s trying to say.

                                                1. 6

                                                  That’s an interesting backdoor you tried to open to sneak the 10x programmer back into not being a myth.

                                                  1. 6

                                                    They exist, though, so it’s more that the model which excludes them is broken front and center. The accurate position is that most people aren’t 10x’ers, and don’t need to be, as far as I can tell. Team players with consistency are more valuable in the long run. That should be the majority, with some strong technical talent sprinkled in.

                                                    1. 3

                                                      Is there evidence to support that? As you know, measuring programmer productivity is notoriously difficult, and I haven’t seen any studies confirming the 10x difference. I agree with @SeanTAllen; it’s more like an instance of the hero myth.

                                                      EDIT: here are some interesting comments by a guy who researched the literature on the subject: https://medium.com/make-better-software/the-10x-programmer-and-other-myths-61f3b314ad39

                                                      1. 5

                                                        Just think back to school or college, where people got the same training. Some seemed natural at the stuff, running circles around others for whatever reason, right? And some people score way higher than others on parts of math, CompSci, or IQ tests while seemingly not even trying, compared to those who put in much effort only to underperform.

                                                        People who are super-high performers from the start exist. If they and the others study equally, the gap might shrink or widen, but it should widen if you want strong generalists, since they’re better at foundational skills or thinking style. I don’t know if the 10 applies (probably not). But gifted folks making easy work of problems most others struggle with is something I’ve seen a ton of in real life.

                                                        The more accurate question would be: why would they not exist in programming when they exist in everything else?

                                                        1. 0

                                                          There’s no question that there are differences in intellectual ability. However, I think it’s highly questionable that they translate into 10x (or whatever-x) differences in productivity.

                                                          Partly it’s because only a small portion of programming is about raw intellectual power. A lot of it is just grinding through documentation and integration issues.

                                                          Partly it’s because there are complex interactions with other people that constrain a person. Simple example: at one of my jobs people complained a lot about C++ templates because they couldn’t understand them.

                                                          Finally, it’s also because the domain a person applies themselves to places other constraints. Can’t get too clever if you have to stay within the confines of a web framework, for example.

                                                          I guess there are specific contexts where high productivity could be realised: one person creating something from scratch, or a group of highly talented people who work well together. But those would be exceptional situations, while under the vast majority of circumstances it’s counterproductive to expect or hope for 10x productivity from anyone.

                                                          1. 2

                                                            I agree with all of that. I think the multipliers kick in on particular tasks, which may or may not produce a net benefit overall given conflicting requirements. Your example of one person writing code too clever for others to read illustrates that.

                                                            1. 3

                                                              I think the 10x is often realized by just understanding the requirements better. For example, maybe the two-week solution isn’t really necessary because the 40 lines you can write in an afternoon are all the requirement really called for.

                                                            2. 2

                                                              There’s no question that there are differences in intellectual ability. However, I think it’s highly questionable that they translate into 10x (or whatever-x) differences in productivity.

                                                              It does not simply depend on how you measure; it depends on what you measure.

                                                              And it may be more than “raw intellectual power”. For me it’s usually experience.

                                                              As a passionate programmer, I’ve faced more problems and more bugs than my colleagues.
                                                              So it often happens that I solve in minutes problems that they have struggled with for hours (or even days).
                                                              This has two side effects:

                                                              • managers tend to assign me the worst issues
                                                              • colleagues tend to ask me when they can’t find a solution

                                                              Both of these force me to face more problems and bugs… and so on.

                                                              Such experience also makes me well versed in the architectural design of large applications: I’m usually able to avoid issues and predict with high precision the time required for a task.

                                                              However measuring overall productivity is another thing:

                                                              • I can literally forget what I did yesterday morning (if it was for a different customer than the one I’m focused on now)
                                                              • at times I’m unable to recognize my own code (with funny effects when I insult or laud it)
                                                              • when focused, I do not hear people talking to me
                                                              • I ignore 95% of the mails I receive (literally all those with multiple recipients)
                                                              • being very good at identifying issues during early analysis at times makes some colleagues a bit upset
                                                              • being very good at estimating large projects means that when you compare my estimate with others’, mine is usually higher (at times a lot higher) because I see most costs upfront. This usually leads to long and boring meetings where nobody wants to take responsibility for the (apparently) more expensive solution, but nobody wants to take the risk of the alternatives either…
                                                              • debating with me tends to become an enormous waste of time…

                                                              So when it’s a matter of solving problems by programming, I approach the 10x productivity of the myth despite not being particularly intelligent, but overall it really depends on the environment.

                                                              1. 1

                                                                This is a good exposition of what a 10x-er might be and jibes with my thoughts. Some developers can “do the hard stuff” with little or no guidance. Some developers just can’t, no matter how much coaching and guidance are provided.

                                                                For illustration, I base this on one tenure I had as a team lead, where the team worked on some “algorithmically complex” tasks. I had people on my team who were hired on and excelled at the work. I had other developers who struggled. Most got up to an adequate level eventually (6 months or so). One in particular never did. I worked with this person for a year, teaching and guiding, and they just didn’t get it. This particular developer was good at other things, though, like troubleshooting and interfacing with customers in more of a support role. But the ones who flew kept on flying. They owned it, knew it inside and out.

                                                                It’s odd to me that anyone disputes the fact that there are more capable developers out there. Sure, “productivity” is one measure, and not a good proxy for ability. I personally don’t equate 10x with being productive; that clearly makes no sense. Also, I think Fred Brooks’ The Mythical Man-Month is the authoritative source on this, and I never see it cited in these discussions.

                                                          2. 2

                                                            There may not be any 10x developers, but I’m increasingly convinced that there are many 0x (or maybe epsilon-x) developers.

                                                            1. 3

                                                              I used to think that, but I’m no longer sure. I’ve seen multiple instances of what I considered absolutely horrible programmers taking the helm, and I fully expected those businesses to fold in a short period of time as a result - but they didn’t! From my point of view, it’s horrible -10x code, but for the business owner, it’s just fine because the business keeps going and features get added. So how do we even measure success or failure, let alone assign quantifiers like 0x?

                                                              1. 1

                                                                 Oh, I don’t mean code quality, I mean productivity. I know some devs who can work on the same simple task for weeks, miss the deadline, and be moved on to a different task that they also don’t finish.

                                                                Even if the code they wrote was amazing, they don’t ship enough progress to be of much help.

                                                                1. 1

                                                                  That’s interesting. I’ve encountered developers who were slow but not ones who would produce nothing at all.

                                                                  1. 4

                                                                    I’ve encountered it, though it was unrelated to their skill. Depressive episodes, for example, can really block someone. So can burnout, or outside stresses.

                                                                    Perhaps there are devs who cannot ship code at all, but I’ve only encountered unshipping devs that were in a bad state.

                                                                2. 1

                                                                  You’re defining programming ability by whether a business succeeds, though. There are plenty of other instances where programming is not done for the sake of business.

                                                                  1. 1

                                                                    That’s true. But my point is that it makes no sense to assign quantifiers to programmer output without actually being able to measure it. In business, you could at least use financials as a proxy measure (obviously not a great one).

                                                              2. 1

                                                                Anecdotally, I’m routinely stunned by how productive maintainers of open source frameworks can be. They’re certainly many times more productive than I am. (Maybe that just means I’m a 0.1x programmer, though!)

                                                                1. 1

                                                                  I’m sure that’s the case sometimes. But are they productive because they have more sense of agency? Because they don’t have to deal with office politics? Because they just really enjoy working on it (as opposed to a day job)? There are so many possible reasons. Makes it hard to establish how and what to measure to determine productivity.

                                                            2. 3

                                                              I don’t get why people feel the need to pretend talent is a myth or that 10x programmers are a myth. It’s way more than 10x. I don’t get why so many obviously talented people need to pretend they’re mediocre.

                                                              edit: does anyone do this in any other field? Do people deny Einstein, Mozart, Michelangelo, Shakespeare, or Newton? LeBron James?

                                                              1. 4

                                                                Deny what exactly? That LeBron James exists? What is LeBron James a 10x of? Athlete? Basketball player? What is the scale here?

                                                                A 10x programmer? I’ve never met one. I know people who are very productive within their area of expertise. I’ve never met someone whom I can drop into any area and they are, boom, 10x more productive, and if you say “10x programmer”, that’s what you are saying.

                                                                This of course presumes that we can manage to define what the scale is. We can’t as an industry define what productive is. Is it lines of code? Story points completed? Features shipped?

                                                                1. 2

                                                                  Context is a huge factor in productivity. It’s not fair to subtract it out.

                                                                  I bet you’re a lot more than 10X better than I am at working on Pony, by any metric you want. I haven’t written much C since college; I bet you’re more than 10X better than me in any C project.

                                                                  You were coding before I was born and, as far as I can tell, are near the top of your field. I’ve been coding most of my life and I’m good at it, but the difference is there. I know enough to be able to read your code and tell that you’re significantly more skilled than I am. I bet you’re only a factor of 2 or 3 better at general programming than I am. (Here I am boasting.)

                                                                  In my areas of expertise, I could win some of that back and probably (but I’m not so sure) outperform you. I’ve only been learning strategies for handling concurrency for 4 years. Every program (certainly every program with a user interface) has to deal with concurrency, and your skill in that sub-domain alone could outweigh my familiarity in any environment.

                                                                  There are tons of programmers out there who cannot deal with any amount of concurrency at all, even in their most familiar environment. There are bugs they will encounter which they cannot possibly fix until they remedy that deficiency, and that’s one piece of a larger puzzle. I know that the right support structure of more experienced engineers (and tooling) can solve this; I don’t think that kind of support is the norm in the industry.

                                                                  If we could test our programming aptitudes as we popped out of the womb, all bets would be off. This makes me think that “10X programmer” is ill-defined. Maybe we’re not talking about the same thing at all.

                                                                  1. 2

                                                                    No I agree with you. Context is important. As is having a scale. All the conversations I see are “10x exists” and then no accounting for context or defining a scale.

                                                                2. 2

                                                                  While I’m not very familiar with composers, I can tell you that basketball players (LeBron) can be and are measured. Newton created fundamental laws and integral theories; Shakespeare’s works continue to be read.

                                                                  We do acknowledge the groundbreaking work of folks like Dennis Ritchie, Ken Iverson, Alan Kay, and other computing pioneers, but I doubt “Alice 10xer” at a tech startup will have her work influence software engineers hundreds of years later. So, barring that sort of influence, there are not enough metrics or studies to show that one engineer is 10x another at anything.

                                                              2. 3

                                                                The ‘10X programmer’ is relatively common, maybe 1 standard deviation from the median? And you don’t have to get very far to the left of the curve to find people who are 0.1X or -1.0X programmers.

                                                                So, it’s fairly complicated because people who will be 10X in one context are 1X or even -1X in others. This is why programming has so many tech wars, e.g. about programming languages and methodologies. Everyone’s trying to change the context to one where they are the top performers.

                                                                There are also feedback loops in this game. Become known as a high performer, and you get new-code projects where you can achieve 200 LoC per day. Be seen as a “regular” programmer, and you do thankless maintenance where one ticket takes three days.

                                                                I’ve been a 10X programmer, and I’ve been less-than-10X. I didn’t regress; the context changed out of my favor. Developers scale badly and most multi-developer projects have a trailblazer and N-1 followers. Even if the talent levels are equal, a power-law distribution of contributions (or perceived contributions) will emerge.

                                                                1. 1

                                                                  I’m glad you acknowledge that there’s room for a 10X or more-than-10X gap in productivity. It surprises me how many people claim that there is no difference in productivity among developers. (Why bother practicing and reading blog posts? It won’t make you better!)

                                                                  I’m more interested in exactly what it takes to turn a median (1X by definition) developer into an exceptional developer.

                                                                  I don’t buy the trailblazer-and-N-1-followers argument, because I’ve witnessed massive success (by any metric) cleaning up the non-functioning, requirements-failing (but potentially marketable!) untested messes that an unskilled ‘trailblazer’ leaves in their (slowly moving) wake. Do you think it’s all context, or are there other forces at work?

                                                              1. 14

                                                                I wouldn’t call defer a “very elegant solution” when RAII exists :)

                                                                1. 7

                                                    The problem with RAII is that it needs to live in a class destructor. Defer can just happen by writing a single line of ordinary code.

                                                                  1. 7

                                                      Except RAII can handle the case where ownership is transferred to some other function or variable. Also, it scales well to nested resources, whereas figuring out which structs in a given C library require a (special) cleanup call depends entirely on careful reading of the relevant documentation. If RAII were just about closing file handles at the end of a function, few people would care.
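                                                      To make the ownership-transfer point concrete, here’s a minimal Rust sketch (names like `Resource` and `consume` are made up for illustration): the value is moved into another function, and its destructor still runs exactly once, at the new owner’s end of scope. A `defer`-style cleanup at the original call site would either double-free or have to be manually cancelled.

```rust
use std::cell::Cell;

thread_local! {
    // Counts how many times the resource's destructor ran.
    static DROPS: Cell<u32> = Cell::new(0);
}

struct Resource;

impl Drop for Resource {
    fn drop(&mut self) {
        DROPS.with(|d| d.set(d.get() + 1));
    }
}

// Ownership is transferred in; cleanup happens here, not at the caller.
fn consume(_r: Resource) {}

fn drops_after_transfer() -> u32 {
    DROPS.with(|d| d.set(0));
    let r = Resource;
    consume(r); // moved: the caller cannot (and need not) clean up again
    DROPS.with(|d| d.get())
}

fn main() {
    // Exactly one cleanup despite the ownership transfer.
    assert_eq!(drops_after_transfer(), 1);
}
```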

                                                                    1. 2

                                                                      Except RAII can handle the case where ownership is transferred to some other function or variable.

                                                                      Does that matter for languages that have GC?

                                                                      1. 7

                                                        RAII is not exclusive to memory management. The Resource in RAII can be acquired memory, but it can equally be an open file descriptor, a socket, or any other resource that GC won’t collect.
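                                                        For instance, Rust’s standard `File` closes its OS file descriptor in its `Drop` impl, so a non-memory resource is released deterministically at end of scope with no explicit `close()` call (this sketch uses a hypothetical temp-file name):

```rust
use std::fs::File;
use std::io::{Read, Write};

// Write then read back a file. `File` closes its descriptor when it is
// dropped -- RAII applied to a resource a GC would not reclaim promptly.
fn roundtrip(data: &str) -> std::io::Result<String> {
    let path = std::env::temp_dir().join("raii_fd_demo.txt");
    {
        let mut f = File::create(&path)?;
        f.write_all(data.as_bytes())?;
    } // `f` dropped here: descriptor closed, no explicit close() call

    let mut s = String::new();
    File::open(&path)?.read_to_string(&mut s)?;
    Ok(s)
}

fn main() {
    assert_eq!(roundtrip("hello").unwrap(), "hello");
}
```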

                                                                      2. 1

                                                                        I think the ideal solution would be to be able to use class destructors for some things, but also be able to add a block to the “destruction” of a specific instance.
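                                                        For what it’s worth, that per-instance flavor can be sketched in Rust with a tiny guard type that runs an arbitrary closure when one specific value is destroyed (the `Guard` name is made up here; the `scopeguard` crate offers a polished version of the same idea):

```rust
use std::cell::Cell;

// Attach an arbitrary block of code to the destruction of one value,
// on top of whatever Drop impls its fields already have.
struct Guard<F: FnMut()>(F);

impl<F: FnMut()> Drop for Guard<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

fn guard_ran() -> bool {
    let ran = Cell::new(false);
    {
        let _g = Guard(|| ran.set(true));
        assert!(!ran.get()); // not yet: the guard is still alive
    } // _g dropped here; its attached block runs, destructor-style
    ran.get()
}

fn main() {
    assert!(guard_ran());
}
```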

                                                                    2. 3

                                                                      Doesn’t RAII sort of hide the cleanup from your actual code? I imagine that can work only if one can trust that every library you ever use behaves well in this manner. Then again, I guess an explicitly called cleanup routine may be of poor quality as well.

                                                                      1. 8

                                                                        That’s the point. Cleanup is automatic, deterministic, invisible. You can’t forget it, while you definitely can forget a defer something.close().

                                                                        Every library in Rust does behave like this, and I guess pretty much every library in C++ (that you would actually want to use) does as well.

                                                                      2. 3

                                                                        Excellent point! Now it feels only slightly more elegant than goto :)

                                                                      1. 4

                                                                         If you look at the success of the internet (beyond just the web), I think it’s safe to say OO, not FP, is the most scalable system-building methodology. An important realization that Alan Kay emphasizes here is that OO and FP are not incompatible at all. A formal merging of FP and OO can be seen in Carl Hewitt’s Actor Model.

                                                                        In other words, I think FP can supercharge OO and it seems the rock stable and fast systems built with Erlang and friends prove this out.

                                                                        1. 7

                                                                           I think servers have scaled thanks to solid messaging protocols that are not OOP in nature. And databases are still relational, last I checked.

                                                                          1. 6

                                                                             Alan Kay would say that OOP’s foundation is messaging protocols.

                                                                            1. 2

                                                                               Precisely! The whole internet is an object-oriented system. The smallest model of an object is a computer. So what is an object? It’s a computer that can receive and send messages. Systems like Erlang run millions of little computers on one physical machine, for instance.
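That "object as a little computer" idea can be sketched with plain OS threads and channels. This is a toy invented for illustration, not any real actor library: the counter's state is private to its thread and reachable only through messages.

```rust
use std::sync::mpsc;
use std::thread;

// Messages the little "computer" understands.
enum Msg {
    Add(u64),
    Get(mpsc::Sender<u64>), // reply channel, so the caller gets an answer back
    Stop,
}

// Spawn an actor-ish counter: it owns its state and serves messages in a loop.
fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut total = 0u64; // private state: no one can poke it directly
        for msg in rx {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Get(reply) => {
                    let _ = reply.send(total);
                }
                Msg::Stop => break,
            }
        }
    });
    tx
}

fn main() {
    let counter = spawn_counter();
    counter.send(Msg::Add(2)).unwrap();
    counter.send(Msg::Add(3)).unwrap();
    let (reply_tx, reply_rx) = mpsc::channel();
    counter.send(Msg::Get(reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 5);
    counter.send(Msg::Stop).unwrap();
}
```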

                                                                              1. 4

                                                                                 That’s a real stretch. I might as well claim that REST’s success is entirely because it is really just functional programming, as it passes the state along with the function, and that it is pretty much just a monad.

                                                                                Also, SQL is still king and no object-oriented database approach has supplanted it.

                                                                            2. 4

                                                                               They use the FSM model. Hardware investigations taught me they’d fit the Mealy and Moore models depending on what subset of the protocol is being implemented, or on how one defines terms. Even most software implementations used FSMs. Maybe all of the legacy implementations, given what I’ve seen, but there could be exceptions.

                                                                               And, addressing zaphar’s claim, their foundation, or at least their abstracted form, may best be captured with Abstract State Machines, described here. Papers on them argue they’re more powerful than the Turing model since they operate on mathematical structures instead of strings. Spielmann claims Turing Machines are a subset of ASMs. So, the Internet was built on the FSM model which, if we must pick a foundation, matches the ASM model best, even though the protocols and FSMs themselves predate the model. If a tie-breaker is needed, ASMs are also one of the most successful ways for non-mathematicians to specify software, in terms of ease of use and applicability.

                                                                              1. 3

                                                                                 You just made the engineer inside me happy :) FSMs are the first thing we learned in engineering school, but too often software is just hacked together from code rather than design. FSMs form the basis of any protocol/service, e.g. TCP, FTP, TLS, SSH, DNS, HTTP, etc.

                                                                                1. 3

                                                                                  The cool thing is those can be implemented and verified at the type level in dependently typed functional languages. See Idris’ ST type. Session types are another example. Thankfully I can see movements in the FSM direction on the front end with stuff like Redux and Elm, but alas it will be a while before these can be checked in the type system.
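A poor man's version of that idea is the typestate pattern, which works in any language with move semantics. Here is a hypothetical connection protocol sketched in Rust (Idris' ST and session types check much richer properties; this only shows the flavor): each state is its own type, and transitions consume the old state, so an illegal transition is a compile error rather than a runtime bug.

```rust
// Each protocol state is a distinct type.
struct Closed;
struct Connected;

impl Closed {
    // Closed -> Connected; consumes the Closed state.
    fn connect(self) -> Connected {
        Connected
    }
}

impl Connected {
    // Only a Connected socket can send; returns bytes "sent".
    fn send(&self, data: &str) -> usize {
        data.len() // stand-in for actually transmitting
    }
    // Connected -> Closed; consumes the Connected state.
    fn close(self) -> Closed {
        Closed
    }
}

fn main() {
    let conn = Closed.connect(); // Closed -> Connected
    let sent = conn.send("hello");
    assert_eq!(sent, 5);
    let _closed = conn.close(); // conn is consumed here
    // _closed.send("x"); // would not compile: Closed has no send()
}
```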

                                                                            3. 4

                                                                               I don’t think the internet is a good reference model. IMO the internet is largely a collection of “whatever we had at the time” with a sprinkle of “this should work” and huge amounts of duct tape on top. The internet succeeded despite being built on OO, not because of it. Though I think with FP the internet would also have succeeded in spite of it, not because of it.

                                                                              There is no one true methodology, I think it’s best if you mix the two approaches where it makes sense to get the best of both worlds.

                                                                              1. 1

                                                                                 Let me be more specific: by internet I mean TCP/IP and friends, not HTTP and friends.

                                                                                1. 2

                                                                                  Even TCP/IP and friends is a lot of hacks and “//TODO this is a horrible hack but we’ll fix it later”. HTTP is just the brown-colored cream on top of the pie that is the modern internet.

                                                                                   It’s why DNSSEC and IPv6 have seen such little adoption; all the middleboxes someone hacked together once are still up and running with terrible code, and they have to be fully replaced to not break either protocol.

                                                                                   I’ve seen enough routers that silently malform TCP packets or (more fun) recalculate the checksum without checking it, making data corruption a daily occurrence. Specs aren’t followed, they’re abused.

                                                                                  1. 2

                                                                                     And yet the internet has never shut down since it started running, even though all its atoms have been replaced many times over. Billions of devices are connected and the whole system manages to span the entire planet. It just works.

                                                                                    It’s an obviously brilliant and successful design that created tens of trillions of dollars in value. I think you will be hard pressed to find another technology that was this successful and that changed the world to the degree the internet has.

                                                                                    Does it have flaws like the ones outlined? Yes of course. Does it work despite them? Yes!

                                                                                    The brilliance of the internet is that even when specs are not followed, the system keeps on working.

                                                                                    1. 2

                                                                                      I think it’s more in spite of how it was built and not because of it.

                                                                                       And the internet has shut down several times by now, or at least large parts of it (just google “BGP outage” or “global internet outage”).

                                                                                      It’s not a brilliant design but successful, yes. It’s probably just good enough to succeed.

                                                                                       Not brilliant; it merely works by accident, and the accumulated duct tape keeps it going despite some hiccups along the way.

                                                                                       If the internet were truly brilliant it would use a fully meshed overlay network and not rely on protocols like BGP for routing. It would also not have to package everything in Ethernet frames (which are largely useless and could be replaced with more efficient protocols).

                                                                            1. 6

                                                                               Deleting Facebook isn’t a particularly useful exercise because I’m pretty sure they don’t delete the data they already have, and they create shadow profiles for people who aren’t Facebook users, even without directly collecting data from you. Blocking their domains is a mild hindrance, not an actual measure to stop them.

                                                                               If you’re deleting your Facebook account because it’s not useful to you, or as a political protest action, fine, but at least acknowledge that you’re not meaningfully preventing them from collecting data.

                                                                              1. 7

                                                                                If enough people delete their profiles, then it affects the stats Facebook presents to advertisers, making it a less attractive advertising platform with a smaller audience. That hits Facebook in the pocket, which is the only thing they care about.

                                                                                1. 3

                                                                                   I think it is very useful, because they lose one of their primary sources of data. Installing uBlock Origin, Privacy Badger, and other extensions should also help block trackers on most websites. There’s nothing I can do to stop Facebook buying credit data and other third-party data except lobby my local politicians. But if everyone deleted Facebook and stopped browsing Instagram models for… ahem… personal entertainment purposes… Facebook would lose their primary source of income :)

                                                                                  1. 2

                                                                                    It may be a functional no-op, but it very definitely sends a message to Facebook corporate. I doubt this will change anything in the long haul - their bottom line depends upon exploitative behavior, but I expect a lot of smoke and little to no fire coming out of all of this.

                                                                                  1. 18

                                                                                    I’ve seen this sentiment being expressed by quite a few people recently, and it makes me happy.

                                                                                    I recently met one of my programming heroes, and we were geeking out and it was wonderful; it felt really nice that I could keep up with them and that we shared so many opinions. Then I asked what they do when they’re not programming. They paused, and then told me they don’t do much else.

                                                                                    Which is also fine, of course, but to me it highlighted how individual this is. Up until then I felt like we were extremely similar. But, I need time to play guitar, patch synthesisers, be outside in nature, draw, fiddle with electronics, play video games, and all the other things I enjoy doing, and need to do in order to feel like a whole human. That leaves almost no time for programming outside of work.

                                                                                    So in that way, we were each other’s complete opposite, and that’s great! What we choose to do in our spare time does not determine how skilled we are at programming, and everyone in tech shouldn’t be the same.

                                                                                    (Although right now I am unemployed, so I can get some coding in anyway 😉)

                                                                                    1. 5

                                                                                       In many areas (architecture, electronics, finance, …), people aren’t using their “work” skills at home. I mean…

                                                                                       • How many electronics engineers are working on open designs on weekends (you’ll find some, obviously, but proportionally not that many)?
                                                                                       • How many architects are doing pet projects on weekends? Again, you’ll find some… but it’s far from the majority.
                                                                                       • How many accountants are doing accounting on weekends? …Maybe a bit on their own finances… but come on…

                                                                                       To me it’s not even a real issue, and even if it’s a good way to recruit, it’s not mandatory and shouldn’t be.

                                                                                      1. 19

                                                                                        At the risk of being so terse that others will pedantically snipe my obviously flawed reasoning: programming is a deeply creative endeavor, its only effective cost is time, and the results of a working program can be tangibly experienced without any additional cost. The raw materials for programming are comparatively cheap. To that end, it is no surprise at all that there are many people who code on their free time in contrast to many other professions.

                                                                                        1. 3

                                                                                           I see two arguments here: creativity and cost. Accountancy is not creative, but it is cheap (a computer + Excel + paper). Architecture is creative and (to the extent of seeing relative results) cheap (paper/computer). I see your point and won’t enter into the debate about creativity, but I still don’t understand why we talk so much about this. My ideas are:

                                                                                           • It’s a self-reinforcing loop (people code on weekends; people who don’t feel bad about not doing so, so they start doing it too…).
                                                                                           • It’s not so different from a few other creative lines of work, but we tend to talk/share much more about it.
                                                                                          1. 0

                                                                                            programming is a deeply creative endeavor

                                                                                            I challenge that assertion. It’s no more creative than construction work or any other trade.

                                                                                            1. 11

                                                                                              Have you ever worked in construction or any other trade? What makes you think that those things aren’t creative?

                                                                                               Your implication suggests you believe the trades aren’t creative. As someone who has spent a few summers hanging drywall, and who has a family full of tradesmen, I would challenge the implication that the trades do not require a fair amount of creativity.

                                                                                              Any problem-solving discipline requires creativity. In the trades, this is readily apparent the first time you talk to someone who has to coordinate the logistics of moving several tons of material up a narrow street that must remain open to regular traffic. Or when you talk to the plumber who has to retrofit three different piping systems in the same house reno so that shit won’t literally fly out of the toilets when it rains more than 3/4”. Or when you talk to the surveyor who has to figure out how to shoot a line through dense woods so she can accurately determine the property line because the next door neighbor is under the mistaken impression that they own land 13’ past where they actually do.

                                                                                               These are all real examples of situations which required creative solutions. No instruction manual exists to tell the GC how to coordinate those material deliveries, help the plumber design a wastewater system for a house, or help shoot a straight property line in dense situations. These people rely on ingenuity and experience, as well as their creativity, to help them find a solution.

                                                                                               So I suppose in a sense I agree with you: programming isn’t any more creative than the trades; but I disagree with your implication that either is not a creative process.

                                                                                              1. 3

                                                                                                 I haven’t worked in the trades, but I’ve done supervision for engineering. I agree they are creative, just as creative as programming. I guess my phrasing was poor: I meant to imply that tradespeople aren’t hired for doing big weekend hobby projects on tradehub.

                                                                                                 And while I think it is creative, I don’t think it is deeply creative in the sense of being more art than mechanics. While there is an art to it, it isn’t itself an art. Many jobs in both the trades and programming are pretty routine, boring work.

                                                                                                1. 3

                                                                                                  Note that I listed several reasons why programming attracts a lot of folks that do it in their free time. In the common case, creativity alone isn’t sufficient. The expenditure of resources to realize a result is a key ingredient to my argument.

                                                                                              2. 5

                                                                                                I challenge that assertion. It’s no more creative than construction work or any other trade.

                                                                                                To the extent that the level of creativity can be compared, I disagree. To the extent that the level of creativity cannot be compared, I agree.

                                                                                                Pick your assumptions. I don’t really care otherwise, and I think the direction you’re drawing me in is a pointless waste of time.

                                                                                        1. 14

                                                                                          I believe that OO affords building applications of anthropomorphic, polymorphic, loosely-coupled, role-playing, factory-created objects which communicate by sending messages.

                                                                                          It seems to me that we should just stop trying to model data structures and algorithms as real-world things. Like hammering a square peg into a round hole.

                                                                                          1. 3

                                                                                            Why does it seem that way to you?

                                                                                            1. 5

                                                                                               Most professional code bases I’ve come across are objects all the way down. I blame universities for teaching OO as the one true way. C# and Java code bases are naturally the worst offenders.

                                                                                              1. 5

                                                                                                I mostly agree, but feel part of the trouble is that we have to work against language, to fight past the baggage inherent in the word “object”. Even Alan Kay regrets having chosen “object” and wishes he could have emphasized “messaging” instead. The phrase object-oriented leads people to first, as you point out, model physical things, as that is a natural linguistic analog to “object”.

                                                                                                 In my undergraduate days, I encountered a required class with a project specifically intended to disabuse students of that notion. The project deliberately tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts, or just by naming classes directly after interactions.

                                                                                                 I suppose I have taken that “Aha!” moment for granted and can see how, in the absence of such an explicit lesson, it might be hard to discover the notion on your own. It is definitely a problem if OO concepts are presented as universally good or without pitfalls.

                                                                                                1. 4

                                                                                                   I encountered a required class with a project specifically intended to disabuse students of that notion. The project deliberately tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts, or just by naming classes directly after interactions.

                                                                                                  Can you remember some of the specifics of this? Sounds fascinating.

                                                                                                  1. 3

                                                                                                    My memory is a bit fuzzy on it, but the project was about simulating a bank. Your bank program would be initialized with N walk-in windows, M drive-through windows and T tellers working that day. There might’ve been a second type of employee? The bank would be subjected to a stream of customers wanting to do some heterogeneous varieties of transactions, taking differing amounts of time.

                                                                                                     There did not need to be a teller at the drive-through window at all times if no customer was there, and there were some precedence rules: if a customer was at the drive-through and no teller was at the window, the next available teller had to go there.

                                                                                                    The goal was to produce a correct order of customers served, and order of transactions made, across a day.

                                                                                                     The neat part (pedagogically speaking) was the project description/spec. It went through so much effort to slowly describe and model the situation for you, full of distracting (though very real-world) details, that it all but asked you to subclass things needlessly, much to your detriment. Are the multiple types of employees completely separate classes, or both subclasses of an Employee? Should Customer and Employee both be subclasses of a Person class? After all, they share the property of having a name to output later. What about DriveThroughWindow vs WalkInWindow? They share some behaviors, but aren’t quite the same.

                                                                                                     Most people here would realize those are the wrong questions to be asking. Even for a new programmer, the true challenge was gaining your first understanding of concurrency and following a spec’s rules for resource allocation. But said new programmer had just gone through a week or two on interfaces, inheritance and composition, and oh look, now there’s this project spec begging you to use them!

                                                                                                2. 2

                                                                                                  Java and C# are the worst offenders and, for the most part, are not object-oriented in the way you would infer that concept from, for example, the Xerox or ParcPlace use of the term. They are C in which you can call your C functions “methods”.

                                                                                                  1. 4

                                                                                                    At some point you have to just let go and accept the fact that the term has evolved into something different from the way it was originally intended. Language changes with time, and even Kay himself has said “message-oriented” is a better word for what he meant.

                                                                                                    1. 2

                                                                                                       Yeah, I’ve seen that argument used over the years; I might as well call it the No True Scotsman argument. Yes, they are multi-paradigm languages, and I think that’s what made them more useful (my whole argument was that OOP isn’t for everything). Funnily enough, I’ve seen a lot of modern C# and Java that decided message passing is the only way to do things, and that multi-thread/process/service is the way to go for even simple problems.

                                                                                                      1. 4

                                                                                                        The opposite of No True Scotsman is Humpty-Dumptyism, you can always find a logical fallacy to discount an argument you want to ignore :)

                                                                                                3. 2
                                                                                                  Square peg;  
                                                                                                  Round hole;  
                                                                                                  Hammer hammer;  
                                                                                                  hammer.Hit(peg, hole);
                                                                                                  1. 4

                                                                                                    A common mistake.

                                                                                                     In object-orientation, an object knows how to do things itself. A peg knows how to be hit, i.e. peg.hit(…). In your example, you’re setting up your hammer to be constantly changed and modified as it needs to be extended to handle different ways of hitting new and different things. In other words, you’re breaking encapsulation by requiring your hammer to know about other objects’ internals.
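For illustration only, a sketch of that "tell, don't ask" layout in Rust: the peg decides what being hit does to it, and the hammer never touches the peg's fields (the types and numbers here are invented):

```rust
// The peg owns the knowledge of what being hit does to it.
struct Peg {
    depth_mm: u32, // private state from the hammer's point of view
}

impl Peg {
    fn hit(&mut self, force: u32) {
        // The peg decides how it responds; callers never touch depth_mm.
        self.depth_mm += force / 2;
    }
}

// The hammer only delivers force; it knows nothing about peg internals.
struct Hammer {
    force: u32,
}

impl Hammer {
    fn strike(&self, peg: &mut Peg) {
        peg.hit(self.force); // tell the peg, don't reach into it
    }
}

fn main() {
    let mut peg = Peg { depth_mm: 0 };
    let hammer = Hammer { force: 10 };
    hammer.strike(&mut peg);
    assert_eq!(peg.depth_mm, 5);
}
```

If a new kind of thing needs to be hittable, it gets its own `hit` method; the hammer stays unchanged.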

                                                                                                  2. 2

                                                                                                     Your use of a real-world simile is hopefully intentionally funny. :)

                                                                                                    1. 2

                                                                                                      That sounds great, as an AbstractSingletonProxyFactoryBean is not a real-world thing, though if I can come up with a powerful and useful metaphor, like the “button” metaphor in UIs, then it may still be valuable to model the code-only abstraction on its metaphorical partner.

                                                                                                      We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                                                                                                      1. 2


                                                                                                         A factory is a real-world thing. The rest of that nonsense is just abstraction disease, which is either used to work around language-expressiveness problems or comes from people adding an abstraction for the sake of making patterns.

                                                                                                        We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                                                                                                         I think OOP has its place in the world, but it is not right for every problem (maybe not even the majority).

                                                                                                        1. 3

                                                                                                          A factory in this context is a metaphor, not a real world thing. I haven’t actually represented a real factory in my code.

                                                                                                          1. 2

                                                                                                            I know of one computer in a museum that if you boot it up, it complains about “Critical Error: Factory missing”.

                                                                                                            (It’s a control computer for a factory, it’s still working, and I found it most charming that someone modeled that case and shows an appropriate error)

                                                                                                            1. 2

                                                                                                              But they didn’t handle the “I’m in a museum” case. Amateurs.

                                                                                                      2. 1

                                                                                                        You need to write say a new air traffic control system, or a complex hotel reservation system, using just the concepts of data structures and algorithms? Are you serious?

                                                                                                      1. 4

                                                                                                        I used Google Wave briefly to plan a trip with some friends. It had a lot of potential, actually.

                                                                                                        1. 5

                                                                                                          I also used Google Wave and agree; I saw the potential right away. It’s a shame it was underappreciated and that the project wasn’t made a priority or given more resources.

                                                                                                          1. 1

                                                                                                            I’ve used iOS Notes and it makes edits instantly visible. I’m sure there are other collaborative tools available.

                                                                                                          1. 14

                                                                                                            Google, the only problems in email are security related (spam, viruses, privacy, authentication, etc). Be engineers, fix that boring stuff and stop trying to control the web.

                                                                                                            1. 5

                                                                                                              there are other problems in email, though unfortunately they are caused or enabled by Gmail (top posting, HTML, exclusion of independent servers).

                                                                                                            1. 14

                                                                                                              So who wants to adopt the lobster for lobste.rs?

                                                                                                              1. 6

                                                                                                                why not zoidberg?

                                                                                                                1. 5

                                                                                                                  I’m up for donating to a pool for this.

                                                                                                                  1. 4

                                                                                                                    Agreed with /u/gerikson, I’m up for a donation pool! Who wants to spearhead it?

                                                                                                                    1. 15

                                                                                                                      I could put together a pool to try to hit the Silver or Gold level. The link would point back to a note on the about page. There would be no reward for donating besides the warm glow of knowing you’ve helped support an organization that is the source of so much error handling in our code.

                                                                                                                      Please take this ad-hoc poll by upvoting the single highest amount you’d donate towards this. Enough support and I’ll put something together. (If you made judicious use of your GPU a few years ago and have cryptocurrency to donate, please select the amount of USD you’d convert it into before sending it because I’m game for a fun lark, not a major project.) (Edit: tweeted)

                                                                                                                      1. 59

                                                                                                                        10 USD

                                                                                                                        1. 17

                                                                                                                          1 USD

                                                                                                                          1. 9

                                                                                                                            50 USD

                                                                                                                            1. 4

                                                                                                                              100 USD

                                                                                                                              1. 1

                                                                                                                                This is in progress.

                                                                                                                                1. 1

                                                                                                                                  500 USD

                                                                                                                            1. 17

                                                                                                                              Key part I’ve often used to debunk anti-MS sentiment from security folks:

                                                                                                                              “Despite the above, the quality of the code is generally excellent. Modules are small, and procedures generally fit on a single screen. The commenting is very detailed about intentions, but doesn’t fall into “add one to i” redundancy.”

                                                                                                                              “From the comments, it also appears that most of the uglier hacks are due to compatibility issues: either backward-compatibility, hardware compatibility or issues caused by particular software. Microsoft’s vast compatibility strengths have clearly come at a cost, both in developer-sweat and the elegance (and hence stability and maintainability) of the code.”

                                                                                                              Seems most of their problems came not from apathy but from caring about compatibility more than anyone else on the desktop. That helped ensure their lock-in and billions. The cost was worse flexibility, reliability, and security. An acceptable cost given Gates’ goal of becoming super rich. Not as great for users, though. Fortunately, the Security Development Lifecycle got some of that under control, with Windows kernel 0-days becoming rare versus other types. Their servers are very reliable, too.

                                                                                                                              Anyone wondering what Microsoft could do if not so focused on backward compatibility need only look at MS Research’s projects. Far as OS’s, Midori and VerveOS come to mind for different purposes. One could be a foundation of the other actually.

                                                                                                                              1. 7

                                                                                                                                Not as great for users, though.

                                                                                                                                I beg to disagree. A lot of end users and small businesses rely on some unmaintained piece of legacy software in one way or another. The fact that they don’t have to keep a separate PC with an unmaintained, insecure OS on it is a definite plus for those people.

                                                                                                                                1. 4

                                                                                                                  Regarding the “what Microsoft could do” – that’s exactly what they’re trying to do with UWP apps in Windows 10. Proper sandboxing for all applications, ideally even all browser tabs in OS-level sandboxes.

                                                                                                                  I’m especially interested (and scared at the same time) in the rumors about Polaris, which is said to be a Windows 10 variant that throws the entire Win32 layer away, with all the backwards-compatibility patches existing only within the UWP sandbox of each separate application, and with much better security (but also, obviously, less customizability).

                                                                                                                                  1. 3

                                                                                                                                    They’re definitely doing new stuff with UWP. I’ve been off Windows too long to know anything about it. I was mainly talking about designing every aspect of an OS around high-level, modular, safe, and/or concurrent programming. The two links in my comment will give you an idea of what they’re capable of.

                                                                                                                                  2. 3

                                                                                                                    I’ve never thought that Microsoft wrote bad functions, but that their design is over-complicated. There are too many moving parts, too many function arguments, too many layers, … It’s the accidental complexity that seems to cause logical bugs.

                                                                                                                                  1. 1

                                                                                                                    How about another bug: int*2 can overflow, which is undefined behavior in C. That’ll certainly cause problems.

                                                                                                                                    1. 4

                                                                                                                      This is one area where Rust and C differ: overflow is defined behavior in Rust (a panic in debug builds, two’s-complement wrapping in release), rather than undefined as in C.
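                                                                                                                      As an aside, the contrast can be shown concretely. Beyond the debug-panic/release-wrap defaults, Rust’s standard library lets you name the overflow policy you want per operation; this is a minimal sketch (not from the thread) using the `checked_mul`, `wrapping_mul`, and `saturating_mul` methods:

                                                                                                                      ```rust
                                                                                                                      fn main() {
                                                                                                                          let x: i32 = i32::MAX; // 2147483647

                                                                                                                          // checked_mul returns None on overflow instead of producing a bogus value.
                                                                                                                          assert_eq!(x.checked_mul(2), None);

                                                                                                                          // wrapping_mul asks for two's-complement wraparound explicitly.
                                                                                                                          assert_eq!(x.wrapping_mul(2), -2);

                                                                                                                          // saturating_mul clamps the result at the type's bounds.
                                                                                                                          assert_eq!(x.saturating_mul(2), i32::MAX);

                                                                                                                          println!("every overflow mode behaved as defined");
                                                                                                                      }
                                                                                                                      ```

                                                                                                                      The equivalent `int * 2` in C, with `int` at `INT_MAX`, is undefined behavior: the compiler is free to assume it never happens.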

                                                                                                                                    1. 11

                                                                                                                                      Finally a proper use of the caps lock key:

                                                                                                                                      Press caps lock to switch to a command line interface; here’s the debug screen.

                                                                                                                                      1. 8

                                                                                                                                        Well, I’d rather use it for Control. But maybe if keyboards would put Control where it belongs, next to Space (it should go Super Alt Control Space Control Alt Super), then it wouldn’t be necessary to have Control where most keyboards have Caps Lock.

                                                                                                                                        1. 5

                                                                                                                          I always map Caps Lock to Ctrl, so whenever I’m on someone else’s laptop I keep flipping into caps when I mean to copy/paste/break/etc.

                                                                                                                                          1. 3

                                                                                                                                            it should go Super Alt Control Space Control Alt Super

                                                                                                                                            What’s the premise for “should” here?

                                                                                                                                            1. 1

                                                                                                                                              Because of the frequency of use. Control is used almost all the time, in Windows, Linux & emacs. As such, it should go into the easiest-to-strike location, right next to the spacebar where the thumb can strike it in conjunction with other keys.

                                                                                                                                              Alt/Meta is used less often, so it should receive the less-convenient spot. Alt should be used for less-frequently used functionality, and to modify Control (e.g. C-f moves forward one character; C-M-f moves forward one word).

                                                                                                                              Super should be used least of the three, and ideally would be reserved for OS-, desktop-environment– or window-manager–specific tasks, e.g. for switching windows or accessing an app chooser. Since it’s used less than either Alt or Control, it belongs in the least-convenient spot, far from the spacebar.

                                                                                                                              If we were really going to do things right, there’d be a pair of Hyper keys outboard of Super, reserved for individual user assignment. But we don’t live in a perfect world.

                                                                                                                                          2. 4

                                                                                                                                            as a vi user, i would have said “use escape” but then remembered my caps-lock key is remapped to escape.

                                                                                                                                          1. 2


                                                                                                                                            Windows code is too complicated. It’s not the components themselves, it’s their interdependencies. An architectural diagram of Windows would suggest there are more than 50 dependency layers (never mind that there also exist circular dependencies). After working in Windows for five years, you understand only, say, two of them. Add to this the fact that building Windows on a dual-proc dev box takes nearly 24 hours, and you’ll be slow enough to drive Miss Daisy.

                                                                                                                            I haven’t been around the industry too long; I was in school when this blog entry was posted. But I’ve seen a few projects struggle and fail because of bad architecture and increasing technical debt. The OP’s article definitely reflects the struggle between new features, legacy support, and paying down technical debt (improving security, etc.).

                                                                                                                                            1. 2

                                                                                                                              The microservices that are all the rage these days add a whole new layer of challenge to understanding dependencies. While monoliths have their own challenges, at least all of the information is there to understand what is connected. I’m still not sure this has been adequately solved.

                                                                                                                                              1. 2

                                                                                                                                                Arguably microservices can simplify this dependency tree tremendously. In the world to date, it has been essentially impossible to compile many differently versioned libraries together into one monolithic application, which is what generally happens when you have a large number of teams doing separate development.

                                                                                                                                                With microservices, again arguably, encapsulation happens at the whole-service layer, so each team is free to develop using whatever versions they like, and just provide HTTP (or whatever) as their high level API.

                                                                                                                                                Where this tends to break down in my experience is (a) where true shared dependencies exist, which can happen if you either were bad at data modeling to begin with or if your needs organically grew differently than your original design, and (b) operationally, in a world of incredibly broken and insecure software, processors, etc., resulting from C (and now JS) and the shared memory model, where it is no longer possible to understand what in the opaque blobs need patching.

                                                                                                                                                1. 1

                                                                                                                                                  C obviously has memory bugs but I’m curious what insecurity you see stemming from JS. Is it the automatic type casting? (I write JavaScript every day and think a good portion of the new parts of the language are good, but I will fully admit it spent its formative years on crack.)

                                                                                                                                                  1. 1

                                                                                                                                    I don’t see how adding more dependencies simplifies anything; it can only make things more complicated. It may be convenient, but it’s not simpler. And in order to have that architecture one needs network protocols and serialization going on, which has a performance and cognitive cost. There certainly are reasons to have a microservice architecture, but I have a hard time seeing simplification as one of them.

                                                                                                                                                  2. 1

                                                                                                                                                    Microservices exist mostly to facilitate development by many teams on a large system. They are one of the best examples of Conway’s Law.

                                                                                                                                      You are correct that they add complexity, and they tend to be adopted regardless of whether they solve a real problem.