1. 42

  2. 13

    The first requirement of any commercial software system is to attract paying customers

    I was on the network engineering team (and the nascent information security team, and was the entirety of the reliability team) at Whole Foods Market 20 years ago. Any time I wanted to do something neat or cool, I was first asked “will it help us sell groceries?”

    It irked me then, as a young man passionate about cool technology, but as I’ve gotten older it’s made more and more sense to me. WFM wasn’t a tech company (it is now, hah!) and IT existed solely to facilitate its primary business goal. Often I feel like we forget that.

    1. 1

      WFM wasn’t a tech company (it is now, hah!)

      Its shift from what you describe to an Amazon subsidiary is probably one of the biggest tech 180s that will happen in the market. I mean, at least until a brick-and-mortar store selling abacuses, pencils, and paper without a cash register releases a 7nm CPU after a reorganization phase.

      1. 2

        I’m not sure most are quite that dramatic, but companies that used to fit into the ‘boring, sensible business’ image wanting to reinvent themselves as tech-centric seems to be a bit of a trend. In some cases it might be more PR than a real 180 in how the business is run, but it’s still something management wants to push. For example, boring ol’ Spanish bank BBVA now puts out streams of press releases about how they’re hip to open source and machine learning, publishing miscellaneous stuff like that on GitHub. Oil companies seem to also be trying to reinvent their image in that direction, although it’s less of a stretch because oil/gas exploration has been heavily driven by data / statistics / computational modeling for decades anyway; it just wasn’t seen as tech culturally.

        1. 1

          “although it’s less of a stretch because oil/gas exploration has been heavily driven by data / statistics / computational modeling for decades anyway”

          I was about to point out that every cool NUMA machine I looked at back in the day (esp SGI or Sun) mentioned the oil industry as a big sector. I was envious of the oil labs. :)

    2. 11

      Here’s the origin of “premium mediocre” for anyone puzzled. As far as I can tell, it means “lower/middle class with pretensions to upper class”; a modern take on nouveau riche from the perspective of the petite bourgeoisie.

      1. 5

        Said in the right tone of voice, I’ve noticed “middle class” itself can mean that in British English (in contrast to American English, where it has an almost exclusively positive, regular-everyday-folks connotation). Often connotes someone/something that is trying to seem upper-class/posh but isn’t really.

      2. 8

        It makes me sad to see what was once largely a craft become what is essentially a commodity. Then again, I suppose it’s a pipe dream to expect quality to prevail in such a fast-growing industry.

        1. 16

          This sums up my disillusionment.

          When I was young, I dreamed of building beautiful cathedrals of software. But if I pay too much attention to tech, it can feel like everyone obsesses over building the crappiest backyard sheds to power barely-thought-out predatory business models. And I’m supposed to be excited about the narrow possibility of accruing disproportionate financial gains.

          I view the actual craft of programming as almost orthogonal to tech itself. Tech headlines are so preoccupied with what other people are doing: who is buying whom, how many Github stars does this have, what OSS product should we be obsessed with from $MEGACORP, how much do you really love JavaScript, etc. I don’t really care about that stuff, that’s celebrity gossip at best. As a result, I don’t pay much attention to tech. The orange website is permanently blocked in my hosts file, I’d block it at the router level if I could.

          Since I have a family to support, I’ll continue to do great work (and get paid decently!) for something I generally like. However, I’ve had to accept the fact that so many devs and non-devs want to make a commodity of something that I value more as a craft, and just sort of let the idea that it could be an industry driven by craft more than commodity go. FWIW, the more we try to commoditize development, the worse everything seems to get; e.g. having a near-fully declarative UI has not fixed the difficulty of creating reliable UIs. Thus, there is still a high skill floor and ceiling to programming, and I’ll likely always be able to find work.

          My own future projects will probably be more art than products intended for end users, because devs only seem to be able to adopt whatever is pushed to them by those with massive marketing budgets.

          1. 3

            It’s not surprising since software is a commodity these days. I suppose it will become similar to automobile mechanics: it requires training and apprenticeship, but is not extremely difficult (compared to, say, college-level STEM), and is a necessary profession, as long as there are cars.

            The corollary is that while it may no longer be that unique to be a software engineer, if you work in a prestigious position you could be developing something really interesting that could one day be used by millions of other engineers.

          2. 8

            This is just the sad shift from hackers to employees.

            It started without anybody noticing years ago, when Open Source was invented as a business friendly alternative to Free Software.

            At times I wonder if hackers should stop calling themselves hackers and look for another term, maybe not even an English word, to define themselves and take their ethics back.

            We have values to share, built around our curiosity: creativity, freedom, ingenuity, critical thinking, dialogue…

            And they have nothing to do with business.

            1. 3

              It has always felt like gentrification to me. Am I wrong here?

            2. 7

              There are some really interesting points here.

              This minimalist knowledge approach to programming languages is cost effective because most code is simple and has a short lifetime; the cost of learning lots of language details does not provide enough benefit to be worthwhile.

              I am a minimalist language Python developer. Why would I spend time learning more about the semantics of Python than I need to?

              I’m surprised to realize I’m a minimalist (in this sense) in most of the languages on my resume. Language (and framework) convergence over the past few years has made it really easy. What are the facilities for using maps, lists, closures? If you think in these concepts, they’re portable across most languages nowadays. Classes, sure, if they’re present and idiomatically used. I don’t sense most Python is OO, for example.
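              The “portable concepts” point can be sketched in a few lines. This is a hedged illustration, not from the original comment; the names (`make_adder`, `bumped`) are invented, and the Python below has near-verbatim analogues in JavaScript (`Array.map`/`filter`), Ruby (`map`/`select`), and most other mainstream languages.

              ```python
              # Assumed example, not from the comment: the same
              # closure + map + filter idiom that transfers across languages.

              def make_adder(n):
                  """Closure capturing `n` -- the same trick exists in JS, Ruby, Lisp..."""
                  return lambda x: x + n

              nums = [1, 2, 3, 4]
              add_ten = make_adder(10)

              # map, then filter -- the concepts port almost verbatim elsewhere
              bumped = [add_ten(x) for x in nums]    # [11, 12, 13, 14]
              large = [x for x in bumped if x > 12]  # [13, 14]
              ```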

              The first requirement of any commercial software system is to attract paying customers. … Minimizing software engineering effort saves time and money (in the short term). If the product is a success, there will be money to pay for what needs to be done, if the product fails nobody cares.

              This is a fundamental truth of this profession.

              Software engineering mediocrity is not only viable, for most people it’s the outcome of making a cost/benefit decision to invest their learning time in the application domain, not software engineering (or computer language).

              The author doesn’t state it, but there’s a similar cost/benefit trade-off for spending time learning a domain. In recent years I’ve foregone any effort whatsoever in learning the domains I’ve been working in. It saves me a lot of stress, and I find that very rarely do I need to seek guidance on domain issues. Usually reading the code or db schema is enough.

              1. 1

                The first requirement of any commercial software system is to attract paying customers. … Minimizing software engineering effort saves time and money (in the short term). If the product is a success, there will be money to pay for what needs to be done, if the product fails nobody cares.

                […] I find that very rarely do I need to seek guidance on domain issues. Usually reading the code or db schema is enough.

                Count your lucky stars that the early mediocrity didn’t include choosing Mongo or otherwise having no schema or data discipline. Even if the product is a success that’ll be an anchor for a long time - maybe forever (I’m expecting good results for Mongo shareholders [I am not one of them] for this reason).

                1. 0

                  This is a fundamental truth of this profession.

                  It’s not for startups that want to growth-hack to IPOs or acquisitions. They’re behind quite a few techs these days. Additionally, some commercial projects are about improving some area of the business not tied to revenue, esp cost-cutting. The customers are already there or coming via some other means. Finally, a rare segment are charitable projects not expected to get any customers.

                2. 2

                  … and I guess this is why software is so terrible.

                  1. 7

                    Funny thing is, though – software that is not subject to the commercial incentives is still terrible.

                    1. 1

                      How about all the free / open source infrastructure (from kernels thru daemons thru libs thru languages & compilers) the commercial sector builds their projects upon? Sure, these too are supported by corps with commercial incentives. But they’re not the “fast and cheap” apps that make it or don’t.

                      I think we get pretty good software from people who do it for the love of doing it and are paid to keep working on it. Maybe it’s not perfect, but there’s a fair amount of software that doesn’t make me hate it all.

                      1. 5

                        I’m with jfb. Most OSS software has poor UI, poor documentation, poor security, and so on. It’s crap. Even if better than proprietary average, that wouldn’t be saying much since so much of it is crap. Software being crap is the default. Another phrasing is Big Ball of Mud.

                        1. 5

                          Part of the problem is that software quality is an aesthetic judgement, a multi-dimensional one, and so people’s views of what makes software “good” are necessarily going to vary.

                          1. 2

                            Well, maybe, maybe not. It’s definitely subjective in terms of what calls people will make on it. There are objective attributes we can go by, though. Traditionally, those included things like size of modules, coupling, amount of interactions, what features were tested, how often they fail, how severe failures are, ease of making a change, ease of training new users if UX, and so on. I think the specific values considered acceptable will vary considerably project to project for good reasons. We often assess the same stuff in each one, though. Suggests some things are more objective than others.

                            1. 3

                              I largely agree with this, yes.

                        2. 1

                          I think they’re almost uniformly terrible, too.

                          1. 1

                            Do you have any examples of software you like? I see your POV a bit and was wondering if you had anything you liked.

                            1. 3

                              I liked the original Interface Builder a lot. I was also a fan of the classic Mac OS for a while. I enjoy Squeak. djb’s software is uniformly good. I use Emacs and I love it but that love is tempered by a strong dislike for emacs lisp itself. But as an environment, I couldn’t possibly surrender it.

                              I admire a lot more software than I like – OpenBSD, for instance.

                              ETA: Postgres, of course, is a very good piece of software.

                              ETA’: I really, really like http://reederapp.com, an iOS/OS X RSS reader.

                    2. 2

                      “A minimalist knowledge approach to software engineering is cost effective because most code does not exist long enough to make it worthwhile investing in reducing future maintenance costs. Yes, it is more expensive for those that survive to become commonly used, but think of all the savings from not investing in those that did not survive.”

                      This is something that I’m probably going to have to think more on. @derek-jones might even have data to support it in his collection. My data, though, indicated that most real-world projects from the 1960s up to present times run into problems late in the lifecycle that they have to fix. Those fixes usually cost more in money or reputation. Some groups spent a small, upfront investment preventing most problems like that. They claim it usually paid off in various ways. This was especially true if software was long-lasting. There were times when the quality cost more overall on top of a thrown-together project.

                      Another issue is that pervasively-buggy software conditioned users to expect that it’s normal. This reduces the demand for, or competitive advantage of, high-quality, mass-market software. Many firms, esp small or startups, can profitably supply buggy software so long as it meets a need and they fix the bugs. In the enterprise market, you can even sell software that barely works or doesn’t at all so long as it appears to meet a need, making someone in the company look good. So, this needs to be factored into the decision of whether to engineer software vs throw it together.

                      I still say lean toward well-documented, easy-to-change software just in case you get stuck with it. You can also charge more in many markets with better rep. Use the amount of QA practices that the market will pay for. If they pay nothing, use stuff that costs about nothing like interface checks, usage-based testing, and fuzzing. If they’ll pay significant amount, add more design/code review, analysis/testing, slices of specialist talent (eg UI or security), improvements to dependencies, and so on.
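                      A concrete sketch of the “costs about nothing” tier mentioned above (an interface check plus fuzzing), assuming nothing beyond the Python standard library; the parser under test is invented for illustration:

                      ```python
                      # Hypothetical example: a toy parser guarded by an interface check,
                      # hammered by a crude seeded fuzz loop. Only ValueError is treated
                      # as an expected failure; anything else would surface as a real bug.
                      import random
                      import string

                      def parse_key_value(line):
                          assert isinstance(line, str), "interface check: expected str"
                          key, sep, value = line.partition("=")
                          if not sep:
                              raise ValueError("missing '=' in %r" % line)
                          return key.strip(), value.strip()

                      def fuzz(iterations=1000):
                          rng = random.Random(0)  # seeded, so any failure is reproducible
                          for _ in range(iterations):
                              line = "".join(rng.choice(string.printable)
                                             for _ in range(rng.randrange(40)))
                              try:
                                  parse_key_value(line)
                              except ValueError:
                                  pass  # expected for malformed input

                      fuzz()
                      ```

                      The seed matters: a reproducible fuzz loop is the difference between a bug report and a shrug.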

                      1. 4

                        For applications, there is also a less rigorous cost/benefit analysis.

                        Code containing a fault is more likely to be modified (removing the fault as a side-effect) than have the fault reported (of course it may be experienced and not reported); see Figure 10.75.

                        Other kinds of related data currently being analysed.

                        Microsoft/Intel is responsible for conditioning users to treat buggy software as normal. When companies paid lots of money for their hardware, they expected the software to work. Success with mass market software meant getting good-enough software out the door, or being wiped out by the competition.

                        1. 2

                          I think IBM’s mainframe data might not fit your argument. IBM kept coming up with things like Fagan Inspections, Cleanroom, formal specs, and safe languages. They often experimented on mainframe software. A good deal of it was written in high-level languages like PL/I and PL/S that prevent many problems a shop using C might have. They have lifecycles that include design and review steps. In other words, IBM was regularly doing upfront investments to reduce maintenance costs down the line. The investments varied depending on which component we’re talking about. The fact they were trying stuff like that should disqualify them, though, as a baseline. A much better example would be Microsoft doing fast-moving, high-feature development in C or C++ before and after introducing SDL and other reliability tools. It made a huge difference.

                          Other issues are backward compatibility and lock-in. The old behavior had to be preserved as new developments happened. The two companies also made the data formats and protocols closed-source, complicated ones to make moves difficult. The result is that both IBM and Microsoft eventually developed a customer base that couldn’t move. Their development practices on the maintenance side probably factor this in. So, we might need multiple baselines, with some allowing lock-in and some being companies that can lose customers at any time. I expect upfront vs fix-or-change-later decisions to be more interesting in the latter.

                          1. 2

                            The data is on customer application usage that ran on IBM mainframes (or at least plug compatibles).

                            1. 1

                              Oh Ok. So, mainframe apps rather than mainframe systems themselves. That would be fine.

                      2. 1

                        The author points out that software engineering practices should not be dictated by survivorship bias; however, in the next paragraph he does suggest software engineering practices based on sampling bias.

                        I think it’s very hard to draw much of any meaningful conclusions from these sort of things. Can you make it if you code trash that barely functions? Probably, if people like your idea. Even if you do really good code, will you have to rewrite it eventually? Probably, if people like your idea.

                        It’s also very easy to compare things that absolutely failed with things that didn’t. But it gets much harder to have a meaningful discussion about degrees of success.

                        There is a lot of code out there, there are a lot of coders, and most of anything isn’t amazing. I think it’s good to be aware of what things look like but I don’t think we can really be very prescriptive about how we should develop based on that.

                        1. 0

                          The market and good software are antagonists? Nothing new under the sun.

                          This article, compared to others on the same subject, seems unnecessarily neutral. You keep expecting him to offer a moral judgment by the end, but the moral question is left open.

                          Shall we comply with this state of things or, as developers, shall we fight against the market pressure to preserve our dignity as humans, workers and software engineers? Well…