1. 14

    I feel sure that the SOLID principles belong in the “helps people who are already expert at doing SOLID things, harms everyone else” category.

    • SRP: what is a “responsibility”? As the author has found out, you can have multiple answers that are all correct. My WorkflowManagerBean has a single responsibility, managing workflows. On the other hand: your map function has two responsibilities: iterating over a sequence, and applying the passed function.
    • OCP: nobody since Bertrand Meyer has given a coherent explanation of OCP. Bob Martin’s version is related to avoiding the fragile base class problem in C++; most of us aren’t doing C++. If this were the orange site, people would be ‘helpfully’ replying with descriptions of the OCP, and no two of these descriptions would be congruent.
    • LSP: the thing that confuses subclasses with subtypes.
    • ISP: if my objects have a single responsibility, why are there different interfaces to segregate?
    • DIP: if I invert dependencies twice I get back to where I started.
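    To make the SRP ambiguity concrete, here is the map example from the bullet list as actual code; a minimal sketch in Python (the function name is invented):

```python
# Is this one responsibility ("mapping") or two (iterating + applying)?
# The SRP gives no rule for deciding; both readings are defensible.
def my_map(fn, seq):
    out = []
    for item in seq:          # "responsibility" 1: iterating the sequence
        out.append(fn(item))  # "responsibility" 2: applying the function
    return out

print(my_map(lambda x: x * 2, [1, 2, 3]))  # prints [2, 4, 6]
```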
    1. 3

      My SOLID principle has been to “avoid OOP”. People sometimes make fun of Haskell for requiring too much theory, but I find that OOP requires the same level of theory to not create a footgun. But at least with Haskell, I have something more like algebra than UML.

      For the most part, I’m preferring procedural-style C++ because it’s easy to design, easy to test, and fast to read and document. Too much abstraction can hurt quite a lot, and many times OOP actually ends up complicating things more than it brings any advantage.

      1. 3

        What I’ve been slowly discovering is that OOP also requires too much theory, but internalising that theory lets me avoid all the incidental complexity that built up around OOP during the Software Engineering times. Don’t give me Java. Give me a vtable, a lookup primitive, a delegate primitive, a selector type and an object type, and I can build the OOP system I need without having to contort my design to fit the OOP system you/Sun/AT&T provided.

        1. 2

          “Give me a vtable, a lookup primitive, a delegate primitive, a selector type and an object type, and I can build the OOP system I need without having to contort my design to fit the OOP system you/Sun/AT&T provided.”

          I like the simplicity of your summary. Everything except a selector type looks familiar. What’s that?

          1. 3

            Some kind of interned symbol type, like a Ruby/Smalltalk/Lisp symbol or an Objective-C selector.
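            For what it’s worth, those few primitives really are enough to build a working message-passing system. A toy sketch in Python, with all names invented for illustration (vtables as dicts keyed by interned-string selectors, lookup walking a delegation chain):

```python
# Toy object model: a vtable is a dict keyed by selector, lookup walks a
# delegation chain, and send ties them together.

def make_object(vtable, parent=None, **state):
    return {"vtable": vtable, "parent": parent, "state": state}

def lookup(obj, selector):
    """Find a method for `selector`, delegating to the parent on a miss."""
    o = obj
    while o is not None:
        method = o["vtable"].get(selector)
        if method is not None:
            return method
        o = o["parent"]
    raise AttributeError(f"does not understand: {selector}")

def send(obj, selector, *args):
    # The receiver stays `obj` even when the method was found via delegation.
    return lookup(obj, selector)(obj, *args)

# A tiny delegation chain: rex delegates to a generic animal.
animal_vt = {"name": lambda self: self["state"]["name"]}
dog_vt = {"speak": lambda self: "woof"}

rex = make_object(dog_vt, parent=make_object(animal_vt), name="rex")
print(send(rex, "speak"))  # found in rex's own vtable: woof
print(send(rex, "name"))   # found by delegating to the animal vtable: rex
```

            Classes, prototypes, or mixins can all be layered on top of these primitives without changing them, which is the parent’s point.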

    1. 5

      Very strong opinions here…

      As far as I’m concerned, strong scrum practices would defeat these issues.

      Bad tools are not scrum. Lack of ownership is not scrum.

      People who try to use scrum as a way to wrap a process around bad ideas will never benefit from it.

      Take the good ideas, apply scrum, and most importantly, adapt to what you learn.

      1. 38

        adapt to what you learn.

        Umm. Points 5 and 6 of TFA?

        I’ve learnt from seeing it in practice, both in my own experience and from speaking to many others… The article is pretty spot on.

        Ok. Warning: incoming rant. Not aimed at you personally, you’re just an innocent bystander. Not for sensitive stomachs.

        Yes, some teams do OK on Scrum (all such teams I have observed ignore largish chunks of it), i.e. they are not doing certified Scrum.

        No team I have observed has done as well as it could have if it had used a lighter-weight process.

        Many teams have done astonishingly badly while doing perfect certified Scrum, hitting every toxic stereotype the software industry holds.

        Sigh.

        I remember the advent of “Agile” in the form of Extreme Programming.

        Apart from the name, XP was nearly spot on in terms of a lightweight, highly productive process.

        Then Kanban came.

        And that was actually good.

        Then Scrum came.

        Oh my.

        What a great leap backwards that was.

        Scrum takes pretty much all the concepts that existed in XP… and ignores all the bits that made it work (refactoring, pair programming, test-driven development, …), and piles on stuff that slows everything down.

        The thing that really pisses me off about Scrum, is the amount of Pseudo Planning that goes on in many teams.

        Now planning is not magic. It’s simply a very data intensive exercise in probabilistic modelling.

        You can tell if someone is really serious about planning: they track leave schedules and team-size changes, have probability distributions for everything, know how to combine them, and update their predictions daily.

        The output of a real plan is a regularly updated probability distribution, not a date.
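        The kind of plan described here can be sketched as a small Monte Carlo model; a hedged illustration in Python, where the per-task three-point estimates are entirely made up:

```python
import random

# Hypothetical backlog: (optimistic, most-likely, pessimistic) days per task.
# These numbers are invented purely for illustration.
tasks = [(1, 2, 5), (2, 3, 8), (0.5, 1, 3)]

def simulate_completion(tasks, trials=10_000):
    """Combine per-task uncertainty into a distribution of total duration."""
    totals = sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    )
    return totals

totals = simulate_completion(tasks)
# Report percentiles, not a single date: "80% chance we finish within X days."
p50 = totals[len(totals) // 2]
p80 = totals[int(len(totals) * 0.8)]
print(f"50% within {p50:.1f} days, 80% within {p80:.1f} days")
```

        When a team member goes on leave or the backlog changes, the inputs change and the percentiles move, which is exactly the daily updating described above.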

        You can tell a work place bully by the fact their plans never change, even when a team member goes off sick.

        In some teams I have spoken to, Scrum planning is just plain unvarnished workplace bullying by powertripping scrum managers, who coerce “heroes” to work massive amounts of unpaid overtime, creating warm steaming mounds of, err, “technical debt”, to meet sprint deadlines that were pure fantasy to start with.

        Yes, if I sound angry I am.

        I have seen Pure Scrum Certified and Blessed Scrum used to hurt people I care about.

        I have seen Good ideas like Refactoring and clean code get strangled by fantasy deadlines.

        The very name “sprint” is a clue as to what is wrong.

        One of the core ideas of XP was “Sustainable Pace”…. which is exactly what a sprint isn’t.

        Seriously, the one and only point of Agile really is the following.

        If being able to change rapidly to meet new demands has high business value, then we need to adapt our processes, practices and designs to be able to change easily.

        Somehow that driving motivation has been buried under meetings.

        1. 8

          I 100% agree with you actually.

          I suppose my inexperience with “real certified scrum” is actually the issue.

          I think it’s perfectly fine and possible to take plays out of every playbook you’ve mentioned and keep the good, toss the bad.

          I also love the idea that every output of planning should be a probabilistic model.

          Anyone who gets married to the process they pick is going to suffer.

          Instead, use the definitions to create commonly shared language, and find the pieces that work. For some people, “sprint” works. For others, pair programming is a must have.

          I think adhering to any single ideology 100% is less like productivity and more like cultish religion.

          1. 5

            fantasy deadlines

            Haha. Deadlines suck so let’s have em every 2 weeks!

            1. 3

              As they say in the XP world: if it hurts, do it more often.

              1. 3

                True. It’s a good idea. One step build pipeline all the way to deployment. An excellent thing, all the pain is automated away.

                If you planned it soundly, then a miss is feedback to improve your planning. As I say, planning is a data intensive modelling exercise. If you don’t collect the data, don’t feed it back into your model… your plans will never improve.

                If it was pseudo planning and a fantasy deadline and the only thing you do is bitch at your team for missing the deadline… it’s workplace bullying and doing it more will hurt more and you get a learned helplessness response.

          2. 12

            Warning: plain talk ahead, skip this if you’re a sensitive type. Scrum can actually work pretty well with mediocre teams and mediocre organizations. Hint: we’re mostly all mediocre. This article reeks of entitlement: “I’m a special snowflake, let ME build the product with the features I want!” Another hint: no one wants this. Outside of really great teams and great developers, which by definition most of us aren’t, you are not capable.

            Because all product decision authority rests with the “Product Owner”, Scrum disallows engineers from making any product decisions and reduces them to grovelling to product management for any level of inclusion in product direction.

            This is the best thing about scrum/agile imo. Getting someone higher in the food chain to gatekeep what comes into development and prioritize what is actually needed is a huge benefit to every developer whether you realize it or not. If you’ve never worked in a shop where Sales, Marketing and Support all call their pet developers to work on 10 hair-on-fire bullshit tasks a day, then you’ve been fortunate.

            1. 9

              Scrum can actually work pretty well with mediocre teams and mediocre organizations. Hint we’re mostly all mediocre.

              The problem is: Scrum also keeps people mediocre.

              Even brilliant people are mediocre, most of the time, when they start a new thing. Also, you don’t have to be a genius to excel at something. A work ethic and time do the trick.

              That said, Scrum, because it assumes engineers are disorganized, talentless children, tends to be a self-fulfilling prophecy. There’s no mentorship in a Scrum shop, no allowance for self-improvement, and no exit strategy. It isn’t “This is what you’ll do until you earn your wings” but “You have to do this because you’re only a developer, and if you were good for anything, you’d be a manager by now.”

              1. 3

                That said, Scrum, because it assumes engineers are disorganized, talentless children, tends to be a self-fulfilling prophecy.

                Inverting the cause and effect here is an equally valid argument: that most developers in fact are disorganized, talentless children, as you say and as the sibling comment highlights. We are hijacking the “Engineer” prestige and legal status with none of the related responsibility or authority.

                There’s no mentorship in a Scrum shop, no allowance for self-improvement, and no exit strategy.

                Is there mentoring, and are there clear career paths, in non-Scrum shops? This isn’t a Scrum-related issue. But regardless, anyone who is counting on the Company for self-actualization is misguided. At the end of the day, no matter how much we would all like to think that our contributions matter, they really don’t. To the Company, we’re all just cogs in the machine. Better to make peace with that and find fulfillment elsewhere.

                1. 3

                  Scrum does not assume “engineers” at all. It assumes “developers”. Engineers are a highly trained group of legally and ethically responsible professionals. Agile takes the responsibility of being an engineer right out of our hands.

                  1. 4

                    Engineers are a highly trained group of legally and ethically responsible professionals.

                    I love this definition. I have always said there’s no such thing as a software engineer. Here’s a fantastic reason why. Computer programmers may think of themselves as engineers, but we have no legal responsibilities nor ethical code that I am aware of. Anyone can claim to be a “software engineer” with no definition of what that means and no legal recourse for liars. It requires no experience, no formal education, and no certification.

                    1. 1

                      True, but why?

                      IMHO, because our field is in its infancy.

                      1. 2

                        I dislike this reason constantly being thrown around. Software engineering has existed for half a century; name another discipline where unregulated work and constantly broken products are allowed to exist for that long. Imagine if nuclear engineering were like us. I think the real reason we do not get regulated is that the majority of our field does not need rigor, and companies would like a lower salary for engineers, not higher. John Doe the web dev does not need the equivalent of an engineering stamp each time he pushes to production, because his work is unlikely to be a critical system where lives are at stake.

                        1. 1

                          I’m pretty sure that most human disciplines date back thousands of years.

                          Nuclear engineering (that is well rooted in chemistry and physics) is still in its infancy too, as both Chernobyl and Fukushima show pretty well.

                          But I’m pretty sure that you will agree with me that good engineering takes a few generations if you compare these buildings with this one.

                          The total lack of historical perspective in modern “software engineers” is just another proof of the infancy of our discipline: we have to address our shortsighted arrogance as soon as possible.

                          1. 1

                            We’re talking about two different things. How mature a field is isn’t a major factor in regulation. Yes, I agree with your general attitude that things get better over time and we may not be at that point. But we’re talking about government regulating the supply of software engineers. That decision has more to do with public interests than with how good software can be.

                            1. 1

                              That decision has more to do with public interests than with how good software can be.

                              I’m not sure if I agree.

                              In my own opinion, current mainstream software is so primitive that anybody could successfully disrupt it.

                              So I agree that software engineers should feel much more politically responsible for their own work, but I’m not sure we can afford to disincentivize people from reinventing the wheel, because our current wheels are triangular.

                              And… I’m completely self-taught.

                2. 3

                  This is the best thing about scrum/agile imo. Getting someone higher in the food chain to gatekeep what comes into development and prioritize what is actually needed is a huge benefit to every developer whether you realize it or not.

                  While I agree with the idea of this, you did point out that this works well with mediocre teams and, IME, this gatekeeping is destructive when you have a mediocre gatekeeper. I’ve been in multiple teams where priorities shift every week because whoever is supposed to have a vision has none, etc. I’m not saying scrum is bad (I am not a big fan of it) but just that if you’re explicitly targeting mediocre groups, partitioning of responsibility like this requires someone up top who is not mediocre. Again, IME.

                  1. 2

                    Absolutely, and the main benefit for development is the shift of blame and responsibility to that higher level, again, if done right. I.e. there has to be a ‘paper trail’ to reflect the churn. This is where Jira (or whatever ticketing system) helps, showing/proving scope change to anyone who cares to look.

                    Any organization that requires this level of CYA (cover your ass) is not worth contributing to. Leeching off of, sure :)

                    1. 2

                      So are you saying that scrum is good or that scrum is good in an organization that you want to leech off of?

                      1. 1

                        I was referring to the case the GP proposed, where the gatekeepers themselves are mediocre and/or incompetent; in the case scapegoats are sought, the agile artifacts can be used to effectively shield development, IF they’re available. In this type of pathological organization, leeching may be the best tactic, IMO. Sorry that wasn’t clear.

                  2. 3

                    I’m in favour of having a product owner.

                    XP had one step better: the “Onsite Customer”, i.e. you could get up from your desk and go ask the guy with the gold what he’d pay more gold for, and how much.

                    A product owner is a proxy for that (and prone to all the ills proxies are prone to).

                    Where I note things go very wrong is if the product owner’s ego inflates to thinking he is perhaps project manager, and then team lead as well, and then technical lead, rolled into one god-like package…. Trouble is brewing.

                    Where a product owner can be very useful is in deciding on trade offs.

                    All engineering is about trade offs. I can always spec a larger machine, a more expensive part, invest in decades of algorithm research… make this bigger or that smaller…

                    • But what will a real live gold paying customer pay for?
                    • What will they pay more for? This or That? And why? And how much confidence do you have? Educated guess? Or hard figures? (ps: I don’t sneer at educated guesses, they can be the best one has available… but it gives a clue to the level of risk to know it’s one.)
                    • What will create the most recurring revenue soonest?
                    • What do the customers in the field care about?
                    • How are they using this system?
                    • What is the roadmap we’re on? Some trade offs I will make in delivering today, will block some forks in the road tomorrow.

                    Then there is a sadly misguided notion: technical debt.

                    If he is wearing a project manager hat, there is no tomorrow, there is only The Deadline; a project never has to pay back debt to be declared a success.

                    If he is wearing a customer’s hat, there is no technical debt: if it works, ship it!

                    Since he never looks at the code….. he never sees what monsters he is spawning.

                    The other blind spot a Product Owner has is about what is possible. He can only see what the customers ask for, and what the competition has, or the odd gap in our current offering.

                    He cannot know what is now technologically feasible. He cannot know what is technologically desirable. So engineers need wriggle room to show him what can or should be done.

                    But given all that, a good product owner is probably worth his weight in gold. A bad one will sink the project without any trace, beyond a slick of burnt-out and broken engineers.

                1. 4

                  The distribution of programming talent is likely normal, but what about their output?

                  The ‘10X programmer’ is relatively common, maybe 1 standard deviation from the median? And you don’t have to get very far to the left of the curve to find people who are 0.1X or -1.0X programmers.

                  Still a good article! I think this confusion is the smallest part of what he’s trying to say.

                  1. 6

                    That’s an interesting backdoor you tried to open to sneak the 10x programmer back into not being a myth.

                    1. 6

                      They exist, though. So it’s more like the model that excludes them is broken front and center. The accurate position is that most people aren’t 10x’ers, or even need to be, as far as I can tell. Team players with consistency are more valuable in the long run. That should be the majority, with some strong technical talent sprinkled in.

                      1. 3

                        Is there evidence to support that? As you know, measuring programmer productivity is notoriously difficult, and I haven’t seen any studies to confirm the 10x difference. I agree with @SeanTAllen, it’s more like an instance of the hero myth.

                        EDIT: here are some interesting comments by a guy who researched the literature on the subject: https://medium.com/make-better-software/the-10x-programmer-and-other-myths-61f3b314ad39

                        1. 5

                          Just think back to school or college, where people got the same training. Some seemed natural at the stuff, running circles around others for whatever reason, right? And some people score way higher than others on parts of math, CompSci, or IQ tests, seemingly not even trying, compared to those that put in much effort only to underperform.

                          People who are super-high performers from the start exist. If they and the others study equally, the gap might shrink or widen, but it should widen if you want strong generalists, since they’re better at foundational skills or thinking style. I don’t know if the 10 applies (probably not). But the concept of gifted folks making easy work of problems most others struggle with is something I’ve seen a ton of in real life.

                          Why would they not exist in programming when they exist in everything else would be the more accurate question.

                          1. 0

                            There’s no question that there is difference in intellectual ability. However, I think that it’s highly questionable that it translates into 10x (or whatever-x) differences in productivity.

                            Partly it’s because only a small portion of programming is about raw intellectual power. A lot of it is just grinding through documentation and integration issues.

                            Partly it’s because there are complex interactions with other people that constrain a person. Simple example: at one of my jobs people complained a lot about C++ templates because they couldn’t understand them.

                            Finally, it’s also because the domain a person applies themselves to places other constraints. Can’t get too clever if you have to stay within the confines of a web framework, for example.

                            I guess there are specific contexts where high productivity could be realised: one person creating something from scratch, or a group of highly talented people who work well together. But those would be exceptional situations, while under the vast majority of circumstances it’s counterproductive to expect or hope for 10x productivity from anyone.

                            1. 2

                              I agree with all of that. I think the multipliers kick in on particular tasks which may or may not produce a net benefit overall given conflicting requirements. Your example of one person being too clever with some code for others to read is an example of that.

                              1. 3

                                I think the 10x is often realized by just understanding the requirements better. For example, maybe the 2-week solution isn’t really necessary because the 40 lines you can write in an afternoon are all the requirement really required.

                              2. 2

                                There’s no question that there is difference in intellectual ability. However, I think that it’s highly questionable that it translates into 10x (or whatever-x) differences in productivity.

                                It does not simply depend on how you measure; it depends on what you measure.

                                And it may be more than “raw intellectual power”. For me it’s usually experience.

                                As a passionate programmer, I’ve faced more problems and more bugs than my colleagues.
                                So it often happens that I solve in minutes problems that they have struggled with for hours (or even days).
                                This has two side effects:

                                • managers tend to assign me the worst issues
                                • colleagues tend to ask me when they can’t find a solution

                                Both of these force me to face more problems and bugs… and so on.

                                Also, such experience makes me well versed in the architectural design of large applications: I’m usually able to avoid issues and predict with high precision the time required for a task.

                                However measuring overall productivity is another thing:

                                • I can literally forget what I did yesterday morning (if it was for a different customer than the one I’m focused on now)
                                • at times I’m unable to recognize my own code (with funny effects when I insult or laud it)
                                • when focused, I do not hear people talking to me
                                • I ignore 95% of the mails I receive (literally all those with multiple recipients)
                                • being very good at identifying issues during early analysis at times makes some colleagues a bit upset
                                • being very good at estimating large projects means that when you compare my estimate with others’, mine is usually higher (at times a lot higher) because I see most costs upfront. This usually leads to long and boring meetings where nobody wants to take the responsibility of adopting the (apparently) more expensive solution, but nobody wants to take the risk of the alternative ones either…
                                • debating with me tends to become an enormous waste of time…

                                So when it’s a matter of solving problems by programming, I approach the 10x productivity of the myth despite not being particularly intelligent, but overall it really depends on the environment.

                                1. 1

                                  This is a good exposition of what a 10x-er might be and jibes with my thoughts. Some developers can “do the hard stuff” with little or no guidance. Some developers just can’t, no matter how much coaching and guidance are provided.

                                  For illustration, I base this on one tenure I had as a team lead, where the team worked on some “algorithmically complex” tasks. I had on my team people who were hired on and excelled at the work. I had other developers who struggled. Most got up to an adequate level eventually (6 months or so). One in particular never did. I worked with this person for a year, teaching and guiding, and they just didn’t get it. This particular developer was good at other things, though, like troubleshooting and interfacing with customers in more of a support role. But the ones who flew kept on flying. They owned it, knew it inside and out.

                                  It’s odd to me that anyone disputes the fact that there are more capable developers out there. Sure, “productivity” is one measure, and not a good proxy for ability. I personally don’t equate 10x with being productive; that clearly makes no sense. Also, I think Fred Brooks’ The Mythical Man-Month is the authoritative source on this. I never see it cited in these discussions.

                            2. 2

                              There may not be any 10x developers, but I’m increasingly convinced that there are many 0x (or maybe epsilon-x) developers.

                              1. 3

                                I used to think that, but I’m no longer sure. I’ve seen multiple instances of what I considered absolutely horrible programmers taking the helm, and I fully expected those businesses to fold in a short period of time as a result - but they didn’t! From my point of view, it’s horrible -10x code, but for the business owner, it’s just fine because the business keeps going and features get added. So how do we even measure success or failure, let alone assign quantifiers like 0x?

                                1. 1

                                  Oh, I don’t mean code quality, I mean productivity. I know some devs who can work on the same simple task for weeks, miss the deadline, and be moved on to a different task that they also don’t finish.

                                  Even if the code they wrote was amazing, they don’t ship enough progress to be of much help.

                                  1. 1

                                    That’s interesting. I’ve encountered developers who were slow but not ones who would produce nothing at all.

                                    1. 4

                                      I’ve encountered it, though it was unrelated to their skill. Depressive episodes, for example, can really block someone. So can burnout, or outside stresses.

                                      Perhaps there are devs who cannot ship code at all, but I’ve only encountered unshipping devs that were in a bad state.

                                  2. 1

                                    You’re defining programming ability by whether a business succeeds, though. There are plenty of other instances where programming is not done for the sake of business.

                                    1. 1

                                      That’s true. But my point is that it makes no sense to assign quantifiers to programmer output without actually being able to measure it. In business, you could at least use financials as a proxy measure (obviously not a great one).

                                2. 1

                                  Anecdotally, I’m routinely stunned by how productive maintainers of open source frameworks can be. They’re certainly many times more productive than I am. (Maybe that just means I’m a 0.1x programmer, though!)

                                  1. 1

                                    I’m sure that’s the case sometimes. But are they productive because they have more sense of agency? Because they don’t have to deal with office politics? Because they just really enjoy working on it (as opposed to a day job)? There are so many possible reasons. Makes it hard to establish how and what to measure to determine productivity.

                              2. 3

                              I don’t get why people feel the need to pretend talent is a myth or that 10x programmers are a myth. It’s way more than 10x. I don’t get why so many obviously talented people need to pretend they’re mediocre.

                              edit: does anyone do this in any other field? Do people deny Einstein, Mozart, Michelangelo, Shakespeare, or Newton? LeBron James?

                                1. 4

                                Deny what, exactly? That LeBron James exists? What is LeBron James a 10x of? Athlete? Basketball player? What is the scale here?

                                A 10x programmer? I’ve never met one. I know people who are very productive within their area of expertise. I’ve never met someone whom I can drop into any area and, boom, they are 10x more productive; and if you say “10x programmer”, that’s what you are saying.

                                  This of course presumes that we can manage to define what the scale is. We can’t as an industry define what productive is. Is it lines of code? Story points completed? Features shipped?

                                  1. 2

                                    Context is a huge factor in productivity. It’s not fair to subtract it out.

                                  I bet you’re a lot more than 10X better than I am at working on Pony, by any metric you want. I haven’t written much C since college; I bet you’re more than 10X better than me in any C project.

                                  You were coding before I was born, and as far as I can tell are near the top of your field. I’ve been coding most of my life, and I’m good at it, but the difference is there. I know enough to be able to read your code and tell that you’re significantly more skilled than I am. I bet you’re only a factor of 2 or 3 better at general programming than I am. (Here I am boasting.)

                                  In my areas of expertise, I could win some of that back and probably (but I’m not so sure) outperform you. I’ve only been learning strategies for handling concurrency for 4 years. Every program (certainly every program with a user interface) has to deal with concurrency; your skill in that sub-domain alone could outweigh my familiarity in any environment.

                                    There are tons of programmers out there who can not deal with any amount of concurrency at all in their most familiar environment. There are bugs that they will encounter which they can not possibly fix until they remedy that deficiency, and that’s one piece of a larger puzzle. I know that the right support structure of more experienced engineers (and tooling) can solve this, I don’t think that kind of support is the norm in the industry.

                                    If we could test our programming aptitudes as we popped out of the womb, all bets are off. This makes me think that “10X programmer” is ill-defined? Maybe we’re not talking about the same thing at all.

                                    1. 2

                                      No I agree with you. Context is important. As is having a scale. All the conversations I see are “10x exists” and then no accounting for context or defining a scale.

                                  2. 2

                                    While I’m not very familiar with composers, I can tell you that basketball players (LeBron) can and do have measurements. Newton created fundamental laws and integral theories, Shakespeare’s works continue to be read.

                                    We do acknowledge the groundbreaking work of folks like Dennis Ritchie, Ken Iverson, Alan Kay, and other computing pioneers, but I doubt “Alice 10xer” at a tech startup will have her work influence software engineers hundreds of years later, so bar that sort of influence, there are not enough metrics or studies to show that an engineer is 10x more than another in anything.

                                2. 3

                                  The ‘10X programmer’ is relatively common, maybe 1 standard deviation from the median? And you don’t have to get very far to the left of the curve to find people who are 0.1X or -1.0X programmers.

                                  So, it’s fairly complicated because people who will be 10X in one context are 1X or even -1X in others. This is why programming has so many tech wars, e.g. about programming languages and methodologies. Everyone’s trying to change the context to one where they are the top performers.

                                  There are also feedback loops in this game. Become known as a high performer, and you get new-code projects where you can achieve 200 LoC per day. Be seen as a “regular” programmer, and you do thankless maintenance where one ticket takes three days.

                                  I’ve been a 10X programmer, and I’ve been less-than-10X. I didn’t regress; the context changed out of my favor. Developers scale badly and most multi-developer projects have a trailblazer and N-1 followers. Even if the talent levels are equal, a power-law distribution of contributions (or perceived contributions) will emerge.

                                  1. 1

                                    I’m glad you acknowledge that there’s room for a 10X or more than 10X gap in productivity. It surprises me how many people claim that there is no difference in productivity among developers. (Why bother practicing and reading blog posts? It won’t make you better!)

                                    I’m more interested in exactly what it takes to turn a median (1X by definition) developer into an exceptional developer.

                                    I don’t buy the trail-blazer and N-1 followers argument because I’ve witnessed massive success (by any metric) cleaning up the non-functioning, non-requirements meeting (but potentially marketable!) untested messes that an unskilled ‘trailblazer’ leaves in their (slowly moving) wake. Do you think it’s all context or are there other forces at work?

                                1. 14

                                  I wouldn’t call defer a “very elegant solution” when RAII exists :)

                                  1. 7

                                    The problem with RAII is that it needs to be in a class destructor. Defer can just happen by writing a free-standing line of code.

                                    1. 7

                                      Except RAII can handle the case where ownership is transferred to some other function or variable. Also, it scales well to nested resources, whereas figuring out which structs in a given C library require a (special) cleanup call depends entirely on careful reading of the relevant documentation. If RAII were just about closing file handles at the end of the function, few people would care.
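                                      To make the ownership-transfer point concrete, here’s a minimal C++ sketch (the `Conn`/`make_conn` names are invented for illustration): the resource changes owners twice, and the destructor still fires exactly once, with no explicit defer at any hand-off point.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Toy "resource" that records open/close events so cleanup is observable.
static std::vector<std::string> events;

struct Conn {
    explicit Conn(std::string n) : name(std::move(n)) {
        events.push_back("open:" + name);
    }
    ~Conn() { events.push_back("close:" + name); }
    Conn(const Conn&) = delete;
    Conn& operator=(const Conn&) = delete;
    std::string name;
};

// Ownership leaves this function entirely; no cleanup call needed here.
std::unique_ptr<Conn> make_conn() {
    return std::make_unique<Conn>("db");
}

void use_conn() {
    auto c = make_conn();    // ownership arrives from make_conn
    auto c2 = std::move(c);  // and is transferred again
}                            // c2 dies here: exactly one "close:db"
```

                                      With a defer-style approach, each of those hand-off points would need its own decision about who is responsible for the cleanup line.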

                                      1. 2

                                        Except RAII can handle the case where ownership is transferred to some other function or variable.

                                        Does that matter for languages that have GC?

                                        1. 7

                                          RAII is not exclusive to memory management. The Resource in RAII can be acquired memory, but it can equally be an open file descriptor, a socket, or any other resource that GC won’t collect.

                                        2. 1

                                          I think the ideal solution would be to be able to use class destructors for some things, but also be able to add a block to the “destruction” of a specific instance.
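                                          One way to get that hybrid — class destructors plus an ad-hoc block tied to one specific instance’s destruction — is a scope guard, sketched here in C++ (`ScopeGuard` is a hand-rolled name, not a standard type; C++ has no built-in defer):

```cpp
#include <functional>
#include <utility>

// Runs an arbitrary block when this particular instance is destroyed,
// approximating "defer" attached to one object rather than its class.
class ScopeGuard {
public:
    explicit ScopeGuard(std::function<void()> f) : f_(std::move(f)) {}
    ~ScopeGuard() { if (f_) f_(); }
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
private:
    std::function<void()> f_;
};

int run_with_guard() {
    int x = 0;
    {
        ScopeGuard g([&x] { x = 42; }); // per-instance "destructor block"
        x = 1;                          // guard has not fired yet
    }                                   // guard fires here, setting x = 42
    return x;
}
```

                                          The class’s own destructor still runs as usual; the guard just lets a caller attach extra teardown to that one instance’s scope.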

                                      2. 3

                                        Doesn’t RAII sort of hide the cleanup from your actual code? I imagine that can work only if one can trust that every library you ever use behaves well in this manner. Then again, I guess an explicitly called cleanup routine may be of poor quality as well.

                                        1. 8

                                          That’s the point. Cleanup is automatic, deterministic, invisible. You can’t forget it, while you definitely can forget a defer something.close().

                                          Every library in Rust does behave like this, and I guess pretty much every library in C++ (that you would actually want to use) does as well.

                                        2. 3

                                          Excellent point! Now it feels only slightly more elegant than goto :)

                                        1. 4

                                          If you look at the success of the internet (beyond just the web), I think it’s safe to say OO, not FP, is the most scalable system-building methodology. An important realization that Alan Kay emphasizes here is that OO and FP are not incompatible at all. A formal merging of FP and OO can be seen in the Actor Model by Carl Hewitt.

                                          In other words, I think FP can supercharge OO and it seems the rock stable and fast systems built with Erlang and friends prove this out.

                                          1. 7

                                            I think servers have scaled now based on solid messaging protocols that are not OOP in nature. And databases are still relational last I checked.

                                            1. 6

                                              Alan Kay would say that OOPs foundation is messaging protocols.

                                              1. 2

                                                Precisely! The whole internet is an object-oriented system. The smallest model of an object is a computer. So what is an object? It’s a computer that can receive and send messages. Systems like Erlang run millions of little computers on one physical computer, for instance.

                                                1. 4

                                                  that’s a real stretch. I might as well claim that REST’s success is entirely because it is really just functional programming: it passes the state along with the function, and it’s pretty much just a monad.

                                                  Also, SQL is still king and no object-oriented database approach has supplanted it.

                                              2. 4

                                                They use the FSM model. Hardware investigations taught me they’d fit in Mealy and Moore models depending on what subset of protocol is being implemented or how one defines terms. Even most software implementations used FSM’s. Maybe all for legacy implementations given what I’ve seen but there could be exceptions.

                                                And, addressing zaphar’s claim, their foundation, or at least abstracted form, may best be done with Abstract State Machines, described here. Papers on it argue it’s more powerful than the Turing model since it operates on mathematical structures instead of strings. Spielmann claims Turing Machines are a subset of ASMs. So, the Internet was built on the FSM model which, if we must pick a foundation, matches the ASM model best even though the protocols and FSMs themselves predate the model. If a tie-breaker is needed for foundations, ASMs are also one of the most successful ways for non-mathematicians to specify software in terms of ease of use and applicability.

                                                1. 3

                                                  You just made the engineer inside me happy :) FSMs are the first thing we learned in engineering school, but too often software is just hacked together based on code and not design. FSMs form the basis of any protocol/service, e.g. TCP, FTP, TLS, SSH, DNS, HTTP, etc.
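                                                  As a toy illustration of the transition-table style, here is a heavily simplified sketch (real TCP has eleven states per RFC 793; the names below are only loosely borrowed from it):

```cpp
// Loosely TCP-inspired toy FSM: just enough states to show the style.
enum class State { Closed, SynSent, Established };
enum class Event { Connect, SynAck, Close };

// Pure transition function: events that don't apply leave the state as-is.
State step(State s, Event e) {
    switch (s) {
    case State::Closed:      return e == Event::Connect ? State::SynSent : s;
    case State::SynSent:     return e == Event::SynAck ? State::Established : s;
    case State::Established: return e == Event::Close ? State::Closed : s;
    }
    return s; // unreachable; keeps compilers quiet
}
```

                                                  Designing the table first, then writing the code, is the discipline the protocol specs encourage; hacking the transitions together ad hoc is how edge cases get lost.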

                                                  1. 3

                                                    The cool thing is those can be implemented and verified at the type level in dependently typed functional languages. See Idris’ ST type. Session types are another example. Thankfully I can see movements in the FSM direction on the front end with stuff like Redux and Elm, but alas it will be a while before these can be checked in the type system.

                                              3. 4

                                                I don’t think the internet is a good reference model. IMO the internet is largely a collection of “whatever we had at the time” with a sprinkle of “this should work” and huge amounts of duct tape on top. The internet succeeded despite being built on OO, not because of it. Though I think with FP the internet would likewise have succeeded in spite of it, not because of it.

                                                There is no one true methodology, I think it’s best if you mix the two approaches where it makes sense to get the best of both worlds.

                                                1. 1

                                                  Let me be more specific: by internet I mean TCP/IP and friends, not HTTP and friends.

                                                  1. 2

                                                    Even TCP/IP and friends is a lot of hacks and “//TODO this is a horrible hack but we’ll fix it later”. HTTP is just the brown-colored cream on top of the pie that is the modern internet.

                                                    It’s why DNSSEC and IPv6 have seen such little adoption, all the middleboxes someone hacked together once are all still up and running with terrible code and they have to be fully replaced to not break either protocol.

                                                    I’ve seen enough routers that silently malform TCP packets or (more fun) recalculate the checksum without checking it, making data corruption a daily occurrence. Specs aren’t followed, they’re abused.

                                                    1. 2

                                                      And yet the internet has never shut down since it started running, even with all its atoms replaced many times over. Billions of devices are connected and the whole system manages to span the entire planet. It just works.

                                                      It’s an obviously brilliant and successful design that created tens of trillions of dollars in value. I think you will be hard pressed to find another technology that was this successful and that changed the world to the degree the internet has.

                                                      Does it have flaws like the ones outlined? Yes of course. Does it work despite them? Yes!

                                                      The brilliance of the internet is that even when specs are not followed, the system keeps on working.

                                                      1. 2

                                                        I think it’s more in spite of how it was built and not because of it.

                                                        And the internet has shut down several times by now, or at least large parts of it (just google “BGP outage” or “global internet outage”)

                                                        It’s not a brilliant design but successful, yes. It’s probably just good enough to succeed.

                                                        Not brilliant; it merely works by accident, and the accumulated duct tape keeps it going despite some hiccups along the way.

                                                        If the internet were truly brilliant it would use a fully meshed overlay network and not rely on protocols like BGP for routing. It would also not have to package everything in Ethernet frames (which are largely useless and could be replaced with more efficient protocols).

                                                2. 3

                                                  I agree with this when “scalability” is defined as “ability to hire.”

                                                1. 6

                                                  Deleting facebook isn’t a particularly useful exercise because I’m pretty sure they don’t delete the data they already have, and they create shadow profiles for people who aren’t facebook users, even without directly collecting data from you. Blocking their domains is a mild hindrance, not an actual measure to stop them.

                                                  If you’re deleting your facebook account because it’s not useful to you, or as a political protest action, fine, but at least acknowledge that you’re not meaningfully preventing them from collecting data.

                                                  1. 7

                                                    If enough people delete their profiles, then it affects the stats Facebook presents to advertisers, making it a less attractive advertising platform with a smaller audience. That hits Facebook in the pocket, which is the only thing they care about.

                                                    1. 3

                                                      I think it is very useful because they lost one of their primary sources of data. Installing ublock-origin, privacy badger, and other extensions should also help block trackers from most websites. There’s nothing I can do to hide against facebook buying credit data and other 3rd party data except lobby my local politicians. But if everyone deleted facebook and stopped browsing instagram models for.. ahem.. personal entertainment purposes.. facebook would lose their primary source of income :)

                                                      1. 2

                                                        It may be a functional no-op, but it very definitely sends a message to Facebook corporate. I doubt this will change anything in the long haul - their bottom line depends upon exploitative behavior, but I expect a lot of smoke and little to no fire coming out of all of this.

                                                      1. 18

                                                        I’ve seen this sentiment being expressed by quite a few people recently, and it makes me happy.

                                                        I recently met one of my programming heroes, and we were geeking out and it was wonderful; it felt really nice that I could keep up with them and that we shared so many opinions. Then I asked what they do when they’re not programming. They paused, and then told me they don’t do much else.

                                                        Which is also fine, of course, but to me it highlighted how individual this is. Up until then I felt like we were extremely similar. But, I need time to play guitar, patch synthesisers, be outside in nature, draw, fiddle with electronics, play video games, and all the other things I enjoy doing, and need to do in order to feel like a whole human. That leaves almost no time for programming outside of work.

                                                        So in that way, we were each other’s complete opposite, and that’s great! What we choose to do in our spare time does not determine how skilled we are at programming, and everyone in tech shouldn’t be the same.

                                                        (Although right now I am unemployed, so I can get some coding in anyway 😉)

                                                        1. 5

                                                          In many areas, (architecture, electronics, finance, …) people aren’t using their “work” skills at home. I mean…

                                                          • How many electronics engineers are working on open-designs on weekends (you’ll find some, obviously, but proportionally, not that many)?
                                                          • How many architects are just doing pet projects on weekends? Again you’ll find some… but it’s far from the majority.
                                                          • How many accountants are doing accounting on weekends? … Maybe a bit on their own finances… but come on…

                                                          To me it’s a non-issue, and even if it’s a good way to recruit, it’s not mandatory and shouldn’t be.

                                                          1. 19

                                                            At the risk of being so terse that others will pedantically snipe my obviously flawed reasoning: programming is a deeply creative endeavor, its only effective cost is time, and the results of a working program can be tangibly experienced without any additional cost. The raw materials for programming are comparatively cheap. To that end, it is no surprise at all that there are many people who code on their free time in contrast to many other professions.

                                                            1. 3

                                                              I see two arguments here, creativity and cost. Accounting is not creative, but it is cheap (a computer + excel + paper). Architecture is creative and (to the relative extent of seeing results) cheap (paper/computer). I take your point and won’t enter into the debate about creativity, but I still don’t understand why we talk so much about this. My ideas are:

                                                              • It’s a self-reinforcing loop (people code on the weekend, people who don’t feel bad about not doing so, so they start doing it…).
                                                              • It’s not so different from a few other creative lines of work, but we tend to talk/share much more about it.
                                                              1. 0

                                                                programming is a deeply creative endeavor

                                                                I challenge that assertion. It’s no more creative than construction work or any other trade.

                                                                1. 11

                                                                  Have you ever worked in construction or any other trade? What makes you think that those things aren’t creative?

                                                                  Your implication suggests you believe the trades aren’t creative. As someone who has spent a few summers hanging drywall, and who has a family full of tradesmen, I would challenge the implication that the trades do not require a fair amount of creativity.

                                                                  Any problem-solving discipline requires creativity. In the trades, this is readily apparent the first time you talk to someone who has to coordinate the logistics of moving several tons of material up a narrow street that must remain open to regular traffic. Or when you talk to the plumber who has to retrofit three different piping systems in the same house reno so that shit won’t literally fly out of the toilets when it rains more than 3/4”. Or when you talk to the surveyor who has to figure out how to shoot a line through dense woods so she can accurately determine the property line because the next door neighbor is under the mistaken impression that they own land 13’ past where they actually do.

                                                                  These are all real examples of situations which required creative solutions. No instruction manual exists to tell the GC how to coordinate those material deliveries, help the plumber design a wastewater system for a house, or help shoot a straight property line in dense situations. These people rely on ingenuity and experience, as well as their creativity, to help them find a solution.

                                                                  So I suppose in a sense I agree with you, Programming isn’t any more creative than the trades; but I disagree with your implication that either are not creative processes.

                                                                  1. 3

                                                                    I haven’t worked the trades but I’ve done supervision for engineering. I agree they are creative. Just as creative as programming. I guess my phrasing was poor. I meant to imply that tradespeople aren’t hired for doing big weekend hobby projects on tradehub.

                                                                    And while I think it is creative I don’t think it is deeply creative in the sense that it is more art than mechanics. While there is an art, it isn’t itself an art. Many jobs for trades and programming are pretty routine and boring work.

                                                                    1. 3

                                                                      Note that I listed several reasons why programming attracts a lot of folks that do it in their free time. In the common case, creativity alone isn’t sufficient. The expenditure of resources to realize a result is a key ingredient to my argument.

                                                                  2. 5

                                                                    I challenge that assertion. It’s no more creative than construction work or any other trade.

                                                                    To the extent that the level of creativity can be compared, I disagree. To the extent that the level of creativity cannot be compared, I agree.

                                                                    Pick your assumptions. I don’t really care otherwise, and I think the direction you’re drawing me in is a pointless waste of time.

                                                            1. 14

                                                              I believe that OO affords building applications of anthropomorphic, polymorphic, loosely-coupled, role-playing, factory-created objects which communicate by sending messages.

                                                              It seems to me that we should just stop trying to model data structures and algorithms as real-world things. Like hammering a square peg into a round hole.

                                                              1. 3

                                                                Why does it seem that way to you?

                                                                1. 5

                                                                  Most professional code bases I’ve come across are objects all the way down. I blame universities for teaching OO as the one true way. C# and java code bases are naturally the worst offenders.

                                                                  1. 5

                                                                    I mostly agree, but feel part of the trouble is that we have to work against language, to fight past the baggage inherent in the word “object”. Even Alan Kay regrets having chosen “object” and wishes he could have emphasized “messaging” instead. The phrase object-oriented leads people to first, as you point out, model physical things, as that is a natural linguistic analog to “object”.

                                                                    In my undergraduate days, I encountered a required class with a project specifically intended to disabuse students of that notion. The project specifically tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts or just directly naming classes after interactions.

                                                                    I suppose I have taken that “Aha!” moment for granted and can see how, in the absence of such an explicit lesson, it might be hard to discover the notion on your own. It is definitely a problem if OO concepts are presented as universally good or without pitfalls.

                                                                    1. 4

                                                                      I encountered a required class with a project specifically intended to disabuse students of that notion. The project specifically tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts or just directly naming classes after interactions.

                                                                      Can you remember some of the specifics of this? Sounds fascinating.

                                                                      1. 3

                                                                        My memory is a bit fuzzy on it, but the project was about simulating a bank. Your bank program would be initialized with N walk-in windows, M drive-through windows and T tellers working that day. There might’ve been a second type of employee? The bank would be subjected to a stream of customers wanting to do some heterogeneous varieties of transactions, taking differing amounts of time.

                                                                        There did not need to be a teller at the drive-through window at all times if there was not a customer there, and there were some precedence rules: if a customer was at the drive-through and no teller was at the window, the next available teller had to go there.

                                                                        The goal was to produce a correct order of customers served, and order of transactions made, across a day.

                                                                        The neat part (pedagogically speaking) was the project description/spec. It went through so much effort to slowly describe and model the situation for you, full of distracting details (though very real-world ones), that it all-but-asked you to subclass things needlessly, much to your detriment. Are the multiple types of employees completely separate classes, or both subclasses of an Employee? Should Customer and Employee both be subclasses of a Person class? After all, they share the property of having a name to output later. What about DriveThroughWindow vs WalkInWindow? They share some behaviors, but aren’t quite the same.

                                                                        Most people here would realize those are the wrong questions to be asking. Even for a new programmer, the true challenge was gaining your first understanding of concurrency and following a spec’s rules for resource allocation. But said new programmer had just gone through a week or two on interfaces, inheritance and composition, and oh look, now there’s this project spec begging you to use them!
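                                                                        For contrast, here is a hedged sketch of what a hierarchy-free version of that bank project might look like — all names (`Transaction`, `ServiceLog`, `run_day`) are invented, and the concurrency and drive-through precedence rules are collapsed into a plain FIFO for brevity:

```cpp
#include <queue>
#include <string>
#include <vector>

// No Person/Employee/Window hierarchy: just the data the spec actually
// asks about (who did what, in what order), named after the interaction.
struct Transaction { std::string customer; int minutes; };

class ServiceLog {
public:
    void serve(const Transaction& t) { order_.push_back(t.customer); }
    const std::vector<std::string>& order() const { return order_; }
private:
    std::vector<std::string> order_;
};

// The scheduling rule becomes a plain function over a queue, instead of
// virtual dispatch across DriveThroughWindow/WalkInWindow subclasses.
void run_day(std::queue<Transaction>& q, ServiceLog& log) {
    while (!q.empty()) { log.serve(q.front()); q.pop(); }
}
```

                                                                        The point is not that this design is complete — it clearly isn’t — but that nothing in the spec’s output (an ordering of customers and transactions) forced an inheritance tree at all.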

                                                                    2. 2

                                                                      Java and C# are the worst offenders and, for the most part, are not object-oriented in the way you would infer that concept from, for example, the Xerox or ParcPlace use of the term. They are C in which you can call your C functions “methods”.

                                                                      1. 4

                                                                        At some point you have to just let go and accept the fact that the term has evolved into something different from the way it was originally intended. Language changes with time, and even Kay himself has said “message-oriented” is a better word for what he meant.

                                                                        1. 2

                                                                          Yeah, I’ve seen that argument used over the years. I might as well call it the no true Scotsman argument. Yes, they are multi-paradigm languages and I think that’s what made them more useful (my whole argument was that OOP isn’t for everything). Funnily enough, I’ve seen a lot of modern c# and java that decided message passing is the only way to do things and that multi-thread/process/service is the way to go for even simple problems.

                                                                          1. 4

                                                                            The opposite of No True Scotsman is Humpty-Dumptyism, you can always find a logical fallacy to discount an argument you want to ignore :)

                                                                    3. 2
                                                                      Square peg;  
                                                                      Round hole;  
                                                                      Hammer hammer;  
                                                                      hammer.Hit(peg, hole);
                                                                      
                                                                      1. 4

                                                                        A common mistake.

                                                                        In object-orientation, an object knows how to do things itself. A peg knows how to be hit, i.e. peg.hit(…). In your example, you’re setting up your hammer to be constantly changed and modified as it needs to be extended to handle different ways to hit new and different things. In other words, you’re breaking encapsulation by requiring your hammer to know about other objects’ internals.
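                                                                        A minimal sketch of that encapsulated version (the `Peg`/`Hammer` classes and the depth bookkeeping are invented for illustration): the hammer only sends the message; the peg decides what being hit means.

```cpp
class Peg {
public:
    void hit(int force) { depth_ += force; } // the peg owns its own state change
    int depth() const { return depth_; }
private:
    int depth_ = 0;
};

class Hammer {
public:
    // The hammer sends the message without knowing the peg's internals.
    void strike(Peg& p) const { p.hit(1); }
};
```

                                                                        Adding a Nail later means writing Nail::hit; the Hammer never changes.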

                                                                      2. 2

                                                                        your use of a real world simile is hopefully intentionally funny. :)

                                                                        1. 2

                                                                          That sounds great, as an AbstractSingletonProxyFactoryBean is not a real-world thing, though if I can come up with a powerful and useful metaphor, like the “button” metaphor in UIs, then it may still be valuable to model the code-only abstraction on its metaphorical partner.

                                                                          We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                                                                          1. 2

                                                                            Factory

                                                                            A factory is a real world thing. The rest of that nonsense is just abstraction disease which is either used to work around language expressiveness problems or people adding an abstraction for the sake of making patterns.

                                                                            We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                                                                            I think OOP has its place in the world, but it is not for every problem (maybe not even the majority).

                                                                            1. 3

                                                                              A factory in this context is a metaphor, not a real world thing. I haven’t actually represented a real factory in my code.

                                                                              1. 2

                                                                                I know of one computer in a museum that if you boot it up, it complains about “Critical Error: Factory missing”.

                                                                                (It’s a control computer for a factory, it’s still working, and I found that someone modeled that case and show an appropriate error the most charming thing)

                                                                                1. 2

                                                                                  But they didn’t handle the “I’m in a museum” case. Amateurs.

                                                                          2. 1

                                                                            You need to write, say, a new air traffic control system, or a complex hotel reservation system, using just the concepts of data structures and algorithms? Are you serious?

                                                                          1. 4

                                                                            I used Google Wave briefly to plan a trip with some friends. It had a lot of potential, actually.

                                                                            1. 5

                                                                              I also used Google Wave and agree; I saw the potential right away. It’s a shame it was underappreciated and the project wasn’t a priority and given more resources.

                                                                              1. 1

                                                                                I’ve used iOS notes and it makes edits instantly visible. I’m sure there are other collaborative tools available.

                                                                              1. 14

                                                                                Google, the only problems in email are security related (spam, viruses, privacy, authentication, etc). Be engineers, fix that boring stuff and stop trying to control the web.

                                                                                1. 5

                                                                                  there are other problems in email, though unfortunately they are caused or enabled by gmail (top posting, html, exclusion of independent servers).

                                                                                1. 14

                                                                                  So who wants to adopt the lobster for lobste.rs?

                                                                                  1. 6

                                                                                    why not zoidberg?

                                                                                    1. 5

                                                                                      I’m up for donating to a pool for this.

                                                                                      1. 4

                                                                                        Agreed with /u/gerikson, I’m up for a donation pool! Who wants to spearhead it?

                                                                                        1. 15

                                                                                          I could put together a pool to try to hit the Silver or Gold level. The link would point back to a note on the about page. There would be no reward for donating besides the warm glow of knowing you’ve helped support an organization that is the source of so much error handling in our code.

                                                                                          Please take this ad-hoc poll by upvoting the single highest amount you’d donate towards this. Enough support and I’ll put something together. (If you made judicious use of your GPU a few years ago and have cryptocurrency to donate, please select the amount of USD you’d convert it into before sending it because I’m game for a fun lark, not a major project.) (Edit: tweeted)

                                                                                          1. 59

                                                                                            10 USD

                                                                                            1. 17

                                                                                              1 USD

                                                                                              1. 9

                                                                                                50 USD

                                                                                                1. 4

                                                                                                  100 USD

                                                                                                  1. 1

                                                                                                    This is in progress.

                                                                                                    1. 1

                                                                                                      500 USD

                                                                                                1. 17

                                                                                                  Key part I’ve often used to debunk anti-MS sentiment from security folks:

                                                                                                  “Despite the above, the quality of the code is generally excellent. Modules are small, and procedures generally fit on a single screen. The commenting is very detailed about intentions, but doesn’t fall into “add one to i” redundancy.”

                                                                                                  “From the comments, it also appears that most of the uglier hacks are due to compatibility issues: either backward-compatibility, hardware compatibility or issues caused by particular software. Microsoft’s vast compatibility strengths have clearly come at a cost, both in developer-sweat and the elegance (and hence stability and maintainability) of the code.”

                                                                                                  Seems most of their problems came not from apathy but from caring about compatibility more than anyone else on desktop did. That helped ensure their lock-in and billions. The cost was worse flexibility, reliability, and security. Acceptable cost given Gates’ goal of becoming super rich. Not as great for users, though. Fortunately, the Security Development Lifecycle got some of that under control, with Windows kernel 0-days becoming rare versus other types. Their servers are very reliable, too.

                                                                                                  Anyone wondering what Microsoft could do if not so focused on backward compatibility need only look at MS Research’s projects. Far as OS’s, Midori and VerveOS come to mind for different purposes. One could be a foundation of the other actually.

                                                                                                  1. 7

                                                                                                    Not as great for users, though.

                                                                                                    I beg to disagree. A lot of end users and small businesses rely on some unmaintained piece of legacy software in one way or another. The fact that they don’t have to keep a separate PC with an unmaintained, insecure OS on it is a definite plus for those people.

                                                                                                    1. 4

                                                                                                      Regarding the “what Microsoft could do” – that’s exactly what they’re trying to do with UWP apps in Windows 10. Proper sandboxing for all applications, ideally even all browser tabs in OS-level sandboxes.

                                                                                                      I’m especially interested (and scared at the same time) in the rumors about Polaris, which is said to be a Windows 10 that throws the entire Win32 layer away, with all the backwards-compatibility patches existing only within the UWP sandbox of each separate application, and with much better security (but also, obviously, less customizability).

                                                                                                      1. 3

                                                                                                        They’re definitely doing new stuff with UWP. I’ve been off Windows too long to know anything about it. I was mainly talking about designing every aspect of an OS around high-level, modular, safe, and/or concurrent programming. The two links in my comment will give you an idea of what they’re capable of.

                                                                                                      2. 3

                                                                                                      I’ve never thought that Microsoft wrote bad functions, but that their design is over-complicated. There are too many moving parts, too many function arguments, too many layers, … It’s the accidental complexity that seems to cause logical bugs.

                                                                                                      1. 1

                                                                                                        How about another bug: int*2 can overflow, and signed integer overflow is undefined behavior. That’ll certainly cause problems.

                                                                                                        1. 4

                                                                                                          This is one area where Rust and C are different; overflow is never undefined behavior in Rust. It panics in debug builds and wraps to a defined two’s-complement value in release builds.
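
                                                                                                          On the C/C++ side, one portable way to avoid the undefined behavior is to guard the doubling before it happens. A minimal sketch (checked_double is a hypothetical helper, not a standard function):

                                                                                                          ```cpp
                                                                                                          #include <cassert>
                                                                                                          #include <climits>
                                                                                                          #include <optional>

                                                                                                          // Doubles x only when the result fits in int; returns nullopt
                                                                                                          // instead of invoking undefined behavior on signed overflow.
                                                                                                          std::optional<int> checked_double(int x) {
                                                                                                              if (x > INT_MAX / 2 || x < INT_MIN / 2) {
                                                                                                                  return std::nullopt;  // would overflow: report failure
                                                                                                              }
                                                                                                              return x * 2;  // safe: checked above
                                                                                                          }

                                                                                                          int main() {
                                                                                                              assert(checked_double(21).value() == 42);
                                                                                                              assert(!checked_double(INT_MAX).has_value());  // caught, not UB
                                                                                                              return 0;
                                                                                                          }
                                                                                                          ```

                                                                                                          This is roughly what Rust’s checked_mul does for you; in C you have to remember to write the guard yourself every time.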

                                                                                                        1. 11

                                                                                                          Finally a proper use of the caps lock key:

                                                                                                          Press caps lock to switch to a command line interface; here’s the debug screen.

                                                                                                          1. 8

                                                                                                            Well, I’d rather use it for Control. But maybe if keyboards would put Control where it belongs, next to Space (it should go Super Alt Control Space Control Alt Super), then it wouldn’t be necessary to have Control where most keyboards have Caps Lock.

                                                                                                            1. 5

                                                                                                              I always map Caps Locks to Ctrl, so whenever I’m on someone else’s laptop I keep flipping into caps when I mean to copy/paste/break/etc.

                                                                                                              1. 3

                                                                                                                it should go Super Alt Control Space Control Alt Super

                                                                                                                What’s the premise for “should” here?

                                                                                                                1. 1

                                                                                                                  Because of the frequency of use. Control is used almost all the time, in Windows, Linux & emacs. As such, it should go into the easiest-to-strike location, right next to the spacebar where the thumb can strike it in conjunction with other keys.

                                                                                                                  Alt/Meta is used less often, so it should receive the less-convenient spot. Alt should be used for less-frequently used functionality, and to modify Control (e.g. C-f moves forward one character; C-M-f moves forward one word).

                                                                                                                  Super should be used least of the three, and ideally would be reserved for OS-, desktop-environment– or window-manager–specific tasks, e.g. for switching windows or accessing an app chooser. Since it’s used less than either Alt or Control, it belongs in the least-convenient spot, far from the spacebar.

                                                                                                                  If we were really going to do things right, there’d be a pair of Hyper keys outboard of super, reserved for individual user assignment. But we don’t live in a perfect world.

                                                                                                              2. 4

                                                                                                                as a vi user, i would have said “use escape” but then remembered my caps-lock key is remapped to escape.

                                                                                                              1. 2

                                                                                                                https://blogs.msdn.microsoft.com/philipsu/2006/06/14/broken-windows-theory/

                                                                                                                Windows code is too complicated. It’s not the components themselves, it’s their interdependencies. An architectural diagram of Windows would suggest there are more than 50 dependency layers (never mind that there also exist circular dependencies). After working in Windows for five years, you understand only, say, two of them. Add to this the fact that building Windows on a dual-proc dev box takes nearly 24 hours, and you’ll be slow enough to drive Miss Daisy.

                                                                                                                I haven’t been around in the industry too long; I was in school when this blog entry was posted. But I’ve seen a few projects struggle and fail because of bad architecture and increasing technical debt. The OP’s article definitely reflects the struggle between new features, legacy support, and paying down the technical debt (improving security, etc.).

                                                                                                                1. 2

                                                                                                                    The microservices that are all the rage these days add a whole new layer of challenge to understanding dependencies. While monoliths have their own challenges, at least all of the information is there to understand what is connected. I’m still not sure this has been adequately solved.

                                                                                                                  1. 2

                                                                                                                    Arguably microservices can simplify this dependency tree tremendously. In the world to date, it has been essentially impossible to compile many differently versioned libraries together into one monolithic application, which is what generally happens when you have a large number of teams doing separate development.

                                                                                                                    With microservices, again arguably, encapsulation happens at the whole-service layer, so each team is free to develop using whatever versions they like, and just provide HTTP (or whatever) as their high level API.

                                                                                                                      Where this tends to break down in my experience is (a) where true shared dependencies exist, which can happen if you either were bad at data modeling to begin with or if your needs organically grew differently than your original design, and (b) operationally, in a world of incredibly broken and insecure software, processors, etc., resulting from C (and now JS) and the shared memory model, where it is no longer possible to understand what in the opaque blobs needs patching.

                                                                                                                    1. 1

                                                                                                                      C obviously has memory bugs but I’m curious what insecurity you see stemming from JS. Is it the automatic type casting? (I write JavaScript every day and think a good portion of the new parts of the language are good, but I will fully admit it spent its formative years on crack.)

                                                                                                                      1. 1

                                                                                                                        I don’t see how adding more dependencies simplifies anything, that can only make it more complicated. It may be convenient, but it’s not simpler. And in order to have that architecture one needs to have network protocols and serialization going on which has a performance and cognitive cost. There certainly are reasons to have a microservice architecture but I have a hard time seeing simplification as one of them.

                                                                                                                      2. 1

                                                                                                                        Microservices exist mostly to facilitate development by many teams on a large system. They are one of the best examples of Conway’s Law.

                                                                                                                          You are correct that they add complexity, and they tend to be adopted regardless of whether they solve a real problem.

                                                                                                                    1. 2

                                                                                                                      A competent CPU engineer would fix this by making sure speculation doesn’t happen across protection domains. Maybe even a L1 I$ that is keyed by CPL.

                                                                                                                      I feel like Linus of all people should be experienced enough to know that you shouldn’t be making assumptions about complex fields you’re not an expert in.

                                                                                                                      1. 22

                                                                                                                          To be fair, Linus worked at a CPU company, Transmeta, from about ‘96 - ‘03(??) and reportedly worked on, drumroll, the Crusoe’s code-morphing software, which speculatively morphs code written for other CPUs, live, to the Crusoe instruction set.

                                                                                                                        1. 4

                                                                                                                          My original statement is pretty darn wrong then!

                                                                                                                          1. 13

                                                                                                                            You were just speculating. No harm in that.

                                                                                                                        2. 15

                                                                                                                          To be fair to him, he’s describing the reason AMD processors aren’t vulnerable to the same kernel attacks.

                                                                                                                          1. 1

                                                                                                                            I thought AMD were found to be vulnerable to the same attacks. Where did you read they weren’t?

                                                                                                                            1. 17

                                                                                                                              AMD processors have the same flaw (that speculative execution can lead to information leakage through cache timings) but the impact is way less severe because the cache is protection-level-aware. On AMD, you can use Spectre to read any memory in your own process, which is still bad for things like web browsers (now javascript can bust through its sandbox) but you can’t read from kernel memory, because of the mitigation that Linus is describing. On Intel processors, you can read from both your memory and the kernel’s memory using this attack.

                                                                                                                              1. 0

                                                                                                                                basically both will need the patch that I presume will lead to the same slowdown.

                                                                                                                                1. 9

                                                                                                                                  I don’t think AMD needs the separate address space for kernel patch (KAISER) which is responsible for the slowdown.

                                                                                                                          2. 12

                                                                                                                            Linus worked for a CPU manufacturer (Transmeta). He also writes an operating system that interfaces with multiple chips. He is pretty darn close to an expert in this complex field.

                                                                                                                            1. 3

                                                                                                                              I think this statement is correct. As I understand, part of the problem in meltdown is that a transient code path can load a page into cache before page access permissions are checked. See the meltdown paper.

                                                                                                                              1. 3

                                                                                                                                  The fact that he is correct doesn’t prove that a competent CPU engineer would agree. I mean, Linus is (to the best of my knowledge) not a CPU engineer, so he probably doesn’t grasp all the constraints of the field.

                                                                                                                                1. 4

                                                                                                                                  So? This problem is not quantum physics, it has to do with a well known mechanism in CPU design that is understood by good kernel engineers - and it is a problem that AMD and Via both avoided with the same instruction set.

                                                                                                                                  1. 3

                                                                                                                                      Not a CPU engineer, but see my direct response to the OP, which shows that Linus has direct experience with CPUs, from his tenure at Transmeta, a defunct CPU company.

                                                                                                                                    1. 5

                                                                                                                                        from his tenure at Transmeta, a defunct CPU company.

                                                                                                                                        Exactly. A company whose innovative CPU’s didn’t meet the market’s needs and were shelved on acquisition. What he learned at a company making unmarketable, lower-performance products might not tell him much about constraints Intel faces.

                                                                                                                                      1. 11

                                                                                                                                        What he learned at a company making unmarketable, lower-performance products might not tell him much about constraints Intel faces.

                                                                                                                                          This is a bit of a logical stretch. Quite frankly, Intel took a gamble with speculative execution and lost. The first several years were full of errata for genuine bugs and now we finally have a userland-exploitable issue with it. Often security and performance are at odds. Security engineers often examine / fuzz interfaces looking for things that cause state changes. While the instruction execution state was not committed, the cache state change was. I truly hope Intel engineers will now question all the state changes that happen due to speculative execution. This is Linus’ bluntly worded point.

                                                                                                                                        1. 3

                                                                                                                                          (At @apg too)

                                                                                                                                            My main comment shows consumers didn’t pay for more secure CPU’s. So, that’s not really a market requirement even if it might prevent costly mistakes later. Their goal was making things go faster over time with acceptable watts despite poorly-written code from humans or compilers while remaining backwards compatible with locked-in customers running worse, weirder code. So, that’s what they thought would maximize profit. That’s what they executed on.

                                                                                                                                          We can test if they made a mistake by getting a list of x86 vendors sorted by revenues and market share. (Looks.) Intel is still a mega corporation dominating in x86. They achieved their primary goal. A secondary goal is no liabilities dislodging them from that. These attacks will only be a failure for them if AMD gets a huge chunk of their market like they did beating them to proper 64-bit when Intel/HP made the Itanium mistake.

                                                                                                                                          Bad security is only a mistake for these companies when it severely disrupts their business objectives. In the past, bad security was a great idea. Right now, it mostly works with the equation maybe shifting a bit in future as breakers start focusing on hardware flaws. It’s sort of an unknown for these recent flaws. All depends on mitigations and how many that replace CPU’s will stop buying Intel.

                                                                                                                                        2. 3

                                                                                                                                          A company whose innovative CPU’s didn’t meet the markets needs and were shelved on acquisition.

                                                                                                                                          Tons of products over the years have failed based simply on timing. So, yeah, it didn’t meet the market demand then. I’m curious about what they could have done in the 10+ years after they called it quits.

                                                                                                                                          might not tell him much about constraints Intel faces.

                                                                                                                                          I haven’t seen confirmation of this, but there’s speculation that these bugs could affect CPUs as far back as the Pentium II from the ’90s…

                                                                                                                                      2. 1

                                                                                                                                        The fact that he is correct doesn’t prove that a competent CPU engineer would agree.

                                                                                                                                        Can you expand on this? I’m having trouble making sense of it. Agree with what?

                                                                                                                                  1. 25

                                                                                                                                     Spectre PoC: https://gist.github.com/ErikAugust/724d4a969fb2c6ae1bbd7b2a9e3d4bb6 (I had to fix one #define, but otherwise it works)

                                                                                                                                    1. 5

                                                                                                                                      I’ve tested it with some success on FreeBSD/HardenedBSD on an Intel Xeon. It works on bare metal, but doesn’t work in bhyve.

                                                                                                                                      1. 4

                                                                                                                                        oh god that runs quickly. terrifying.

                                                                                                                                        1. 3
                                                                                                                                          $ ./spectre
                                                                                                                                          Reading 40 bytes:
                                                                                                                                          Illegal instruction (core dumped)
                                                                                                                                          

                                                                                                                                          That was kinda disappointing. (OpenBSD on Hyper-V here.)

                                                                                                                                          1. 10

                                                                                                                                            It worked for me on OpenBSD running on real hardware.

                                                                                                                                            1. 1

                                                                                                                                              That was kinda disappointing. (OpenBSD on Hyper-V here.)

                                                                                                                                              perhaps it was the cache flush intrinsic.

                                                                                                                                            2. 2

                                                                                                                                              I’m impressed by how easy it is to run this PoC, even for somebody who hasn’t done any C programming in years. It’s just one file; correct the line

                                                                                                                                              #define CACHE_HIT_THRESHOLD(80)

                                                                                                                                              to

                                                                                                                                              #define CACHE_HIT_THRESHOLD 80

                                                                                                                                              then compile: gcc -O0 -o spectre spectre.c

                                                                                                                                              run:

                                                                                                                                              ./spectre

                                                                                                                                              and look for lines with “Success: “.

                                                                                                                                              I’m wondering if there is a PoC in JavaScript for the browser: a single HTML page with no dependencies, containing everything needed to demonstrate the vulnerability?

                                                                                                                                              1. 2

                                                                                                                                                I’ve been playing quickly with the PoC. It seems to work just fine on memory with PROT_WRITE only, but doesn’t work on memory protected with PROT_NONE. (At least on my CPU)

                                                                                                                                              1. 3

                                                                                                                                                Time to rewrite all our programs to drastically reduce the number of system calls they make. Not to make the security problem go away, but to shrink the performance impact of the workaround for it. :)

                                                                                                                                                1. 3

                                                                                                                                                  The main piece of code I work on for work exports stats to a shared memory segment that we can see in the UI. One of the most important stats is “avgcommit” - the number of units written per syscall. It is, by far, the most important performance statistic we have.

                                                                                                                                                  1. 1

                                                                                                                                                    Cool! If you’re looking closely at that, are you getting into the kind of territory where you might want to be looking at the storage equivalents of DPDK’s approach? By that I mean an approach like driving iSCSI or FC HBAs or NVMe controllers directly from userspace instead of via a kernel filesystem. I think that https://software.intel.com/en-us/articles/introduction-to-the-storage-performance-development-kit-spdk is the kind of thing I’m thinking of.

                                                                                                                                                    1. 1

                                                                                                                                                      We’ve looked into similar things, but the limitations on what hardware we can use and how we interact with legacy systems means that it’s basically a non-starter. Instead we do some cleverness with how we write both data and metadata, and end up writing about 250-300 units and their metadata per syscall (the original system, written before I got here, was one syscall per unit and one syscall per metadata chunk).

                                                                                                                                                      The 250-300 units metric is the speed that we’re receiving things, so we’re operating at speed. I’ve got some ideas on how to speed things up further, but they’re radical departures from what we’re doing now, so much so that it would be essentially a complete rewrite of the subsystem.

                                                                                                                                                  2. 3

                                                                                                                                                    System calls are already ridiculously expensive.

                                                                                                                                                    1. 2

                                                                                                                                                      Good thing I’ve got a one year head start. :)

                                                                                                                                                      1. 1

                                                                                                                                                        What, pledge()? I thought that was more of a restriction of variety rather than frequency. ;)

                                                                                                                                                        1. 3

                                                                                                                                                          No, just running ktrace and asking “why is this program being stupid?”

                                                                                                                                                    1. 10

                                                                                                                                                      Our goal is to deliver the best experience for customers, which includes overall performance and prolonging the life of their devices. Lithium-ion batteries become less capable of supplying peak current demands when in cold conditions, have a low battery charge or as they age over time, which can result in the device unexpectedly shutting down to protect its electronic components.

                                                                                                                                                      Last year we released a feature for iPhone 6, iPhone 6s and iPhone SE to smooth out the instantaneous peaks only when needed to prevent the device from unexpectedly shutting down during these conditions. We’ve now extended that feature to iPhone 7 with iOS 11.2, and plan to add support for other products in the future.

                                                                                                                                                      Come on. If this is really about managing demand spikes, why limit the “feature” to the older phones? Surely iPhone 8 and X users would also prefer that their phones not shut down when it’s cold or the battery is low?

                                                                                                                                                      1. 6

                                                                                                                                                        I would assume most of those phones are new enough that their batteries haven’t accumulated enough charge cycles to wear them down and trip the governor, and/or battery technology improved on those models.

                                                                                                                                                        It’s really a lose-lose for Apple whichever way they handle it, and IMHO they picked the better compromise. The choice: run the phone at full speed on a worn battery, wearing it down further and risking a sudden shutdown when it can’t deliver the necessary voltage under bursty workloads; or throttle performance to keep the phone running, and battery life consistent, on a battery delivering reduced voltage.

                                                                                                                                                        1. 6

                                                                                                                                                          Apple could have also opted to make the battery replaceable, and communicate to the user when to do that. But then that’s not really Apple’s style.

                                                                                                                                                          1. 3

                                                                                                                                                            I believe that’s called “visiting an Apple store.” Besides, as I’ve said elsewhere in this thread, replacing a battery on an iPhone is pretty easy: remove the screen (it’s held in with two screws and comes out with a suction cup) and the battery is right there.

                                                                                                                                                          2. 4

                                                                                                                                                            and plan to add support for other products in the future.

                                                                                                                                                            They probably launched on older phones first since older phones are disproportionately affected.

                                                                                                                                                            1. 2

                                                                                                                                                              Other media reports indicate that battery performance loss is not just a function of age but of other things like exposure to heat. They also indicate that this smoothing doesn’t just happen indiscriminately but is triggered by some diagnostic checks of the battery’s condition. So it seems like making this feature available on newer phones would have no detrimental effect on most users (because their batteries would still be good) and might help some users (whose batteries have seen abnormally harsh use or environmental conditions). So what is gained by limiting it only to those using older models? Why does a brand new iPhone 7 bought new from Apple today, with a brand new battery, have this feature enabled while an 8 does not?

                                                                                                                                                              1. 2

                                                                                                                                                                Probably easier for the test team to find an iPhone 7 or 6 with a worn battery than an 8; the CPU and some other components are different.

                                                                                                                                                                1. 3

                                                                                                                                                                  There are documented standards for rapidly aging different kinds of batteries (for lead-acid batteries, like in cars, SAE J240 says you basically sous-vide cook them while rapidly charging and draining them), and I’d be appalled if Apple didn’t simulate battery aging for two or more years as part of engineering a product that makes or breaks the company.

                                                                                                                                                          1. 5

                                                                                                                                                            Excellent — if I work much overtime I just end up making a mess. But the 8 hours I do put in drain me, so it doesn’t feel like excellent work-life balance.