1. 1

    When the author mentions OO:

    There’s a very specific poor design at the core of the whole, well, not concept in the abstract, but ideology as practiced.

    I can’t resist rejecting the notion of an OO ideology. What does ideology even mean here? Let me guess:

    If one tries to understand the theory behind OO, and fails, and applies some of the ideas in an apparently chaotic way that resembles it.

    But this, in my opinion, happens with most things one tries to learn. No blame to the student, for he is only interested in learning more!

    Let me close with a poem:

    Object-orientation without subclasses is like functions without arguments.

    1. 4

      Inheritance is not important for OO programming at all. It’s about substitutability, and if done the way it was originally intended, message passing.

      1. 2

        Inheritance and subclassing are different. In inheritance, one imposes a constraint declaratively (say, “these objects must be a subclass of other objects”) and that can be done, often without any verification whatsoever. Subclassing, in contrast, simply exists, even if one does not explicitly implement it. Consider:

        The expected object was Y; now we substitute some object X for it. This substitution only makes sense when X is compatible with Y.

        There exists a compatibility relation between Xs and Ys, let’s call it the “compatible” relation.

        If X is compatible with Y, and Y is compatible with Z, then X is compatible with Z. Every X is compatible with itself. There exists a most compatible object (say, nil, which can always be substituted). There exists a least compatible object (say, the empty object, which is compatible only with itself).

        Just from substitution, we arrive at subclassing: our “compatible” relation now takes the place of subclassing. This does not have to be checked by a compiler; it could equally persist only in the programmers’ minds.
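
        This derivation can be sketched in a few lines of Haskell; the toy object universe and the direct substitutability facts below are invented purely for illustration:

```haskell
-- A toy object universe (hypothetical names, for illustration only).
data Obj = Nil | Animal | Dog | Cat | Empty
  deriving (Eq, Show)

-- Direct "may stand in for" facts: Nil may stand in for anything
-- (most compatible), Empty for nothing but itself (least compatible).
directly :: Obj -> Obj -> Bool
directly Nil _      = True
directly Dog Animal = True
directly Cat Animal = True
directly _   _      = False

-- The full "compatible" relation: the reflexive-transitive closure of
-- the direct facts -- exactly the preorder described above.
compatible :: Obj -> Obj -> Bool
compatible x y =
  x == y || directly x y ||
  any (\z -> directly x z && compatible z y) [Nil, Animal, Dog, Cat, Empty]
```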

        Consider how subclassing works differently in: languages with inheritance declarations, algebraically typed languages (cf. coercions), and un(i)typed languages.

        One can also argue that the notion of “compatibility” only makes sense whenever one works with errors. Again, errors can be kept implicit or explicitly declared as exceptions. Without errors, every object could be substituted for another, and everything is compatible!

        Why do I call them classes of objects and not just sets of objects? Technical, mathematical terminology: objects are non-well-founded, non-hereditary, graph-like constructions, while most people assume set theory to be well-founded, hereditary, tree-like.

    1. 1

      Suggesting that Haskell has no null is doubtful. A diverging computation, e.g. null = 1 + null, is typable at, say, Integer, but admits no value. So, there you go: another bottom value that fits in every type.
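
      A minimal sketch of this (the names here are my own, not standard Haskell):

```haskell
-- A diverging definition is typable at every type: this is "bottom".
bottom :: a
bottom = bottom

-- It fits wherever an Integer (or anything else) is expected...
anInt :: Integer
anInt = bottom

-- ...but any attempt to inspect it diverges. Laziness lets us pass it
-- around as long as we never force it: length only walks the spine.
lengthIgnoresElements :: Int
lengthIgnoresElements = length [bottom, bottom, bottom]  -- 3
```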

      1. 6

        The nice thing about bottom, compared to null, is that it’s not observable. In other words, we can’t do something like:

        myValue = if isBottom myOtherValue
                     then thingA
                     else thingB

        In this sense, bottom acts more like a thrown exception (control flow) than a null (data value); although it looks like a null since laziness combines data flow with control flow.

        In a sense, this more faithfully represents an “unknown” value, since attempting to figure out which value it is also ends up “unknown”. In contrast, null values are distinct from other values. For example, if I don’t know the value of a particular int, the answer to “is it 42?” should be “I don’t know”, not “no”. Three-valued logic attempts to act like this, but is opt-in.
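
        A minimal sketch of such opt-in three-valued logic in Haskell (names invented for illustration):

```haskell
-- A minimal three-valued logic, SQL-style: a comparison against an
-- unknown value is itself unknown.
data Tri = Yes | No | Unknown
  deriving (Eq, Show)

-- An int we may or may not know.
data KnownInt = Known Int | Unk

-- "Is it 42?" under three-valued logic: unknown in, unknown out.
is42 :: KnownInt -> Tri
is42 (Known n) = if n == 42 then Yes else No
is42 Unk       = Unknown
```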

        1. 1

          Please could you explain how “laziness combines data flow with control flow”?

          1. 2

            Just the idea that evaluation proceeds based on the definition of data, rather than separate control mechanisms, e.g. we can write ifThenElse as a normal function:

            ifThenElse True  x y = x
            ifThenElse False x y = y
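
            Because the untaken branch is never forced, this ordinary function really does behave like built-in control flow. A small self-contained check (repeating the definition above):

```haskell
-- Laziness means the untaken branch is never evaluated, so an ordinary
-- function behaves like built-in control flow.
ifThenElse :: Bool -> a -> a -> a
ifThenElse True  x _ = x
ifThenElse False _ y = y

-- The 'undefined' branch is simply dropped, never forced:
safe :: Int
safe = ifThenElse True 1 undefined  -- 1
```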
            1. 2

              Oh, I see. Thanks. I was thinking that the usual C-style control mechanisms, e.g. if and for, still kind of entangle control flow and data flow, albeit with special syntax rather than just functions. I wonder if it is possible to disentangle this? What would that look like?

      1. 3

        A PBX such as Asterisk might be fun. Connect it to a (handheld) SIP phone; perhaps use a VoiceXML interpreter, and you can implement custom voice-based applications.

        1. 4

          A very general theory for understanding computation is Rewrite Systems. A rewrite system can be understood as pattern matching on terms in some first-order language, binding free variables, and literally rewriting the terms into other terms. For example:

          f(x,y) -> h(g(x),g(y))

          Defines a rewrite relation, where x and y are free variables. Now an interesting general question pops up: given an arbitrary rewrite relation and an arbitrary term, will you eventually reach a term for which none of the patterns match any longer? This is a reachability problem (and undecidable in general).

          Other aspects of rewrite systems include: if you apply two different rules to the same term, is it possible to join the results back into a common term? This property is confluence, and it is also a general property of graphs. Joinability and meetability, and in particular partial order relations, are also intuitively understood as properties of graphs.
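
          A minimal sketch of such a rewrite system in Haskell, hard-coding just the one rule above (the term representation is my own choice, not anything standard):

```haskell
-- First-order terms over a signature of named function symbols.
data Term = App String [Term]
  deriving (Eq, Show)

-- One rule, hard-coded for illustration: f(x,y) -> h(g(x),g(y)).
rewrite :: Term -> Maybe Term
rewrite (App "f" [x, y]) = Just (App "h" [App "g" [x], App "g" [y]])
rewrite _                = Nothing

-- Apply the rule somewhere in the term (leftmost-outermost), at most once.
step :: Term -> Maybe Term
step t@(App f args) = case rewrite t of
  Just t' -> Just t'
  Nothing -> App f <$> stepArgs args

-- Rewrite the first argument that admits a step.
stepArgs :: [Term] -> Maybe [Term]
stepArgs []     = Nothing
stepArgs (a:as) = case step a of
  Just a' -> Just (a' : as)
  Nothing -> (a :) <$> stepArgs as

-- Rewrite to normal form: stop when no pattern matches any more.
-- (For an arbitrary rule set this loop may not terminate.)
normalize :: Term -> Term
normalize t = maybe t normalize (step t)
```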

          1. 9

            Benjamin C. Pierce, Types and Programming Languages, MIT Press.

            That’s the one you want. I’m biased towards ML (vs Haskell), and I think the book is, too (it’s not a Haskell book). You can get all six, sure, but if you had to get one that’s the one.

            1. 3

              Software Foundations is also good, and online for free.

              1. 3

                +1 for this. I’m using TaPL in my PL class this quarter and it’s awesome. It’s super well written, and ML is great for this class - the work involves writing successively more complex interpreters for successively more complex toy languages. We’re not sticking strictly to the book, but the sections our prof has pointed us to have been great.

                1. 1

                  I also highly recommend TAPL.

                  I’ve been recommended TAPL before, but seeing as this isn’t a class that’s strictly about type systems, I’d like to get a more general one and read TAPL later.

                  TAPL isn’t just about types either - it’s types AND programming languages.

                  Practical Foundations for Programming Languages is also very good.

                  1. 1

                    +1 for PFPL. It covers more material in fewer pages than TaPL, and it’s more up to date.

                  2. 1

                    There are lots of examples in this book, which I have found very helpful.

                  1. 1

                    I saw some Dutch text in the email program demo video. “Duit” in Dutch means: coin.

                    1. 1

                      So what you’re saying is: this is an initial coin offering? :-P

                    1. 3

                      Lesson 1 of the Internet: read only. Lesson 2 of the Internet: everything on it is false, including this post.

                      1. 8

                        Functional Programming and Object-Oriented Programming are dual, in a very precise way.

                        Data types and constructing inhabitant terms is dual to interface types and observation methods. Functions are interfaces with a single method that takes an argument and returns a result. Isomorphic data types can be understood as the existence of particular functions that map A to B and back from B to A, preserving identities. Dual to functions are so-called differences: these are data that witness an output argument belonging to an input argument. A basic argument shows that two classes are behaviorally equivalent whenever there is no difference between them (the differences between A and B, and the differences between B and A, are empty).
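
                        The first duality can be sketched in Haskell: a pair as data (defined by its constructor) versus a pair as an interface (defined by how it answers observations), via a Church-style encoding. This is a standard trick, not something taken from the comment above:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Data view: a pair is defined by how it is constructed.
data Pair = Pair Int Int deriving (Eq, Show)

-- Interface view: a pair is anything that, given a two-argument
-- observer, produces the observer's result.
type CoPair = forall r. (Int -> Int -> r) -> r

toCo :: Pair -> CoPair
toCo (Pair x y) k = k x y

fromCo :: CoPair -> Pair
fromCo p = p Pair

-- An observation method on the interface:
firstOf :: CoPair -> Int
firstOf p = p (\x _ -> x)
```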

                        In Functional Programming, one is interested in the termination of a program. In Object-Oriented Programming, one is interested in the dual of termination: the productivity of a process. Consider that the difficulty of preventing deadlocks is similar to the difficulty of ensuring termination.
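
                        A small Haskell illustration of the termination/productivity duality: a fold over finite data must reach a base case, while an infinite process never terminates but is productive, answering every finite observation in finite time:

```haskell
-- Termination: a recursive fold over finite data reaches a base case.
sumList :: [Int] -> Int
sumList []     = 0
sumList (x:xs) = x + sumList xs

-- Productivity: an infinite process; each observation (take, head, ...)
-- is still answered in finite time because it keeps producing output.
ones :: [Int]
ones = 1 : ones

firstThree :: [Int]
firstThree = take 3 ones  -- [1,1,1]
```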

                        In terms of logic, many advancements in recent years have brought us: constructive and computational interpretations of classical logics; languages that allow focusing on both the output of a program and the input of a process; polarization of types, to model both functional aspects and objective aspects within one system; and a better understanding of paradoxical mathematical foundations, which has led to the theory of non-well-founded relations, bringing us the dual of recursion/induction, called corecursion/coinduction.

                        In terms of abstract algebra, we now know that notions of quotient types of languages by certain equivalence/congruence relations are dual to notions of subtypes of abstract objects by certain bisimulations/bisimilarities of behaviors.

                        In terms of philosophy, it is understood that rationality and irrationality can also be applied to software systems. “The network is unpredictable,” is just another way of saying that the computer system consists of an a priori irrational component. Elementary problems in Computing Science, such as the Halting Problem, witness the theoretical existence of irrational components. Those who assume every system is completely rational, or can always be considered rational, suffer from a “closed world syndrome.”

                        1. 3

                          From a layman’s perspective, I think it is dual in another way too: in how mutability is handled. The functional way of organizing programs often strives to push state out, with a set of pure functions inside, stitched together on some skeleton that handles state or IO. OO programming, on the other hand, encapsulates state, striving to provide as simple an interface to the outside world as possible; i.e. mutability is hidden inside rather than pushed out.

                        1. 2

                          Recently, I heard that quotient types are dual to subclassing. What do you mean by “non-termination of infinitary rewrite systems”? And strategy is just a way to deal with non-confluence. Monads are the continuation-passing-style translation that (by formulas-as-types) corresponds to Gödel’s double-negation translation of classical logic into constructive logic, that is, embedding the computational interpretation of Turing machines into models of simulating interaction machines.

                          1. 4

                            Empirical Software Engineering: I followed a course at my university on this. It was an eye opener. I can publish some of my reviews and summaries sometime. Let me already give you some basic ideas:

                            1. Use sensible data. Not just SLOC to measure “productivity,” but also budget estimates, man-hours per month, commit logs, bug reports; as much as you can find.
                            2. Use basic research tools. Check whether your data makes sense and filter outliers. Gather groups of similar projects and compare them.
                            3. Use known benchmarks to your advantage. What are known failure rates, and how can these be influenced? When is a project even considered a success?
                            4. Know about “time compression” and “operational cost tsunamis”: phenomena such as the increase in total cost caused by “death march” projects, and how operational costs are incurred already during development.
                            5. Know about the quality of estimates, and estimates of quality, and how these can improve over time. Estimates of the kind “this is what my boss wants to hear” are harmful. Being honest about total costs allows you to manage expectations: some ideas (let’s build a fault-tolerant, distributed, scalable and adaptable X) are more expensive than others (use PHP and MySQL to build a simple prototype of X).
                            6. Work together with others in the business. Why does another department need X? What is the reason for deadline X? What can we deliver, and how can we decrease cost, so we can develop opportunity X or research Y instead?
                            7. Optimize at the portfolio level. Why does my organization have eight different cloud providers? Why does every department build its own tools to do X? What are strategic decisions and what are operational decisions? How can I convince others to do X instead? What is organizational knowledge? What are the risks involved when merging projects?
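
                            As a tiny illustration of filtering outliers, one common sanity check is dropping points more than two standard deviations from the mean (a simplistic rule, shown only for illustration; it assumes a non-empty sample):

```haskell
-- Sample mean of a non-empty list of measurements.
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

-- Population standard deviation.
stddev :: [Double] -> Double
stddev xs = sqrt (mean [(x - m) ^ 2 | x <- xs])
  where m = mean xs

-- Keep only points within two standard deviations of the mean.
filterOutliers :: [Double] -> [Double]
filterOutliers xs = [x | x <- xs, abs (x - m) <= 2 * s]
  where m = mean xs
        s = stddev xs
```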

                            Finally, I became convinced that for most organizations software development is a huge liability. But, as a famous theoretical computer scientist said back in the day: we have to overcome this huge barrier, because without software some things are simply impossible to do. So keep making the trade-off: how much are you willing to lose, for a slight chance of high rewards?

                            1. 2

                              Any books you want to recommend?

                              1. 1

                                I’m also interested in that, but in online articles instead of books. The very nature of empiricism is to keep looking for more resources or angles, as new work might catch what others miss. Might as well apply it to itself in terms of which methods/practices to use for empirical investigation. Let’s get meta with it. :)

                            1. 14

                              If documents and models are unstructured, then consider using an unstructured model. An untyped language is actually typed with a single all-encompassing type. This can be simulated in any sufficiently powerful type system.

                              Consider, for example, Gson: it allows conversion into POJOs, thereby making use of existing data structures. But it is also possible to use Gson’s built-in all-encompassing type, and explore and navigate the unstructured JSON directly.
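
                              A sketch of such a built-in all-encompassing type, in Haskell (the constructor names below are invented; Gson’s actual tree type is JsonElement):

```haskell
-- A single all-encompassing type for unstructured JSON-like documents.
data Value
  = VNull
  | VBool Bool
  | VNum  Double
  | VStr  String
  | VArr  [Value]
  | VObj  [(String, Value)]
  deriving (Eq, Show)

-- Navigate the unstructured document directly, without any POJO-style
-- conversion into fixed data structures.
lookupField :: String -> Value -> Maybe Value
lookupField k (VObj kvs) = lookup k kvs
lookupField _ _          = Nothing

-- Example document: {"user": {"name": "ada"}}
doc :: Value
doc = VObj [("user", VObj [("name", VStr "ada")])]
```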

                              I don’t see any problem.

                              1. 2

                                You can also add a variant class to make typed code somewhat typeless, i.e. at my work we have a C++ wrapper around json-c that puts everything into a tree, with a variant class for the values that represents a bool, int, float or string. You can create a variant that represents as many types as you want to support, with run-time type safety. You can also assert or debug if the wrong type coercion is attempted.

                                1. 1

                                  To me, it seems like the type-system equivalent of the safe vs. unsafe debate in languages. The best practice on the safe side is to make it safe-by-default, with an unsafe keyword that gives the developer freedom very selectively (e.g. per module). It becomes a big warning sign that the component needs strong code/data verification and validation. In this case, dynamic typing could be the unsafe mode of an otherwise statically-typed language in which the program is implemented. Just those things that change are marked with this unconstrained type, with anything touching those variables validating the data they contain. Maybe some extra logging, too.

                                1. -3

                                  Proof of Work can be useful as follows. Blockchain systems are a realization of the P=NP problem in theoretical computer science. A problem is in P if it is, intuitively, easy to solve on a single conventional machine. A problem is in NP if an answer to the problem is, intuitively, easy to check on a single machine.

                                  Now, many problems are in NP; cracking hashes is only one of them. If you have a huge network of cooperating computers, then solving a difficult problem also solves consensus. Compare this with a group of schoolchildren: those who have the answers to the test, everybody likes and listens to. But those answers are very hard to attain.
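
                                  The easy-to-check, hard-to-find asymmetry can be sketched with a toy proof-of-work; the hash below is a deliberately weak stand-in of my own invention, not a real cryptographic hash like SHA256:

```haskell
-- A deliberately weak toy hash (NOT a real cryptographic hash),
-- just enough to illustrate the asymmetry.
toyHash :: Int -> Int
toyHash n = (n * 2654435761 + 12345) `mod` 1048576

-- Checking a claimed solution is one evaluation: the easy "NP check".
checks :: Int -> Bool
checks nonce = toyHash nonce `mod` 4096 == 0  -- toy "difficulty"

-- Finding a solution is brute-force search: the hard direction.
mine :: Int
mine = head (filter checks [1..])
```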

                                  Now if P=NP, then we could observe this empirically in the network’s converging behavior. If, on the other hand, we do not observe convergence, we cannot reject the hypothesis that P!=NP (which is what most scientists believe).

                                  Blockchain is a massive theoretical experiment, and to pay for it, we have popularized it and attained a critical mass of followers.

                                  1. 5

                                    Now if P=NP, then we could observe this empirically in the network’s converging behavior. If, on the other hand, we do not observe convergence, we cannot reject the hypothesis that P!=NP (which is what most scientists believe).

                                    what do you mean by “convergence”?

                                    1. -2

                                      So, as the popularity of a blockchain increases, the number of participants also increases. To keep solving the consensus problem, the difficulty is increased too. If, however, the difficulty of the PoW improves without an increase in participants, another phenomenon must be at play.

                                      If the network at some point becomes unstable (viz. unfair consensus) because a few nodes apparently have a superior solution generator, we observe a bias towards particular participants. If, however, such a solution generator is hard or impossible to attain, we won’t observe this bias.

                                      Convergence means that, ultimately, there are only two stable states: stability under the participants/difficulty ratio, or instability from the incommensurability of participants and difficulty.

                                      Of course, there are many subtleties in this argument, requiring careful analysis. Consider (false positive): one participant actually owns a mining farm with ASICs, giving it a disproportionate amount of computing power; is that a reasonable explanation of bias? Or how about scenarios such as Enigma’s code breaking (false negative), where participants with a superior solution generator don’t tell the others, keeping their advantage within the bounds of reasonable chance so as to prevent other parties from inferring that the actual problem is already solved. Can we still observe a bias then?

                                      1. 2

                                        The hashing problem that Bitcoin is based on is not known to be NP-complete, so even if a polynomial-time algorithm for it were discovered, it would amount to a security hole in SHA256, not a proof that P = NP.

                                        On the other hand if someone did have a constructive proof that P = NP, they would be able to use it to break SHA256 (if it were a small enough polynomial with small enough constants). So the Bitcoin network does provide another mechanism of reward for someone who comes up with a constructive proof for P = NP. But the stability of Bitcoin doesn’t add much evidence that P =/= NP, as IMO the other rewards for proving P = NP are more than enough to motivate researchers.

                                        Granted, the Bitcoin network does provide further security testing for the cryptographic primitives it’s based on, as any high-stakes cryptosystem does.

                                  1. 3

                                    It might be a naive thought, but “Thinking the Unthinkable,” was not about programming languages themselves but their developing research programmes, and the overarching paradigms in which these are grounded.

                                    One example of “the unthinkable” would be non-well-founded set theory, as studied by Aczel in 1988. It is a provocative work, even stating its intended purpose as breaking foundations, with the admission of “anti-foundation axioms.” My hypothesis is that it resulted, at least in Europe, in a split between conventional CS and so-called TCS.

                                    This split seems like a huge distance in retrospect. TCS is what I like to call concrete mathematics. Its machinery and tools have in recent years developed to automate most of the mundane research tasks of the previous paradigm. In concrete terms, these researchers write compilers for new languages as easily as we write programs. Programming language theory has thus moved into the “academically uninteresting” bin, and many universities renamed their research groups to focus on new topics.

                                    In my own work I also try to break foundations. My claims, without providing too much detail, are: algorithms are not the fundamental mathematical objects we should study; the Halting Problem is not actually a problem but a corollary of Cantor’s diagonalization proof of the existence of the real numbers; and termination proofs are as important as productivity proofs. Instead of algorithms, we should study protocols. Protocols compose multiple underlying computers into a cooperative network. By designating some computers as irrational, i.e. we can only observe their behavior but not finitely represent it, we can model the real world quite well. E.g. a Byzantine failure model becomes an irrational network component. Security and privacy properties are now the fundamental questions, which with sufficient mathematical modelling can be studied and understood.

                                    1. 3

                                      Very convincing review and replication study.

                                      1. 18

                                        I really love legacy, and have been working on a DOS application that has been in use since 1986. I helped patch the blob to solve clock-frequency issues around 2005, and virtualized it completely in 2015 (now allowing the app to reach files and print over the network!)

                                        I really hate legacy, and have found enormous amounts of garbage, and found myself struggling not only with the incomprehensible and intangible structure of bloated software architectures, but also with the consequent motivational problems. I even had to disappoint the customer, who had invested a lot in me: despite the promising progress, fixing it for real would cost way too much.

                                        Sometimes I like to tell junior programmers some war stories, especially when they complain about working with the code of others. I romanticize what I call “software archeology,” and declare my love of unraveling the mysteries hidden behind the unknown. This I do for two reasons: I hope to motivate them beyond the point of misery (the trap in which you believe you cannot deal with the problem, and give up), and I hope to give them another perspective, as follows.

                                        Legacy is something to be proud of. It is the work of a precious generation (be it 30 years or 6 months ago), which dealt with perhaps completely different circumstances. Respect their work, just as you wish others will respect your own. Instilling this picture, that legacy is something great and is what you ultimately hope to produce, might result in work that one can be proud of: work that builds upon the great work of others, and tries to improve upon it!

                                        1. 4

                                          “Your code could be legacy some day!” is a legitimate motivational phrase, in my opinion. There’s often a lot wrong with legacy code, but that’s because you’re often looking at it from a very different perspective. Understanding the original authors’ viewpoint is important. You might call it “code empathy”.

                                          1. 1

                                            I have had similar experiences dealing with legacy. It’s easy to complain about certain design decisions, but really, sometimes it just seemed like a good idea at the time. Much can be learned from legacy code, too: tricks that nobody uses today, space and memory optimizations, and such.

                                            Grab a copy of some 70s or 80s source code and go to town with it sometime. Bring it into the 21st century. Enjoy the journey.

                                            1. 1

                                              Realizing that legacy code is code that served its purpose well is part of our path to professionalism.

                                              But IMHO, this should not stop us from rethinking things.

                                              Most of the complexity of modern software (and most of its flaws) comes from legacy decisions that we didn’t challenge.

                                              And I’m not only talking about JavaScript in the browser, but even about things like dynamic linking, root users, using HTTP to deliver computations and so on…

                                              All useful things, when they were conceived. But today we can do better.

                                              1. 1

                                                You know, I did get a pretty uplifted feeling reading that. The respect point is both incredibly spot on, and incredibly not the norm.

                                                1. 1

                                                  Yes! I often really enjoy working in fifteen year old legacy code for exactly that reason. Sure the abstractions may not be great, but it is useful code that has served the company well all that time. My main job when working in legacy code is to not break what it gets right.

                                                  1. 1

                                                    This is all fine, but what turned me off a bit wrt consulting is the high frequency of modernizing-legacy work.

                                                    The code did not appear in a vacuum and there are always some of the original forces in place; budgets and schedules and such. Tradeoffs have to be made, and often these include not taking the upgrade path all the way to the latest version.

                                                    This leads to boredom, although the customers are always super and the domains they work in differ from each other. It also raises the barrier to investing time in the latest and greatest, since the bulk of it would have to be done out of passion, on free-time hobby projects.

                                                  1. 7

                                                    First of all, I love love love how vibrant the Lobsters formal methods community is getting. I’m much more likely to find cool FM stuff here than any other aggregator, and it’s awesome.

                                                    Second, maybe I’ve been spending too much time staring at specifications, but I’m not seeing how level 1 is different from level 2. Is level 1 “this is broken for probable inputs”, while level 2 is “this is broken for some inputs”? Different only in degree of probability?

                                                    1. 4

                                                      Level 1 is statements about specific executions; level 2 is statements about the implementation. This is for all statements, not just correctness.

                                                      1. 1

                                                        First of all, I love love love how vibrant the Lobsters formal methods community is getting.

                                                        Me too.

                                                        I believe crustaceans care more about high-quality code than the average programmer. Maybe the pride of craftsmen? This is a distinction from Hacker News, where monetary considerations get more attention. Formal methods are certainly one of the big topics for improving the quality of code.

                                                        1. 1

                                                          I also love that we get more formal methods discussions taking place here!

                                                          I am, however, not sure how much “depth” is accepted; should I post any paper I find interesting here, with a short summary of why, and personal reflections?

                                                          1. 2

                                                            I usually just post the papers. One thing I do, though, is try to make sure they show some practical use, along with what they could or couldn’t handle. Especially in the abstract, so readers can assess at a glance. I’ll sometimes add summaries, reflections, brainstorming, etc. if I feel it’s useful. I will warn that acceptance is hit or miss, with many of the PDFs getting 1-3 votes. Then, out of nowhere, they really appreciate one. Also, I only submit anything important on Monday-Friday, since many Lobsters seem to be off-site on weekends.

                                                            So, that’s been my MO. Hope that helps.

                                                        1. 22

                                                          To be honest, most of my goodwill towards Tim Berners-Lee (which there was a lot of, by the way) went away when he started shilling for web DRM. Requiring w3c compliant browsers to ship closed source BLOBs in order to correctly display w3c compliant web pages is against the very core of the open web; not to mention how the w3c wouldn’t even protect security researchers who want to see if there are security issues with said BLOBs. I know Berners-Lee probably isn’t responsible for every one of those decisions, but he publicly (and probably internally in the w3c) argued for DRM.

                                                          For further reading, here’s a great (albeit long) article from the EFF: https://www.eff.org/deeplinks/2017/10/drms-dead-canary-how-we-just-lost-web-what-we-learned-it-and-what-we-need-do-next

                                                          1. 8

                                                            Computers, the Internet, and the web represent some of the greatest innovations in the history of mankind and the fruition of what could have only been a fantasy for billions of our ancestors for thousands of years. To see it so quickly, in the course of a few decades, and thoroughly corrupted by the interests of corporate profits is profoundly sad. I am severely disappointed to have dedicated my life to the pursuit of mastering these technologies which increasingly exist primarily to exploit users. DRM is a thread in a tragic tapestry.

                                                            1. 3

                                                              At this point my usual plea is, judge what’s spoken, not by whom it’s spoken. TBL’s authority is one thing, and the merit of what he has to say about that “Solid” thing is quite another. The idea feels very sane to me, although I don’t see a clear path of shoving it past the influence of all the silo-oriented companies like Facebook and Google.

                                                              1. 2

                                                                “At this point my usual plea is, judge what’s spoken, not by whom it’s spoken.”

                                                                This sentiment was drummed into me as a child, and ordinarily I would strive to follow it up to a point, but to me the topic of putting locks on the open web by way of DRM is directly related to the apparently opposed mission of “Solid”.

                                                                Arguing for decoupling data from applications provided by corporate giants in the interests of user control seems absurd when he just played a major part in removing transparency and control from a user’s web experience.

                                                                I’m not quite sure what to make of this.

                                                                1. 2

                                                                  Did you consider the possibility that DRM could also work in reverse: the digital rights management of individuals? I think that is the underlying motivation for allowing DRM: to protect assets and information. Users cannot freely copy media to which they have no right of ownership, and conversely, companies cannot freely copy user data to which they should have no right of ownership.

                                                            1. 2

                                                              Algebraic Graphs with Class, by Mokhov. See my blog article, which references this paper: http://www.hansdieterhiep.nl/blog/graphs-in-haskell/

                                                              1. 1

                                                                Summary: a grammar specifies a language, and a language is defined as a set of words, where a word is a sequence of letters over some alphabet. You can draw a random element from this set and use it to test whether your code accepts every word in the language.
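                                                                The idea in that summary can be sketched concretely. This is a minimal toy example of my own (not taken from the article): a grammar for the language { aⁿbⁿ | n ≥ 0 }, a sampler that draws a random word from the language the grammar generates, and a hand-written recogniser standing in for the code under test.

```python
import random

# Toy context-free grammar: S -> a S b | epsilon, i.e. { a^n b^n | n >= 0 }.
# Nonterminals are keys; each production is a list of symbols.
GRAMMAR = {
    "S": [["a", "S", "b"], []],
}

def sample(symbol: str, rng: random.Random) -> str:
    """Draw a random word by expanding productions until only terminals remain."""
    if symbol not in GRAMMAR:  # terminal symbol: emit it as-is
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return "".join(sample(s, rng) for s in production)

def accepts(word: str) -> bool:
    """The 'code under test': a hand-written recogniser for a^n b^n."""
    n = len(word) // 2
    return word == "a" * n + "b" * n

# Fuzz the recogniser with random words drawn from the grammar's language.
rng = random.Random(0)
words = [sample("S", rng) for _ in range(50)]
assert all(accepts(w) for w in words)
```

Because each expansion of S chooses the recursive production with probability ½, the sampler terminates with probability 1 and yields short words most of the time; a real fuzzer would want to bias production choices toward longer words.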

                                                                Have you also looked at regular languages, at regular expressions as a way to (completely) describe such languages, and at how to generate fuzzers for them? It might be useful to generalize your work and allow the construction of fuzzers given only a regex, e.g. #[a-zA-Z0-9]{3}[a-zA-Z0-9]{3}?
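                                                                For the regex case, here is a minimal sketch of my own in Python. It reads the example pattern as a color-code-style language: a “#”, three alphanumerics, then an optional second group of three (interpreting the trailing “{3}?” as an optional group; the generator and names are assumptions, not an existing fuzzing library).

```python
import random
import re

# The alphabet of the character class [a-zA-Z0-9].
ALNUM = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def gen_word(rng: random.Random) -> str:
    """Generate one random word of the language: '#', three alphanumerics,
    then with probability 1/2 three more."""
    word = "#" + "".join(rng.choice(ALNUM) for _ in range(3))
    if rng.random() < 0.5:
        word += "".join(rng.choice(ALNUM) for _ in range(3))
    return word

# The same language, written as a regex (second group made explicitly optional).
PATTERN = re.compile(r"#[a-zA-Z0-9]{3}(?:[a-zA-Z0-9]{3})?")

# Every generated word should be accepted by the regex in full.
rng = random.Random(42)
samples = [gen_word(rng) for _ in range(100)]
assert all(PATTERN.fullmatch(w) for w in samples)
```

A generalized fuzzer would derive a generator like gen_word automatically from the regex’s syntax tree, walking alternation, concatenation, and repetition nodes and sampling at each choice point.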

                                                                1. 12

                                                                  I like how the article notes one of the main sources of burnout in senior engineers: continually cleaning up messes caused by other people.

                                                                  However, I think that it does miss the biggest pathology of engineers–a continual reinvention of shiny and experimentation that I can only describe as neurotic. Watching engineers throw away perfectly good tooling in order to try the framework of the month reminds me of a cockatoo plucking its own feathers out because it isn’t getting to do anything fundamentally interesting in its cage.

                                                                  Companies have this problem where they refuse to acknowledge the commonalities that their businesses have with every other business on the planet (and hence won’t accept standardized solutions), and where they won’t actually pay engineers in such a way as to reward them for delivering value on time and under budget.

                                                                  I ask you, fellow lobsters: if you were guaranteed 1-5% of the profit growth that your company had this year, how much harder would you work? How much less would you invest in new toys?

                                                                  Similarly, if there is no further growth, maybe it’s time to stop writing software. Maybe the business is completed, and we can all go do something else rewarding with our lives.

                                                                  1. 5

                                                                    “Similarly, if there is no further growth, maybe it’s time to stop writing software.”

                                                                    I think there’s also an important detail: the job isn’t to write software. It’s not to ship things. It’s not to fix bugs.

                                                                    The job of everyone is to do the “right things” so the company can make more money, be more sustainable, or whatever the company’s final goal is.

                                                                    Sometimes this means not goofing off on rewrites. Sometimes it means killing off a project because this isn’t actually important. Sometimes it’s not even about technology. If you’re sitting around making store page updates but company logistics are causing your company to lose shipments, maybe you need to go help the mailroom.

                                                                    This is why I love having engineering handle user support. It helps teach you that shipping doesn’t mean anything if your old stuff is breaking. It teaches you that you’re not doing things in a vacuum. And it helps engineers also realize that results matter, and software development is just one piece of that.

                                                                    A lot of product teams build out these huge product plans, but in the end spend half of their time in “firefighting” mode. Most of those teams should immediately drop everything and… just fix their stuff. Nothing else matters.

                                                                    Whenever you end up spending a bunch of time on things but it doesn’t seem to have an effect on the bottom line, it weighs on you. If you can get out of the box of your job title, though, you can go immediately towards fixing the things that need fixing, adding the things that need adding. And if you’re right, you will know, and you will know immediately.

                                                                    1. 4

                                                                      “However, I think that it does miss the biggest pathology of engineers–a continual reinvention of shiny and experimentation that I can only describe as neurotic.”

                                                                      We agree on this idea: software engineering seems to operate in circles. Fundamentally, there is not much change happening: just new layers or new ways of expressing things. Each has its positives and negatives with respect to expressivity. For example, where first-class “objects” in object-orientation could once help model concurrency and interactivity, now futures and event streams seem to be the answer to everything. The underlying model of concurrency, however, has not fundamentally changed in 40 years, and is unlikely to change in another 40.

                                                                      Sometimes I am concerned, when discussing programming with fellow students who seem dissatisfied with extremely simple programs, that we learn to expect complicated solutions everywhere. In reality, only simple solutions are solutions in the literal sense: a solution dissolves the original problem into simpler, understandable terms. The simpler terms allow one to perform a computation more easily, thereby tackling the original problem. I am not talking about algorithmics here, e.g. divide and conquer, but about modeling human computational problems.

                                                                      I would even be inclined to believe that too much focus on the programming activity actually distracts from solving any problem. Keep programming activities to a minimum, and focus on the human part of developing a project. Learn as much as possible, or as much as one wants, about the problem domain. Attack small problems first, and build them out into an ecosystem of solutions. Validate the solutions: try to explain to peers what the problem is, why it is important, and how it can be simplified. These solutions live in the minds of people within an organisation. Develop training programs to effectively teach new hires the legacy of the company, comprising all existing ideas and ways of thinking.

                                                                      Most people equate the roles of developer and programmer. That is wrong. Project development is people-first, machine-second. Now, the word “neurotic” perfectly fits this description in my mind: people problems cannot be solved by thinking like a machine.

                                                                      1. 3

                                                                        The evidence that money motivates (see https://hbr.org/2013/04/does-money-really-affect-motiv) is not strong, so the guaranteed bonus might not have an incentive effect…