Threads for DanielBMarkham

    1. 7

      I’m later in my career now, and writing is the single most impressive meta-skill I’ve developed. It translates across all disciplines, you can edit and improve your writing on your own or with others, and it’s open to countless implementations.

      Having said that, content-focused marketing-based herd writing is a waste of brain cells. You probably have something better to do. Go find that.

      My best writing happens when I write a few “gee, here’s this thing I kinda know but I’m not certain about” pieces and then, a few years later, wrap several of them into a “Thesis: X. Supporting arguments: A, B, C.”

      Interestingly enough, neither one is suitable for mass consumption! Folks complain that the first kind of essay is too wooly (which is the entire point). Folks complain that the second kind makes too many assumptions without supporting/explaining them (which is also the entire point).

      Write for yourself. Learn to edit. Learn to write and edit in different ways depending on the personal goals your writing is supposed to have at any one point in time.

      There’s this really cool feeling in pure functional programming and DDD where you ask a question and realize that the answer was there all along. It was in the model. Writing does this as well when done well, only about two levels up from “simple” coding.
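      That “the answer was in the model” feeling can be sketched in code. Here’s a hedged toy of my own (the order domain and every name in it are invented for illustration, not from the comment): once the domain is immutable data plus pure functions, a new question gets answered by reading the model rather than by adding machinery.

```python
from dataclasses import dataclass
from enum import Enum

class OrderState(Enum):
    DRAFT = "draft"
    PAID = "paid"
    SHIPPED = "shipped"

@dataclass(frozen=True)
class Order:
    order_id: str
    state: OrderState

def can_ship(order: Order) -> bool:
    # The question "can this order ship?" was already answered by the
    # model: only paid orders ship. No new state had to be invented.
    return order.state is OrderState.PAID
```

      Ask the model a question it was never explicitly designed to answer, say `can_ship(Order("o-1", OrderState.DRAFT))`, and the answer falls out of the types and data that were there all along.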

      ADD: the conclusion here seems to be that the more likely your writing is to appeal to a mass audience the less useful it is overall. Not sure if I support that! Maybe in a few years I will. :)

      1. 3

        These days, my interest is in showing what I’ve been working on. I guess that’s the “marketing-based” thing you talked about. My disagreement with your “waste of brain cells” statement: what is wrong with showing what we made and why we are proud of it? After all, why do we make what we make? Just to amuse ourselves, or to show our own kind, “Hey, I made this cool thing”? Yes, if you try to manipulate others into buying an inferior idea, that’s not cool, but what if your work is worth showing others?

        1. 3

          Apologies. Didn’t mean to single you out. If that’s your thing, rock at it! I admire any kind of writing for any reason.

          My journey left me with the conclusion that social groups tend to self-scope over time. So to me, any kind of writing “looking outward” tends to end up controlling the outcomes and conclusions I end up reaching.

          I’ve done a lot of mass market writing. Love it.

          It never developed a damned thing that I found useful long-term, though, aside from a bunch of cool stories about people, places, things, and tech. That kinda thing has value, no doubt.

          I prefaced my comments by saying I’m later in my career. I’m looking a lot more for synthesis now. Earlier on, I was just grooving on having fun and sharing. I am glad I don’t do that anymore. Your mileage may vary. If I’m 80 years old telling folks about some 10% improvement in a hashing system, my personal values are that somehow I lost track of the truly important things along the way. But that’s just me.

          Thanks for writing this, both sharing the article and reply. I enjoyed it!

      2. 0

        Having said that, content-focused marketing-based herd writing is a waste of brain cells. You probably have something better to do. Go find that.

        100% agree with this sentiment.

    2. 2

      Profiling and analysis will always have a place, even with AI. AI will always give different answers because it can’t read our minds.


      On a completely different note: Is it wise to have your signature down there? Isn’t that asking for it to be used by anyone in any way?…

      1. 2

        I don’t know. I am unable to answer your question. Isn’t it worse to have hundreds of thousands of words of text posted online all with my name on it? I mean that kind of thing could yield all sorts of shenanigans. Signatures drift over time.

    3. 1

      I believe this is a category error.

      If you’ll allow me to flail around a bit: I believe that given the right kind of coding style, software development can be a form of Curry-Howard correspondence, where the coder’s understanding of the problem domain, reflected in the code, provides a rigorous, testable definition of that domain.

      This means that programming can be a form of applied philosophy. Begin with people bullshitting, transition to semi-formal colloquial phrasing, then eventually complete the transition to a testable and reproducible form of domain exploration, i.e., science.

      Because of a lot of things, but especially the social-cognitive-performative nature of language, we’re never going to have a fixed, mechanical system for creating new sciences. If that were ever to happen, new forms of scientific discovery would be effectively closed off to humans. Such a Tron-like world doesn’t track with me.
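      A hedged toy of what that Curry-Howard-flavored coding style can look like (the `NonEmptyList` type and its names are my invention, not from the comment): the constructor is a checked claim about the domain, so any value that exists is evidence the claim held.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NonEmptyList:
    head: int
    tail: tuple

    @staticmethod
    def from_list(xs: list) -> "NonEmptyList":
        # The constructor enforces the claim "this list is non-empty";
        # constructing a value is, informally, proving the claim.
        if not xs:
            raise ValueError("claim 'non-empty' failed for this input")
        return NonEmptyList(xs[0], tuple(xs[1:]))

def first(xs: NonEmptyList) -> int:
    # No empty-check needed here: the type already carries the evidence.
    return xs.head
```

      In this small, informal sense the coder’s understanding of the domain, reflected in the code, becomes a rigorous, testable definition of it.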

      tl;dr - round hole, meet square peg. Qualities of hammers are not important in this discussion.

    4. 5

      This is a nice explainer. Thanks.

      Yep, I think a lot of folks were punked by Turing. He didn’t know, so it wasn’t deliberate.

      Many years ago, my grandmother, who was in her 90s, had a series of mini-strokes. After a while she was awake and alert, and she went home. It’s just that everything wasn’t quite “right” with her. She didn’t talk a lot.

      I was close to my grandmother, so I visited several times. During one of those visits, I noticed something interesting: I could start a conversation with a common phrase we had used during our life together, say “And how are you doing this fine morning?”

      She would respond! In fact, it was possible to get into a rhythm of conversation that sounded completely normal to an outside observer. I think on some level she got what was going on, but at another level she was just on autopilot.

      Passing the Turing test is indeed a major milestone if it’s happened, but it’s not a milestone in AI. It’s a milestone showing all of humanity how little intelligence is required in almost all of our activities, and how we love to assign intelligence where it doesn’t exist. I have no idea what we’re going to do with that knowledge, but it’s a new thing for us. It’ll be interesting.

      1. 2

        I just wanted to say that I am sorry about your grandmother. My father still can’t speak at all. Funnily enough, he still assumes we somehow understand him even without him talking and gets angry that we don’t.

    5. 16

      This is close to enlightenment. To get the rest of the way there, first note that names do not matter for pragmatists, aside from descriptive explanatory power; a class by any other name will still have the same behaviors. Then, consider a single fragment of code, and all of the possible effects that it can have.

      We might imagine that a fragment’s effects are dependent upon its inputs. If the fragment is not self-contained (it has holes or needs a context), then it also depends upon the context used to complete it. This leads us to the standard logic of preconditions and postconditions, which leads to native type theory. Let’s state that plainly: The type of a fragment, in a particular type context, determines the possible postconditions which can be reached from various starting preconditions.

      The punchline is that paradigms don’t matter for pragmatism. All that matters is what the code actually can do when executed. Restate Peirce: Consider what outputs we conceive the code to have. Then the whole of our conception of those outputs is the whole of our conception of the code.
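      A minimal sketch of the pragmatist reading above, with hypothetical names of my own: rebind the fragment to another name and nothing pragmatic changes, because its meaning is exactly the precondition-to-postcondition relation.

```python
def sort_asc(xs: list) -> list:
    # The fragment's meaning: given any list (precondition), produce an
    # ordered permutation of it (postcondition).
    return sorted(xs)

ascending = sort_asc  # "a class by any other name..."

def postcondition_holds(xs: list, ys: list) -> bool:
    # Postcondition check: output is ordered and is a permutation of the input.
    return ys == sorted(ys) and sorted(xs) == sorted(ys)
```

      Whether you call it `sort_asc` or `ascending`, the whole of our conception of its outputs, that is, the postconditions reachable from each input, is the whole of our conception of the code.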

      1. 7

        This is a great comment. It provides clarity where I thought I already had it but obviously didn’t.

        Much appreciated. I love hearing the reinterpretation of my essays through the minds of informed people. I believe you are providing more of a technical foundation than I would be happy with in an essay, but I need to think through that a bit. Way cool.

    6. 2

      I disagree.

      There are differing programming paradigms, each loosely based on various philosophies. Each paradigm can be characterized by its view on state.

      So yep, it kinda sorta reduces to state, but that’s not what they’re about. It’s a side effect. That’s important to know when you’re comparing notes.

      Philosophy is directed conversation and questioning in the search for a science, a place where experiments can be performed. Most programming paradigms are tools that can be used at the end of the philosophical process. Different tools emphasize different kinds of solutions.

      The difference is important because this is all a loose grouping. Pick one, like functional programming or microservices. Whichever one you pick is going to have problems because they’re not well-thought-out philosophies. This essay is like “hammers create wooden houses.” It’s true the two are associated, but there are lots of edge cases, and you’re trying to link the tool to some dream end state. That doesn’t tend to be useful beyond noticing it.

      Programming is really about the application of type theory in the context of applied philosophy. This means that things like OO and FP (in the generic) are really about “how to think about solving problems”, not the problems themselves or the deployment units used. Almost all the programming things we deal with are there to help us think better. They are analysis tools, just packaged inside compilers and deployment units in various formats.
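      To make the “view on state” framing concrete, here’s a hedged toy of my own (not from the essay): the same counter thought about the OO way, with state hidden behind methods, and the FP way, with state as an explicit value threaded through pure functions.

```python
class Counter:
    # OO stance: state lives inside the object and is changed in place.
    def __init__(self) -> None:
        self._n = 0

    def bump(self) -> None:
        self._n += 1

    def value(self) -> int:
        return self._n

def bump(n: int) -> int:
    # FP stance: state is a plain value; "change" means a new value.
    return n + 1
```

      Both get you to 2 after two bumps; what differs is how each asks you to think about the state in between, which is the “analysis tool” part.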

      Apologies if that description wasn’t good enough. It’s a great topic. Congrats to the author for taking a run at it!

    7. 3

      I’ve long thought that modern technology consists of three areas: 1. Do something for me (CPU cycles), 2. Store something for me (memory), and 3. Talk to somebody else (communications).

      These three areas should be part of the metal and run on isolated systems. At that point we can talk about vendors, approaches, and strategies for each system. Until then, though, we seem to have stuck all of our eggs into one basket. To me it’s much too difficult to reason about them all at once, as this author does. It quickly runs to extremes like nihilism. There’s a structural reason why that happens (and doesn’t have to).

    8. 3

      I don’t see siloing as a problem with real-world programming specifically. Rather, it’s a problem with real-world organizations in general, and programmers are no more immune to it than anyone else. Go to, say, the customer support organization at any big company, even one that develops no software at all, and you’ll find plenty of people who have no clue that the sales department is about to do something that will have a big impact on support.

      I think this matters because it suggests the problem can’t be fully solved by, say, switching to a different development methodology.

      1. 1

        I agree.

        At the risk of being over-the-top, I think that by defining software development as different than business operations you have created the exact silo referred to in the essay.

    9. 1

      We don’t have to have named variables.

      Every business is its own silo. Part of your complaint is inherently rooted in how corporations behave; if they were upstanding members of a Free Software commons, then code reuse would be much easier.

      1. 1

        Every business is not its own silo. Silos are specialties not related to businesses. In fact, if every business was a silo, there would not be a problem.

        Also, it is awesome that you see that variable names are not needed. The issue becomes how to solve real-world business problems in such a way that this can be accomplished.

        1. 3

          Tacit programming is part of the real world. OpenFirmware, jq, UNIX shells, railroad diagrams, and more; there are so many examples. It sounds like you’re describing a social issue: how do we lead professional programmers out of the Turing tarpits?

    10. 2

      This problem is unavoidable, no? In fact, we have the tools to avoid it, and yet we still run into it. The tools to avoid it are formal logics (programming languages, for example, are formal logics). The reason it’s still a problem even with formal logics is that, in order to be fully formal, they need to be described in painful detail. Humans have a hard time coping with that level of detail, so we have trouble interpreting statements in the logic unambiguously without tracing through all of the rules manually.

      Until we can connect our brains together with wires and exchange ideas directly, we have to have a language for describing concepts. What’s the way to avoid that?

      1. 1

        The way to avoid that is to limit the use of formal logic to very narrow contexts in order to produce composable applications. It’s that scoping of provable code to narrow contexts that we don’t make happen.

        Agreed that it’s very painful to do an entire application like that. That’s why we shouldn’t do it that way, not abandon the concept altogether because of our scoping problems.
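        One hedged way to picture that scoping (my own toy, with invented names, not the parent’s design): keep the formally strict part in a tiny core with an explicit contract, and let the surrounding application code stay informal and just compose validated pieces.

```python
def checked_divide(num: float, den: float) -> float:
    # Narrow context: the precondition is explicit and enforced here,
    # so no caller ever has to reason about division by zero again.
    assert den != 0, "precondition violated: den must be nonzero"
    return num / den

def average(xs: list) -> float:
    # Informal shell: handles the messy empty-input case itself, then
    # delegates to the rigorously scoped core.
    if not xs:
        return 0.0
    return checked_divide(sum(xs), len(xs))
```

        The painful formality is confined to `checked_divide`; everything composed on top of it stays ordinary code.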

        1. 1

          So you’re saying that formal logic is the root of the issue? Or that formal logic is the solution, but it’s too difficult to apply everywhere?

          1. 1

            Neither. Formal logic is the answer, but only in limited cases with specific boundaries. The boundaries are not there because it’s too difficult. They’re there for intelligence-language-logic reasons that don’t involve just one of those areas.

      2. 1

        In fact, we have the tools to avoid it, and yet we still run into it

        The tools need to be accessible to the people who understand the domain.

        Until we can connect our brains together with wires and exchange ideas directly, we have to have a language for describing concepts. What’s the way to avoid that?

        IMO, we need better tools for using the languages that we are already constructing.

        That is - we already construct a shared language (when we talk to domain experts). We then attempt to translate our understanding of that language into code - but our understanding is frequently incomplete or wrong.

        There is some work on this kind of toolkit - eg the ActiveFacts project.

        From this example text, ActiveFacts can generate

        • A normalized OLTP SQL schema
        • A denormalized data warehouse schema
        • The ETL code to move OLTP data into the warehouse
        • Classes to store / validate in-memory representations of the data

        Currently, only the ruby in-memory representation is open-sourced, and there’s no automatic save/load between the in-memory representation and the database.

        1. 1

          I think it’s a mistake to think that non-programmers will ever write or interact with code. I think programmers should continue to firmly exist as interpreters of code. It’s way too specialized of a skill.

          But I do think that programmers should have a toolkit that can express concepts at the level of the domain. This is actually the basis of my main side project - a specification language for building model-based test suites.

          What you showed is fairly close to that (the specification language that is), though it looks like it’s only describing a static data model. Can it describe system behavior?

          1. 1

            I think it’s a mistake to think that non-programmers will ever write or interact with code.

            Write, yes. Interact? I haven’t given up that dream just yet. The language I linked to specifically aims at non-programmers being able to identify model errors.

            only describing a static data model. Can it describe system behavior?

            IIRC, it can describe ‘relaxed’ constraints (eg something like Doctor prescribes Medication to Patient where Patient is not Doctor, otherwise report to medical board), but not any behavior associated with what “report to medical board” entails.

            Some early work has been done on integrating it with Alloy to model state transitions, and it looks likely to work but I don’t think it’s a current focus.

      1. 1

        nice. Nord Stage?

        1. 1

          Yup. Makes for a nice break from coding when I get particularly stuck on something.

          I had a two-level keyboard setup with an electric piano and a separate computer for sequencing, but it turned out to be too much overhead for what I use music for. The keyboard and an analog mixer are just fine.

          1. 1

            Ahhh nice, yup that should indeed be sufficient! I’ve got a two-level keyboard stand as well, but yeah, currently just laying folded up against the wall. Hmmm, a synth/keyboard next to the workstation seems like an excellent idea… :D

    11. 8

      Got a side gig with Santa, solving all kinds of whack-o-doodle programming problems for his elves (Advent of Code).

    12. 2

      I run a mom-and-pop (small) Discord. We have an AoC channel with a couple of dozen programmers participating, using various languages.

      You guys are welcome to come and play along. I’m hoping that by teaming up I can stay motivated to do more of them this year.

      https://discord.gg/fR8dG5XePM

    13. 12

      On the positive side, this path (microservices) leads to small languages and DSLs. They’re a way out of the larger enterprise architecture software complexity trap.

      That’s such a load of horse shit.

      Microservices introduce lots of additional complexity that wouldn’t be there in a monolith. It’s not all “just ops” either as the post seems to suggest. You still need to ensure that the various microservices agree on the domain model as a whole, which takes a lot more effort to oversee. Also, rarely do systems decompose into neat little delineated components. Very often new customer requirements crop up that require you to wire two previously distant parts of the code together in new ways. This means you’re introducing coupling between new components.

      More coupling means more intricate release management and deployment of the components (can only deploy version A of system X when system Y has at least version B), which is what you’re “not supposed to do” in microservices. But the customer doesn’t care that you started out with two nicely unrelated systems - that’s an engineering concern, and the decision to do that was on you. They just want the cross-cutting feature, now.

      1. 1

        Yeah no. You are confusing the way microservices are typically done with the actual intent.

        I’m not going to argue with your gentle and kind ways. Here’s an essay. It’s probably horseshit too, but at least it’ll be different horseshit.

        https://danielbmarkham.com/honest-microservices/

        1. 7

          Yeah, sorry for the harsh tone, I’m not having the best week (and it’s only Wednesday).

          The essay you’re linking to doesn’t meaningfully address the two problems I’ve pointed out, where things fall down in practice:

          • Customers will demand new cross-cutting features that don’t align well with the way the system has been carved into services that “do one thing well”.
          • You need to maintain an overview of how everything hangs together in order to ensure the various microservices are in alignment with each other about the domain model. Essentially, your domain model is more “emergent” and not written down in its entirety somewhere in code. Each service may have a partial/different view of the domain entities.

          And of course there’s always the facile observation that if things are “typically done” in a certain way, that’s probably because there’s an inherent difficulty in doing it correctly, by the book (if that is even achievable at all in practice). Think of all the criticisms of so-called “RESTful” APIs, and how few there are in the wild that even come close to the ideal. If it’s too hard to do even in a team of really smart people, is there any hope the more run-of-the-mill teams can pull it off?

          1. 2

            No problem. I took it as just being part of the job of writing for programmers. Thank you for the feedback. Hope your week gets better!

            I agree with your critiques, insofar as these essays poorly address them. I’d love to dive into both of your points in depth, and I’m happy to talk about them in person if you’d like, but I’ll throw out a couple of blurbs for what they’re worth.

            • Yes, customers will demand new cross-cutting features, so features shouldn’t be cross-cutting. That is, information available to the code should not be coupled with what the code does.
            • You need to maintain an overview of how everything hangs together. Also, yeah, no. You need data flow between truly decoupled microservices for the MOC system to work. You don’t need a domain model, and in fact a domain model can be counter-productive. Note: at this level. You may very well want a domain model at a higher level, if only to create a ubiquitous language to talk about the Onion at the enterprise or division level, the one above your microservices. Other ways of managing complexity don’t work like this! It’s the opposite of the way we’ve done things.

            Is there any hope that run-of-the-mill teams can pull it off? Frankly I don’t think there’s any chance for any of these strategies. There’s real work involved with the continuous matching of human understanding of a problem to code. You can’t get rid of that work with a bunch of documents, a coding bootcamp, or a set of really smart people. That’s not saying it’s impossible, it’s saying that we tend to focus on everything else but the work: we focus on the code, the language, the stack, and so forth. We’ll go on and on in depth about everything but that continuous matching process.

            Personally I think there’s a way to teach programming that scales from the function on out to the enterprise level, but now I’m getting into the hand-wavy bullshit arena, and I’m happy to own it. Happily this essay wasn’t about that!

            1. 1

              That’s not saying it’s impossible, it’s saying that we tend to focus on everything else but the work: we focus on the code, the language, the stack, and so forth. We’ll go on and on in depth about everything but that continuous matching process [of matching human understanding of a problem to code].

              I think you’ve hit the nail on the head here; that’s the real crux.

              I think it’s worthwhile focusing efforts on making and keeping code easy to maintain under less-than-optimal conditions. People are almost constantly coding under time pressure. I think focusing on languages and stack and so on is mostly counterproductive, but there are a few general approaches that do help:

              • Having a good set of tests and/or a decent set of types helps protect you against making mistakes you made before and some new ones you didn’t anticipate (which is why TDD and static typing get hyped so hard).
              • Focusing on localizing effects and state helps you keep an overview of what’s going on and the impact of the changes you make. This includes visibility/encapsulation, e.g. via module systems and a functional mindset (which is why some people hail FP languages as “the future”). I think microservices and the UNIX approach are another way of doing this.
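              The localize-effects bullet can be sketched like this (a toy of my own, with invented names): keep the core pure and push any accumulation of state into one small, local scope.

```python
def apply_discount(price: float, pct: float) -> float:
    # Pure core: no hidden state, trivially testable in isolation.
    return round(price * (1 - pct / 100), 2)

def checkout(prices: list, pct: float) -> float:
    # The only mutable state is this local accumulator; nothing outside
    # the function can observe or corrupt it mid-computation.
    total = 0.0
    for p in prices:
        total += apply_discount(p, pct)
    return round(total, 2)
```

              When you change `apply_discount` under time pressure, the blast radius is visible from the function signature alone.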

              I also think it’s good to be able to selectively and occasionally “break the rules” and throw in some code without tests or adding some nasty global variables that you mutate in a few places under a time crunch. As long as that’s viewed as technical debt and cleaned out later during a maintenance cycle, that should be okay.

              But note that this is also made impossible by the extremely principled approaches (like purely functional languages, extremely rigid static typing and, indeed, also microservices). Then you might “work around” yourself into bigger problems than the ones you’d be in if you had cut corners a little in a more forgiving system. Things either bend or break under pressure.

    14. 3

      I’m here to talk about programming. Full stop.

      The hell if I’m going to play that “But this isn’t programming” game. I’ve been down that road too many times. It always ends up being about programming anyway. So I try to assume positive intent.

      I think it’s fair to say that coders who post such things might need to reword them. For instance, I commented on “Are you still active in any social platforms” to mean “How are you changing the way you leverage social platforms to be a better coder/communicator?” because social platforms let us talk, enable our programming skills, and so on. It’s very much a programming practices question, perhaps the most important programming practices question many coders will face.

      Yeah, I get it that current events may cause such questions. That makes them tricky and I don’t like that. It’s too easy to drift into stupid human tricks. I usually try to make sure the questioner understands that yes, I got the current events angle, but no, that’s not the important thing. There’s actually something very important here that has nothing to do at all with politics, current events, or whatnot. It’s just that perhaps those things brought the topic up for them.

      So yup, I understand folks having a fun little flagging party. I’m also just here for programming. But I’m not playing that short-sighted, let’s-assume-the-poster-is-just-trolling-us game. Be nice. I’d like lobste.rs to be a place to talk about programming: real people struggling to do a better job creating solutions for other people. That means a messy people element is always going to be part of the conversation.

    15. 4

      I think social media, at least to me, is best viewed as “stopping off at the pub after work”

      Some folks can do that every day. Some folks end up with a drinking problem. I think it varies a lot, both by the individual and the platform.

      I gave up FB. I stay on FB messenger but only on one machine. I use Twitter more than I should, but not a lot. I mostly don’t do anything else except Discord, where I’m experimenting with the platform as a tool to enable ad-hoc working teams, coding and working on technical problems with a minimal amount of structure. So far I’ve had mixed results.

      For what it’s worth, in general I’ve found that the more dispassionate I am about picking my broadcast platform (and changing them), the better the overall quality of my decision looks years later. I’ve never had any luck with becoming upset and engaging in any sort of written communication at all, much less social media. It’s best for me to step away. YMMV.

      • I like telling jokes and making people smile. I find that this is the thing that keeps me coming back to most of these platforms. I don’t need an audience, but if I can make one person feel better then I feel like I’m helping folks.
      1. 4

        What does Discord offer that Matrix, Mattermost, or Zulip doesn’t?

        1. 1

          I’ve been experimenting with online publishing and various forms of audience interaction since, well, there was a web. I tend to avoid things like featureset comparisons, or promises of instant fame or fortune.

          When I finally realized I had to leave my FB friends, it was almost like losing my family! Very strange to have that sense of loss.

          I decided that I wanted to go somewhere that already had a big crowd, was used by people who wouldn’t put up with a lot of marketing, and allowed fairly seamless integration either in meeting/coding mode or post-respond mode. Discord offered me that.

          For my server I made a minimum number of channels and I manage content there. I think this allows folks to drop by and consume the kinds of things they want while leaving me open to experiment. Now that Discord allows folks to subscribe to channels on various servers, it opens up the opportunity to do something like have a set of channels on crypto, for instance, all managed by different folks. This would be the closest I’ve seen to what magazines used to be. We’ve combined the author and editor role.

          But the big thing was: turn it on, it works. Any questions, somebody is around to help out.

    16. 1

      I’ve been playing with DALL-E for a couple of days. I’ve been having a blast.

      I can’t draw. I’m a writer, and I have some need for crude art from time to time, but finding useful, free stuff to illustrate a joke or something is a long, hard slog online.

      Now I can think of something creative, have DALL-E draw it, and move on. Like so: https://twitter.com/danielbmarkham/status/1541455118725521409

      I don’t do commercial content creation, so it works fine for me. More importantly, it allows me to quickly create something, anything, that generally conveys an idea, and continue the content creation process. If I were doing something commercial, it would be enough of a start to flesh out later.

      I’m looking forward to having fun incorporating DALL-E into various creative workflows and seeing what happens.

    17. 3

      Neat idea! I like it!

    18. 20

      There are two schools of thought in technology development.

      School one is that tech development is full of little robots working at little stations. Your job is to optimize the robots, work, queues, and pathways in order to optimize flow. In this school, tech development is a big factory. Ideas come in on one side; finished tech goes out the other. You can see this school in product development in “The Principles of Product Development Flow”. For DevOps the keystone book is probably “The Goal”. DevOps even goes so far as to literally call the thing a “pipeline”. I guess we should be lucky nobody used the phrase “conveyor belt”.

      School two is that tech development is all about scoping, identifying, and understanding a problem well enough that it “goes away”, i.e., computers do everything and people are free. In this school, you want to kill the robots, the flow, the workstations, and the rest of it. The more complicated it is, the more complicated it is. And it’ll only get worse. Tech development is about combining creativity, logic, and problem-solving. It’s nothing like building Toyotas at all. We’re destroying the factory, not trying to make it run better.

      Traditional managers, directors, and C-level folks are all taught school one. After all, they couldn’t understand or do school two even if they wanted to. So whatever BigCorp you’re in, you’re most likely going to be stuck in some kind of Toyota Production System knockoff. Sometimes it doesn’t hurt too much, but it’s always suboptimal. Building cars ain’t creating new tech solutions. It never will be.

      1. 4

        And if you’re a cofounder of a bootstrapped product company, it seems obvious to me that you should implement school two all the way. Any recommended resources for learning to do this?

      2. 2

        Anecdotes follow, but I’d love to hear data both for and against!

        They forget the art of engineering. The science is supposed to be repeatable, but the art is knowing what will be easy to enhance and maintain, while also being satisfying to work on so that you don’t encounter negative emotion turnover.

        You also have to factor in “wanderlust” that occurs from doing the same basic thing for a while. Be ready for your team members to just get bored with what they’re doing, and encourage them to branch out and move!

        Generally speaking, the teams I led this way would be more relaxed, didn’t have to work nights and weekends, and were CONSISTENTLY better and faster than the “people are robots driven by arbitrary quotas and dates” crews.

        Any time someone started throwing out constant arbitrary fixed goals, we did better when I REFUSED to dump that pressure on the team. If they’re possible to hit, they’ll get hit whether we focus on the random date or not. Most of the time arbitrary quotas/dates make people rush to the quota and stop when they hit it, or slack until the date looms and rush to hit it. In both cases quality or cost suffers.

    19. 3

      I found this too vague and theoretical to be of use. I’m left without any idea of how to solve a specific problem with this architecture, so the claims at the end seem hyperbolic.

      Some examples might help. Not necessarily actual code, but something less hand-wavy. Otherwise you’re still in the domain of the Monty Python “How To Do It” sketch.

      1. 1

        Agreed. Still coming at it from the top down, the next step is to take Cynefin and map it to some sample Onion code. Also take the CAS attributes and map them to code smells.

        Not impossible by any means. It’d be fun. I just felt that I had to stop somewhere, and 4,000 words is enough for any one essay.