1. 3

    Perhaps it is worth pointing out that these “native” executables essentially wrap pre-jitted code into a convenient single file with fast startup time. These two features (single executable and fast startup) can be attractive, but the price paid in most cases is runtime performance. In the majority of cases, Dart code jitted against the VM will be more performant, thanks to local type information and runtime optimization, which are (to a degree) unavailable in a pre-jitted version.

    Contrast the needs of a smallish command-line utility with those of a long-running, complex server application.

    It should be noted that the AOT-compiled code always runs within the context of a minimal Dart VM. In that sense it is, for many, something different from “a traditional native executable”.

    1. 7

      “Software developers are domain experts. We know what we’re doing. We have rich internal narratives, and nuanced mental models of what it is we’re about, …”

      For a large proportion of software undertakings this is surely not true. Much of software development lies outside the domains of the computing and data sciences and computing infrastructure. While it is popular to consider these the only endeavors of importance to today’s developers, the modeling of systems in other domains into code represents the majority of software running in the world today.

      In these domains, we don’t know how to model, reliably and predictably, the systems that stakeholders (think they) want, and that external domain experts know. How can one consider applying scientific rigor to that?

      1. 4

        Great point. You are certainly correct that software developers are not always experts in the domains of their products. They are still experts in the domain of their tools and practices though, so they should be considered “domain experts” from the perspective of researchers.

        1. 2

          They are still experts in the domain of their tools and practices though

          I like this degree of optimism, and hope one day to overcome my experience enough to share it!

      1. 1

        I suggest that an abstraction is a simplified representation of concepts real or imagined.

        In the context of computer programming, is it then the simplified representation of concepts real, imagined, or already implemented in computer code?

        1. 1

          “The biggest challenge is how to structure code so that its complexity stays manageable.”

          I disagree that this is the biggest challenge. Instead I contend that the biggest challenge is how to abstract or model a large, complex problem or system into working code.

          Maintainability is just one measure of quality of that effort.

          1. 1

            To some degree your “requirements” choices are limited by your software architecture. You mention it is a “CRUD” application. This likely means that your domain logic and constraints live in a Service Layer of procedural code wrapped in transactions. If so, then Use Cases or Stories are likely your best bet, despite their ineffectiveness, which is largely due to their inevitably requiring further and further decomposition. They are particularly well suited to this “transaction script” approach, even though it so often results in a “big ball of mud” as complexity and size increase.

            On the other hand, if you represent your domain logic and constraints in some different way …

            1. 1

              The author culminates with a claim about which is “better”. In programming languages there is no “better”, and to claim such is naive. There is only better “suited” to the particular problem or system to be represented or modeled.

              1. 3

                I’m looking into Flutter’s internals: specifically, how the widget tree gets initially rendered and how only the changed parts get updated, i.e. the widget/element/render trees, plus keys (local/global, etc.), and the relationship between Builder, StatefulBuilder, StatefulWidget and StatelessWidget.

                Anyone want to compare notes?
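
                In case it helps anyone else digging in, here is a tiny sketch of the element-reuse rule as I currently understand it (Python purely for illustration, since Flutter itself is Dart; apart from the real Widget.canUpdate, every name here is mine): an element and its state are kept in place only if the new widget has the same runtime type and the same key.

                    # Toy model of Flutter's element-reuse rule (cf. Widget.canUpdate):
                    # keep an element (and its state) only when the new widget has the
                    # same runtime type and the same key; otherwise rebuild the subtree.
                    from dataclasses import dataclass
                    from typing import Optional

                    @dataclass(frozen=True)
                    class Widget:
                        key: Optional[str] = None   # None means "match by position only"

                    class Text(Widget): pass
                    class Checkbox(Widget): pass

                    def can_update(old: Widget, new: Widget) -> bool:
                        return type(old) is type(new) and old.key == new.key

                    print(can_update(Text(), Checkbox()))   # False: different runtime type

                    # Swapping two keyed siblings: positional matching fails in both
                    # slots, which is exactly the case where keys let the framework
                    # match children by key so each child's state moves with it.
                    old_children = [Checkbox(key="a"), Checkbox(key="b")]
                    new_children = [Checkbox(key="b"), Checkbox(key="a")]
                    print([can_update(o, n) for o, n in zip(old_children, new_children)])
                    # -> [False, False]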

                1. 2

                  I don’t have any notes to compare, but I’m just getting started with Flutter myself and I would love to read what you find if you have a blog or something.

                  1. 2

                    No blog on this type of stuff, sorry. What do people use to collaborate on a topic? Dare I say I miss something like Google Wave, i.e. a combination of collaborative wiki and question/answer forum?

                    1. 2

                      I miss Google Wave too…

                  2. 2

                    I don’t have any notes, but I remember the Xi editor guy recommending this talk about Flutter’s internals in his talk about Rust GUI: https://www.youtube.com/watch?v=UUfXWzp0-DU

                  1. 10

                    Given the title, I was hoping for a little more in the way of lessons learned, and some reflection on benefits/costs versus alternatives after one year, rather than largely a description of what ES/CQRS is.

                    1. 6

                      Same here. The few folks I’ve talked to first-hand who tried ES/CQRS systems (a very small sample size, for sure!) ended up running screaming for the hills after a while (and/or doing a rewrite). Maybe they did it wrong, or maybe doing it right is hard? Unsure.

                      So, I’d sure be interested in hearing more anecdotes/stories/tales about how ES/CQRS went right, or didn’t.

                      1. 11

                        The few folks I’ve talked to first-hand who tried ES/CQRS systems (a very small sample size, for sure!) ended up running screaming for the hills after a while (and/or doing a rewrite). Maybe they did it wrong, or maybe doing it right is hard?

                        ES is a quagmire like ORM (and really, OOP in general): never-ending domain-modeling churn, with the hope that a useful system is “just around the corner”.

                        This stuff is catnip for…

                        The context of the project was related to the Air Traffic Management (ATM) domain

                        …government/defense contractors. The drones are obsessed with building and rebuilding the One True Hierarchy of interconnected objects. But taxonomy is the lowest form of work.

                        According to Martin Fowler, Event Sourcing:

                        Ensures that all changes to application state are stored as a sequence of events

                        Indeed, that sounds awesome. And in order to do that you need actual computer science (e.g. Datomic), not endless domain-modeling.

                        Domain Driven Design (DDD) is an approach to tackle …

                        Like clockwork :)

                        1. 2

                          Some great links in there, and some good reading. Thanks!

                        2. 10

                          I have been working on an ES/CQRS system for about 4 years and enjoy it, but it’s a smaller system than the one the article describes. It’s a payment-processing service.

                          Because it’s a much smaller service, I haven’t really gotten into a lot of the DDD side of things. The system has exactly one bounded context which eliminates a lot of the complexities and pain points.

                          There was definitely a learning curve; this was my first exposure to that architecture. I made some decisions early on in my ignorance that ended up being somewhat costly to change later. However, I’ve been in this game a pretty long time, and I could say the exact same thing about plenty of non-ES/CQRS systems too. I’m not sure this architecture makes it any more or less painful to revisit early decisions.

                          What have the costs been?

                          • The message-based model is kind of viral. If you have a component that you could almost implement as a straight system service, just regular code called in the regular way, but there’s one case where an event would interact with what it does (example: a customer cancels a payment that hasn’t completed yet), the path of least resistance is to make the whole component message-driven. This sometimes ends up making the code more complicated.
                          • Ramping up new engineers takes longer, because many of them also have never seen this kind of system before.
                          • In some ways, debugging gets harder because you can no longer look at a simple stack trace to see what chain of logic led you to a certain point.
                          • We’ve spent an appreciable amount of time mucking around with the ES/CQRS framework to make it suit our needs. Probably still less time than we would have spent to implement the equivalent feature set from scratch, but I’ve had to explain why I’m spending time hacking on the framework rather than working on the business logic.
                          • If you make a significant change to the semantics of the system, you may need to deal with old events that no longer have any useful meaning. In many cases you can just ignore them, but sometimes you have to figure out how to translate them to whatever new conceptual model you’re moving to (sketched below).
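
                          A minimal sketch of that translation step (sometimes called event “upcasting”), with invented event names, not our actual code:

                              # Translate stored legacy events into the current schema at read
                              # time, so replay logic only ever sees the current model.
                              from dataclasses import dataclass

                              @dataclass(frozen=True)
                              class PaymentCancelledV1:      # legacy shape: no reason recorded
                                  payment_id: str

                              @dataclass(frozen=True)
                              class PaymentCancelledV2:      # current shape: reason is required
                                  payment_id: str
                                  reason: str

                              def upcast(event):
                                  """Map legacy events onto the current model; pass the rest through."""
                                  if isinstance(event, PaymentCancelledV1):
                                      # An explicit business decision: historical cancellations get
                                      # a sentinel reason rather than being silently dropped.
                                      return PaymentCancelledV2(event.payment_id, reason="unknown-legacy")
                                  return event

                              def replay(events, apply_event):
                                  # apply_event is whatever aggregate/projection hook consumes events.
                                  for event in map(upcast, events):
                                      apply_event(event)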

                          What have the benefits been?

                          • The fact that the inputs and outputs are constrained makes it phenomenally easier to write meaningful, non-brittle black-box unit tests. Like, night-and-day difference. Tests nearly all become, “Given this initial set of events, when this command/event happens, expect these commands/events” (see the sketch after this list).
                          • Having the ability to replay the event log makes it easy to construct new views for efficient querying. On multiple occasions we’ve added a new database table for reporting or analysis that would have been difficult or flat-out impossible to construct using the data in existing tables. With ES, the code to keep the view model up to date is the same as the code to backfill it with existing data. For a couple of our engineers, this was the specific thing that lit the light bulb for them: “Wait, you mean I’m done already? I don’t have to write a nasty migration?”
                          • In some ways, debugging gets easier because you have an audit trail of events and you can often suck the relevant events into a development environment and replay them without having to sift through system logs trying to manually reconstruct what must have happened.
                          • The “dealing with old events” thing I listed under costs is also a benefit in some ways because it forces you to address the business-level, product-design question up front: how should we represent this aspect of history in our new way of thinking about the world? That is extra work compared to just sweeping it under the rug, but it means you’re never in a situation where you have to scramble when some customer or regulator asks for history that spans a change in product design.
                          • Almost nothing had to change in the application code when we went from a single-node-with-hot-standby configuration to a multiple-active-nodes configuration. It was already an asynchronous message-passing architecture, but now the messages sometimes get delivered remotely.
                          • And finally the main reason we went with ES/CQRS in the first place: The audit trail is the source of truth, not a tacked-on extra thing that might be wrong or incomplete. For a financial application that has to answer to regulators, this is a significant win, and we have had meaningful benefit from being able to prove that there’s no way for information to show up in a customer-facing report without an audit trail to back it up.
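
                          To make that test shape concrete, here is a self-contained sketch; all the domain names are invented for illustration, not our actual code:

                              # "Given these events, when this command, expect these events."
                              from dataclasses import dataclass, field

                              @dataclass(frozen=True)
                              class PaymentRequested:
                                  payment_id: str

                              @dataclass(frozen=True)
                              class PaymentCancelled:
                                  payment_id: str

                              @dataclass(frozen=True)
                              class CancelPayment:
                                  payment_id: str

                              @dataclass
                              class PaymentAggregate:
                                  pending: set = field(default_factory=set)

                                  def apply(self, event):      # fold one event into state
                                      if isinstance(event, PaymentRequested):
                                          self.pending.add(event.payment_id)
                                      elif isinstance(event, PaymentCancelled):
                                          self.pending.discard(event.payment_id)

                                  def handle(self, command):   # decide: command -> new events
                                      if isinstance(command, CancelPayment) and command.payment_id in self.pending:
                                          return [PaymentCancelled(command.payment_id)]
                                      return []                 # not pending: nothing to do

                              def test_cancelling_a_pending_payment():
                                  agg = PaymentAggregate()
                                  for event in [PaymentRequested("p1")]:       # given
                                      agg.apply(event)
                                  produced = agg.handle(CancelPayment("p1"))   # when
                                  assert produced == [PaymentCancelled("p1")]  # then

                              test_cancelling_a_pending_payment()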

                          The main conclusion I’ve reached after working on the system is that ES/CQRS is a tool like any other. It isn’t a panacea and like any engineering decision, it involves tradeoffs. But I’m happy to have it in my toolbox to pull out in cases where its tradeoffs are the right ones for a project.

                          1. 1

                            Thanks for the comprehensive answer! <3

                          2. 8

                            As with all Design Patterns, ES/CQRS is a means masquerading as an end, and good design will be found to have naturally cherry-picked parts of it without needing to name it as such.

                            Anecdotally, I’m dealing with a system that is ⅔ ES/CQRS and ⅓ bandaging together expanding requirements, new relationships between entities, increasing latency due to scale – basically everything that wasn’t accounted for at the start. It works, but I wouldn’t choose it over a regular database and a transaction log.

                            1. 6

                              anecdotes/stories/tales

                              As with outcomes, sadly so elusive.

                              It occurs to me that our industry would be well served by a “Glassdoor” for IT projects: one where those involved could anonymously report on progress, issues and lessons learned, and which could be correlated with supplier [1], architecture type, technologies, project size, etc.

                              [1] Supplier: either internal, or a specified outsourced supplier, e.g. Accenture, Wipro, IBM, etc.

                          1. 12

                            While I find the writing of Martin Fowler good for getting ideas on new patterns (or existing ones, better expressed), I strongly disagree with calling this page a software architecture guide. Software architecture is a lot more than these patterns and approaches.

                            I’ve been building a few large distributed systems, and been in the design loop of many more, over the past few years at Uber. The software design/architecture of these systems resembled anything but what the articles talk about.

                            We did not use patterns or find the best architecture matches. Instead, we looked at the business case, the problem and the constraints. We whiteboarded, argued and discussed approaches, trade-offs and short-, medium- and long-term impacts. We prototyped, took learnings, documented the whiteboard approach that made sense, then circulated the proposal. We debated some more, over comments and in person, then built an MVP, shipped it, and went on with the same approach to the next phase, all the way to migrating over to the new system.

                            Nowhere in this process did we talk about the kind of patterns this post describes. We talked trade-offs and clashed ideas all the time, though.

                            Talking with friends in the industry, the same applies to much of the large systems built by tech companies such as Uber, Amazon, Facebook and Google. If you read the design documents, they use little to no jargon, simple diagrams, little to no reference to patterns, and are full of back-and-forth comments.

                            Maybe I live and work in a different environment than Martin Fowler. But where his thoughts of architecture end is where my world just about begins.

                            Also, as my view on this topic is so different from this post’s, I’ll likely write up my thoughts/experience in longer form.

                            1. 6

                              I agree, what you describe is the way to go. You start with the business case, you think it through collaboratively, and you figure out the best plan you can for (a) getting it done, so that (b) you can maintain it with velocity from then on.

                              But I don’t think Fowler advocates anything against that.

                              I think, if you and some teammates were doing that, and Fowler was around, he’d be listening, understanding, and identifying patterns that he could document. Then he’d be remarkably good at explaining them to management and to the Product team. He also might overhear a conversation and say, “We struggled with that at xyz. Here’s what we did.” And he might even point to one of his own articles in that case.

                              I think what often goes wrong is that developers start with a blank whiteboard, panic, and then grab Fowler’s or some other author’s works and try to make them fit.

                              Rather, the process should be: start with a blank whiteboard, think hard, sketch out some ideas and identify challenges, then maybe see if any documented patterns or ideas might help.

                              1. 3

                                Do you ever retroactively try to identify what architectural patterns you ended up with, for instance to mention in documentation? In the end I would say patterns are mainly useful as a convenient shorthand to communicate parts of the design, more than a toolbox to try and apply.

                              2. 5

                                Maybe I live and work in a different environment than Martin Fowler.

                                This touches on something I’ve been thinking about for quite a while: that there are broad categories of software application, and that programmers typically have a perspective only on those they have actually experienced. I’m not talking here about your case, but more generally.

                                For many new to the industry, there is a perspective that software is limited to problems in the computer and data sciences, and those involving computing infrastructure. Often these undertakings focus largely on what may be considered non-functional requirements, where functional requirements (and how to represent them in code) are of relatively little importance. Issues such as how to scale and how to be reliable, available, accessible and so forth are paramount.

                                On the other hand, there is a large proportion of applications where the issue is how to model a complex problem domain into code, typically a problem domain whose expertise lies outside the development team. How to discover, manage and implement functional requirements in code is the core issue. It is this latter world that Martin Fowler primarily lives in and writes about.

                                Is such a categorization of any value?

                                1. 2

                                  Possibly, but the example you give sounds to me like it intermingles the data model / domain structure / domain architecture with the software architecture. In my view, the former is mostly one of many constraints on the latter, and one that can be used to ‘encode/compress’ part of the functional requirements. However, you can’t design a good software architecture without taking all functional requirements into account, while at the same time taking all other requirements into account: after all, there is only one software architecture.

                                  1. 1

                                    I think this is a good categorization.

                                1. 2

                                  Here are my thoughts. I’d be curious to know what others think of them.

                                  Perhaps the question of what Software Architecture is can best be considered through what a Software Architect does.

                                  That role (which may or may not be a formal one in a particular project) makes decisions about how requirements should be implemented in a software solution. In addition, a Software Architect may make decisions about how those requirements are discovered and managed.

                                  In this regard the role can be considered the technical complement, in a software project, of the Project Manager, whose decisions typically involve the management of scope, units of work over time, milestones, resources, cost, stakeholders and risk.

                                  It is common for these architecture decisions to be limited by constraints, which can typically be considered already-made decisions or requirements that limit the choices of the Software Architect. Examples of constraints include: functional requirements must be represented as Use Cases; the corporate Oracle database must be used for data persistence; the solution must be Sarbanes-Oxley compliant; etc.

                                  Decisions include those related to both functional and non-functional requirements (subject to constraints). There are a large number of ways functional requirements might be discovered, managed, represented and implemented. A good Software Architect knows the relative merits and deficiencies of the different alternatives when applied to differing requirements and circumstances. Likewise, one should know (or delegate to those who know) the relative merits and deficiencies of alternative decisions related to non-functional requirements, which may include maintainability, scalability, reliability, usability, security, compliance, etc.

                                  While the Software Architect role may not formally exist in a software project, consider that the role is always actually performed. How and when such decisions are made in a non-trivial undertaking is directly related to the project’s success.

                                  Given this, perhaps consider that a software architecture is the result of decisions made to meet the software’s requirements and constraints. These results can be considered the software’s architecture.

                                  It might be useful to consider this against another domain …

                                  Consider that a building or bridge architecture is the result of decisions made to meet the structure’s requirements and constraints. These results can be considered the structure’s architecture.

                                  1. 1

                                    My article intentionally focuses on the software architecture instead of the architect. There are so many things an architect should do that they hardly fit into a blog post. My employer has a 10-page document. An example would be that the architect should influence the organization because of Conway’s law.

                                    I would say that the primary role of an architect is to ensure that the software architecture is designed intentionally. The assumption is that an intentional design is better than an emergent architecture with respect to whatever metric the business desires.

                                  1. 3

                                    How refreshing to read of a real problem and journey to an (alternative) solution, rather than yet another general description or critique of a technology or technique. Many thanks to the author.

                                    It’s particularly fascinating for me because I work in a different world of complex line-of-business systems, which do not often involve problems like how best to implement (and persist) a very large graph where one has requirements for fast, concurrent access under consistency constraints. That said, I have been involved in a couple of systems that incorporated “engines” facing some similar issues: for example, a large estimation system that maintained a complete parts list for vehicles in order to calculate the cost of repairs (including the time required), another that formed part of the calculation engine for an actuarial application, and a complex aircraft engine management system.

                                    It would have been nice to have read of your experiences before working on the architecture of these!

                                    1. 2

                                      Thanks, Aryeh.

                                    1. 9

                                      One thing that struck me is that we, as a Fortune 50 company with hundreds of subsidiaries and a commensurate number of software systems and projects, really have a different problem from the one discussed here. Across hundreds of systems, the issue for us isn’t largely technology and the development/runtime stack, but rather how to better discover and model the functional requirements of a problem domain into working code.

                                      It seems to me that much developer discussion today, including the author’s talk, is about non-functional requirements and the technologies/techniques to meet them. Is this because most systems being worked on have trivial functional requirements, so the focus is almost entirely on aspects like reliability, scalability and maintainability?

                                      1. 5

                                        I fondly remember HyperCard as enabling anyone to create interactive content and functionality, in a free, simple, approachable, use-it-as-you-play way.

                                        It allowed non-programmers of all ages to be creators. We are arguably much more a world of online consumers today, serviced by inaccessible (to many) technologies, languages and deployment models. Is there no place for such creativity tools today?

                                        1. 2

                                          Nice article with a bunch of history I didn’t know! Thanks for writing this, as I’m also annoyed by the “Alan Kay and OOP” meme. I got a comment reply to that effect a few weeks ago, which I can’t find at the moment because Reddit is down.

                                          BTW, “The Design and Evolution of C++” by Stroustrup is a nice historical book that talks about the influence of Simula on C++. Given that C++ directly influenced Java and Python, and Java influenced C#, I’d say C++ has contributed a lot to OOP (despite what both C++ people and OOP people would say about that :) ).

                                          If I were to write an Alan Kay rant, I would write one called “The Web Shouldn’t Be a VM; It Should Have a VM (and it’s had at least three)”. At some point he said some fairly dumb things implying the former, disparaging the work of Tim Berners-Lee, so he sort of deserves to be disparaged for that… But it’s an old argument, and he already “lost”, so it’s probably not worth it.


                                          edit: in case anyone cares, the reply to this comment is where I got hit with the “Alan Kay OOP meme”:

                                          https://www.reddit.com/r/ProgrammingLanguages/comments/b1hzke/is_there_a_scientific_paper_that_claims_that_the/eimac71/

                                          1. 4

                                             Actually, Java’s basis in Simula is direct, rather than via C++.

                                            “Java’s object model came directly from Simula”. See http://bit.ly/2xc1XTA (James Gosling)

                                            1. 1

                                              Interesting, thanks for the reference!

                                              FWIW this 2009 post is the one I was thinking of that shows the influence of C++’s OO features on Python:

                                              http://python-history.blogspot.com/2009/02/adding-support-for-user-defined-classes.html

                                          1. 4

                                             A great article clarifying a very common (and unfair) error: attributing object-orientation to Alan Kay. The main legacy is a confused new generation of programmers all arguing that Erlang is the only object-oriented language, because it’s all about something called “messaging”.

                                             To quote him (recently) on his role in object-orientation’s invention:

                                            “Very much in the same spirit as I thought about it back in the 60s. I don’t think I invented “Object-oriented” but more or less “noticed” what was really powerful about just making everything from complete computers communicating with non-command messages. This was all chronicled in the HOPL II chapter I wrote “The Early History of Smalltalk”.

                                            A critical part of that thought process was the idea of using Carl Hewitt’s PLANNER ideas as the interface to objects – that was done around 1970, and we used some of it in the first Smalltalk (72).”

                                            Source - https://news.ycombinator.com/item?id=14337970

                                            1. 13

                                              Looks great and I’m so tempted to subscribe!

                                               However, I need fewer “new” things to ponder and distract me. My reading backlog likely already represents 4 months of full-time reading.

                                               I’m not sure that succumbing to the feeling of “I might miss out on something important” is really beneficial or healthy for me.

                                               I’m perhaps one of the few who want to learn fewer new things, but it looks like a fantastic resource for those who want more. Best wishes for its success.

                                              1. 13

                                                 Reading this made me realize I don’t see myself in any of the 3 groups. I’m probably closest to a “maker”; however, the author seems to tie that somewhat to UI and “front-end”.

                                                 I work on line-of-business systems that tend to be large and complex. In that work, I consider myself above all a modeler. I’m primarily about modeling systems, real and imagined, into a working program, but not through the lens of a UI (or a relational database model).

                                                Instead, I see functional requirements for a large, complex system being implemented in an object-oriented “domain model”.

                                                 Am I just a dinosaur, and a lonely group of one, these days?

                                                1. 40

                                                   Am I just a dinosaur, and a lonely group of one, these days?

                                                  I think the author is just wrong. People really like putting things into neat little boxes, but I would guess there are more than 3 types of programmers out there.

                                                  1. 15

                                                     I’m going to go one further and say these 3 “tribes” are just different ways of thinking that anyone can adopt at any time, depending on the situation. I could plausibly claim membership of a different tribe every hour of the working day.

                                                    1. 7

                                                      To me, that flexibility is one of the most rewarding things of the profession: I can inhabit the mindset that best fits the problem I’m currently faced with, and I find my best ideas often by flipping to another perspective.

                                                      1. 5

                                                        Indeed, in fact the thing that would most likely make me say someone isn’t a “real programmer” is not their membership in the “wrong tribe”, but rather their inability to switch between all three of these modes as appropriate.

                                                        1. 2

                                                           I think the difficulty is that the initial discovery of each tribe is met with the revelation: “ah, I’ve found the way to build software.” Then you sit with it for a few years and start to see the weak spots.

                                                      2. 13

                                                        Agreed. This is an arbitrary distinction and many programmers don’t tend to fall neatly into one category or the other. As a Scheme implementor, I believe “camps” 1 and 2 are not mutually exclusive, and in fact there’s a long history of programming language implementation research that tries to bridge the gap between beautiful mathematical constructs and how to make a machine execute it as fast as possible. In fact, this is the most interesting part of PL implementation!

                                                        1. 3

                                                          not to mention programming language and library design as the user interface you present to other developers.

                                                          1. 4

                                                            Absolutely, that’s more the “craft” or engineering aspect of programming, which fits better in the third “tribe” from the article. You need to be able to work on and balance all three aspects, really :)

                                                        2. 4

                                                          I agree with that. Furthermore, categorizing in general throws away information and that may or may not be suitable depending on the context. Perhaps a better approach would be to identify different traits of programmers and treat each of them as a continuous variable. Based on that it would be possible to categorize less arbitrarily (e.g. by finding clusters in the data). This would be a more data-driven approach and one would obviously need a representative dataset for that.

                                                        3. 11

                                                           No. As an experienced software person, you simply have to divide your time into multiple head-spaces.

                                                          Sometimes you can really appreciate something like this:

                                                          qs([H|T])->qs([X||X<-T,X<H])++[H]++qs([X||X<-T,X>=H]);qs([])->[].
                                                          

                                                           Other times, you might just do lists:sort(A++B), and yet other times you would say, no, you really can get away with lists:merge(A,lists:sort(B)).

                                                          That’s ok. It doesn’t make you alone, but it can be lonely: Most teams in my experience have just one of you, and even in a large company there are likely to be very few of you.

                                                           The article may be written by someone just discovering that these different head-spaces exist and jumping to the conclusion that they fit a taxonomy, rather than a Venn diagram of considerations that go into implementation and design. That’s unfortunate, but it doesn’t mean there isn’t some good stuff to think about.

                                                          1. 5

                                                            I think this post would make more sense if it focused less on UI and more on shipping things. This would also avoid the awkwardness of writing “Bret Victor” below this:

                                                            How the code interacts with humans is a separate consideration from its implementation. Beautiful code is more important than beautiful UI.

                                                            I think I am personally a bit of all three, naturally trending more towards 1 but trying to be more like 3.

                                                            1. 3

                                                              Yeah, I had a similar feeling when reading this. I have mostly worked on HPC systems which are used for scientific simulations, and my primary concern is helping scientists do that. I write software in the course of building and maintaining those systems, but if I could make “simulate scientific problems” work without writing any software, I’d do that instead.

                                                              Or, as a friend of mine put it: computers are just a means, not an end.

                                                              1. 1

                                                                 Reading this made me realize I don’t see myself in any of the 3 groups.

                                                                I’m glad I came to the comments and saw this as the first line of the first comment. I absolutely feel the same way.

                                                              1. 3

                                                                 A thoughtful attempt to get to grips with what I think should be regarded as a bad principle. I say bad principle because I am unable to find a single example or source that can clearly explain how and when to use it. It seems that for many new programmers it has come to mean “a class should only have one method”, which I find… ludicrous.

                                                                1. 5

                                                                  My theory, completely made up and untested, is that these powerful Lisp/Smalltalk systems weave code and brains together in a way that amplifies both.

                                                                   I think that is a universal property of programming. Cf. code as design, programming as theory building, and others arguing that programming and design are so interwoven that the code presupposes a context and background knowledge in order to make sense and tell a story the developers can understand. New developers need to become familiar with the context, background and story to understand the code and make changes consistent with them.

                                                                  1. 4

                                                                     Yes. Back when Rails was making a splash, the amount of noise generated over the fact that it uses convention over configuration was astounding: the time spent onboarding a new hire is not bound by whether your web app handlers follow a strict file-name convention, but rather by the domain itself, the history of how it has evolved, and the specific trade-offs made.

                                                                    As usual, these matters are less fun to talk about, so we fixate on the tech instead.

                                                                    1. 3

                                                                      In light of this, I wonder if it’s a mistake to work on a project for a significant length of time as a solo developer, unless one intends to be solely responsible for it forever. Maybe a team project has to start out as a team project, so the domain-specific knowledge isn’t all in one person’s head. What do you think?

                                                                      1. 3

                                                                         Modeling the domain as explicitly as possible (e.g. DDD) and keeping it separate from gunk like frameworks and the web goes a long way towards making a system that is easy to pass on.

                                                                        1. 2

                                                                           It appears to me that DDD has morphed into being all about the “gunk like frameworks” rather than the problem domain. To so many today it requires Event Sourcing, perhaps with CQRS, etc. The idea of using language constructs to model domain concepts, their interactions and the constraints between them, as set out in Eric Evans’ book (i.e. an object model), is lost in the rush to model plumbing like Aggregates and Bounded Contexts.

                                                                           My belief is that it has become this way because of the huge number of developers from a Microsoft background, who have grown up with Microsoft’s data-modeling approach: there is no rich object domain model, but instead a transactional service layer that interacts with a domain layer that merely abstracts a database. The concepts in Eric Evans’ book don’t really fit that anemic domain model approach.
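
                                                                           To make the contrast concrete, here is a toy sketch (in Python for brevity; the names are invented, not an example from the book):

                                                                               # Anemic style: a bag of fields; the rules live elsewhere,
                                                                               # in a transactional service layer.
                                                                               class PolicyRecord:
                                                                                   def __init__(self):
                                                                                       self.status = "draft"
                                                                                       self.insured_value = 0

                                                                               # Domain-model style: the concept, its behaviour and its
                                                                               # constraints expressed directly in language constructs.
                                                                               class Policy:
                                                                                   MAX_INSURED_VALUE = 1_000_000

                                                                                   def __init__(self, insured_value: int):
                                                                                       if not 0 < insured_value <= self.MAX_INSURED_VALUE:
                                                                                           raise ValueError("insured value out of underwriting limits")
                                                                                       self._insured_value = insured_value
                                                                                       self._status = "draft"

                                                                                   def bind(self):
                                                                                       # A domain rule stated where the concept lives, not in
                                                                                       # a transaction script: only a draft policy can be bound.
                                                                                       if self._status != "draft":
                                                                                           raise ValueError("only a draft policy can be bound")
                                                                                       self._status = "bound"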

                                                                          1. 2

                                                                            Fully agree. The ideal of Evans hasn’t really been realized in mainstream understanding. Which is kind of funny, because it is conceptually much simpler than stacking myriad layers of gunk atop one another.

                                                                        2. 1

                                                                          I think the knowledge can be transferred, but it takes time. If you start working together with the one holding the knowledge and communicate about ongoing development, discussing requirements, designs, code, …, taking care to go into the what/how/why/when/where/who/whence/… and the history of each of those, then eventually all knowledge will be transferred.

                                                                          To do that you need to have someone willing to think of and ask a lot of questions and someone willing to answer them and expound freely (as well as an environment that allows that). There are all kinds of possible obstacles that prevent this from working. If asking questions makes someone feel dumb, being asked questions makes someone feel second-guessed, history doesn’t seem important, management doesn’t allow prolonged chats, etc.

                                                                           I think the knowledge transfer can be sped up significantly if you manage to first convey the high-level structure of that knowledge. That should follow from answers to high-level questions. Why does this company/project/product/service exist? How do the people involved view its future (and how has that changed over time)? Who are the (intended) customers/users? What development philosophy, traditions, convictions, … have guided development? What have been critical decisions?

                                                                           An advantage of starting in a team, even if only of two, is that you are already forced to make a lot of things explicit to explain them to the other, with the opportunity to write them down somewhere. You can choose to document every hack, every design decision, every shelved idea, every known shortcoming, etc. Also the answers to the higher-level questions mentioned in the previous paragraph. I think a couple of documents can go a long way towards providing the framework for all the knowledge.

                                                                           Anyway, this is just a scattering of thoughts on the subject and quite non-exhaustive; I wonder if there’s a good source that addresses these issues…

                                                                    1. 5

                                                                       One reason is that they provide no benefit for a large proportion of applications, particularly those that model large, complex domain problems where domain knowledge lies outside the development team: problems outside the domains of computing and its infrastructure, or the data sciences. In these, the problem is not the correctness of the software.

                                                                      1. 14

                                                                         I would disagree. For example, business process modelling is a huge market, and how to implement such models securely is a big research problem (SAP has whole offices of researchers for that). Large companies regularly go bankrupt by essentially finding modelling errors too late. A lot of the techniques they research are very similar to things like TLA+. When I worked there, I worked on proving properties about supplier graphs.

                                                                         I’ve had clients with something as seemingly simple as a messenger app crashing and burning when they hit all the edge cases they hadn’t foreseen. Sure, there’s more at play here (other bad development practices, etc.), but I don’t agree that hiring an expert to essentially model and bullet-proof their messenger system would have cost more than a team of 5 programmers figuring out everything on foot.

                                                                         The current problem I see is that we’re still lacking easily usable tools. I’ve seen clients use BPMN, instead of a bunch of stories, to inform developers of product processes as a shared artifact between management and devs. Encouraging that, and putting some form of additional checking by experts in the middle, sounds like a workable sketch.
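
                                                                         As a sketch of what an easily usable checking tool can feel like, here is an invented messenger invariant checked with Python’s hypothesis library (the invariant and all names are mine, not from any client system): delivery must be idempotent, so replaying a duplicate message never changes the conversation state.

                                                                             from hypothesis import given, strategies as st

                                                                             def deliver(state: tuple, msg_id: int) -> tuple:
                                                                                 # Toy model: conversation state is the tuple of seen ids.
                                                                                 return state if msg_id in state else state + (msg_id,)

                                                                             @given(st.lists(st.integers(), max_size=20), st.integers())
                                                                             def test_duplicate_delivery_is_a_noop(history, msg_id):
                                                                                 state = ()
                                                                                 for m in history:
                                                                                     state = deliver(state, m)
                                                                                 once = deliver(state, msg_id)
                                                                                 twice = deliver(once, msg_id)
                                                                                 assert once == twice   # the edge case hand-written tests miss

                                                                             test_duplicate_delivery_is_a_noop()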

                                                                        1. 3

                                                                          The building blocks can still be proven correct.