A while back I got it in my head to write “a people’s history of UML” and interviewed several people about it, including Grady Booch, Bertrand Meyer, and one of the current architects of SysML. The story is a lot more complex than just “Agilists were meany poo-poo heads”. Among other things:
UML tried to integrate three separate method systems while remaining backwards compatible with all their legacy tooling. That made it very messy and inconsistent from the start.
There was no text format, except XMI, which barely counts. Between that and its sheer complexity, the only people who were able to make UML tooling were large software teams, which severely hampered the development of any open-source ecosystem.
IBM bought Rational in 2003 and did their IBM magic. I had trouble getting Rational Rose working on Windows 10; I don’t think it’s been updated since the mid-00’s.
Rational and OMG were never that clear on whether UML was supposed to be a modeling tool or a Model-Driven Engineering tool, i.e., two-way sync between code and diagrams. Lots of software developers expected the latter and were burned by the lack of any refinement tooling or round-trip properties.
Software methodology was never really a big thing, and even UML never went fully mainstream. Even in the Bad Old Days of Waterfall, the majority of software was developed ad hoc and incrementally.
Unrelatedly, I’m always a little unbalanced by people calling UML a “formal specification”. I guess it is? You can draw an invalid UML diagram. But most of the diagram types are this weird combination of complicated and inflexible that make them poor for actually specifying things. Like how does having a class diagram actually help you? I guess you can trace it to a use-case diagram, but do you really need formal semantics for that?
I think many of the useful diagrams that you could make with UML (e.g. state machines) work better in graphviz. graphviz is freely available, flexible, fairly easy to use and the dot format is not difficult to programmatically generate.
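As a small illustration of that last point, here’s a sketch (with invented states and events) of programmatically generating a dot state machine from a plain transition table in Python:

```python
# Hypothetical states and transitions for a simple job lifecycle.
transitions = [
    ("Idle", "Running", "start"),
    ("Running", "Paused", "pause"),
    ("Paused", "Running", "resume"),
    ("Running", "Idle", "stop"),
]

def to_dot(transitions):
    """Render a transition table as a Graphviz dot digraph."""
    lines = ["digraph state_machine {", "  rankdir=LR;"]
    for src, dst, event in transitions:
        lines.append(f'  "{src}" -> "{dst}" [label="{event}"];')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(transitions))
```

Pipe the output into `dot -Tsvg` and you have a state diagram that regenerates itself whenever the table changes, which is exactly the kind of workflow that heavyweight UML tooling made difficult.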
UML + OCL (a weird sort-of subset of a union of object-z and Java) has some merit as a formal specification tool, and even less likelihood of round trip tooling support.
Unrelatedly, I’m always a little unbalanced by people calling UML a “formal specification”.
Were there not people who wanted UML to be used to generate code, or at least the code structure? That would explain to me why UML is so needlessly pedantic about minor details such as arrow types. Of course that breaks down once you stop using a Java-kind of language.
UML works for a very narrow view of software development. I found that out when I wrote my thesis in OCaml and had to document it in UML. How do you express a curried function in UML? How do you express a simple function in UML that happens to not be attached to an object?
And that is only the tip of the iceberg.
How do you express a complex SQL query in UML?
Entity-relation diagrams help modelling the schema, but what about a query with joins and window functions?
How do you express a map-reduce operation in UML?
What’s the diagram for filtering a list, then aggregating the results?
How do you express a minimally-complex algorithm in UML (think Sieve of Eratosthenes)?
Why can’t I use pseudo-code, which beats diagrams every day of the week?
How do you express a reactive user interface in UML?
Why do I have to use something like a collaboration diagram, when the representation of the interface derives directly from its state? It’s not as if the input field enables the submit button when it notices it is no longer empty.
How do you express a Prolog program in UML?
Do you think the UML committee has ever known about the existence of Prolog?
How do I represent a multi-param generic data structure? For example, a mapping/hash table from A to B, where A and B are arbitrary types.
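To make a couple of those mismatches concrete, here is a short Python sketch (names invented for illustration) of a curried function and a multi-parameter generic mapping, neither of which has a natural home in a class diagram:

```python
from typing import Callable, Dict, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# A curried function: it is not a method on any object, and its
# intermediate partial applications have no obvious box to live in.
def add(x: int) -> Callable[[int], int]:
    def add_x(y: int) -> int:
        return x + y
    return add_x

increment = add(1)  # a partially applied function, a value in its own right

# A generic mapping from A to B: the interesting part (the type
# parameters) is exactly what most UML tooling renders poorly.
def invert(mapping: Dict[A, B]) -> Dict[B, A]:
    return {v: k for k, v in mapping.items()}
```

None of this is exotic in a functional or even mainstream codebase, yet each line raises the question of which UML diagram it would go in.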
And then we ignore the old problem of documentation and software diverging over time, and having to update the documentation when the software changes.
UML comes from an era where Object Oriented (Obsessed?) Design was the whole world, and everything that wasn’t OOD was treated as if it shouldn’t exist. In this era, Architects would draw UML diagrams and cheap offshore code monkeys would do the coding, because we thought that code monkeys would be cheap and replaceable. This was perfect for the MBAs, because programmers had started to be annoying by not delivering on the expected deadlines, and complaining about the algorithms, the damned algorithms. We wanted to replace them with cheaper and less-whiny ones.
Turns out that the details of the system are in the code, not in the diagrams; only the trivial parts of the system get expressed in diagrams, and the devil is in the details. But this makes programmers neither replaceable nor fungible, because the better ones can express algorithms that the worse programmers will never understand. And many times, you need those better algorithms.
All this means UML is no longer the silver bullet for bodyshops. The broken idea of hiring 1 architect and 100 code monkeys from the consultancy just doesn’t work, because the architect is not expected to dig into the details, and the 100 code monkeys will just make a mess of the details.
UML was dead on arrival when it ignored most of the details of Software Development. That doesn’t mean some parts of UML can’t be salvaged, such as sequence diagrams, state diagrams, or entity-relation diagrams. But trying to model an arbitrary problem and solution in UML is likely to become wrestling with the language to express anything minimally complex or different.
How do you express a …
UML was dead on arrival when it ignored most of the details of Software Development.
The UML isn’t supposed to model software down to the query and the algorithm. It’s supposed to model a system and its processes, to assist with the design of a software implementation of that system.
If the way you think about modelling a system is “what complex SQL queries will I need, and how can I add a functional reactive UI” then indeed the UML will not help as the paradigm of its creators and users is the object oriented paradigm. You are thinking about the problem in a way the UML will not help you to express, so we do not expect it to help with the expression of those thoughts.
The OO paradigm of the UML isn’t the “objects are inheritance and encapsulation” straw man of blog posts about switching to functional, but the “objects are things in the problem domain reflected in software” of object oriented analysis and design.
To the extent that the UML and other tools from OOSE were “dead in the water” (an odd claim given their former prevalence), a more convincing reason is that they were sold as efficiency and productivity bolt-ons to programmers without also explaining the need for a paradigm shift. A large number of companies adopted Java, carried on writing structured software, and noticed that they were also having to do this O-O ceremony that doesn’t help them.
In a few years’ time they’ll notice that switching to Scala, carrying on writing structured software, and also doing this functional ceremony isn’t helping them, and we’ll all get to have this thread again on the “has currying died and nobody noticed?” posts.
The UML isn’t supposed to model software down to the query and the algorithm. It’s supposed to model a system and its processes, to assist with the design of a software implementation of that system.
Then I would like to have the class diagram removed from UML, please, because class diagrams define a lot of details of data structures, and that restricts a lot on the algorithms I’m allowed to use.
You are thinking about the problem in a way the UML will not help you to express, so we do not expect it to help with the expression of those thoughts.
And now you are agreeing with me that many problems and solutions cannot be expressed in UML unless I twist the problem/solution to fit UML.
This is not about Object Oriented Programming vs Functional Programming. This is about the fact that I can’t express many things in UML, starting with the whole universe of functional programming, and continuing with database interactions, declarative programming, advanced data structures, compositional semantics, and many others that I haven’t got time yet to study. Each of those alternative ways to do computing beat the others in specific situations, and having to use UML just forces me to not be able to use the right tool for the job because the UML committee decided that it is hammers for everyone. And now I have to make this screwdriver look like a hammer to be able to draw it in UML.
Then I would like to have the class diagram removed from UML, please, because class diagrams define a lot of details of data structures, and that restricts a lot on the algorithms I’m allowed to use.
Firstly, you’re welcome to not use class diagrams. Secondly, you’re welcome to only put the details you need into a diagram, and avoid constraining your implementation: the map is not the territory.
And now you are agreeing with me that many problems and solutions cannot be expressed in UML unless I twist the problem/solution to fit UML.
I don’t think so. It sounds like you’re saying the UML is bad because you can’t do these things, whereas I’m saying the UML is good when I don’t do these things. “Doctor, it hurts when I lift my arm like this!”
I hate when people use this analogy. If it hurts when I lift my arm like this, that’s probably a sign of some deeper underlying problem! Don’t tell me to not lift my arm.
The OO paradigm of the UML isn’t the “objects are inheritance and encapsulation” straw man of blog posts about switching to functional
That straw man is exactly what I was taught at school. Shapes and animals and all that. Sure, it’s just for learning, but the next step is invariably “OO is hard”, “OO is complicated”, “OO takes time to master”… the usual conversation stoppers.
[UML is] “objects are things in the problem domain reflected in software”
That also is likely a mistake. As Mike Acton so eloquently put it, we should not code around a model of the world (in this case, the things in the problem domain). We should code around a model of the data. The things in the problem domain are what you speak of in front of managers or domain specialists. When you start actually programming, however, it quickly becomes about the shape and size and flow of the data, and UML doesn’t help much there.
“objects are things in the problem domain reflected in software” of object oriented analysis and design.
What does that even mean? I have never seen this aspect explained except in the most superficial terms: creating a class which has a method name that reflects something from the domain, usually resulting in bikeshed arguments.
I am sorry, but that is not a model of anything. It is just naming. We can’t call naming modelling. When I am making a model of a circuit in software, I can interact with the model and test all the assumptions. When an actual architect uses AutoCAD, they are modelling things and can interact with them. BDD does this, so it can absolutely be called modelling in some sense. I don’t know if it is the best modelling technique we can come up with, but it works.
I’d recommend Eric Evans’s book on domain-driven design. “Just naming” implies that you’ve already decided what the method does and that you want to find something in the problem domain to map it onto. OOA/D says that you have found something important in the problem domain and that you want your software to simulate it.
I felt this one. I haven’t been doing this that long, relatively, but it feels like we are eschewing formal documentation and planning for a more fly-by-the-seat-of-our-pants, get-it-right-with-enough-iteration approach. I’ve talked to a few “technology savvy investors” in the past few years who have told me that software architects are an antipattern.
On the one hand, the people against a more formal architecture approach might be right. If you break your features down into such manageable chunks that they can go from zero to hero in production in a sprint, do we need a full design process and documentation that will end up wrong or out of date before anyone has a chance to fix a bug?
I think the problem ends up being the struggle of maintaining a cohesive architecture across all of these manageable, hopefully vertical, slices of functionality. I’ve come into a few shops who have followed a low/no documentation route (“The user story is the documentation!” said the project manager), and indeed they ship code fast. But by the time I get there, we get tasked with trying to build a unifying style or standard to move everything to, because the original developers are gone/dead/committed to an asylum for their health, no one is willing to make changes to the legacy codebase with any confidence, and ticket times are climbing.
So where is the middle ground? I would be happy to never have to create another 10+ page proposal, print it out, and defend it in front of people who are smarter than me (and know it), but there is something missing in modern software development, and I think it’s some sort of process for getting our bad ideas out in the planning phase before writing code. Formal methods certainly fill this role, but no one has the time outside of the “if my code fails someone dies” crowd.
I guess this is a long-winded way of saying “I use formal planning to get my bad ideas out of my system before I create software that works but breaks subtly, or is just flat out wrong.”
I think it’s a market thing. Subtly broken software that’s out now is better than well-designed software a year from now, from a customer adoption perspective….sadly.
I agree. Usually (hopefully) the lag isn’t that pronounced, but it’s all about how fast we can get a feature in front of users to drive adoption or validate our “product market fit” to investors. I’m glad that where I’m at we have a formal architecture review, albeit abbreviated, for new features, so we are trying to straddle that fine line between velocity and, hopefully, well-designed code. Sadly this just means the architect (me) gets a lot of extra work.
Have you heard the good word of lightweight formal methods?
(Less facetiously: the key thing that makes the middle ground possible is finding notations that are simple enough to be palatable while also being machine checkable. I volunteer a lot with the Alloy team because I think it has the potential to mainstream formal methods, even more so than TLA+.)
A friend has gotten me on an mCRL2 kick for modeling one of our new systems. I’ll check out Alloy as well. I love playing with PlusCal/TLA+ when I have the time for it.
The thing I really like about Alloy is that the whole BNF is like a page. So it’s not as big a time investment to learn it over something like mCRL2 or TLA+.
Also, I think you’re the second person to mention mCRL2 in like a week, which is blowing my mind. Before this I’d only ever heard about it from @pmonson711.
I think I’ve finally warmed up to the idea of writing more about how and why I lean on mCRL2 as my main modelling tool.
Regarding Alloy and the time investment, I agree the syntax is much simpler. It did take me a huge mental investment to get comfortable with modelling anything that changes over time. It’s a great tool for data structure/relations modelling, but if you need to understand how things change with time, then TLA+’s UNCHANGED handling feels like a necessity.
I think that’s the most valuable part of the whole exercise for me. I usually model everything I can as some sort of state machine with well-defined entrance and exit behaviors, so seeing the effect of changes to that system over time is really useful to me.
I’m in the process of working out how to change Alloy’s pitch away from software specifications and more towards domain modeling, precisely because “how things change” is less omnipresent in requirements modeling than in software specs.
I think I gave a less positive view of Alloy than I wanted in this comment. Alloy can certainly model time, but I always seem to end up ad-hoc defining the UNCHANGED, much like in the talk Alloy For TLA+ Users around the 30min mark.
The thing I really like about Alloy is that the whole BNF is like a page. So it’s not as big a time investment to learn it over something like mCRL2 or TLA+.
I think that is a main selling point for me especially. I have a lot of interest in formal methods and modeling because I can see how damn useful it can/will be. Just carving out the time and the mental bandwidth to do it is a struggle. I am by no means good at any of the tools, but I am putting a lot of effort into making the time investment worth it, not just to me but to the product as a whole. Currently I am trying to unravel my thoughts on how we apply/enforce a fairly granular security model in our system, and it’s been a great thought exercise. I’m fairly confident it will prove useful, or drive me insane.
Why masala? Because they are informal; they cover multiple dimensions at once, they may be both structural and behavioural, logical and physical.
A bit tangential but I don’t really get why you’d call these diagrams “masala”. In the usage I’m familiar with, masala is key to good flavor and spice, which, metaphorically I think you’d want in your diagram.
Yeah, the college I went to tried to drill capital-A Agile into me, but that included UML too. Many of the original Agile evangelists also promoted things like UML as part of Agile.
I’ve never been a fan of UML just because I’m not a visual thinker, but I definitely agree that formal specifications have fallen out of favor.
My charitable take is that a big part of it is because our tools are getting better, such that the cost of being somewhat wrong is often considerably lower than the cost of trying to be precisely correct. When iterating on a piece of software meant a new months- or years-long development cycle, it really really paid to make sure you got it right the first time. But now? “Sure, we should be able to change how that works in time for the weekly production release.”
My less-charitable take is that it’s because in enterprise and consumer software, we’ve sort of decided that specs should be written by product managers who often have little or no training or background in rigorously specifying a system and are instead focused almost exclusively on high-level “user stories” that don’t require thinking in fine detail, or on UI mockups that don’t require writing down precise business rules.
That said, though, there are still contexts where you see people take up-front design discussion pretty seriously. Of course there are obvious candidates like aerospace, but also, for example, the processes some programming languages have in place to manage their evolution (JEP, PEP, etc.) involve a lot of design discussion before anyone is willing to review any code.
I’ve never really found a good use for component diagrams or activity diagrams. Component diagrams are too fine-grained in UML (what I really care about are whole software components, not classes, which might not even exist in my language), and activity diagrams are pretty much just restating code in diagrammatic form.
But UML sequence diagrams are incredibly useful! For the type of work I do, which is mostly projects with a lot of services communicating over gRPC, I don’t think there’s any better way to understand what’s going on and communicate it clearly.
Entity-relationship diagrams are also still useful, although not technically part of UML.
Take the useful stuff, throw away the useless stuff…
I was trying to be a little bit snarky about how many “thought leaders” aren’t actually thinking that hard, but I looked and “thot” is also slang for prostitutes? Changed to something a little less misogynistic.
I’m not familiar with UML, so this question might be unwelcome considering a bit more research might get me to my answer. But why is the flowchart in the author’s second image (the millionaire question) rigged? It seems clear to me assuming that the step numbering is meaningful. That is, step one followed by either step 3 or step 2 and then 3. How does UML solve the ambiguity?
This is an Agile/management driven approach and it sounds about right, but I wonder if it’s also from the tooling perspective. The average business programmer is less Java object hell, more JavaScript slurry.
The quotes are there to undermine the idea that there is one real reason, as opposed to a myriad of essential, accidental, and contextual problems. Imagine me saying it with air quotes.
In retrospect, given the ambiguities in what people mean by both UML and tech death, I should have called it Why “UML” “Really” “Died”.
I decided to write a more thorough response and it’s now twice as long as the original article.
Why do I do these things to myself
No idea why, but we appreciate it :D
Spot on. Frankly speaking, this was already crystal clear to many people back then. Many realised immediately it was snake oil.
I felt this one. I haven’t been doing this, relatively, that long but it feels like we are eschewing formal documentation and planning for a more fly by the seat of our pants, get it right with enough iteration approach. I’ve talked to a few “technology savvy investors” in the past few years that have told me that software architects are an antipattern.
On the one hand the people against a more formal architecture approach might be right. If you break your features down to such manageable chunks that they can go from zero to hero in production in a sprint do we need need a full design process and documentation that will end up wrong or out of date before anyone has a chance to fix a bug?
I think the problem ends up being with the struggle of maintaining a cohesive architecture across all of these manageable l, hopefully vertical, slices of functionality. I’ve come into a few shops who have followed a low/no documentation route (“The user story is the documentation!” Said the project manager) and indeed they ship code fast but by the time I get there we get tasked with trying to build a unifying style or standard to move everything to because the original developers are gone/dead/committed to an asylum for their health and no one is willing to make changes to the legacy codebase with any confidence and ticket times are climbing.
So where is the middle ground? I would be happy to never have to create another 10+ page proposal, print it out and defend it in front of people who are smarter than me, and know it but there is something missing in modern software development and I think it’s some sort of process of getting our bad ideas out in the planning phase before writing code. Formal methods for sure fill this role but no one has the time outside of the “if my code fails someone dies” crowd.
I guess this is a long-winded way of saying “I use formal planning to get my bad ideas out of my system before I create software that works but breaks subtly, or is just flat-out wrong.”
I think it’s a market thing. Subtly broken software that’s out now is better than well-designed software a year from now, from a customer-adoption perspective… sadly.
I agree. Usually (hopefully) the lag isn’t that pronounced, but it’s all about how fast we can get a feature in front of users to drive adoption or validate our “product-market fit” to investors. I’m glad that where I’m at we have a formal architecture review, albeit abbreviated, for new features, so we are trying to straddle that fine line between velocity and, hopefully, well-designed code. Sadly this just means the architect (me) gets a lot of extra work.
Have you heard the good word of lightweight formal methods?
(Less facetiously: the key thing that makes the middle ground possible is finding notations that are simple enough to be palatable while also being machine-checkable. I volunteer a lot with the Alloy team because I think it has the potential to mainstream formal methods, even more so than TLA+.)
A friend has gotten me on an mCRL2 kick for modeling one of our new systems. I’ll check out Alloy as well. I love playing with PlusCal/TLA+ when I have the time for it.
The thing I really like about Alloy is that the whole BNF is like a page, so it’s not as big a time investment to learn as something like mCRL2 or TLA+.
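To give a feel for how small that surface is, here’s a complete, minimal Alloy model (the sig and fact names are my own invention, not from the thread): one signature, one constraint, and a command to ask the Analyzer for instances.

```alloy
-- Every Node has at most one parent, and no parent chain loops back on itself
sig Node { parent: lone Node }

-- ^parent is transitive closure: a node may not be its own ancestor
fact Acyclic { no n: Node | n in n.^parent }

-- Ask the Analyzer to search for satisfying instances with up to 5 Nodes
run {} for 5
```

That’s essentially the whole language in miniature: signatures, relations, quantified constraints, and bounded search.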
Also, I think you’re the second person to mention mCRL2 in like a week, which is blowing my mind. Before this I’d only ever heard about it from @pmonson711.
I think I’ve finally warmed up to the idea of writing more about how and why I lean on mCRL2 as my main modelling tool.
Regarding Alloy and the time investment: I agree the syntax is much simpler. It did take me a huge mental investment, though, to get comfortable with modelling anything that changes over time. Alloy is a great tool for data-structure/relations modelling, but if you need to understand how things change with time, then TLA+’s UNCHANGED handling feels like a necessity. I think that’s the most valuable part of the whole exercise for me. I usually model everything I can as some sort of state machine with well-defined entrance and exit behaviors, so seeing the effect of changes to that system over time is really useful to me.
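To make the UNCHANGED point concrete, here’s a minimal TLA+ sketch (module and variable names are my own invention): every action must explicitly pin the variables it does not touch, which is exactly the frame-condition bookkeeping being described.

```tla
---- MODULE Switch ----
EXTENDS Naturals
VARIABLES on, clock

Init == on = FALSE /\ clock = 0

\* Toggling touches only `on`; UNCHANGED says the clock is frozen
Toggle == on' = ~on /\ UNCHANGED clock

\* Time passing touches only `clock`; `on` must be pinned explicitly
Tick == clock' = clock + 1 /\ UNCHANGED on

Next == Toggle \/ Tick
====
```

Leave out an UNCHANGED and the primed variable is unconstrained in that action, which the TLC model checker will complain about; in Alloy you end up writing the equivalent frame conditions by hand, which I believe is the ad-hoc bookkeeping mentioned below.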
I’m in the process of working out how to change Alloy’s pitch away from software specifications and more towards domain modeling, precisely because “how things change” is less omnipresent in requirements modeling than in software specs.
I think I gave a less positive view of Alloy than I wanted in this comment. Alloy can certainly model time, but I always seem to end up ad-hoc defining the UNCHANGED, much like in the talk Alloy For TLA+ Users, around the 30-minute mark.
Haha! Oddly he is the friend who put me on that path
Thanks, you two, for bringing up mCRL2; I wouldn’t have discovered it if y’all hadn’t talked about it!
I think that is a main selling point for me especially. I have a lot of interest in formal methods and modeling because I can see how damn useful it can/will be. Just carving out the time and the mental bandwidth to do it is a struggle. I am by no means good at any of the tools, but I am putting a lot of effort into making the time investment worth it, not just to me but to the product as a whole. Currently I am trying to unravel my thoughts on how we apply/enforce a fairly granular security model in our system, and it’s been a great thought exercise; I’m fairly confident it will prove useful, or drive me insane.
A bit tangential but I don’t really get why you’d call these diagrams “masala”. In the usage I’m familiar with, masala is key to good flavor and spice, which, metaphorically I think you’d want in your diagram.
[Comment removed by author]
For some time now I have thought the prevailing development methodology is “hack it till it works”, labeled under the more acceptable term “agile”.
An alternative name in terms of requirements analysis is “Marco Polo” development, named after the popular swimming pool game.
UML, or any formal equivalent, simply isn’t required for such a methodology.
Yeah, the college I went to tried to drill capital-A Agile into me, but that included UML too. Many of the original Agile evangelists also promoted things like UML as part of Agile.
I’ve never been a fan of UML just because I’m not a visual thinker, but I definitely agree that formal specifications have fallen out of favor.
My charitable take is that a big part of it is because our tools are getting better, such that the cost of being somewhat wrong is often considerably lower than the cost of trying to be precisely correct. When iterating on a piece of software meant a new months- or years-long development cycle, it really really paid to make sure you got it right the first time. But now? “Sure, we should be able to change how that works in time for the weekly production release.”
My less-charitable take is that it’s because in enterprise and consumer software, we’ve sort of decided that specs should be written by product managers who often have little or no training or background in rigorously specifying a system and are instead focused almost exclusively on high-level “user stories” that don’t require thinking in fine detail, or on UI mockups that don’t require writing down precise business rules.
That said, though, there are still contexts where you see people take up-front design discussion pretty seriously. Of course there are obvious candidates like aerospace, but also, for example, the processes some programming languages have in place to manage their evolution (JEP, PEP, etc.) involve a lot of design discussion before anyone is willing to review any code.
When you say “project manager”, I expect someone whose job duties and training are in:
None of those are requirements gathering?
To be fair, you need PMs in the requirements gathering anyway because reqs impact schedule and cost, so leaving them out is counterproductive.
Mismatch. They said “product manager”, not “project manager”. A product manager should be the major source of requirements.
Ah! Thanks for pointing this out. I did indeed misread.
I’ve never really found a good use for component diagrams or activity diagrams. Component diagrams are too fine-grained in UML (what I really care about are whole software components, not classes, which might not even exist in my language), and activity diagrams are pretty much just restating code in diagrammatic form.
But UML sequence diagrams are incredibly useful! For the type of work I do, which is mostly projects with a lot of services communicating over gRPC, I don’t think there’s any better way to understand what’s going on and communicate it clearly.
Entity-relationship diagrams are also still useful, although not technically part of UML.
Take the useful stuff, throw away the useless stuff…
UML was a flimsy excuse to sell obscenely-priced training, documentation, tooling, and consulting. We caught on. That’s it.
Typo: “Lots of thot leaders…”
Or, uh… maybe not?
I was trying to be a little bit snarky about how many “thought leaders” aren’t actually thinking that hard, but I looked and “thot” is also slang for prostitutes? Changed to something a little less misogynistic.
I’m not familiar with UML, so this question might be unwelcome, considering a bit more research might get me to my answer. But why is the flowchart in the author’s second image (the millionaire question) rigged? It seems clear to me, assuming that the step numbering is meaningful: step 1, followed by either step 3, or step 2 and then step 3. How does UML solve the ambiguity?
This is an Agile/management driven approach and it sounds about right, but I wonder if it’s also from the tooling perspective. The average business programmer is less Java object hell, more JavaScript slurry.
The quotes in the title are there for what? Emphasis? That’s not what quotes do: https://www.dailywritingtips.com/punctuation-errors-quotation-marks-for-emphasis/
The quotes are there to undermine the idea that there is one real reason, as opposed to a myriad of essential, accidental, and contextual problems. Imagine me saying it with air quotes.
In retrospect, given the ambiguities in what people mean by UML and by tech death, I should have called it Why “UML” “Really” “Died”.