Then I started thinking about it. This approach turns your program into a distributed system. It tries to model something akin to a microservice architecture, where all communication passes through direct message queues.
There are two sides to this coin; the fact that most people are forced to model things in a distsys way when they might not actually need the complexity sucks. On the other hand, I find that Erlang exemplifies good OOP practice (process as state, messages, encapsulation, etc.); it shows it’s not some procedural-object-functional dichotomy, but basically a bunch of traits a language can pick from.
I would love to take a look at the code to get answers to four specific questions I have:
How does normal shutdown work? If the user clicks X or a test ends, how do we make sure that the system is properly shut down?
How does abnormal shutdown work? What happens if one of the actors dies with a panic?
How are back-pressure and deadlocks handled? I feel like I might discover something new: connecting actors to a common event stream feels like a different approach than connecting them to each other.
How is prioritization handled? In some systems, some events need to be handled before others (e.g. prioritizing writes over reads).
Every top-level actor/thread is basically a loop with some termination condition. It might be an Arc&lt;AtomicBool&gt; flag, a channel returning None, or a combination of multiple things. The main thread is just a loop over join on JoinHandles; a Unix signal handler can simply set that termination flag.
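A minimal sketch of that shape (all names here are mine, for illustration only):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Hypothetical worker: loops until the shared termination flag is set,
// then returns how many units of work it got through.
fn spawn_worker(stop: Arc<AtomicBool>) -> thread::JoinHandle<u64> {
    thread::spawn(move || {
        let mut iterations = 0u64;
        while !stop.load(Ordering::Relaxed) {
            // ... do one unit of work ...
            iterations += 1;
            thread::yield_now();
        }
        iterations
    })
}

fn main() {
    let stop = Arc::new(AtomicBool::new(false));
    let handles: Vec<_> = (0..4).map(|_| spawn_worker(stop.clone())).collect();

    // A signal handler (or the GUI "X" button) would do exactly this:
    stop.store(true, Ordering::Relaxed);

    // The main thread is just a loop over join on the JoinHandles.
    for h in handles {
        let done = h.join().expect("worker panicked");
        println!("worker finished after {} iterations", done);
    }
}
```

A real worker would block or park instead of spinning, but the shutdown structure is the same.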
In certain cases, where all communication uses Rust channels, the fact that one thread exits triggers the destructors of all its channel senders, which leads to a neat cascading shutdown of all downstream and upstream threads. That’s what I do in rdedup (https://github.com/dpc/rdedup), since the whole program is basically a long multi-pipeline downstream from the “stdio reader” actor.
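A toy two-stage version of that cascade (illustrative only; rdedup’s actual pipeline is more involved):

```rust
use std::sync::mpsc;
use std::thread;

// Each stage loops over its input channel; when every sender for that channel
// has been dropped, the loop ends and the stage's own sender is dropped in
// turn, shutting down the next stage: a cascading shutdown.
fn run_pipeline(input: Vec<u32>) -> u32 {
    let (tx1, rx1) = mpsc::channel::<u32>();
    let (tx2, rx2) = mpsc::channel::<u32>();

    let stage1 = thread::spawn(move || {
        for v in rx1 {
            tx2.send(v * 2).unwrap();
        }
        // rx1 disconnected; tx2 is dropped here, which wakes up stage 2.
    });
    let stage2 = thread::spawn(move || rx2.iter().sum::<u32>());

    for v in input {
        tx1.send(v).unwrap();
    }
    drop(tx1); // the "source" actor exits; everything downstream winds down

    stage1.join().unwrap();
    stage2.join().unwrap()
}

fn main() {
    assert_eq!(run_pipeline(vec![1, 2, 3]), 12); // (1 + 2 + 3) * 2
}
```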
However, I admit that not every problem is as easy to handle this way; sometimes it gets a little tricky, especially in the presence of blocking operations. Blocking on input and output (recv and send on the channel) is often OK, as long as the operation is unblocked when the other side disconnects and progress is guaranteed in the normal state (e.g. no cycles in the dependency graph). In Java, InterruptedException is very useful for this and that’s what I use, but in Rust there’s no such thing. Possibly async makes cancellation handling easier in Rust, but I haven’t written any bigger Rust program using async like this yet.
How does abnormal shutdown work? What happens if one of the actors dies with a panic?
Panics are to be avoided and would crash the whole application, which is not great but acceptable, since the data model is designed to stay consistent in the presence of crashes.
Otherwise, every top-level actor/thread would bubble up all unrecoverable errors, and the top-level error handler can record somewhere that it crashed and/or set a flag telling the other actors to stop as well, and exit.
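Note that the main thread’s join loop from above is also where a panic surfaces: join returns an Err carrying the panic payload, which is a natural place for a top-level handler to record the crash and flag the other actors to stop. A sketch:

```rust
use std::thread;

// If an actor panics, the parent sees it as an Err from join() and can
// initiate shutdown instead of silently carrying on.
fn run_actor_that_panics() -> thread::Result<()> {
    let handle = thread::spawn(|| {
        panic!("actor died"); // unrecoverable failure inside the actor
    });
    handle.join() // Err(...) carries the panic payload
}

fn main() {
    let result = run_actor_that_panics();
    // Top-level handler: record the crash, tell everyone else to stop.
    assert!(result.is_err());
    println!("actor crashed; initiating shutdown of remaining actors");
}
```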
How are back-pressure and deadlocks handled?
Oftentimes I rely on synchronous, bounded-size channels for backpressure. Here, since the communication would happen over an event log, things would be a little different. The beauty of following an event log is that each follower of a stream of events is free to proceed at its own pace; not being able to keep up results “only” in a perceived processing delay for that one actor, without affecting other actors. That is usually OK, but might not be suitable for certain applications.
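The bounded-channel flavor of backpressure looks like this in std Rust: sync_channel’s send blocks once the buffer is full, so a fast producer is throttled to the consumer’s pace.

```rust
use std::sync::mpsc;
use std::thread;

// A bounded ("synchronous") channel: the producer can run at most `bound`
// items ahead of the consumer; the next send blocks until the consumer
// catches up. That blocking is the back-pressure.
fn bounded_sum(bound: usize, n: u32) -> u32 {
    let (tx, rx) = mpsc::sync_channel::<u32>(bound);
    let producer = thread::spawn(move || {
        for v in 0..n {
            tx.send(v).unwrap(); // blocks while the buffer is full
        }
        // tx dropped here, so rx.iter() below terminates
    });
    let total = rx.iter().sum();
    producer.join().unwrap();
    total
}

fn main() {
    assert_eq!(bounded_sum(2, 100), 4950); // 0 + 1 + ... + 99
}
```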
How is prioritization handled? Some events need to be handled before others (e.g. prioritizing writes over reads).
Well, a shared event log wouldn’t do in the case of different handling priorities between events. Things like favoring writers over readers need to be handled as implementation/design requirements. I have many ideas, but it depends on the exact use case and requirements.
The beauty of an event log is that only writers are synchronized with each other, on write. Readers are independent of each other and of the writers.
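A toy in-memory illustration of that property (a real event log would be persistent and would avoid readers contending on the writers’ lock; in this sketch readers take the lock only briefly to copy):

```rust
use std::sync::{Arc, Mutex};

// Toy event log: writers synchronize only on append; each reader owns its
// cursor and catches up at its own pace, independently of everyone else.
#[derive(Clone, Default)]
struct EventLog {
    events: Arc<Mutex<Vec<String>>>,
}

impl EventLog {
    fn append(&self, event: &str) {
        // This lock is the only point where writers serialize.
        self.events.lock().unwrap().push(event.to_string());
    }

    // Return all events past `cursor` plus the new cursor position.
    fn read_from(&self, cursor: usize) -> (Vec<String>, usize) {
        let events = self.events.lock().unwrap();
        (events[cursor..].to_vec(), events.len())
    }
}

fn main() {
    let log = EventLog::default();
    log.append("user-created");
    log.append("user-renamed");

    // Each reader tracks its own position in the log.
    let (seen, cursor) = log.read_from(0);
    assert_eq!(seen.len(), 2);

    log.append("user-deleted");
    // A slow reader just resumes from where it left off.
    let (rest, _) = log.read_from(cursor);
    assert_eq!(rest, vec!["user-deleted"]);
}
```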
Just to highlight again: the main point of my blog post is not that there’s one universal way to handle things, but that data architecture requirements are the most important thing and need to be addressed as a priority (obviously they might change during implementation as details are better understood, but there has to be some vision and goal upfront), and the code should be written for the data model requirements, not the data model somehow spontaneously emerging from a process of OOP modeling.
Thanks for this! I appreciate emphasizing the overarching data architecture rather than the lower level programming paradigms used to implement the abstractions.
@dpc_pw: Thanks for saving me a purchase on Elegant Objects! I was highly curious about it, but had reservations about the author, based on his online writing.
EDIT:
Also, after reading this post, it definitely strikes me as a more practical way to write software than the OOP books you’ve described.
I think a lot of software dogma doesn’t take into account things like abstraction gradients (how to start with something small and gradually increase its generality as needed), the importance of getting data layout right, or the fact that many things in software development are tradeoffs.
Thanks for this post. Although I consider paradigms to be just tools, for the past three years I have often had to explain to others on the team why I am not “writing the real code” (which for them was always OOP) and choose a functional approach instead. If I’m forced to work with such people in the future again, I will just point them to your excellent blog post.
I think that there are some classes of problems that are extremely hard to model or reason about without OOP. For example, I currently work on ERP systems, namely Odoo, and I don’t see how you could manage to achieve something comparable without OOP. Odoo itself might offer a quite twisted view of OOP: objects stored in the database, multiple inheritance, global inheritance (where an overridden method becomes overridden for all classes inheriting the original class), inheritance by composition, etc. Yet it is the same old OOP, where you mainly operate on objects in your database and only interface with outside systems through pure data. This architecture allows many unrelated teams to work on the same eventual codebase. It is built for business needs, allowing a business to use an already-written piece of code without any need for integration with existing custom code. As you are usually modelling objects in the real world, there aren’t many problems with deciding how to map your concepts to objects.
I know nothing about Odoo, but I generally admit that sometimes OOP style does shine: namely, when your data architecture analysis suggests that the best way to handle the state is a graph of interconnected objects reacting to each other’s “messages”, and when one is willing to accept the complexity and overhead of such a graph and can ignore the downsides. One example of that is UI systems, where there’s no need for persistence of each individual object, so the impedance mismatch with the database doesn’t matter. The initial rise of Java-style OOP coincides with the rise of GUI programs for a reason.
That article reminded me of a quote from Joe Armstrong:

Erlang might be the only object oriented language […] Alan Kay himself wrote this famous thing and said “The notion of object oriented programming is completely misunderstood. It’s not about objects and classes, it’s all about messages”.
And this is a similar feeling to what you proposed there.
In terms of “classical OOP” I really like OOP in ANSI-C, as it uses a language that is generally not considered OOP and implements all of these features on top of it, so you actually see what is going on behind the scenes.
Probably my favourite OOP example (in general I am not really a fan of OOP) is *nix file handling, which is in fact a very OOP-y solution. You have multiple types of files (regular files, hard links, soft links, pipes, sockets, etc.) on multiple filesystems, and all of these share a uniform OOP-y API that is quite easy to reason about from the user’s viewpoint. That is what IMHO makes good OOP: a well-defined interface that can be easily extended without many changes to the API itself.
In terms of “classical OOP” I really like OOP in ANSI-C
Thanks. I will try to take a look.
Probably my favourite OOP example (in general I am not really a fan of OOP) is *nix file handling, which is in fact a very OOP-y solution.
I agree that it’s a good approach/solution/method, but… is it really OOP? There is an interface that supports multiple implementations, yes. I guess we can consider it polymorphism. Maybe also encapsulation. But plain Unix file APIs are synchronous, so no meaningful messaging between objects can be involved: just polymorphic function calls between caller and callee. The encapsulation is of the internal details of handling the data, not of the data itself, which some OOPers would complain is not very OOP. The implementation details are all very leaky here, BTW: file descriptor cloning issues, behavior on fork, and so on. A global mutable seek position.
IMO, the whole thing is an example of how interfaces are very useful in software; considering it OOP is debatable, and IMO a stretch.
If you’re interested I can give you recs for good historical books on OOP: not necessarily best practices, but a good indication of what people were thinking and what they were trying to do.
@singpolyma and I already regret it, because it seems my intuition was right and it’s just over-analyzing a piece of code that is too trivial to matter, or even to show any OOP-ness. Most of the analyzed implementations so far could be considered procedural, FP, and OOP all at the same time.
Oh well… Just going to go through it and see how it goes.
Outstanding article. I went down the OOP road and got good at it. It rests on one claim you must take on faith: that if you insist that “everything is an object”, it’s possible to find a nice set of objects to implement your system, and that’s the best way to do things.
Instead, what it does is tickle a developer’s brain in a super enjoyable way that feels fulfilling… in other words, mental masturbation.
What I found is that you can often find an elegant solution that consists of objects, and it will feel like a profoundly beautiful thing when you do. It will feel so right that it convinces many that “this is the way.”
But in reality, it’s over-complicated. Your brain has a blast, but it was more about pleasuring your brain by making you work under a challenging limitation. Turns out you could drop that single dogmatic principle and wind up with a simpler solution faster, still elegant and maintainable (which is of course more fulfilling).
The one claim you had to take on faith, turns out, is exactly where the whole thing breaks down. And the incorrect illusion of “rightness” it gives to the puzzle solver’s brain is the engine that keeps it rolling.
Instead, what it does is tickle a developer’s brain in a super enjoyable way that feels fulfilling… in other words, mental masturbation.
I very much agree. I do see “every little thing is an encapsulated object that receives messages” as an artificial constraint which acts like an additional challenge akin to those used in many games: complete the game without killing anyone in an RPG and get a “pacifist” achievement, etc.
Instead of actually solving the problem and working on the constraints that actually matter, we tend to get stuck on puzzle solving that makes us feel smart. It’s not limited to OOP. The same kind of motive shows up in, e.g., people agonizing over expressing some universal taxonomy, overusing generics to provide a “super clean universal abstraction”, or (in Rust) solving a borrow checker puzzle involving 3 lifetimes just to shave off one data copy in some cold path.
Very true, and a good pattern to be aware of (in yourself and others).
That said, there is an upside to adding constraints: they can make things easier to reason about. The most obvious, famous example that comes to mind is maintaining referential transparency in functional programming and isolating your mutations. Or, say, in React, the way you have actions that update a data model, and then a unidirectional flow of that updated model down into the re-rendered view. In these cases too (especially at first), it can be a “puzzle” to figure out how to re-structure your problem so that it fits the paradigm. But in both of these cases, at least IMO, there is a huge net benefit.
Which is to say that “artificial constraints” per se aren’t the problem. The problem is artificial constraints that are time-consuming but don’t pay you back.
Agreed!
Which is to say that “artificial constraints” per se aren’t the problem. The problem is artificial constraints that are time-consuming but don’t pay you back.
I agree, except for semantics… if the constraint has an actual payoff (like “use only pure functions” does, for example), then it is no longer “arbitrary” or “artificial”.
“Everything is an object” never had a real payoff… it was just dogmatic koolaid.
I am by no means an OOP-first or OOP-only programmer, and I spent years being anti-OOP (but without knowing what OOP was or what I preferred instead). If I were pressed to “pick a paradigm”, I would say I’m a functional programmer.
However, I have seen (and written myself, and tried to come back to later) enough “big procedures of doom” to know that is not the way. Or at least not a way I want to have to touch, except for a lot of money. Of course “good procedural” exists just like “good OOP” does, but really we should focus on what we (often) have in common, which is that a 50-line procedure with everything inline is (often) an unreadable mess two days later and (more often) gets worse with every edit, so it needs to be modeled somehow (and no, procedures named do_thing and do_second_thing don’t count). You can use polymorphism and call it OOP, or closures and call it functional, or modules with glue and call it procedural, or what have you; the paradigm and the name don’t matter, only the result.
Object orientation can be used in many ways. Perhaps its current failure is not being able to provide users with a prescriptive path to using it well. It’s a powerful toolset without a good manual. Of course kilometers of books describe its parts and mechanics, but not how to use it to model things.
Therein lies the first contention. Is it about modeling things, or organizing your code, or just (mystically) sending messages? Most of us spend years as programmers pursuing only the second while waxing lyrical about the third.
My conclusion is that the author has nowhere to turn to learn about modeling concepts and their interactions, real or imagined, into object-oriented code. Certainly no idea how to model constraints (rules) in object-oriented code without just including them in procedural scripts as managers, controllers, or supervisors.
Can you model a product that can be a member of certain groups, which can be purchased in different locations by a person acting in the role of a customer in pure object oriented code, without regards to persistence or a user interface and with constraints on the collaboration between the objects?
When such a question is posed nearly all programmers will stand in front of a whiteboard mumbling about nouns, verbs and use cases, or what seems to be the current favorite - writing down each of the “solid” principles (I kid you not) as an answer.
Is it the paradigm or practitioner that should be improved?
Where I work, we use objects as a way to express and structure dependencies between parts of a single process. In other words, all the actual data modelling and computational work is done in a functional, plain-old-data style; objects are just used to model the processing and allow mock testing. So, for instance, if we were writing some data to a database, the database backend would be an object implementing an interface (specialized for a set of data types), and then the test could mock out the database and validate save/load calls easily. So approximately all our OOP ends up being of the form “class &lt;- interface”, one level deep. I think this is where OOP shines: introducing an expectation for a subsystem in one part of the code that is fulfilled in another part in a flexible way. Separation of concerns, rather than inheritance of functionality. And there’s still a tendency to go too abstract, so it’s definitely something to be careful about. Plain function calls (or struct method calls, but that’s the same thing) should be the first choice; method calls on a concrete class the second; method calls on an interface the third.
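A Rust rendering of that “class &lt;- interface”, one-level-deep shape (all names here are made up for illustration):

```rust
// The processing code depends only on a trait; tests swap in a recording mock.
trait UserStore {
    fn save(&mut self, name: &str);
    fn count(&self) -> usize;
}

// Mock implementation: records every save call so the test can validate them.
struct MockStore {
    saved: Vec<String>,
}

impl UserStore for MockStore {
    fn save(&mut self, name: &str) {
        self.saved.push(name.to_string());
    }
    fn count(&self) -> usize {
        self.saved.len()
    }
}

// The code under test only sees the interface, never a concrete backend.
fn import_users(store: &mut dyn UserStore, names: &[&str]) {
    for name in names {
        if !name.is_empty() {
            store.save(name);
        }
    }
}

fn main() {
    let mut mock = MockStore { saved: vec![] };
    import_users(&mut mock, &["alice", "", "bob"]);
    assert_eq!(mock.count(), 2); // the empty name was filtered out
    assert_eq!(mock.saved, vec!["alice", "bob"]);
}
```

A production build would pass a real database-backed implementation of the same trait; the processing code is unchanged either way.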
That sounds very much like what I am doing. However, I ask myself, and you, a question: are we really doing OOP? Or are we just using polymorphism to decompose the parts we would like to have interchangeable (for testing and other purposes)? Of the four tenets of OOP, anything other than polymorphism is pretty meaningless for us, no? So why would we call it OOP? Polymorphism is possible in assembler (jump tables), C, and functional programming.
As far as I can tell, using objects for system modelling uses abstraction (over implementation details), encapsulation (of command-driven state changes), inheritance (of abstract interfaces), and polymorphism (of interchangeable sub-components).
The examples in the link are garbage, of course, because C++ has poisoned the OOP discourse, but as far as I can tell the operations are still there. Though I think the case for inheritance is the weakest.
I dread event-based systems. When any part of the system can trigger an event, that’s a global mutable state (events can lead to mutation — anywhere). It’s also a pretty difficult kind of global mutable state where there isn’t even a fixed order of mutations. And if you let event handlers fire more events, you may end up with livelocks (infinite loops which aren’t locally explicit in the code).
Observers are more scoped and thus manageable. At least you can write the observed half of the code in a straightforward linear way. In the worst case an observer can slow down or livelock the object it’s observing, but at least that’s not global.
Just curious, what technologies would one use for the handling of the events in this kind of implementation?
Apache Kafka? Is that suitable for intra-application communication?
Kafka is quite popular these days, as is Amazon Kinesis.
Is that suitable for intra-application communication?
If you already have Kafka for other stuff, sure. Though it’s optimized for volume, not for low latency at low volume. For a small-volume application, setting up and maintaining Kafka would be overkill, IMO. A Postgres table will do, especially given that free-form event-specific data could go into a JSON column (or just be serialized at the application level).
These questions are where all the real-world hours go, and frankly where the interesting problems lie.
Services are what objects wanted to be.
Wow, sounds like three pretty sad book recommendations. Java? Cat/Dog? “Plain data”? Oof.
My top recommendation is 99 Bottles of OOP, largely because it’s short and mostly exercises.
I’m just going to take a stab at it: buy it, read it, and write up some thoughts, since I have seen it mentioned more than three times now.
If you’re interested I can give you recs for good historical books on OOP: not necessarily best practices, but a good indication of what people were thinking and what they were trying to do.
Sure I am! I am very curious in particular about what OOP is supposed to mean exactly, and that includes how the term evolved.
I’ve heard another good book is The Art of The Metaobject Protocol
@singpolyma and I already regret it because seems like my intuition was right, and is just over-analyzing a piece of code that is too trivial to matter, or even show any OOP-ness. Most of the analyzed implementations so far could be considered both: procedural, FP and OOP at the same time.
Oh well… Just going to go through it and see how it goes.
Outstanding article. I went down the oop road and got good at it. It declares one claim you must take on faith— that if you insist that “everything is an object”, it’s possible to find a nice set of objects to implement you’re system, and that’s the best way to do things.
Instead, what it does is tickles a developer’s brain in a super enjoyable way that feels fulfilling … in other words mental masturbation.
What i found is you often can find an elegant solution that consists of objects, and it will feel like a profoundly beautiful thing when you do. It will feel so right that it convinces many that, “this is the way.”
But in reality, it’s over-complicated. Your brain has a blast, but it was more about pleasuring your brain by making you work under a challenging limitation. Turns out you could drop that single dogmatic principle and wind up with a simpler solution faster, still elegant and maintainable (which is of course more fulfilling).
The one claim you had to take on faith, turns out, is exactly where the whole thing breaks down. And the incorrect illusion of “rightness” it gives to the puzzle solver’s brain is the engine that keeps it rolling.
I very much agree. I do see “every little thing is an encapsulated object that receives messages” as an artificial constraint which acts like a additional challenge akin to ones used in many games. “Complete game without killing anyone” in an RPG game and get a “pacifist” achievement, etc.
Instead of actually solving the problem and working on solving constraints that are actually important, we tend to get stuck on puzzle solving that makes us feel smart. It’s not limited the OOP. The same kind of motive are e.g. people agonizing over expressing some universal taxonomy, overusing generics to provide “super clean universal abstraction”, or (in Rust) solving a borrow checker puzzle involving 3 lifetimes, just to shave off one data copy in some cold path.
Very true, and a good pattern to be aware of (in yourself and others).
That said, there is an upside to adding additional constraints: constraints can make things easier to reason about. The most obvious, famous example that comes to mind is maintaining referential transparency in functional programming, and isolating your mutations. Or, say, in React, the way you have actions that update a data model, and then a unidirectional flow of that updated model down into the re-rendered view. In these cases, too, (especially at first) it can be a “puzzle” to figure out how re-structure your problem so that it fits into the paradigm. But in both of these cases, at least imo, there is a huge net benefit.
Which is to say that “artifical constraints” per se aren’t the problem. The problem is artificial constraints that are time-consuming but don’t pay you back.
Agreed!
I agree, except for semantics… if the constraint has an actual payoff (like “use only pure functions” does, for example), then it is no longer “arbitrary” or “artificial”.
“Everything is an object” never had a real payoff… it was just dogmatic koolaid.
I am by no means an OOP-first or an OOP-only and spent years being anti-OOP (but without knowing what OOP was or knowing what I preferred instead) – if I had to be pressed to “pick a paradigm” I would say I’m a functional programmer.
However, I have seen (and written myself and tried to come back later) enough “big procedures of doom” to know that is not the way. Or at least not a way I want to have to touch unless for a lot of money. Of course “good procedural” exists just like “good OOP” does but really we should focus on what we (often) have in common, which is that a 50 line procedure with everything inline is (often) an unreadable mess two days later and (more often) gets worse with every edit and so it needs to be modeled somehow (and no, procedures named do_thing, do_second_thing don’t count). You can use polymorphism and call it OOP or you can use closures and call it functional or you can use modules with glue and call it procedural or what have you – the paradigm and the name doesn’t matter, only the result.
[Comment removed by author]
Object orientation can be used in many ways. Perhaps its current failure is not being able to provide users with a prescriptive path to using it well. It’s a powerful toolset without a good manual. Of course kilometers of books describe its parts and mechanics, but not how to use it to model things.
Therein lies the first contention. Is it about modeling things, or organizing your code, or about just (mystically) sending messages? Most programmers spend years pursuing only the second while waxing lyrical about the third.
My conclusion is that the author has nowhere to turn to learn about modeling concepts and their interactions, real or imagined, into object-oriented code. Certainly no idea how to model constraints (rules) in object-oriented code without just including them in procedural scripts as managers, controllers or supervisors.
Can you model a product that can be a member of certain groups, which can be purchased in different locations by a person acting in the role of a customer in pure object oriented code, without regards to persistence or a user interface and with constraints on the collaboration between the objects?
When such a question is posed nearly all programmers will stand in front of a whiteboard mumbling about nouns, verbs and use cases, or what seems to be the current favorite - writing down each of the “solid” principles (I kid you not) as an answer.
Is it the paradigm or practitioner that should be improved?
Where I work, we use objects as a way to express and structure dependencies between parts of a single process. In other words, all the actual data modelling and computational work is done in functional, plain-old-data style; objects are just used to model the processing and allow mock testing. So for instance, if we were writing some data to a database, the database backend would be an object implementing an interface (specialized for a set of data types), and then the test could mock out the database and validate save/load calls easily.

So approximately all our OOP ends up being of the form “class <- interface”, one level deep. I think this is where OOP shines: introducing an expectation for a subsystem in one part of the code that is fulfilled in another part in a flexible way. Separation of concerns, rather than inheritance of functionality. And there’s still a tendency to go too abstract, so it’s definitely something to be careful about. Plain function calls (or struct method calls, but that’s the same thing) should be the first choice; method calls on a concrete class the second; method calls on an interface the third.
That sounds very much like what I am doing. However, I ask myself and you a question: are we really doing OOP? Or are we just using polymorphism to decompose parts we would like to have interchangeable (for testing and other purposes)? Of the four tenets of OOP, anything other than polymorphism is pretty meaningless for us, no? So why would we call it OOP? Polymorphism is possible in assembler (jump tables), C, functional programming.
Far as I can tell, using objects for system modelling uses abstraction (over implementation details), encapsulation (of command driven state changes), inheritance (of abstract interfaces) and polymorphism (of interchangeable sub-components).
The examples in the link are garbage, of course, because C++ has poisoned the OOP discourse, but as far as I can tell the operations are still there. Though I think the case for inheritance is the weakest.
[Comment removed by author]
I dread event-based systems. When any part of the system can trigger an event, that’s global mutable state (events can lead to mutation — anywhere). It’s also a pretty difficult kind of global mutable state, where there isn’t even a fixed order of mutations. And if you let event handlers fire more events, you may end up with livelocks (infinite loops which aren’t locally explicit in the code).
Observers are more scoped and thus manageable. At least you can write the observed half of the code in a straightforward linear way. In the worst case an observer can slow down or livelock the object it’s observing, but at least that’s not global.
@dpc_pw, Growing Object-Oriented Software, Guided by Tests Without Mocks is another object-oriented take on the Auction Sniper program.
Oh wow, this is great. I need to get back to it and analyze in more detail. Thanks a lot!
Just curious, what technologies would one use for the handling of the events in this kind of implementation? Apache Kafka? Is that suitable for intra-application communication?
Kafka is quite popular these days, as is Amazon Kinesis.
If you already have Kafka for other stuff, sure. Though it’s optimized for volume, not for low latency at low volume. For a small-volume application, setting up and maintaining Kafka would be overkill, IMO. A Postgres table will do, especially given that free-form, event-specific data could go into a JSON column (or just be serialized at the application level).