Programming philosophies are about state because state is the physical manifestation of logical time. Without state, there’s no way to tell the difference between point N in the program lifetime and point N+1. If my program buys something off Amazon, there is a point before which that event didn’t happen and a point after which it did, and that point corresponds to a change in the state of the world itself.
I think that this is the best top-level reply. More vividly, the passage of time is illusory, and so mutability is also an illusion. We could broadly talk about systems which don’t have mutability, systems which have explicit mutability varying over observable time, and systems which have implicit mutability varying over implicit time; but I’m not sure that this is helpful.
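While you’re right (block universe ftw!), I think this just reduces to s/time/logical dependency/ with no change in content.
That’s why computer scientists love clocks.
I made a significant error interpreting this sentence when skim reading…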
Minimize state! Use functional programming practices!
But seriously, my work with systems recently has been all about how to reduce the state that exists on a server. If you think this is solved with containers, I welcome you to the Jenkins container guidance of “put all that stuff in a volume mount”.
There is a better way! A middle path that captures the best of both worlds!
If a function is the sole owner of an object, and mutates it before returning it, that is indistinguishable from it never mutating anything at all, and instead returning a brand new object with the modifications.
If you pass a mutable reference to some data into a function, and only that function has the rights to modify that data, this is the same as never using mutation at all and instead having that function return the updated data, and using that updated data in place of the original data after the function call.
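For instance, a minimal Rust sketch of that equivalence (the function names are just illustrative):

```rust
// Style 1: sole owner mutates the vector, then returns it.
fn append_owned(mut v: Vec<i32>, x: i32) -> Vec<i32> {
    v.push(x);
    v
}

// Style 2: no mutation at all; build and return a brand new vector.
fn append_pure(v: &[i32], x: i32) -> Vec<i32> {
    v.iter().copied().chain(std::iter::once(x)).collect()
}

fn main() {
    let a = append_owned(vec![1, 2], 3);
    let b = append_pure(&[1, 2], 3);
    assert_eq!(a, b); // a caller cannot tell the two styles apart
}
```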
You can only do this safely and clearly with a type system that distinguishes sole ownership from shared references, and ensures that for every piece of data at every time, there is at most one function with permission to mutate that data. This is what Rust gives, and it’s what Val gives more clearly, though Val is still a prototype. And Haskell gives it too. Mutation without the spooky action at a distance.
> If you pass a mutable reference to some data into a function, and only that function has the rights to modify that data, this is the same as never using mutation at all and instead having that function return the updated data, and using that updated data in place of the original data after the function call.
One issue is that some operations may not be idempotent. If you had a function call that took a mutable reference, mutated the object, and then performed an operation that could fail, you might not be able to retry the whole function safely unless the mutation itself was idempotent.
For example, if you took a mutable reference to a dictionary, popped a value out of the dictionary, and used that value to make an HTTP request that failed, you wouldn’t be able to rerun the whole function with the original reference.
You would be able to do that with immutable objects.
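A rough Rust sketch of that failure mode (send_request is a hypothetical stand-in for the HTTP call):

```rust
use std::collections::HashMap;

// Hypothetical stand-in for an HTTP request that can fail.
fn send_request(_value: &str) -> Result<(), String> {
    Err("network error".to_string())
}

fn process(dict: &mut HashMap<String, String>) -> Result<(), String> {
    // The pop mutates the dictionary *before* the fallible operation...
    let value = dict.remove("job").ok_or("missing key")?;
    // ...so if the request fails, the entry is already gone, and retrying
    // process() with the same reference cannot see the original value.
    send_request(&value)
}

fn main() {
    let mut dict = HashMap::from([("job".to_string(), "payload".to_string())]);
    let first = process(&mut dict);  // Err("network error"); entry consumed
    let second = process(&mut dict); // Err("missing key"): the retry is broken
    println!("{first:?} {second:?}");
}
```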
Would linear typing also qualify as a means to reach this middle path? I learned about it recently, and some of the solutions you mentioned reminded me of the whole “each value can only be used once” restriction.
Rust does use linear typing. It has four different rules for different kinds of values:
- Small types like `i32` and `bool` and friends are so cheap to copy it would be silly not to. These implement `Copy`, and are exempt from the rules of the next bullet point because the compiler will implicitly copy them when needed.
- `T` is an owned value of type `T` (unless `T` is a small type like `i32` and implements `Copy`). E.g. `String` is an owned heap-allocated string. This uses linear typing to enforce sole ownership. If you have a variable `x: String`, it’s a type error to say `let y = x; let z = x;`. (Pedants will point out that Rust uses affine types. All that means is that you’re allowed to mention `x` zero times, which is technically disallowed by linear types.)
- `&T` is a shared reference to `T`; there can be many simultaneous copies of it, but it’s immutable.
- `&mut T` is a mutable reference to `T`; there can be only one copy of it.
The first (owned types) uses linear types. The second two use Rust’s borrow checker. It’s the combination of the two that makes the magic.
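A minimal sketch of all four rules in action (the rejected lines are left commented out):

```rust
fn main() {
    // Copy types: implicitly copied, so both bindings stay usable.
    let a: i32 = 1;
    let b = a;
    println!("{a} {b}");

    // Owned values: moved, not copied; reusing the old binding is a type error.
    let x = String::from("hello");
    let y = x;
    // let z = x; // error[E0382]: use of moved value: `x`

    // Shared references: any number at once, but read-only.
    let r1 = &y;
    let r2 = &y;
    println!("{r1} {r2}");

    // Exclusive reference: at most one, with no shared borrows alive alongside.
    let mut s = String::from("hi");
    let m = &mut s;
    m.push('!');
    println!("{s}");
}
```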
Haskell gets to a similar endpoint (it allows mutation, but controls it well) by completely different means, using state monads.
Most other languages pass around shared, mutable references, either explicitly, or under the hood. Which leads to confusion and bugs, and is the reason that people get up in arms about state. My point is that if you remove the sharing, then the state becomes harmless.
> Pedants will point out that Rust uses affine types.
Yes, we shall. Rust’s affine types are useful, but they are not linear types. The lack of linearity prevents encoding in the type system that, e.g., some object given out by one’s module must be passed back to that module and not just dropped on the floor. It also leads to kludges: types that prevent their values from being discarded unused by unconditionally panicking (throwing an exception or aborting the process) in their destructors.
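A minimal sketch of that kludge, sometimes called a “drop bomb” (the type and method names here are illustrative):

```rust
/// A token that the issuing module wants back. Rust's type system can't
/// require that, so the destructor panics as a runtime backstop if the
/// token is silently dropped instead.
struct MustReturn {
    armed: bool,
}

impl MustReturn {
    fn new() -> Self {
        MustReturn { armed: true }
    }

    /// The only sanctioned way to dispose of the token.
    fn give_back(mut self) {
        self.armed = false; // disarm, then let the drop run harmlessly
    }
}

impl Drop for MustReturn {
    fn drop(&mut self) {
        if self.armed {
            panic!("MustReturn token was dropped instead of given back");
        }
    }
}

fn main() {
    let token = MustReturn::new();
    token.give_back(); // comment this out and the program panics at exit
}
```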
Not familiar with Rust… is there a fundamental conceptual difference between this and the OO pattern of private data that can only be mutated by a public method?
The main difference with Rust is that in the typical OO language you can have two aliases to the same object, that can both call the public method, so there’s still room for spooky action at a distance.
Clojure has a mechanism for this too. You can make a variable mutable, and it will error out if you pass it to a function that expects non-mutable data. This allows mutation for speed and going back to being immutable when passing data around.
Not the same! The property I’m referring to is global, not local. It’s not the sort of thing you can bolt on to an existing type system.
E.g. say that you have the object graph:
```
  A
 / \
B   C
 \ /
  D
```
and A passes two mutable references into a function: one to B and one to C. Bam, the invariant has been broken, and spooky action at a distance may occur! If the function modifies `B.D`, then spookily it has also modified `C.D`.
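In Rust you can’t even build that graph without making the sharing explicit, e.g. with Rc<RefCell<…>>; a sketch (the types are illustrative) where the spooky write then shows up exactly as described:

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct D { value: i32 }
struct B { d: Rc<RefCell<D>> }
struct C { d: Rc<RefCell<D>> }

fn mutate_both(b: &mut B, c: &mut C) {
    b.d.borrow_mut().value = 42;        // modify B.D...
    println!("{}", c.d.borrow().value); // ...and C.D spookily reads 42
}

fn main() {
    let d = Rc::new(RefCell::new(D { value: 0 }));
    let mut b = B { d: Rc::clone(&d) };
    let mut c = C { d: Rc::clone(&d) };
    mutate_both(&mut b, &mut c);
}
```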
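Good article and comment about that recently:
https://verdagon.dev/blog/when-to-use-memory-safe-part-2
https://news.ycombinator.com/item?id=34412976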
i.e. borrow checking can be seamless for stateless apps, but is more complex for stateful apps
Stateless programs like command line tools, or low-abstraction domains like embedded programming, will have less friction with the borrow checker than domains with a lot of interconnected state, like apps, stateful programs, or complex turn-based games.
> If you pass a mutable reference to some data into a function, and only that function has the rights to modify that data, […]
> You can only do this safely and clearly with a type system that […] ensures that for every piece of data at every time, there is at most one function with permission to mutate that data. This is what Rust gives, […]
Rust’s `&mut` exclusive references mean that “only that function has the rights to modify” and read that data. Letting one function mutate the data without barring other functions from reading it at the same time would suggest a race condition.
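A minimal sketch of the compiler enforcing that exclusivity (the offending line is commented out because it is rejected):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let r = &v[0]; // shared borrow: reading is fine
    // v.push(4);  // error[E0502]: cannot borrow `v` as mutable
                   // because it is also borrowed as immutable
    println!("{r}");
}
```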
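Another way to put this is that only the combination of shared mutable data is bad. If you remove/control sharing, mutation can be totally safe.
I love this model, and am very interested in Val for that reason.
(D calls this “weakly pure”)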
Completely agree! Minimizing state is paramount. I’ve found functional programming pushes the idea of state reduction, but it can apply to any philosophy and is the first step to solving a problem. I’ve found over time my feedback on design reviews has just been concerned with the design of state and how it’s being handled.
I can’t help but take the reductionist viewpoint here. At some level, all machines are state machines. Of course all engineering philosophies are about state. It’s what they’re for. 😉
Right, this is the same as saying “all programming is data manipulation.” Well, yeah, but “data” is so vague that it’s a meaningless word. Same with state.
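How is the term “data” vague?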
Because literally everything is data. For example, you see people talk about “data cleanliness.” This means nothing on its own; you have to know about the specific domain model to know whether something is “clean” or not, or whether it’s an optimal representation.
The level of which you speak is not the level at which software is typically authored. If one cares how the program as it was written will be understood by those who read it, the philosophies are not equivalent, as you seem to be implying, because they are primarily concerned with mental models and only incidentally, as SICP put it, with how the code will be executed.
I am not saying they’re equivalent. I’m saying they’re strategies for managing something very fundamental to the discipline; that a programming philosophy that was somehow not “about state” would have difficulty being about programming at all.
I.e. that programming itself is about state, so of course all the philosophy around it is.
I apologize. I mistook your use of the terms “reductionist” and “at some level” to be a reference to an almost postmodern, it’s-all-just-ones-and-zeroes, anti-philosophy that’s found an occasional niche here.
With that cleared up, I agree with you, although I still found the article to be a decent summary of the ways in which people have searched for effective ways to manage state.
The way I understand this discussion, which may or may not be what the comment you’re responding to is talking about, is that there are at least two kinds of state that we deal with in writing a program. One is external or world state, which is the kind of state you’re talking about, I think. Any program that has no effect on this type of state is indistinguishable from no program at all, and so programming is trivially about state in that sense. The other is internal state, or the bookkeeping that programs enact in order to affect external state according to their contract, and that’s the type of state being talked about here. I don’t really agree that every programming paradigm is fundamentally “about” state in that sense, but I don’t think it’s a trivial assertion in that context.
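My favourite (“Minimise mutable state in scope”) gives you at least these bits of regularly received wisdom:
Back when I had a public Twitter, I tweeted “immutable til it hurts; mutable til it works.”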
I think saying that every paradigm is about state is taking things a bit too far. Sure, state management is a cross-cutting concern that can interact with a lot of things, but that doesn’t mean these things are defined purely by their interaction with state.
OO and FP are indeed mostly about state, but OO is also about inheritance, which in turn can be about computation rather than state. FP is also about higher-order functions, which again show up in computation more frequently than in state management.
Declarative vs imperative can also be about computation.
Services can also be about computation. In fact, in a lot of microservice-based architectures, you will have a large number of almost stateless services [1] that talk to each other, with one or a few databases at the very bottom that manage all the state. In other words, microservices are more often about distributing computation than about distributing state.
[1] Cache is a notable exception, but cache is arguably a performance optimization and doesn’t change the stateless nature.
> but OO is also about inheritance, which in turn can be about computation rather than state
Alan Kay, who coined the term, would disagree. More modern OO languages are increasingly moving away from the idea that subtyping should be coupled to implementation inheritance, because it turned out to be a terrible idea in practice. Even in OO languages that support it, the mantra of ‘prefer composition over inheritance’ is common.
I think more important than Kay is what the inventors of object-orientation would say: Nygaard and Dahl. They have sadly passed, but I imagine what they would say (particularly Nygaard) is that there is a huge problem space to which some aspects of a language might be applied, and others not. To lump everything together as the same for all is problematic. Not all software is the same. For example, implementation inheritance has been very successful for a type of problem: the production of development and runtime frameworks that provide default behavior that can be overridden or modified.
Alan Kay doesn’t get to decide what words mean for everybody else, especially when he’s changed his mind in the 40+ years since he originally coined the term.
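Since Hillel is being too circumspect, I’ll cite the piece: https://lobste.rs/s/0zg1cd/alan_kay_did_not_invent_objects.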
Moreover, I feel the description of OO vs FP and declarative vs imperative seems to imply that the amount of state to manage is a given, and the different paradigms are just different ways to manage it. While it’s true that every system has inherent state, there can also be a lot of accidental state, and different paradigms can have very different amounts of accidental state.
> there can also be a lot of accidental state, and different paradigms can have very different amounts of accidental state
Completely agree! This was really just a short article showing a different way you can view programming philosophies, and maybe evaluate them based on the trade offs they make about state management. I didn’t really dive into how certain approaches may introduce additional state, and whether that tradeoff is worth it, but it’s an important thing to consider when designing systems.
It’s more of a conceptual model that can be applied to the various programming philosophies, and shows how they differ but also how they are similar. It focuses on the trade-offs, which is what’s important, rather than on a “one true way.”
I’m slightly confused by what you mean by “computation.” Inheritance and higher-order functions are both ways of structuring code with the goal of making it more correct.
I did leave out the topic of performance from this article, and I think that is a very important axis on which the philosophies have varying takes as well. It felt like it would have made the article too unwieldy if I tried to tackle that too.
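I mostly agree, though I don’t think OO and FP are very far apart. Related comment here:
https://lobste.rs/s/lwopua/functional_code_is_honest_code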
Can I further your argument and say that I don’t think any of them are far apart? I think any good design is principled about state and I/O, and my hope was to show that each programming philosophy just provides different ways of approaching it.
I think because FP pushes so hard on state reduction, any state-minimization thinking feels like a functional viewpoint. But being principled about state is necessary for properly using any development style; FP is just the most direct about it.
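I completely disagree. Paradigms are orthogonal to state manipulation. There are plenty of mutation-friendly FP languages.
Are about state? This fallacy of logic is akin to observing that humans are made up of atoms and concluding that humans are therefore about atoms.
Observing that the members of a collection all share a common characteristic does not mean that is what they are “about”.
https://en.wikipedia.org/wiki/Blind_men_and_an_elephant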
Read the article, rather than just the headline. The thesis is that the difference between the philosophies can be boiled down to how they work with state. Seems right to me.
Author here, I used the phrase “about state” pretty loosely. Wasn’t particularly thinking about it.
But I stand by what I wrote: as dtgriscom wrote, this article is about boiling down the different philosophies into how they manage state, and that really is the primary difference between them.
It’s not a perfect fit (it does leave out the axis of performance) but I think a useful conceptual model.
Then use functional programming! Just (half) kidding!
You mean all high-level programming philosophies, maybe?
At a high level, state is an annoyance that you are better off without. Hence the techniques and practices to isolate it in the smallest possible compartment. Personally, I think functional programming is the key to achieving this.
At a low level, programming becomes nothing but the art of manipulating state. It doesn’t make sense to talk about minimizing or isolating it. It is just there at all times.
High level programming languages have been incredibly useful, because they give very cheap access to all the nice things reliably implemented at a lower level. Most people use high level languages to build all sorts of software. The difficult parts of state handling have been solved by libraries, operating systems, database systems, etc.
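Happy coding!
I disagree.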
There are differing programming paradigms, each loosely based on various philosophies. These paradigms can each be viewed through their stance on state.
So yep, it kinda sorta reduces to state, but that’s not what they’re about. It’s a side effect. That’s important to know when you’re comparing notes.
Philosophy is directed conversation and questioning in the search for a science, a place where experiments can be performed. Most programming paradigms are tools that can be used at the end of the philosophical process. Different tools emphasize different kinds of solutions.
The difference is important because this is all a loose grouping. Pick one, like functional programming or microservices. Whichever one you pick is going to have problems, because they’re not well-thought-out philosophies. This essay is like saying “hammers create wooden houses.” It’s true that the two are associated, but there are lots of edge cases, and you’re trying to link the tool with some dream end state. It doesn’t tend to be useful aside from noticing it.
Programming is really about the application of type theory in the context of applied philosophy. This means that things like OO and FP (in the generic) are really about “how to think about solving problems”, not the problems themselves or the deployment units used. Almost all the programming things we deal with are there to help us think better. They are analysis tools, only put inside of compilers and deployment units in various formats.
Apologies if that description wasn’t good enough. It’s a great topic. Congrats to the author for taking a run at it!
State, or the structure of data representation, encompasses complex choices about the meaning and scope of a program.
Algorithms also encompass complex choices about the meaning and scope of a program.
This leads to the novel and startling conclusion: Algorithms + Data Structures = Programs
More seriously, the art of development is about alignment of interests. Various forms of programming (functional, object oriented, declarative, imperative, etc.) are attempts to satisfy competing interests:
The system can ship an MVP quickly (initially cheap)
The system can add new features without changing architecture (longer term cheap)
The system has enough information to create a fast binary (initially fast)
The system can scale to an exponential growth of users (longer term fast)
The system has restrictions to provide safe and complete execution (initially good)
The system can be worked on by hundreds of independent teams (longer term good)
There are different interest groups of different strengths in organizations. Aligning the interests into an architecture that fully captures developer intent is still somewhere between an art and a craft.