That’s not a very useful argument. The real question is whether the language makes this style of programming natural. The problem with doing FP in an imperative language is that you get no support from the language, and you often end up working around the way the language is meant to be used. One example of this is the fact that data is mutable by default in imperative languages. Therefore, it’s entirely up to the developer to ensure they’re using immutable data structures, and that mutable data isn’t being put in these data structures and referenced elsewhere. If you’re working on a team, then everybody has to have a common understanding of how to work with mutable state and how to manage side effects.
I never said anything about using immutable data structures, just about avoiding mutating parameters. Or at the very least, make it apparent in the function name/signature. Lisp is considered a functional programming language, but it has many functions that mutate their parameters, which is reflected in their names (RPLACA or RPLACD, for instance).
If you can’t work in a functional programming language, but you want the benefits, then what? Just give up? Or try to follow some simple principles?
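For what it’s worth, that naming convention carries over to any language. A minimal Python sketch (the `_inplace` suffix is a convention I made up for illustration, not a standard):

```python
# Hypothetical sketch: make mutation obvious at the call site via naming.
# The "_inplace" suffix is an invented convention, not a standard one.

def append_item_inplace(items, item):
    """Mutates its argument -- and the name says so."""
    items.append(item)

def with_item(items, item):
    """Pure alternative: returns a new list, leaves the argument alone."""
    return items + [item]

basket = ["apple"]
append_item_inplace(basket, "pear")     # basket becomes ["apple", "pear"]

original = ["apple"]
extended = with_item(original, "pear")  # original is still ["apple"]
```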
Referential transparency is at the core of modern functional programming. The whole idea behind this style is that you write pure functions that can be reasoned about independently, and you chain these functions into data transformation pipelines. This is difficult to achieve without immutability, which makes it a key feature in a functional language.
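As a minimal illustration of that pipeline style (in Python, with names invented for the example):

```python
# Each function is pure: its output depends only on its input, no shared state.
def parse(line):
    name, value = line.split(",")
    return name, int(value)

def scale(pair):
    name, value = pair
    return name, value * 2

def render(pair):
    name, value = pair
    return f"{name}={value}"

# Chaining them gives a data-transformation pipeline that can be
# understood and tested one link at a time.
lines = ["a,1", "b,2"]
result = [render(scale(parse(line))) for line in lines]
# result == ["a=2", "b=4"]
```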
The ideas from FP can be applied in imperative languages, but the point is that the benefits are nowhere near those of using an actual functional language.
I don’t completely agree with the “Separate I/O from processing” section. Using a functional style does not necessarily mean you need to have all the data you’re gonna work with in memory when you start working with it; you just need to separate the work you do between tasks that perform I/O and tasks that do data transformation. There are many FP tools to help you deal with streaming data, such as lazy sequences/transducers and parser combinators.
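A rough Python sketch of that streaming idea, with generators standing in for lazy sequences (the “file” here is faked with a list):

```python
# The transformation steps are lazy generators; nothing is read or computed
# until the pipeline is consumed, so the full data set never sits in memory.

def parsed(stream):
    for line in stream:
        yield int(line.strip())

def evens(stream):
    for n in stream:
        if n % 2 == 0:
            yield n

fake_file = ["1\n", "2\n", "3\n", "4\n"]   # stand-in for real line-based I/O
result = list(evens(parsed(fake_file)))
# result == [2, 4]
```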
Right, you want the code that does the IO to be separated from the code that does the computation, but only in terms of where you put their source code. They can still overlap in time at runtime.
You’re agreeing with the author here about this:
have the processing code call a (possibly configurable) function for input and output
I was more ticked off by “get more memory”, which I don’t think should be an acceptable tradeoff for adopting a functional style.
Good point.
Something along the same lines that I’ve been doing (in OO languages): Reducing usage of instance variables.
Even when an instance variable would be very much accessible in a given method I’m writing, I write the method so that it has to take in another argument instead, which serves the instance variable’s purpose. Similar reasoning as with global variables, but on a smaller scale: It reduces the surface area of what (in the class’s code) could possibly impact what the method in question does, thereby making it easier to read and reason about.
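A small Python sketch of what I mean (the class and names are made up):

```python
class Invoice:
    def __init__(self, rate):
        self.rate = rate

    def total_implicit(self, amount):
        # Reaches for hidden instance state; to reason about this method
        # you also have to know how self.rate is managed elsewhere.
        return amount * (1 + self.rate)

    def total_explicit(self, amount, rate):
        # Everything that affects the result is in the signature.
        return amount * (1 + rate)

inv = Invoice(0.5)
assert inv.total_implicit(100) == inv.total_explicit(100, 0.5)
```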
Separate I/O from processing
I’d have appreciated an example of this, as I don’t quite get the point the author is trying to make. “Here’s an example where it’s not separate. See how these problems can occur? Now here’s what it looks like when we separate them.”
Separating I/O from processing is a good instinctual starting point.
Not all I/O code is a file read; sometimes it’s an RPC, a database call, or listening on a socket.
Having your core logic not worry about the details of how the data gets to it allows you to reuse it across more domains, and allows your I/O code to focus on I/O, rather than having to think about both at the same time.
That being said, there are times when you have to make sure your separation of concerns doesn’t lead to bad I/O patterns, like transaction-per-row database interactions.
And you have things like ETL (Extract, Transform, Load) flows, where the I/O is the processing, for the most part.
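One way to sketch that in Python (names invented): the core logic only asks for an iterable of records, and each I/O concern just has to produce one.

```python
def summarize(records):
    """Pure core: has no idea whether records came from a file, an RPC,
    a database cursor, or a socket -- anything iterable works."""
    total = 0
    count = 0
    for value in records:
        total += value
        count += 1
    return {"count": count, "total": total}

# An I/O "adapter" only needs to yield records; a real one would wrap a
# file handle, RPC client, or cursor instead of an in-memory list.
def fake_source():
    yield from [3, 4, 5]

result = summarize(fake_source())
# result == {"count": 3, "total": 12}
```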
If the method works without any instance variables, why is it even in the class at all?
That’s a fair question. A lot of the time, it’s private functions that do some intermediary work that doesn’t need to be exposed as part of the class’s public interface.
Usually I like to bundle such things into a module of related helpers, or else build a sub-object for that inner functionality.
Not that a method with no self references is dogmatically evil, but it often feels like a smell.
From https://www.kimballlarsen.com/2007/10/26/real-programmers-dont-eat-quiche/
I wish more programmers cared about programming, but that’s just me, because I love programming. The text is a fun read and I wish I’d lived in the 60s!
idk, the text describes a horribly toxic and arrogant mindset about 60% of the time. XD
Places with this kind of programming still exist though. Look in aerospace, automotive, some corners of health care, legacy IBM systems for airlines and accounting, etc. Anything where the costs of mistakes are very high.
Yeah you are right. Shouldn’t be fun to work in.
But man isn’t it amazing to land a spacecraft six years later lol
I agree with most of this strongly. My C programming got much less buggy and more maintainable when I started avoiding side effects like mutating global variables or reference parameters.
Separating I/O from processing is probably also a heuristic I tend to use that has made my programming better. If a prototype I make has I/O sprinkled in several places, one of the first refactors I tend to do is consolidate or separate it out. I/O is where you interact with things outside your program, and having those interfaces separated more discretely makes them easier to debug, easier to test, and makes it easier to separate a “your code problem” from an interface or “their code problem”.
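The shape of that refactor, sketched in Python with io.StringIO standing in for real files:

```python
import io

def read_lines(fp):          # all input I/O in one place
    return fp.read().splitlines()

def transform(lines):        # pure: a failure here is a "your code problem"
    return [line.upper() for line in lines if line]

def write_lines(fp, lines):  # all output I/O in one place
    fp.write("\n".join(lines))

src = io.StringIO("a\n\nb\n")
dst = io.StringIO()
write_lines(dst, transform(read_lines(src)))
# dst.getvalue() == "A\nB"
```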
It was extremely frustrating though, in Haskell, to jump through hoops to do debug printing inside a function.
GHC Haskell includes a purity-breaking Debug.Trace module to solve this problem, if you’re ever in that space again.
Yes, but to model what? Put another way, what subset of “applications” is it best suited to? Is a simulation the same as a line-of-business application the same as a control system the same as a program that just transforms data?