“Stop making #1 priorities” should be your #1 priority for 2015. (I’m tired of stupid headlines proclaiming one-size-fits-all solutions every 6 months for the last 10 years, ever since the masses got involved in programming.)
The “replace object oriented programming with functional programming” lead-in doesn’t do functional programming any favors.
Object oriented programming was originally sold in “one size fits all, best thing since sliced bread, does great stuff for free” terms, and that was a disaster. Indeed, that kind of sell harmed OO more than anything else.
OO has the virtue of being:
1. A way to package/encapsulate just about anything and so turn something that began as a disaster into a slightly less disastrous thing in a practical time frame. Functional programming isn’t competing here and shouldn’t compete here.
2. A way to program powerfully by combining objects, programming with duck-types or generics or whatever. Functional programming does compete here, but neither “wins” because after a point, power, meaning effects-per-line or flexibility-per-line, only matters for small-ish programs. Once one gets to a certain program size, the main limit to power is the programmer’s ability to encompass it, and that’s inherently limited.
The virtues of functional programming (which I’m less versed in, so feel free to correct):
1. If you fully subscribe to the approach, you get a variety of hard safety guarantees which also allow you to manipulate the program on various different levels. Especially, these open the door to safety guarantees in multi-threaded and concurrent programs whereas imperative programs tend to be subject to unpredictable errors, a drag given that such programs tend to be “high performance” where an unpredictable crash is a bad thing (see Twitter going from Ruby to Scala).
2. Power, see above.
3. You jump on the next-big-thing bandwagon, expand your mind, learn new and different tricks.
So there you have it, apples versus oranges, as you’d expect.
Just as OO was ill-served when sold as the answer to everything in the 90s, my hunch is that functional programming will be ill-served if it becomes accepted as the answer to everything, though that seems less likely given that it seems inherently difficult to learn and use.
you get a variety of hard safety guarantees which also allow you to manipulate the program on various different levels
You get more than that. I’m going to scope this to Haskell because I will not defend other languages considered to be functional and because I’ve been teaching it and writing about it for a while now.
OO has the virtue of being: 1. A way to package/encapsulate just about anything and so turn something that began as a disaster into a slightly less disastrous thing in a practical time frame.
Abstract datatypes do the same thing. You see them a lot in OCaml and Haskell. I’d say the deeper point here is about final encodings but let me hand-wave that for now.
I benefit from FP-style (initial encodings - algebraic datatypes, pattern matching) and OO-style (final encodings - abstract datatypes/typeclasses) in Haskell in equal measure. The deeper point, IMO, is that I have a solid and principled foundation to build upon and a wealth of options to work with. Building on wet sand sucks.
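For readers who haven’t seen the two encodings side by side, here’s a minimal sketch (the `Shape`/`Square` names are mine, purely for illustration):

```haskell
-- Initial encoding: a closed algebraic datatype, consumed by pattern
-- matching. Easy to add new operations, harder to add new cases.
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- Final encoding: an open typeclass interface. Easy to add new cases
-- (instances), much like adding a subclass in OO.
class HasArea a where
  areaOf :: a -> Double

newtype Square = Square Double

instance HasArea Square where
  areaOf (Square s) = s * s
```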
Also, good defaults. I do think the “defaults” in a language like Haskell (pure, immutable, etc.) are the right way to go. I do think FP-style initial encodings are a good ‘first-pass’ default for when you’re still mapping out and understanding the domain. I don’t abstract out to something that hides information about the concrete instance until I understand the…abstraction (or pattern). Perhaps this works fine because Haskell is pure and the default is immutability, but pure code + immutability alone wasn’t enough to make Clojure scale well for me, so it’s not just that.
IMHO, thinking final encodings (OO) will save you from pervasive effects (impurity, mutability, etc.) is like thinking breathing through a straw on Venus will keep you alive.
You need a spacesuit, not a straw.
The frontier on this is continually being pushed out - you see Haskellers asking the same questions about totality and Turing completeness that people ask of Haskellers about purity and “but how do you talk to the outside world?” As the author of a library that does nothing but talk to an external service (a search engine) and still has ~94% of the code emit no IO, let me tell you, it works fine.
But sometimes, I use a mutable datatype. And it’s totally fine. shrug I would note, however, that I am not allowed to confuse immutable and mutable variations of the same type in Haskell, but I’m still allowed to (safely) write generic operations over both.
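A sketch of what that looks like with the `array` library that ships with GHC: the immutable `Array` and mutable `STArray` are distinct types, so the compiler won’t let you confuse them, and `thaw` converts safely between them (`bumpAll` is a made-up example function):

```haskell
import Data.Array (Array, elems, listArray)
import Data.Array.ST (getBounds, readArray, runSTArray, thaw, writeArray)

-- Array (immutable) and STArray (mutable) are different types, so they
-- can't be mixed up. thaw copies the immutable array into a mutable one,
-- and runSTArray guarantees the mutation never leaks out.
bumpAll :: Array Int Int -> Array Int Int
bumpAll a = runSTArray $ do
  ma <- thaw a
  (lo, hi) <- getBounds ma
  mapM_ (\i -> readArray ma i >>= writeArray ma i . (+ 1)) [lo .. hi]
  return ma
```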
You can’t think this is a one-size fits all approach unless you’re talking about languages with a narrower set of options than something like Haskell. Unless you don’t think final encodings capture the value of OO, but then the onus would be on you to explain what OO does that abstract datatypes, typeclasses, and modules cannot. I don’t think that’ll be fruitful unless you’re formulating new foundations for programming languages which means you now must justify a totally new PL theory. That seems a rather more daunting task than, “this tool relying on 79 year old well-established theory of computation happens to be well made, have nice libraries, and lets me be more productive than the alternatives”.
I use Haskell in my 9-5 for a variety of reasons. Only one of them has the word “lambda” in it.
TL;DR - No hairshirt. Future’s so bright, I gotta wear shades.
I don’t feel like defending object-oriented programming, but I will :).
You can run a function a thousand times across different cores or machines and you’re not going to get different outputs from what you’ve gotten before. So you can use the same code to run on 1 core as well as 1,000. Life can be good again.
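In Haskell terms, this is why deterministic parallelism works: sparking a pure computation on another core can only change timing, never the answer. A minimal sketch using the `par`/`pseq` primitives from base (`psum` is just an illustrative example):

```haskell
import GHC.Conc (par, pseq)

-- Because psum's pieces are pure, evaluating the left half on another
-- core (par) can only affect wall-clock time, never the result; run it
-- on 1 core or 1,000 and you get the same answer.
psum :: [Int] -> Int
psum []  = 0
psum [x] = x
psum xs  = l `par` (r `pseq` (l + r))
  where
    (ys, zs) = splitAt (length xs `div` 2) xs
    l = psum ys
    r = psum zs
```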
Counter-example: modern parallelized BLAS implementations still outperform purely functional matrix libraries (e.g. repa) by a wide margin for most use cases. Of course, lazy immutable languages could eventually benefit from the elimination of temporaries (which e.g. Eigen does successfully with C++ expression templates).
I agree that the point can be true for concurrency: e.g. in a webserver this can be hugely advantageous. Unfortunately, most web frameworks live in an IO monad, since real-world applications do have state (database connections, etc.).
At least for concurrency and parallelism, OOP can’t save you anymore, because OOP relies directly on mutable state (in imperative languages, which are the most common OOP implementations).
Object Oriented programming and mutability are orthogonal. You can design classes to be immutable (e.g. see Guava’s immutable collections). There are also many counterexamples that show that oo-mutable <-> fp-immutable is a false dichotomy. E.g. OCaml, Scala, and Lisp allow mutation.
Many may argue that FP has poor readability. If you have an imperative background, functional programs will look like a cryptic language. Not because they’re actually cryptic, but because you don’t yet know the common idioms.
Most pure FP is beautiful and perfectly readable. The problem is that if you really want to have optimized code in a pure lazy FP language, you usually end up with an imperative/functional mixture in state or IO monads. While it still results in nice function signatures, the code is usually not very readable. If you want examples, look at some of the performant Haskell libraries or the Computer Language Benchmarks Game.
tl;dr: I don’t believe there is one true way. Pure, immutable functions are good. Immutable data structures should be the default. But sometimes you also want mutable data structures and/or imperative code for performance reasons.
Object Oriented programming and mutability are orthogonal.
I agree. Cf. what I said about final encodings in my other comment.
modern parallelized BLAS implementations still outperform purely functional matrix libraries (e.g. repa) by a wide margin for most use cases.
I would be careful about attributing that to being purely functional. How many hours have been dumped into BLAS?
How many hours have been dumped into repa?
Regardless, both are available in Haskell. The point for me is that I have a choice at all.
Pure, immutable functions are good. Immutable data structures should be the default. But sometimes you also want mutable data structures and/or imperative code for performance reasons.
The problem is foundations. You can have both and still be using a language that has the right foundations & defaults. Having pervasive, untracked effects and impure semantics by default doesn’t serve a useful purpose when we already know how to do a better job.
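What “tracked effects” buys you, in one sketch: the type alone tells you whether a function can perform IO, and the compiler rejects code that tries to smuggle an effect into a pure signature (the functions here are my own toy examples):

```haskell
-- 'double' has no IO in its type, so the compiler rejects any attempt
-- to perform an effect inside it.
double :: Int -> Int
double x = x * 2

-- The effect in 'fetchAndDouble' is declared right in the signature.
fetchAndDouble :: IO Int
fetchAndDouble = do
  line <- pure "21"   -- stand-in for a real effect such as getLine
  pure (double (read line))
```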
Most pure FP is beautiful and perfectly readable. The problem is that if you really want to have optimized code in a pure lazy FP language, you usually end up with an imperative/functional mixture in state or IO monads.
No, Monads and IO are pure. I think you’ve misunderstood what purely functional means. OCaml is impure because it augments lambda calculus with stuff that…isn’t lambda calculus. This is where the imperative bits in OCaml come from. Purely functional means lambda calculus only; this is attested to in the first paper, from 1965, known to suggest monads would be useful in purely functional languages.
Do-syntax letting you believe you’re writing imperative code in Haskell doesn’t mean Monads or IO are imperative. Technically, IO and Monad are orthogonal in addition to this. Monad is just a convenience interface for working with the IO datatype. Some of the benefits of how IO works are kicked around in this article.
Monad started not as a typeclass, but as syntactic sugar baked into GHC. We didn’t get the former until Gofer invented constructor classes.
IO doesn’t add anything that isn’t lambda calculus to the lambda calculus underlying Haskell’s semantics, so it’s pure. You may mean “IO values can be effectful or bear effects”, but that’s not what purely functional means.
Purely functional means pure as in “just lambda calculus”, not purity ring.
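One way to see this: an IO action is an ordinary value. In this sketch (my own toy example), building a list of actions performs nothing; effects only happen when the runtime executes whatever `main` evaluates to.

```haskell
-- Building this list performs no effects: each element is just a value
-- describing an action. Nothing prints until something executes them.
greetings :: [IO ()]
greetings = [putStrLn "hi", putStrLn "bye"]

-- Only here do the effects happen, in whatever order we choose.
runAll :: IO ()
runAll = sequence_ (reverse greetings)
```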
No, Monads and IO are pure. I think you’ve misunderstood what purely functional means.
I know. I should have worded that more accurately. The point is that code in the IO, ST, or State monads with do-notation looks like imperative code. And when the program is executed, it has side effects just like imperative code. So for all practical purposes it is imperative code from the programmer’s point of view.
(Edit: yes, I know that do-notation is desugared and that the monad laws hold. That’s not my point - it’s that if you write performant code you usually don’t end up with the beautiful clean functions that you start with, e.g. because you need the ST monad.)
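For what it’s worth, the ST case the parent mentions does keep the clean signature even when the insides go imperative. A standard toy example (not from the parent’s code):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- The body reads like an imperative loop over a mutable accumulator,
-- but the ST type guarantees the mutation cannot escape runST:
-- callers see an ordinary pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (modifySTRef' acc . (+)) xs
  readSTRef acc
```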
So for all practical purposes it is imperative code from the programmer’s point of view.
No it isn’t, we get much better guarantees than that from how Haskell works. I linked an article in my last comment to address (some of) this specifically.
You’re conflating semantics (IO, ST, State) with syntax (do). Grave.
Your assertion that syntax reduces the semantics to what the syntax seems to represent (but doesn’t get desugared into) is not correct. Programming languages are much more than their syntax.
I don’t think this conversation can proceed productively.
I’d rather look at the other solution to shared mutable state: limited sharing - especially since that’s a new thing to look at and I’ve known how to do functional programming since 2010…
The last example looks more like declarative (or logic) programming than functional programming to me. I thought it was a Prolog snippet at first sight!
Really? Above making money, doing good for your local community, country and the world? Functional programming is my #1 priority, just after about 10 other things including “Getting shit done”.