Previously on As the Lobster Turns…
Actually, I like the reddit /r/AskHistorians take on reposts: for philosophical ideas such as this, there may always be new things to be thought about and understood. So reposts should be OK.
But this is 2 days later! Maybe let it marinate/stew a bit longer…
Ah, you are right!
Sorry, I wasn’t trying to say that it was a repost. I just remembered seeing Hillel in the comments arguing about the provenance of object-oriented systems. Then two days later he has an article on his site. It seemed like the conversation sparked the article.
Confirmed! I’ve wanted to do a post on the broader history of OOP, but seeing the mistakes in the Ovid piece get super popular inspired this off-the-cuff one.
A great article to clarify a very common (and unfair) error attributing object-orientation to Alan Kay. The main legacy is a confused new generation of programmers all arguing that Erlang is the only object-oriented language, because it’s all about something called “messaging”.
To quote him (recently) on his role in object-orientation’s invention:
“Very much in the same spirit as I thought about it back in the 60s. I don’t think I invented “Object-oriented” but more or less “noticed” what was really powerful about just making everything from complete computers communicating with non-command messages. This was all chronicled in the HOPL II chapter I wrote “The Early History of Smalltalk”.
A critical part of that thought process was the idea of using Carl Hewitt’s PLANNER ideas as the interface to objects – that was done around 1970, and we used some of it in the first Smalltalk (72).”
Source - https://news.ycombinator.com/item?id=14337970
The History of Programming Languages conference paper on the early history of Smalltalk by Alan Kay has some interesting background also. They had a compiler listing for Simula that didn’t work, and the documentation was not the best. Their process of learning the language was to read the compiler:
another graduate student and I unrolled the program listing 80 feet down the hall and crawled over it yelling discoveries to each other. The weirdest part was the storage allocator, which did not obey a stack discipline as was usual for Algol. A few days later, that provided the clue. What Simula was allocating were structures very much like the instances of Sketchpad. There were descriptions that acted like masters and they could create instances, each of which was an independent entity.
From http://worrydream.com/EarlyHistoryOfSmalltalk/
For reference, the history of the Simulas is described here: http://cs-exhibitions.uni-klu.ac.at/fileadmin/template/documents/text/The_development_of_the_simula_languages.pdf
Nice article with a bunch of history I didn’t know! Thanks for writing this, as I’m also annoyed by the “Alan Kay and OOP” meme. I got a comment reply to that effect a few weeks ago, which I can’t find at the moment because Reddit is down.
BTW, “The Design and Evolution of C++” by Stroustrup is a nice historical book that talks about the influence of Simula on C++. Given that C++ directly influenced Java and Python, and Java influenced C#, I’d say C++ has contributed a lot to OOP (despite what both C++ people and OOP people would say about that :) ).
If I were to write an Alan Kay rant, I would write one called “The Web Shouldn’t Be a VM; It Should Have a VM (and it’s had at least three)”. At some point he said some fairly dumb things implying the former, disparaging the work of Tim Berners-Lee, so he sort of deserves to be disparaged for that… But it’s an old argument, and he already “lost”, so it’s probably not worth it.
edit: in case anyone cares, the reply to this comment is where I got hit with the “Alan Kay OOP meme”:
https://www.reddit.com/r/ProgrammingLanguages/comments/b1hzke/is_there_a_scientific_paper_that_claims_that_the/eimac71/
Actually Java’s basis in Simula is more direct, rather than via C++.
“Java’s object model came directly from Simula”. See http://bit.ly/2xc1XTA (James Gosling)
Interesting, thanks for the reference!
FWIW this 2009 post is the one I was thinking of that shows the influence of C++’s OO features on Python:
http://python-history.blogspot.com/2009/02/adding-support-for-user-defined-classes.html
The more I use object-oriented programming, the more I change my mind about it. I have always liked the simplicity of QBASIC (which was my first programming language), C, and assembler. In my first year in uni, I had to learn C#. The .NET library (I think it was the .NET Framework back then, but I’m not sure) was pretty nice, but all the OO principles we had to learn seemed… unnecessary? In all the programs I wrote, making a ‘Program’ class just seemed like boilerplate. In the meantime, I have come across a couple of examples where I thought inheritance was pretty nice:
Suppose you have a pseudorandom number generator with state. If you want to have two independent series of random numbers, it’s pretty darn useful to encapsulate the state of one PRNG in a class.
If you have a lot of almost-but-not-quite-identical behaviors, inheritance can be very useful.
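To sketch that second case: a minimal C# example (the Notifier names are invented for illustration) where the shared behavior lives in a base class and each almost-identical variant overrides only the part that differs.
using System;

// Shared behavior lives in the base class...
abstract class Notifier {
    public void Notify(string message) => Send(Format(message));
    protected virtual string Format(string message) => $"[{DateTime.Now:u}] {message}";
    protected abstract void Send(string text);
}

// ...and each variant overrides only what actually differs.
class ConsoleNotifier : Notifier {
    protected override void Send(string text) => Console.WriteLine(text);
}

class ShoutingNotifier : Notifier {
    protected override string Format(string message) => base.Format(message).ToUpperInvariant();
    protected override void Send(string text) => Console.Error.WriteLine(text);
}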
Still, I have encountered way, way more instances where object orientation was used as an excuse for needless layers of abstraction, which made for some pretty mind-boggling code (to read or debug) at times. One thing I find hard to swallow is how difficult it is to predict what the following function does:
void MutateSomething(Foo foo) {
    foo.property++; // does the caller see this increment? depends on whether Foo is a class or a struct
}
as you have to know whether Foo is a struct or a class to know what happens. I’ve seen this alone account for many, many bugs (truth be told, usually the buggy code was written by novice programmers). I think C’s system is much more elegant: pass-by-reference doesn’t even exist; you can pass a pointer by value and then dereference it, which is hard to do accidentally, since it has a separate syntax.
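To make that concrete, here is a small self-contained C# sketch (FooClass, FooStruct, and the Demo harness are made-up names) where the same-looking mutation behaves differently:
using System;

class FooClass { public int Property; }   // reference type
struct FooStruct { public int Property; } // value type

class Demo {
    // Identical-looking bodies, very different effects:
    static void Mutate(FooClass foo) { foo.Property++; } // increments the caller's object
    static void Mutate(FooStruct foo) { foo.Property++; } // increments a local copy only

    static void Main() {
        var c = new FooClass();
        var s = new FooStruct();
        Mutate(c);
        Mutate(s);
        Console.WriteLine(c.Property); // 1: the caller sees the change
        Console.WriteLine(s.Property); // 0: the change was lost with the copy
    }
}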
The most recent example I came across where OO was used as a reason to write poor code, was when we had to validate some logic on objects. Think
if (!(2 <= myobject.value && myobject.value <= 5))
    throw new Exception("Value should be between 2 and 5 but is " + myobject.value);
Instead of writing a function for the more commonly used rules, a senior programmer decided that it was a good idea to give each rule its own class that derives from a Rule class. He wrote like 30 rules before he decided that he needed a class that just takes a function and throws an exception if the function returns false. So you have exactly what you’d have if you just wrote the function in the test logic, except that a lot of overhead is added (5 people worked on this little framework, for an average of about 3 days).
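For illustration, a hedged sketch (all names invented) of roughly where that design ends up: a rule class that just wraps a predicate, which is barely more than the plain function it replaced.
using System;

// A "rule" that wraps a predicate and throws when it returns false.
// At this point the class adds little over calling the function directly.
class PredicateRule<T> {
    private readonly Func<T, bool> predicate;
    private readonly string message;

    public PredicateRule(Func<T, bool> predicate, string message) {
        this.predicate = predicate;
        this.message = message;
    }

    public void Check(T value) {
        if (!predicate(value))
            throw new Exception(message + " but is " + value);
    }
}

class RuleDemo {
    static void Main() {
        var between2And5 = new PredicateRule<int>(
            v => 2 <= v && v <= 5, "Value should be between 2 and 5");
        between2And5.Check(4); // passes
        between2And5.Check(9); // throws
    }
}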
Anyway. All in all, I think OO has its uses, but I’m bothered by the way it’s pushed in our faces. Newbies eagerly use OO to make horrible constructs. Nine times out of ten, I’d rather not use classes and instead write a bunch of pure/mathematical functions. It seems that, at the places where I work, I am alone on this: when I make a static class with some pure functions…
Suppose you have a pseudorandom number generator with state. If you want to have two independent series of random numbers, it’s pretty darn useful to encapsulate the state of one PRNG in a class.
That’s one for encapsulation, not inheritance, right? Here’s how I would write it in C++ (interface only):
#include <cstddef> // for size_t
#include <cstdint> // for uint8_t

class Prng {
public:
    // core functionality (with lots of crypto pitfalls worked out)
    Prng(uint8_t seed[32]);
    void get(uint8_t *buffer, size_t buffer_size);
    Prng spawn();
private:
    // Prng state
};
No need for inheritance just to spawn independent random number generators. Did you mean something else?
You’re right. I first had only the other example, which actually is about inheritance, and then added this one without changing the wording. ‘Object-oriented programming’ would have been more appropriate than ‘inheritance’.
Check out FreeBASIC if you miss QBASIC.
You mean D.L. Parnas? Or Alan Perlis?
Oops, meant David Parnas. Pushed a fix!
I don’t understand how the Smalltalk-80 syntax makes it possible to send messages over alternate message transports, nor to out-of-VM targets. Perhaps this was something lost between -76 and -80?
I’m no fan of the amount of appealing to authority we do in our industry. People stake out areas and then folks think they own them. We constantly re-invent things already invented multiple times before, then brand them.
So I agree with the author that Kay’s role can be misunderstood/misused.
Who invented objects? I’d go with Plato, perhaps with a bit of Aristotle thrown in. These ideas have long, storied, and interesting histories. We do one another a disservice when we say “Well, X says that….”
Then Dogen invented functions. He describes the process of immutable transformation in his Koan of firewood.
Firewood becomes ash, and it does not become firewood again. Yet, do not suppose that the ash is future and the firewood past. You should understand that firewood abides in the phenomenal expression of firewood, which fully includes past and future and is independent of past and future. Ash abides in the phenomenal expression of ash, which fully includes future and past. Just as firewood does not become firewood again after it is ash, you do not return to birth after death.
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.
This is the core of OOP, and exactly counter to FP. I am still not sure which approach is better: hiding state, as OOP does, or expelling state, as FP does.
To me it’s clear that it depends entirely on the problem. There is no way to say which one is better without reference to a problem – they’re both valid.
And the problems are very small and fine-grained (like 50 lines of code). I can’t think of a single program where you wouldn’t want to use an FP style in one place and OOP style in another place. That is, using 100% FP or 100% OOP is always strictly worse than mixing them.
I think this explains why OOP languages are more popular – they can “fake” FP, but the other way around works less well. (For example, for some reason people recommend against using the OCaml object system, which is what the “O” was for in the first place. As far as I understand, it’s not idiomatic OCaml.)
On the other hand, all languages with the “class” keyword have picked up some functional features in the last 10 or 20 years. That’s true for the dynamically typed ones (Python, Ruby, JS) and static ones (C#, Java, Swift, Kotlin, etc.).
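For instance, C# (one of the “class” languages above) has supported a fairly functional style since lambdas and LINQ arrived; a small illustrative example:
using System;
using System.Linq;

class LinqDemo {
    static void Main() {
        // Higher-order functions and lambdas over a sequence,
        // with no mutable state in sight.
        var squaresOfEvens = Enumerable.Range(1, 10)
            .Where(n => n % 2 == 0)
            .Select(n => n * n)
            .ToArray();
        Console.WriteLine(string.Join(", ", squaresOfEvens)); // 4, 16, 36, 64, 100
    }
}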
I’ll use Darius as a clear example of a Lisp programmer who uses Python :)
https://github.com/darius/parson
Having lambdas sprinkled in and being a fully-fledged pure functional/algebraic language are totally different things. I don’t remember when I last wrote an explicit lambda in my functional code.
The only language where this mix arguably exists is Scala, and even there it’s unwieldy and you have to use third-party libraries for rather regular FP constructs.
Don’t forget metaprogramming: making it easier to transform programs. It’s what Kay et al. used in STEPS to keep incidental complexity down. It’s used extensively by Lisp programmers. Probably Darius, too. The DSL approach was also used by Pieter Hintjens at iMatix for enterprise applications. MP has been used to easily implement FP, OOP, etc. in languages that support it. It seems to be the most powerful paradigm.