This is actually super boring to a modern audience but remember this is from 1982, so there’s some historical baggage there.
It (boringly, but bear with me) means that this (contrived and hypothetical but bear with me!) code:
struct library_entry entries[]; // Some library of books
...
int entr_id = get_entry_id(); // Prompt for an entry ID
struct library_entry *my_entry = &entries[entr_id];
do_something_with_entry(my_entry);
is more brittle and harder to evolve in the long run than this:
int entr_id = get_entry_id(); // Prompt for an entry ID
struct library_entry *my_entry = get_entry(entr_id);
do_something_with_entry(my_entry);
In the second version, entries are retrieved through a get_entry function. That means you are free to change the type of the entry ID (e.g. from an int to a 64-bit hash) and the underlying data structure (e.g. from a simple array to a hash table) without changing any of the code that retrieves and manipulates entries. At most, if you change the type of the entry ID, you have some light refactoring to do.
In the first version, entries are retrieved not by a function, but by direct manipulation of the underlying data structure. In this case, code that retrieves entries is irrevocably tied to your choice of data structure. If you want to change the underlying data structure, you have to replace every my_entry = &entries[entr_id] with my_entry = <however you index your new data structure>.
Thus, functions “delay binding”, in that they express how values are mapped (bound) to other values without actually committing you to any choice of mapping mechanism. When you say foo = f(bar, i), you bind foo to bar via i, but you don’t commit to a specific choice of mapping at the time of writing. Data structures, on the other hand, “bind early”: as soon as you’ve written a mapping in terms of (instances of) data structures, you’ve committed to a particular mapping mechanism. I.e. when you say foo = bar[i], you don’t just say that foo is mapped to bar via i, you also commit to a particular way of doing that binding.
We just call this data abstraction these days but remember this is a collection of sayings from 1982. So it’s collecting “lore” that was already established at a time when SQL wasn’t even ten years old, user-facing programs programmed (at least partly) in assembly language were still a thing on some platforms, and virtual memory wasn’t universal.
“Early binding” through data structures was fairly common at the time, but it caused all sorts of trouble, not only as requirements evolved (e.g. you found out that things that could previously be neatly indexed by numbers now had to be mapped to strings) but also when porting across systems. E.g. if you relied on pointer arithmetic to step through a sequence of fixed- or floating-point values, or to retrieve a particular value, the program broke as soon as the width of the value type changed, which could happen just by trying to use a different fixed-point arithmetic implementation.
Edit: lots of these make sense only in a historical context. For example:
Syntactic sugar causes cancer of the semicolon
This is frequently taken as some jab at unfettered complexity, and that’s sort of right I guess, but I suspect it’s actually a little more literal. Early attempts to bolt syntactic sugar onto APL in the spirit of Lisp’s let (a la “let this be this expression, let that be that other expression”, and so on) resulted in long series of local variables, initialized to whatever expressions they were initialized to. In APL, local variables were IIRC separated by semicolons, so syntactic sugar literally resulted in long series of semicolon-separated variables, so, erm, cancer of the semicolon.
This:
A program without a loop and a structured variable isn’t worth writing.
is likely the distilled form of Dijkstra’s famous quip that numerical computation is the most trivial form of computation: a program without a loop and a structured variable is a trivial transformation of some values, which can be trivially replaced with a dictionary or just pen & paper.
This:
Think of all the psychic energy expended in seeking a fundamental distinction between “algorithm” and “program”.
retrospectively pokes fun at what was once the traditional way to open CS101 courses, with a long explanation about how an algorithm was fundamentally different from a program, even though they both prescribed the same behaviour.
I suspect the historical context that actually made this super funny is that, in 1982, when this was presented, executable specifications were just taking off, so people understandably found it amusing that a lot of “psychic energy” had been expended in seeking a fundamental distinction between algorithms and programs, only to wind up figuring that the best way to specify an algorithm is in the form of a program.
This:
If your computer speaks English, it was probably made in Japan.
is 100% a piece of 1980s history :), and so on.
It’s clearly showing its age in multiple places:
You can measure a programmer’s perspective by noting his attitude on the continuing vitality of FORTRAN.
Obviously written before Fortran 90.
Bringing computers into the home won’t change either one, but may revitalize the corner saloon.
I suppose in 1982 for someone with access to real computers™, it was pretty hard to predict that personal computers plus wide area networks would change computing forever. I’m not ready to say if it also revitalized corner saloons. ;)
But some things are timeless:
I take it to mean that we should prefer abstract data types to concrete ones. By this point, the study of modules and ADTs had already begun, and the industry was thinking about coupling / binding.
For example, the interface for a stack is something like:
push(int)
pop() -> int
The only thing the client needs to know about are the names of the methods, and the argument and return types. So clients aren’t coupled to any particular data structure, and the internals can be changed because of that.
I found most of these are … questionable, really. Way too dogmatic.
What is meant here is that a function delays the binding, i.e. function f() { return stuff; }; const a = f(); vs const a = stuff;. (In general, seeing functions as “binding delayers” is not a common view, IMO.)
or I could be completely wrong and they mean something else entirely…
What does that mean?