This feels condescending to me, on multiple levels.
Erlang, like C and Haskell, is a Von Neumann language.
Are you sure Haskell is? At least one random person on Quora states Haskell is explicitly not:
Part of the model of Haskell seems to be very non-von Neumann (despite running on one).
It might help you to look at a non-von Neumann language. Here’s Morrison’s Flow-based Programming, whose overall model is more like how cells in chips or the body work than von Neumann’s:
Neural networks in analog implementations are another example: they simulate memory with connections and weights. The memory is generative, though, where it’s just a side effect of executing something quite non-von Neumann. Analog computers themselves are probably a good example, as they approximate or implement mathematical functions in real time on real numbers.
The one that is closest, though, is Harvard architectures, since von Neumann used a single memory in his description but they have two. That’s kind of a cheat imho, as it’s sort of like two von Neumann machines side by side, and almost all Harvard designs are modified to be more von Neumann in silicon.
Just figured you’d have more fun with the others as they’re substantially different from von Neumann machines with practical benefits resulting.
A Von Neumann language is a language isomorphic to a Von Neumann machine…
It has variables for storing values.
It has if-then-else branching.
It has arithmetic expressions on variables.
Having functional expressions based on these does not escape the definition of a Von Neumann machine.
Oh, but I’m not sure that is actually true. The integral part of a Von Neumann machine is that you have memory that you store and retrieve things from. A language like Haskell has no such concept. Haskell does not have variables for storing values; it has names that bind to things. The name itself need not be a memory location or exist in any sense other than written in a source file. Compare C, where the value and the thing that holds it are distinct.
If you’ve read Backus’ Turing Award speech, that, as I understand it, is what he means by the von Neumann bottleneck.
This gets into a somewhat gray area. Haskell bindings allow a variable to exist in multiple locations simultaneously, but a named value is still a “stored” value, and the essence of Haskell programs is still along the lines of a Von Neumann machine… “assign values to V1 and V2, then assign a value to V3 using an arithmetic expression based conditionally on the values of V1 and V2” – that’s Von Neumann whether your language has an assignment statement or not.
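Spelled out concretely (an illustrative fragment in Python, not taken from any particular program):

```python
# "Assign values to V1 and V2, then assign a value to V3 using an
# arithmetic expression based conditionally on the stored values."
v1 = 10
v2 = 20
if v1 > v2:           # branch on the stored values...
    v3 = v1 - v2
else:
    v3 = v2 - v1      # ...and compute V3 arithmetically
```

Whether the surface syntax is an assignment statement or a binding, the shape is the same: name values, branch, compute.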
Note that C compilers can treat variables as independent of locations too, the programmer just has to take more care and the compiler has to do more work.
Backus FP does not truly escape this to any significant degree. It’s just “point free”. Without much programming forethought an FP program will suffer the Von Neumann bottleneck (stored values and a CPU with branching and arithmetic). A Von Neumann-esque machine with multiple cores is still a Von Neumann machine.
Functional languages like FP and Haskell still require a good bit of effort (or language enhancements) to take advantage of architectures that are significantly different from Von Neumann. For example, a data flow machine or a connection machine are not going to be fully utilized by typical FP or Haskell programs.
but a named value is still a “stored” value and the essence of Haskell programs is still along the lines of a Von Neumann machine
But it’s not. The name does not have a memory location so there is no requirement of being stored at all. C requires that a name has a location, which is accessed with the address-of operator (&). Disregarding register variables.
I think you might be conflating runtime properties such as storage with something like point-free programming, which is a syntactic property. In Haskell the name of something is indistinguishable from its value, which is not the case in von Neumann languages. Consider equality in Java, for example. It might be that by running on a von Neumann machine these bottlenecks still show up but, from what I understand, there is a significant semantic distinction you are leaving out.
But a small caveat: Haskell does have some von neumann things in it but I do not believe they apply in this context because they exist for practical reasons due to living in a von neumann world.
The name does not have a memory location so there is no requirement of being stored at all.
I believe you are being too narrow in your definition of “stored”. If a variable binds a name to a value, it is “stored” in the Von Neumann sense. Whether that is a register or RAM or beads on a string, or all three, it is a name and a value. The program names values and then it conditionally branches and computes expressions based on the name/value binding.
That’s Von Neumann.
If a variable binds a name to a value, it is “stored” in the Von Neumann sense.
No, it doesn’t have to be. Binding a name to a value in a language with referential transparency may do nothing whereas in a language like C it has to be stored because names have distinct memory locations. There are some optimized paths in a language like C that do not require actually storing the value but the semantics of the language require it in the general case.
I find what you’re saying confusing. Are you saying that all ways of executing computation are Von Neumann? If so, that is clearly false and your definition of Von Neumann is far too broad.
Binding a name to a value in a language with referential transparency may do nothing
It had better do something. In particular it had better remember the value and refer to it whenever that variable is referenced. “remembering” and “referencing” named values is isomorphic to a Von Neumann machine.
There are some optimized paths in a language like C that do not require actually storing the value
but the semantics of the language require it in the general case.
No more than with Haskell. A C compiler is free to keep a value in multiple locations as long as it can prove that it is used correctly. And in Haskell, mutable references in the state monad have similar requirements to mutated variables in C. When comparing Von Neumann machines to machines that are radically non-Von Neumann, C and Haskell appear to be much more alike than either is to languages suited to the radically non-Von Neumann machines.
The differences between C and Haskell are more of safety and expressiveness (compactness of expression). They are not very different at all at the level of the underlying machine. (Again, relative to machines that are radically different than Von Neumann.)
I find what you’re saying confusing. Are you saying that all ways of executing computation are
Von Neumann? If so, that is clearly false and your definition of Von Neumann is far too broad.
I am saying Von Neumann is essentially a stored-program, stored-data, CPU machine (more specifically, there are details with what the bus looks like, etc.). And I am saying that Haskell is more isomorphic to that architecture than it is to machines that are radically different from this.
Most Haskell programs would have to be radically rewritten or extended (along with the compiler) to escape the essence of Von Neumann, e.g. to fully utilize something like a connection machine, a data-flow machine, or a many-core machine (e.g. something transputer-like with many hundreds or thousands of cores).
Another way to look at this… moving a C program from a Von Neumann architecture to a Harvard architecture is a matter of recompiling. Moving a Haskell program is also a matter of recompiling.
But moving either to a connection machine or a many-core machine would not take advantage of those machines without a lot of effort in restructuring and extending the capabilities of the program and the compilers.
“The integral part of a Von Neumann machine is that you have memory that you store and retrieve things from. A language like Haskell has no such concept. ”
I found this interesting. I decided it might be better to get to the bottom of what Haskell compiles down to. Aside from assembly, that is. Instead, the most fundamental and low-level of the abstract machines that make it work. That would be the STG machine. I see a big pile of von Neumann in the “components of the machine” section:
It has registers, a stack, and a heap. It has functions and expressions that act on them. The execution model is more similar to our machines than to the Lambda Calculus. Even the compiler I found for LC got really von Neumann before it was executable. Haskell looks von Neumann at least by the STG level. I don’t know about Core, which sits above it. However, I figured that, if it was a von Neumann machine, detailed descriptions of its execution would mention a heap (i.e. memory) plus functions on it that take values out and put values in. Found such a description in this paper on p. 12:
So, it might derive from a non-von Neumann model, but the executable forms of that are quite von Neumann. To a guy carefully reading papers on topics (FP, Haskell) he knows nothing about, anyway. There’s the caveat.
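As a toy sketch of what a heap-plus-stack execution model looks like (a made-up miniature for illustration, nothing like the real STG machine’s rules):

```python
heap = {}    # addresses mapped to stored values (the "memory")
stack = []   # pending operands (the "work area")

def alloc(addr, value):
    """Store a value at a named heap address."""
    heap[addr] = value

def add(a, b):
    """Fetch two stored values, combine them, return the result."""
    stack.append(heap[a])   # take values out of memory...
    stack.append(heap[b])
    x, y = stack.pop(), stack.pop()
    return x + y            # ...and combine them in the "CPU"

alloc("p", 3)
alloc("q", 4)
print(add("p", "q"))  # prints 7
```

Stored values, fetches, and arithmetic on the fetched values: that is the pattern the “components of the machine” section describes.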
Well, of course the executable form is a von Neumann form–it has to execute on a von Neumann machine! But the question is about the language, not the implementation of the language. You can simulate a quantum computer circuit on a von Neumann machine, but that doesn’t imply in a useful sense that a quantum computer is a von Neumann machine.
I’m not sure how much the distinction matters if it’s a programming language we never execute. That becomes either a specification language or something to throw in the garbage. Has anyone published an executable version of the language that doesn’t require a program and memory? The only ones I know that come close are the subsets being used as DSLs for hardware circuits that don’t have to be von Neumann. I’m not aware of it for Haskell or the Core language themselves.
What you’re saying is that you looked at the code that turns a language, L into an executable to be run on architecture A. You found components that correspond to architecture A, thus language L must be in the classification of things that belong to A. But that is clearly nonsense as language L could belong to other classifications of architectures.
This discussion is not about implementations, though, it’s about semantics. So whether or not a language is Von Neumann is about if its semantics require such a machine or not, regardless of if we ever get an opportunity to execute it outside of such a machine.
But I think one thing that muddies the water is that “von Neumann architecture” has become a bit of a catch-all for whatever we are running now, and I think most of us are not running anything so pure. I think the important distinction is that in a language like Haskell, names need not correspond to a memory location. This is distinct from C, C++, Java, and many other languages. With that, things that look like memory operations need not happen or exist at all in such a way at run-time.
“What you’re saying is that you looked at the code that turns a language”
What I’m saying is it and every other functional language I’ve ever seen only ran on von Neumann architectures. Haskell’s Core had clear mapping to them, too. So, they might inherently be von Neumann-type languages.
“it’s about semantics. So whether or not a language is Von Neumann is about if its semantics require such a machine or not”
This is where I probably slipped up. In that case, I’m backing out of the discussion since I don’t know Haskell’s semantics or FP well enough to have a good opinion on it.
“ is that “von neumann architecture” has become a bit of a catch-all”
It has a clear meaning. It looks catch-all because it was the most effective abstract representation of computer architecture for decades. Just about every computer in existence, from x86 to the LISP machines, was an implementation of that architecture. Wikipedia has a nice picture:
A Von Neumann language is a language isomorphic to a Von Neumann machine
Given Von Neumann machines are Turing-complete, this definition is less than useful, as it simplifies to “A Von Neumann language is a programming language”.
Being isomorphic to a Von Neumann machine has more narrow requirements than being Turing complete. A wide (very, very, very, very wide) array of machines can be Turing complete… Dataflow, Connection Machines, Turing Machines, and Von Neumann machines to name a few are Turing complete.
As a counter to this piece: http://prog21.dadgum.com/54.html
a bit of an aside on the dictionary comment, but basically everything in Python is built off of dicts. If you look around with something like objgraph, you’ll see how classes are built off of dicts and lists.
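You can see this without any tooling; instance attributes live in a plain dict, and the class namespace is exposed as a dict-like mapping proxy:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
print(p.__dict__)                  # instance attributes are a dict: {'x': 1, 'y': 2}
print(type(vars(Point)).__name__)  # class namespace is a 'mappingproxy'
```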
It’s pretty admirable how Python’s internals end up being pretty simple compared to how something like Java resolves things.
This article is interesting but the tone is weird. I feel like there’s an implication that FP would help, but I feel like even in a functional language you would end up working in a similar manner.
Python is a functional language. It’s also an imperative one. What it lacks is static type checking, which is part of the reason Google has been discouraging its use for production code for a while now.
If Python is a functional language then I’m not sure the term has any meaning at all. I’ll offer a weak definition of FP: to prefer immutability as the default style of programming where side-effects are distinct from pure code. Having written a lot of Python code over the last several years, I don’t feel Python abides by this definition.
It’s not a pure functional language so you don’t have to use it in a functional way if you don’t want, which may be the problem.
OCaml is not a pure functional language either, but its language constructs prefer immutability over mutability. Mutability has to be explicitly stated. Which is why I tend to consider it closer to functional than not. For Python, mutability is the default state of everything, which is why I tend to put it much further on the imperative side. Even an ‘if’ cannot return a value.
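Both points in one small example (mutation in place is the default, and `if` is a statement; only the separate conditional expression yields a value):

```python
xs = [1, 2, 3]
xs.append(4)              # mutates xs in place; the default, not opt-in

# result = (if xs: 1 else: 2)   # no such thing: `if` is a statement
result = 1 if xs else 2         # the conditional *expression* does yield a value
```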
Nobody has defined (or can define, I think) these terms in a way most people are content with, but I’m having a really hard time swallowing that Python is a functional language.
Well, Python is also an imperative language, which may be its undoing here. Scala seems to handle that better, pushing immutable data by default but still allowing mutability if needed.
The problem is it was inspired by, and intended as a successor of, the ABC programming language: an imperative language for structured programming. It was also written by a guy that didn’t care about functional languages. This combo is why it’s definitely not a functional language. It’s an imperative one with some functional stuff bolted on, like many others.
EDIT: Added the darned backslashes to escape the parentheses. Also, the example in the Wikipedia article should leave no doubt this language inspired Python.
I stand corrected.