So, we have a bunch of things, a universe of things. I’m going to call these things by letters x, y, z. All together, I’m going to call them values.

What can we do with these values? Well, I can make exact copies of them as often as I like (if I do so, I’ll refer to all of the copies by the same name). I can also do one other thing: if I place two values next to one another, the left one eats the right one and becomes something new. This is called application and the new value is called the result. By convention, values eat from the left side first, and I use parentheses to control the order in which values apply.

(x y) (z w) q

is the same as

((x y) (z w)) q
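To make the grouping concrete, here is a small sketch in Python (an illustration only, not part of the calculus): each value is modeled as a one-argument function, and application is a function call. The helper values keep and ident are invented purely for this sketch.

```python
# Model: a value is a one-argument Python function; application is a call.
# "keep" eats two values and keeps the first; "ident" returns what it eats.
keep = lambda a: lambda b: a
ident = lambda a: a

# Left-first eating: (keep ident) keep means ((keep ident)) keep,
# which first swallows ident and then ignores the trailing keep.
assert keep(ident)(keep) is ident

# Parentheses change the result: keep (ident keep) feeds keep the
# value (ident keep), which is just keep, so every later meal is ignored.
assert keep(ident(keep))(ident) is keep
```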

Now let me tell you about certain values in our universe which are interesting. Whenever I refer to a specific value I’ll give it a capitalized name.

First there’s True. It’s best to think of True as “eating” two values sequentially. It thus behaves like this: for any values x and y

True x y = x

(Note that I didn’t say we could compare two values for equality. We can’t, actually. So when I use equality here it is only to assert things we know about the behavior of values—it’s not actually meaningful as an operation on values. There is no result. Those words don’t even make sense in this universe.)

Similarly, there’s False

False x y = y
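If it helps, True and False can be sketched as curried one-argument functions in Python (again just a model of the behavior above; the strings stand in for arbitrary values):

```python
# True eats two values in sequence and keeps the first;
# False keeps the second.
TRUE = lambda x: lambda y: x
FALSE = lambda x: lambda y: y

assert TRUE("x")("y") == "x"
assert FALSE("x")("y") == "y"
```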

There’s even If which has the following properties

If True x y = x
If False x y = y

You might have realized that If is probably not doing much when it eats values. Whenever q is either True or False, If also follows this equation

If q x y = q x y
If True x y = True x y = x
If False x y = False x y = y

We can even simplify these equations by leaving off the x and y parts

If True = True
If False = False
If q = q
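Here is the same observation as a Python sketch, with If as the do-nothing value (names invented for the illustration):

```python
TRUE = lambda x: lambda y: x
FALSE = lambda x: lambda y: y

# If merely hands back whatever it eats first; the True or False
# value it swallowed does all of the selecting.
IF = lambda q: q

assert IF(TRUE)("then")("else") == "then"
assert IF(FALSE)("then")("else") == "else"
```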

As it turns out, there are lots of interesting values in our universe, but it’s also valuable for us to be able to create our own. To do this, we only need to define what happens when our value eats other values. We can do that with a new kind of equation: double-equals means defines

True x y == x

Importantly, definition-equals cannot know anything about its “arguments” except that they’re values and they can be applied. If not for this restriction, we could define equality-checking values, but I already said we can’t check equality. More simply stated: when using definitional equality we cannot mention “capitalized” names on the left side besides the one we’re defining. Additionally, we cannot use our definition in the act of defining it: recursion is disallowed.

So, we might now say that we’ve got a universe of values, and then I’ll give you a big list of definitions to let you know some interesting ones that are in the universe. This already lets us do a lot. You could keep writing new names on this list of definitions as you find interesting values, and as it turns out you can find any value you can dream of starting with a very short list of definitions

S x y z == (x z) (y z)
K x y == x
I x == x
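Rendered as Python closures (a sketch of the behavior, not the calculus itself), we can even check the classic fact that S K K behaves exactly like I:

```python
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

# S K K eats one more value z and yields (K z) (K z) = z, just like I:
assert S(K)(K)("anything") == "anything"
assert I("anything") == "anything"
```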

One last thing is needed to make this the lambda calculus: we need to be able to talk about values with a particular structure but without giving them a name. To do this, we use lambda syntax

λ __________ . ____________

where the first blank names some variables and the right half shows what happens when this value eats them. So, we can write

(λ x y . x) x y = x

and see that (λ x y . x) is the same as True. We can actually define all of the values we’ve seen using lambda syntax

True == (λ x y . x)
False == (λ x y . y)
If == (λ q . q)
S == (λ x y z . (x z) (y z))
K == (λ x y . x)
I == (λ x . x)

This makes it clear that K is the same as True and If is the same as I. Again, we don’t actually get to take two values and apply equality to them—we only know they’re equal because we’ve seen that they’re both definable by the same lambda statement.
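Since we can only observe values by feeding them meals, “sameness” here means agreeing on every application. A Python sketch of that external check, run on a few sample meals (the samples are illustrative, not a proof):

```python
TRUE = lambda x: lambda y: x
K = lambda x: lambda y: x
IF = lambda q: q
I = lambda x: x

# We can't ask "is TRUE equal to K?" inside the universe; we can only
# watch both eat the same meals and see that the results agree.
for a, b in [("x", "y"), (0, 1)]:
    assert TRUE(a)(b) == K(a)(b)
    assert IF(a) == I(a)
```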

Lambda statements are “the same” when they define the same application behavior. This means that given these definitions

X == (λ a b . a)
Y == (λ q w . q)

we have that X and Y are the same even though they use different names “internally”. If I hand you X and Y without showing you their definitions there is no way you can find any distinction between them via application alone.

So this is lambda calculus—this universe of values and our means of inspecting and defining them. Why is it important?

Mostly because it’s very simple. The rules can be boiled down even more than I’ve done so far if you like, but already there’s not much here. That said, it’s possible to prove that any calculation you like can be executed with these simple tools.

What’s even more interesting is that even with this simple foundation we’ve got fundamental ideas to explore:

the notion of equality of things being available “externally” but not “internally” is important to formal verification and even simple testing

lambda syntax lays out the essence of naming and definitions—it creates a small context in which a name is meaningful and notes that meaning is given via substitution, via application.

ideas like self-application give rise to primordial recursion and we can define interesting computations which take recursive shapes like binary trees or the natural numbers
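For instance, the natural numbers can be represented directly as values (the Church numerals). A Python sketch, with a to_int peek function invented purely so we can inspect the results from outside the universe:

```python
# A Church numeral n eats a function f and a start value x,
# then applies f to x exactly n times.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# to_int steps outside the universe to inspect a numeral:
to_int = lambda n: n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
assert to_int(TWO) == 2
assert to_int(PLUS(TWO)(TWO)) == 4
```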

For these reasons it’s a foundational calculus, a foundational way to talk about both mathematics and computer science.


For slightly more in depth coverage that starts at the lambda calculus and shows how to build up to a whole programming language, I’d highly recommend “An Introduction to Functional Programming Through Lambda Calculus”.

It’s by far the most understandable book or paper I’ve seen on functional programming and the lambda calculus.

Thanks, I’ve been wanting to learn Lambda calculus and it seems really confusing.

This is excellent. If we had a best-of list, I would nominate it.

The one thing I might add is the importance of fixed-point combinators as a way to have recursion without explicit self-reference. That may be a bit too much for newcomers though, given that it also requires us to discuss normal vs. applicative order.
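For anyone curious, a hedged Python sketch of that idea: the Z combinator (the applicative-order cousin of Y, needed because Python evaluates arguments eagerly) gives recursion without any self-reference in the definition.

```python
# Z f = (λx. f (λv. x x v)) (λx. f (λv. x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial never mentions its own name; Z ties the knot.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert fact(5) == 120
```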

Also, Church numerals helped me understand just how applicable lambda calculus is to things that otherwise seem to require ur-elements or “objects”.

Yeah, I didn’t want to dig into fixed points because of the things you mentioned exactly. LC is already interesting without it. Besides, half the fun is taking seriously the statement “everything you would want to have is already in LC” and challenging it to represent loops or effects.

The paper is very nice because it starts out at an even pace, so even I* can keep up

*person who loves the platonic concept of math, but can often be found cowering in a ditch when the equations fly.

For a dense, challenging, but thorough and (once you get into it) mind-expanding look at the lambda calculus, check out “The Lambda Calculus, Its Syntax and Semantics” by H.P. Barendregt. It is a difficult book, which I have only read portions of, but it is a book which I feel teaches me something substantive whenever I read it. The biggest challenge is the author’s strong preference for purely symbolic explanations, although the book includes a thorough notation guide to assist in deciphering them.

Lambda calculus is an interesting idea, but it has nothing to do with computer programming. Computability as a topic in logic has cool results but the attempt to use it as a foundation for programming languages is poorly motivated.

Take a look at the book Haskell Programming From First Principles. It starts with the lambda calculus and shows you how Haskell was built around it. That decision enables the language’s most powerful features like non-strict evaluation.

Chlipala would disagree with you given his certifying compiler from lambda calculus to assembly language:

http://adam.chlipala.net/papers/CtpcPLDI07/CtpcPLDI07.pdf

If you can program in it, then it has something to do with programming. That LISP borrowed from it, and that Haskell largely built on derivations of it with impressive resulting properties, further corroborates its applicability to programming. I can’t say more beyond these observed results since it’s not my specialty.

“Our target language is an idealized assembly language, with infinitely many registers and memory cells, each of which holds unbounded natural numbers”

Ok.

You can turn that into real assembly language, like functional programming compilers do with their high-level stuff. All kinds of papers and programs on going from abstract to real machines. Such well-trodden territory they likely didn’t bother wasting time on it. Proved lambda calculus could compile down to something closer to our machines, then moved on to the next challenge.

You’ll see a lot of this in CompSci among top researchers where they don’t redo existing work. Just focus on what matters.

Unfortunately, what you see in the field is a lot of handwaving about how something is simple.

The field doing that includes the companies in railways, etc. using the B method to turn abstract machines into specific code that runs. Others did similar stuff with Z, TLA+, Alloy, etc. seL4 started with an abstract machine in Haskell. The hardware people have been at this longest, with Abstract State Machine models turned into FSMs, then into RTL & wires. I said this category was well-trodden, not simple. Indeed, some of the above work was impressively hard. Given other results in the field, that lambda calculus was taken that far probably means it can run on a real machine with extra work. That’s why they stopped.

I’d love to see some links to however it is you handle formal specs and correctness of code outside things like review & static analysis. What you would recommend to others as practical or that helped at FSMlabs?

I’m certainly not representing FSMLabs in this discussion.

I wouldn’t claim otherwise. I figure you apply whatever your preferred techniques are in personal work and maybe for them. My question remains even if we leave them out.

I have yet to find a superior replacement for good programmers, testing, static analysis, and code review - although the quality of the code review is key. There is no substitute for excellent programmers, although static analysis tools have improved a lot over the last 10 years. The hardware people seem well ahead of software, but it’s a more structured state space. I do have a spare-time research project on specification that is not yet practical, if it ever will be: http://www.yodaiken.com/wp-content/uploads/2016/08/steps.pdf

I’m mostly in agreement with you. A few other things show value, but the 80/20 rule applies, with those having the most value. Plus tooling that augments those to make the humans more effective.

Interesting paper in the link. I’ll save it for future reading when I relearn the formal stuff.

[Comment removed by author]

What McCarthy took from Lambda Calculus was the syntax of function definition and variable binding which is pretty superficial - and he immediately modified it to make recursive definitions comprehensible.

[Comment removed by author]

I think it’s largely a matter of taste although I note that Haskell remains an academic curiosity while “functional” languages like F# allow mutable records and lambda calculus simplicity (this is a recurring theme) is not as simple as it might appear. As Backus pointed out in his functional programming speech: “The semantics of Church’s lambda calculus depends on substitution rules that are simply stated but whose implications are very difficult to fully comprehend. The true complexity of these rules is not widely recognized but is evidenced by the succession of able logicians who have published ‘proofs’ of the Church-Rosser theorem that failed to account for one or another of these complexities. (The Church-Rosser theorem, or Scott’s proof of the existence of a model [22], is required to show that the lambda calculus has a consistent semantics.) The definition of pure Lisp contained a related error for a considerable period (the ‘funarg’ problem). Analogous problems attach to Curry’s system as well.”

In the same paper Backus writes: “pure Lisp is often buried in large extensions with many von Neumann features.” Computer science is, for some reason, a field in which super simple and elegant formulations often turn out to require all sorts of “minor” exceptions and extensions that end up, in the case of programming languages, back at FORTRAN.

It’s not clear to me that models of computability (in the sense of metamathematics) have much to do with programming computers.

“I think it’s largely a matter of taste although I note that Haskell remains an academic curiosity”

It’s used in production systems and to build software for EDA, finance, and other industries. They all report good results in terms of performance and correctness. Especially Bluespec and Galois people. Or did you mean something else?

Note: I’m also writing this comment in light of pushcx’s link & others I’ve read showing lambda calculus’s influence on how it’s built. So, it’s an LC-inspired, practical language whose intermediate forms in the compiler are LC derivatives, with successful production deployments. At least for Haskell, LC and computation seem connected, with one influencing the other.

Galois is essentially a DARPA funded R&D house, no? BSV now looks like C according to all reports. Here’s a code example:

rule step1 (state == 1);
   counter.increment();
   counter.decrement();
   state <= 2;
endrule

Finance - the big story was Credit Suisse, which replaced Haskell with F#

I get that people like it and “academic curiosity” is perhaps an unkind characterization, but …

…but you haven’t set any goalposts for being more than “an academic curiosity” so you could find an excuse to dismiss anything.

Because I’m a masochist, here’s another example. How about something like how Facebook uses monads for concurrency in Haxl, their world-class ridiculous-scale spam-fighting? In talks, they said explicitly that there wasn’t anything they couldn’t do in C++ (or Java? it’s been a while), but sequencing, especially in response to dynamic loads, was a practical problem. It wasn’t possible to rearrange the linear flow of non-composable strict code to meet their performance needs.

Though you clearly understand a lot of the concepts, your recent posts read like you’re looking up from the blub paradox. You have not stumbled into a secret club of wankers who are waiting to be dazzled how simple everything could be if we’d just throw out some of these needless abstractions.

Haxl is interesting - thanks for the link.

I like the blub essay. My reservations are 1) any argument that dismisses Eric Raymond seems strong whether it is or not 2) what they claim as advantage from use of Lisp could easily and more probably be an advantage from programmer quality - my experience is that great assembly programmers or PHP programmers or even MUMPS programmers can blow the doors off mediocre ones in your-favorite-language. Maybe their advantage came from a great team with a good design using tools that they understood well and liked, competing against B teams that were under the thumb of horrible HR departments.

I don’t have anything against Lisp, despite having had some really obnoxious arguments with John McCarthy about nuclear power 100 years ago. It’s definitely not a functional language in practice.

As for Haskell/LC I am far from expecting anyone to be dazzled by my questions - which are genuine and possibly stupid. My instinct is to dismiss the claimed mathematical basis as nonsense based on experience and a general dislike for what appears to be needlessly complex. But I’ve been wrong before, so I’m rereading some papers and asking what may well be annoying questions. To me, much of what is claimed as a result of category theoretical analysis looks like the discovery that functions returning pairs of (condition code, value) can be composed easily - which is a perfectly plausible approach to software engineering if not taken too seriously. And carrying state as a parameter may be a good idea too, but it’s still state. Maybe I’m missing something important though.

Just as a side note: C is often derided as lacking mathematical rigor, but take a look at Dennis Ritchie’s work as a graduate student: http://onlinelibrary.wiley.com/doi/10.1002/malq.19720180405/abstract . I think it explores the mathematical structure which makes C so clear to some of us.