This is intriguing, but the firehose-strength description kind of knocked me to the ground. This earlier description I found was a bit more approachable, until it lost me when defining True and False. Any hints for dummies who only know dummy languages like C++ and Forth?

The basic idea here is that, when you want to represent some data in lambda calculus, you’ve only got one option for how to represent it: It’s gotta be a function, because that’s the only tool the lambda calculus gives you.

You want to choose your functions so that whatever operations you want to perform on your data can instead be done by function application. So when they say that True := λx.λy.x; False := λx.λy.y, what they’re really saying is “These functions give us enough information to distinguish one from the other, which is the main thing we need from a single bit: if we know some value has got to be a bit, we can figure out which one it is by calling it as a function on two distinct values and seeing which of those values we get back.” (Note that this procedure is basically how you’d implement if/then/else, given that you’ve defined True/False this way.)
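That bit-probing trick can be written down directly in any language with closures; here’s a Python sketch (the names are mine, chosen to mirror the definitions above):

```python
TRUE = lambda x: lambda y: x    # True := λx.λy.x
FALSE = lambda x: lambda y: y   # False := λx.λy.y

def which_bit(b):
    """Figure out which bit we hold by applying it to two
    distinct values and seeing which one comes back."""
    return b("one")("zero")

assert which_bit(TRUE) == "one"
assert which_bit(FALSE) == "zero"
```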

Imagine you have a list (or stack) of things and pretend that λ is pronounced chomp. Even though a lambda is really just a function that always accepts exactly one argument (e.g. function (x) { ... }), we’re going to compare it to eating food just for fun.

Now let’s start off simple with the identity function. Here’s how a Krivine machine works. Say our list of things is λ0 x y z. We see the lambda so we chomp the x. It’s eaten into a closure where it’s stored at slot 0. Then we get 0 y z. We evaluate the 0. We pull x out of the closure and we arrive at x y z.
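The chomp-into-a-closure loop described above can be sketched as a toy Krivine machine over de Bruijn terms (the tuple encoding and names here are my own, not from any particular implementation):

```python
# Terms: ("var", n) | ("lam", body) | ("app", fun, arg)

def krivine(term, env=(), stack=()):
    while True:
        tag = term[0]
        if tag == "app":
            # Push the argument as an unevaluated closure, then
            # keep reducing the function position.
            stack = ((term[2], env),) + stack
            term = term[1]
        elif tag == "lam":
            if not stack:
                return term, env  # nothing left to chomp
            # "Chomp": pop one argument into the environment.
            closure, stack = stack[0], stack[1:]
            env = (closure,) + env
            term = term[1]
        else:
            # A de Bruijn index: resurrect the saved closure.
            term, env = env[term[1]]

I = ("lam", ("var", 0))               # λ0, the identity
FALSE = ("lam", ("lam", ("var", 0)))  # λλ0
TRUE = ("lam", ("lam", ("var", 1)))   # λλ1

# identity applied to false reduces back to false
assert krivine(("app", I, FALSE))[0] == FALSE
# (TRUE I FALSE) selects its first argument: I
assert krivine(("app", ("app", TRUE, I), FALSE))[0] == I
```

The machine never substitutes; it just shuffles closures between the argument stack and the environment, which is exactly the chomp-and-resurrect story above.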

One cool thing worth noting about the iconoclastic definitions in SectorLambda is that the if statement is defined as the identity function if = λ 0, since the behavior of if is kind of integral to the way this language operates. Most people actually don’t even bother including if in their lambda expressions for that reason. Another way if could be defined is λλλ [[2 1] 0], which is kind of like a longer version of the identity function, but that’s a better exercise to think about once you’re done reading this comment, especially if you mess around with the lambda.com -v program.
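The reason if can be the identity is that the boolean itself does the selecting; applying if just hands the two branches to the boolean. A Python sketch of that idea:

```python
TRUE = lambda x: lambda y: x    # λλ1
FALSE = lambda x: lambda y: y   # λλ0

IF = lambda c: c                # if = λ0, the identity

assert IF(TRUE)("then-branch")("else-branch") == "then-branch"
assert IF(FALSE)("then-branch")("else-branch") == "else-branch"
```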

So let’s move on to something more complicated than the identity function. Consider false which is λλ0 by our definition. This expression chomps twice, so if our stream of things is λλ0 x y z then we chomp and the stream becomes λ0 y z and 0=x is tucked away in a closure. Then we chomp again and the stream becomes 0 z and 0=y is tucked away in a new closure, which means we have to reindex or push down 0=x to 1=x. Now we need to evaluate 0 z. Since 0 in de Bruijn indexing references the argument passed to the nearest enclosing function, we resurrect y so the stream becomes y z.
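The chomp-twice behavior can be checked directly with ordinary host-language closures (a sketch; "x" and "y" stand for any two values):

```python
FALSE = lambda x: lambda y: y   # λλ0 in de Bruijn form

# First chomp stores x, second chomp stores y; the body (index 0)
# resurrects the most recently stored argument, which is y.
assert FALSE("x")("y") == "y"
```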

So what did λλ0 accomplish? In summary, it transformed the stream of things that follows from x y z to y z. It acted as a selector or predicate. It dropped the x and kept the y. So if false=λλ0, then surely true would be λλ1. True selects the first of the two things that follow in the stream, so it would transform x y z into x z.

In any case, this selection behavior is the basis for logic in the language. It lets us divide and conquer, and that’s really all you need to start building your coding empire.
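The standard Church-boolean connectives are built purely from that selection behavior; a Python sketch (to_bool is my own helper for decoding back to a host boolean):

```python
TRUE = lambda x: lambda y: x
FALSE = lambda x: lambda y: y

AND = lambda p: lambda q: p(q)(p)   # if p then q else p
OR  = lambda p: lambda q: p(p)(q)   # if p then p else q
NOT = lambda p: lambda x: lambda y: p(y)(x)  # swap the branches

# Decode by letting the boolean select between host True/False.
to_bool = lambda b: b(True)(False)

assert to_bool(AND(TRUE)(FALSE)) == False
assert to_bool(OR(FALSE)(TRUE)) == True
assert to_bool(NOT(FALSE)) == True
```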

One thing you may be wondering is: why are the definitions of true and false backwards compared to C++? You’d expect false to come before true. One example of what makes this convention useful is the EOF behavior. For example, the NIL terminator is also defined as false. In the monadic I/O model, what NIL does is skip ahead in the list of terms. In order to do that, it needs to hop over things until it eventually reaches an undefined term. That’s the point at which the VM knows it needs to perform a read I/O operation, in order to “grow” the input list. It also allows us to determine program exit, which is defined as a NIL selector skipping past the program image.

Thanks very much! I like the Pac-Man analogy. It’s reminding me of concatenative languages, except here it sounds like you’ve got two stacks — one is the “list of things” being chomped & barfed, the other is the list of saved closures?

Mind-bending in a fun way. I reckon I can look up “Krivine machine” for more info when I find the time for a good rabbit-hole dive…

A 2012 IOCCC entry implementing a BLC in 650 bits: https://www.ioccc.org/2012/tromp/hint.html

BLC is really intriguing!

nice to see more lightweight implementations of LC around :)

Pac-Man, that’s perfect. Yes, two lists. Plus, lists grow off each item in the two lists, and there’s also a free list.