I got all excited till I realized type inference does not extend to function arguments or return values. In my experience that encourages you to write longer functions than you should, because longer functions get more benefit from the inference; split things out into enough small functions and you might not get any inference at all.
Edit: on the other hand, the snippet on the web site clearly shows lambdas that have inferred arguments and return values, so it’s hard to say what’s actually going on here.
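For what it's worth, that split is not unique to Lily. A rough illustration in TypeScript (not Lily, whose syntax differs), which also infers lambda parameters from context but requires annotations on named function parameters:

```typescript
// Contextual inference: `x` is inferred as number from the array,
// with no annotation on the lambda.
const doubled = [1, 2, 3].map(x => x * 2);

// A named function: the return type is inferred, but the parameter
// types have to be spelled out.
function scale(xs: number[], factor: number) {
  return xs.map(x => x * factor);
}

console.log(doubled);          // [2, 4, 6]
console.log(scale([1, 2], 3)); // [3, 6]
```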
from the author:
(The poster is thinking of reconstruction, à la Haskell.) I don't support that because I like being able to look at a function and know what I give it and what I get out of it. Sure, it's longer, but I like having each function be an endpoint, so to speak.
As for lambdas, I have a trick where type information is obtained in the emitter, and then the emitter hands control down to the parser. That continues in a loop so that lambdas have full type information.
Hello, and thank you for both answering in my stead and inviting me. Inference, as it stands right now, only goes left to right. The trick I do for lambdas is that I scoop them up as a single token. The parser just sees a fat token, sort of like how it sees a fat token when it gets a string. The parser passes the blob down to the emitter, which will have type information by the time the lambda is finally reached.
Once the emitter gets to the lambda, it has type information from whatever evaluation it has done so far. The emitter passes the blob and the type information back to the parser, which then parses the body of the lambda. Lily is pretty clever about figuring out the return type of a lambda. Also, for nested lambdas, the process just goes deeper until it bottoms out.
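A toy model of that "fat token" flow, sketched in TypeScript; the names and details here are made up for illustration and are not Lily's real internals:

```typescript
type Ty = "Integer" | "String";

// The parser scoops the lambda up as one opaque blob token,
// the same way it treats a string literal.
interface Blob { kind: "blob"; src: string }

// Stand-in for handing control back to the parser once the
// expected parameter type is known; the body is only parsed now.
function parseLambda(blob: Blob, paramTy: Ty): string {
  return `(|x: ${paramTy}| ${blob.src})`;
}

// The emitter reaches the call site with type information in hand
// and passes the blob plus the expected type back down.
function emitCall(expectedParam: Ty, lambda: Blob): string {
  return parseLambda(lambda, expectedParam);
}

const lam: Blob = { kind: "blob", src: "x * 2" };
console.log(emitCall("Integer", lam)); // (|x: Integer| x * 2)
```

For a nested lambda, `parseLambda` would hit another blob inside the body and the same handoff would repeat one level deeper.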
I think what the first poster is thinking of is reconstruction à la Haskell (where, when you define a function, the inputs and the output can be inferred in many cases). I could be wrong about that, though.
Interesting method. Does this work for all lambdas, or only those that are called in the same place they're defined, like in the snippet on the splash page? That is to say, could this trick be extended to named functions if you chose to, or is it necessarily more limited?
As much as it bugs me to have to spell out types for all functions, I could see Lily being a good fit for a game I'm working on that uses programming as a core mechanic. It's a space adventure that has the user writing a lot of Lua (most of the game is implemented in Lua), but it also pulls in a Lisp, and I've started adding a Forth implementation. It would be really cool to add a static language, but I can't find any that compile directly to Lua, so I think I'll have to resort to FFI if I want to do this. Would it be difficult to call Lily functions from Lua, provided the only data crossing the language boundary was doubles and strings? I'm afraid I'm fairly rubbish at C; the only nontrivial program I've written in it was a Forth.
All the other languages I’ve been using so far run in the LuaJIT VM. Since the game involves programming at runtime, adding another language would require shipping a full compiler in most cases, which is why Lily’s interpreter could simplify things a lot for me.
Edit: I saw from the HN thread that the plan is to write up a guide for embedding Lily after the next release comes out in a month or two, so it sounds like I should hold off and take another look once that's ready.
I like being able to look at a function and know what I give it and what I get out of it.
Dang; I had hoped that full inference was on the roadmap and that the author just hadn't had time to get to it yet.
I'm sure the author knows this, but inferring function types doesn't make it difficult to find out a function's type: in OCaml you just enter the function name at the top level and it tells you the type. And once the underlying inference is working, it's trivial to write editor/IDE tooling that displays types with a keystroke.
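TypeScript behaves similarly for return types (though not for parameters, so it isn't full reconstruction): the annotation below is omitted, yet an editor hover or `tsc --declaration` reports the inferred signature.

```typescript
// No return type written; tooling reports the signature as
// `function mean(xs: number[]): number` anyway.
function mean(xs: number[]) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

console.log(mean([1, 2, 3])); // 2
```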