It seems the trigger for their decision to move to Lua was that they couldn’t rely on the system’s Python version, and Python felt too big to ship with the code. I wonder if one could produce a stripped-down distribution of Python 3.4/3.5 that would fall into the acceptable range.
In the end, the blog post feels a bit like an “I bought 15 pairs of new socks and put my old socks in the bin” post. Sure, it feels great, but has the software really improved? Or will you realize after a few weeks that your situation hasn’t changed dramatically?
I think “not relying on the system version of $LANGUAGE” is one of those decisions that improves your life nearly 100% of the time when you need to distribute software to unknown machines.
I doubt the Python -> Lua switch helps that much beyond that, but it sounds like the tooling that came with it was nice.
I used to do that with the Python installations for Windows: just install it and rip out all the parts that could easily be removed. Python 2.4.2 ended up at 2.4 MB and Python 2.5.1 at 2.8 MB. So definitely doable.
I don’t see much of an upside to migrating from Python to Lua (or Ruby for that matter, or even vice versa), as the languages are essentially the same anyway. ctypes has been part of the Python stdlib for quite some time, so calling C code was never an issue to start with. Similarly with non-blocking code: Python has had Twisted for ages, and Tornado is not that new anymore either.
Starting from a clean slate is fun, but this post would’ve been almost equally valid if one string-replaced “Lua” with “Python” and “Python” with “Lua”.
This summarizes exactly what I felt about this post.
I just threw something up last week using luvit:
It’s an IRC bot which listens for web hooks from gitlab and announces them in our channel. It was pretty straightforward to put together. I’d have some reservations about using Lua for a larger server-side codebase due to its poor concurrency support and sloppy semantics around nil and arity checks, but for small tools you distribute to end-users it’s a great fit.
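To make the “sloppy semantics around nil and arity checks” point concrete, here’s a minimal sketch (the `greet`/`config` names are just made up for illustration): Lua does no arity checking, so extra arguments are silently dropped, missing ones arrive as nil, and a typo’d table field is just nil rather than an error.

```lua
-- No arity checking: extra arguments are silently dropped and
-- missing ones arrive as nil, so a typo'd call fails later (or never).
local function greet(name, greeting)
  return (greeting or "hello") .. ", " .. tostring(name)
end

print(greet("world"))            -- hello, world  (greeting is nil)
print(greet("world", "hi", 42))  -- hi, world     (extra 42 ignored)

-- Reading a missing table field is also just nil, not an error:
local config = { host = "irc.example.org" }
print(config.host)   -- irc.example.org
print(config.hosst)  -- nil  (typo goes unnoticed)
```

In a small tool you notice this immediately; in a large codebase the nil just propagates until something far away blows up.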
I’ve used prosody in production before and haven’t run into any major issues. I feel it’s about where Ruby is with concurrency and nil handling.
and sloppy semantics around nil and arity checks…
I’ve always seen Lua as a “low-level language” well suited for targeting from a compiler. l2l is the perfect idea, but ideally it would be a bit more sophisticated: support things like tree-shaking, have a larger library, etc. From there, nil handling and the like could easily be dealt with, since you can obviously just change the semantics.
But, I’m a bit surprised at your assertion of poor concurrency. I haven’t tested it by any stretch of the imagination, but I always assumed Lua’s coroutine library worked well. Maybe I should start testing these assumptions…
l2l is the perfect idea
It is pretty nifty, but by design its semantics stay as close to Lua as possible. So it’s very different from something like ClojureScript that attempts to retrofit immutability and sensible equality semantics onto a reluctant runtime. That’s great in that it’s much easier to debug, but you’re right that you’d want something more sophisticated for large application development. Maybe an l3l?
But, I’m a bit surprised at your assertion of poor concurrency. I haven’t tested it by any stretch of the imagination, but I always assumed Lua’s coroutine library worked well.
Well, sure; it depends on what you compare it to. Coroutines are a hell of a lot better than node.js callbacks, but you still need multiple processes to take advantage of multiple cores. There’s no true concurrency in vanilla Lua. Some runtimes let you spin up multiple Lua “processes” within a single Unix process, but they don’t share memory space and have to communicate by serializing tables over some kind of message passing.
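To make the “cooperative, not parallel” point concrete, here’s a minimal round-robin scheduler sketch (the `worker`/`log` names are just illustrative). The two coroutines interleave on a single OS thread; each runs only between its own yields, and nothing ever executes at the same instant:

```lua
-- Two coroutines interleave cooperatively on one OS thread: each runs
-- until it calls coroutine.yield(), then the scheduler (a plain loop)
-- resumes the next. Nothing ever runs in parallel.
local log = {}

local function worker(name)
  return coroutine.create(function()
    for i = 1, 3 do
      log[#log + 1] = name .. i
      coroutine.yield()
    end
  end)
end

local tasks = { worker("a"), worker("b") }

-- Round-robin until every coroutine is dead.
local alive = true
while alive do
  alive = false
  for _, co in ipairs(tasks) do
    if coroutine.status(co) ~= "dead" then
      coroutine.resume(co)
      alive = true
    end
  end
end

print(table.concat(log, " "))  -- a1 b1 a2 b2 a3 b3
```

Swap the scheduler loop for an event loop driven by I/O readiness and you essentially have the luvit model — still one core, though.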
Maybe an l3l?
I’ve thought about it. Lua is an interesting language to target, has a lot of potential use cases, etc. The problem, though, as you point out, is that Lua’s lack of multi-core support makes it hard to adopt for my typical day-to-day work.
Not sure that should matter, though, and I should probably work on more things that don’t have to scale like that. :)
There’s no true concurrency in vanilla Lua.
Coroutines provide concurrency. I think you mean there’s no true parallelism?
Oh, sorry; I was using the conventional English definition of concurrency, not the Pike neologism.
The differences between parallelism and concurrency are fairly well established in computing and certainly not due to Pike.
Maybe in some circles. I’ve heard plenty of usages of both meanings and prefer the literal definition (“actually happening at the same time”, which is clearly not true of Lua coroutines) myself. I certainly wouldn’t say there is a strong consensus on a single meaning of either term. Up until 2013 I had only heard the word “parallelism” used to mean “data parallelism” on IRC and at conferences.
I’ve heard plenty of usages of both meanings and prefer the literal definition.
The problem with the “literal”, dictionary definitions is that they more or less mean the same thing.
Concurrency is really a property of a program. Can two individual tasks make progress at the same time? If yes, then the program is concurrent.
But parallelism refers to a property at runtime. Can two individual tasks be concurrent without sharing (all) resources, i.e., can they each use a different processor?
I certainly wouldn’t say there is a strong consensus on a single meaning of either term.
I disagree. I think it’s common for a large percentage of our industry to be confused by the terms because the dictionary definitions are so close. There is a real distinction between these two terms, however, and that distinction has been around for many, many years.
Can two individual tasks make progress at the same time? If yes, then the program is concurrent.
The reason I use the word “literally” is that “concurrency” does not apply in cases like Lua except in a figurative sense. At any given time, only one piece of Lua code can be running per process; inactive coroutines are just that, inactive.
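For what it’s worth, you can observe this directly with `coroutine.status`: at any instant exactly one coroutine is "running", and everything else is "suspended" (or "normal"/"dead"). A quick sketch:

```lua
-- At any instant exactly one coroutine has status "running";
-- the rest are inactive ("suspended", "normal", or "dead").
local co
co = coroutine.create(function()
  assert(coroutine.status(co) == "running")  -- active only while resumed
  coroutine.yield()
end)

assert(coroutine.status(co) == "suspended")  -- created but never run
coroutine.resume(co)
assert(coroutine.status(co) == "suspended")  -- parked at its yield
coroutine.resume(co)
assert(coroutine.status(co) == "dead")       -- body returned
```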
I think it’s common for a large percentage in our industry to be confused by the terms due to the fact that the dictionary definitions are so close.
I have no problem distinguishing between the two concepts. I just use the older dictionary definition.
At any given time, only one piece of Lua code can be running per process; inactive coroutines are just that, inactive.
But this is true of even the most advanced M:1 threading systems as well! If you have a single process on a single processor, you can, by definition, only run one thing at a time. That doesn’t mean you can’t have a time-based scheduler preempt the current thread to switch which task is running, though! That’s exactly how pthreads, originally implemented purely as a library, worked.
Coroutines are a way to organize multiple executing tasks. There are some coroutine systems in which the executing coroutine is the only thing that can give up control, i.e., preemption isn’t possible. There are others, though, in which coroutines can interrupt and take control.
But, in either case, coroutines can be used to perform multiple tasks at once. Multiple coroutines could execute at the same instant, if the language runtime made multiple cores available. That is to say, if the language supported parallelism. This is what Go does.
As a final point, asymmetric coroutines (those that can take control from others) have been shown to be equivalent to one-shot continuations. I believe the paper (I think it’s called Coroutines Revisited, or something like that… sorry, on my phone) actually uses Lua’s coroutines to show it.
I understand everything you’re saying, but none of it explains why you would apply the word “concurrent” to a system where only one thing can happen at once when it’s meant something different for hundreds of years.
Why I would? Or why computing people would?
I’ve only adopted the terminology because it’s accepted, not because it necessarily makes sense in the literal sense.