Congratulations to the Racket team; this is a huge milestone. I love that they’re putting an emphasis on maintainability and a correct approach, understanding that this investment will make it much easier to improve optimizations down the line.
A massive amount of work went into making something like this happen. Thank you.
Super exciting to have the Racket CS base right now. The corollary to being substantially easier to maintain is the possibility of growing a much wider base of contributors, including compiler and low-level hackers. It really sets the base for years of growth in the implementation.
Down the road, an alternative IO library giving Racket full native-thread concurrency from top to bottom, or more likely an M:N mapping of green threads onto native threads (along the lines of Haskell GHC’s approach), would be interesting.
I’m not a compiler guy, but I’m of the general belief that the Chez compiler is still cutting edge, and that exciting work on the compiler front is still ongoing, e.g. its nanopass design.
So the benchmarks show that everything is faster with Chez Scheme, except VM startup is slower? That seems to be the prevailing trend in VMs.
Python 3 is the same way now – everything is faster than Python 2, except startup. There was a good PyCon talk showing this, but this message also gives some numbers [1].
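For anyone who wants to reproduce this locally, here is a rough sketch of measuring interpreter startup by timing `python -c pass` (not a rigorous benchmark; the linked python-dev thread controls for far more, but this shows the idea):

```python
import subprocess
import sys
import time

def startup_time(interp, runs=5):
    """Return the best-of-N wall-clock time to run `interp -c pass`."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([interp, "-c", "pass"], check=True)
        best = min(best, time.perf_counter() - start)
    return best

print(f"startup: {startup_time(sys.executable) * 1000:.1f} ms")
```

Swap in `python2`/`python3` paths for `sys.executable` to compare the two.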
I have a prototype of a shell solution to this here:
Shell Protocol Designs http://www.oilshell.org/blog/2018/12/05.html#toc_2
https://github.com/oilshell/shell-protocols/tree/master/coprocess
It requires around 100 lines of code in the VM or application, assuming the application can make libc calls like dup2(). If anyone wants to help prove out the protocol, let me know :)
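As a hypothetical illustration of the fd-swapping trick involved (this is not the actual protocol above, just the dup2() mechanic it relies on): a coprocess can redirect its own stdout onto a caller-supplied descriptor per request, then restore it afterward.

```python
import os

def redirect_stdout_to(fd):
    """Point fd 1 at a caller-supplied descriptor; return a handle to restore it."""
    saved = os.dup(1)   # save the original stdout
    os.dup2(fd, 1)      # stdout now writes to the caller's fd
    return saved

def restore_stdout(saved):
    os.dup2(saved, 1)   # put the original stdout back
    os.close(saved)

# Demo: route one "request" into a pipe instead of the terminal.
r, w = os.pipe()
saved = redirect_stdout_to(w)
os.write(1, b"response for this request\n")
restore_stdout(saved)
os.close(w)
print(os.read(r, 1024).decode(), end="")
os.close(r)
```

The real protocol of course needs fd passing, framing, and exit-status handling on top of this; the sketch only shows why dup2() is the key primitive.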
[1] https://mail.python.org/pipermail/python-dev/2017-July/148656.html
Not at all; the benchmarks show a good number of non-startup things that are faster on the old Racket.
Yeah I skimmed the first few rows of the graphics and misinterpreted. I think it would be easier to read if they had overlaid them somehow in a vertical format, or maybe provided a summary.
The LuaJIT VM starts up pretty quickly.