1. 26
  1.  

  2. 11

    “Game is 80,000 lines of code. Compiles in 1 second on desktop, 1.5 seconds on laptop. Compiler has gotten a bit slower but will get faster again.”

    That’s single-threaded on one core. Their goal for parallel compilation is 1 million lines a second. They believe quick compiles mean quick iterations, with more productivity as a result. That was proven out in LISP and Smalltalk. So, this language is already interesting as a C++ replacement if it’s performance-focused, simpler, and has quick iterations.

    1. 3

      The more I see stats like this, the more I wonder if it was right for Go to forgo templates in favor of simplicity and compilation speed.

      1. 3

        You can have both. I argued in those debates for at least three tiers:

        1. Interactive or REPL for instant results with live image. This gives productivity of LISP and Smalltalk.

        2. Go-style native compiles that give you decent speed really fast.

        3. Profile-guided, optimizing compiler for release mode.

        The main advantage of interpreters and fast compiles is that you maintain a mental state of flow. That lets you develop at near-peak performance. The optimized ones can make software more competitive on user-facing speed, or cost less to run in terms of VMs or whatever. The speed difference doesn’t always matter, though, as both Go and Python show. PHP, too.

        So, I advocate having each available so the user can choose the best option for them. Per-module, too, so fast paths can be ultra-optimized if needed. You can do that with GCs, too, where different apps, or pieces of apps, use the memory management that’s right for them.

    2. 4

      I’ve been watching Jon Blow’s language design since he started in 2014, and I gotta say, he’s made some impressive strides. The original “conceptual” talks[1], which don’t feature any code at all, give a lot of great justification for his design decisions. Of course, this video comes four years later, after he’s been actively working on the project, and I’m interested to see how it has diverged from and expanded on his original vision.

      [1] First talk, second talk.

      1. 1

        The dynamically scoped global context made me happy. No other language except Common Lisp has that as far as I know. I am excited to use this.

        1. 2

          Won’t that make optimizations extremely hard? I haven’t watched the video, so I don’t know the details (and the Jai language primer makes no mentions of contexts), but if you can’t tell statically what’s in scope, it seems to me that most analyses will have to conservatively assume that the universe is in scope, no?

          1. 3

            Things may have changed since the last demo I saw of Jai contexts, but this seems to be something intended to be used sparingly, or at least the context should contain only a few root object pointers. Functions that use the context simply desugar to context-passing style. The really interesting problem is what to do about higher-order code.

            One other thing that makes this easier: Jai is focused on fast full compilation, so it doesn’t suffer from the usual restrictions imposed by separate compilation. It would be possible to do conservative global analysis (very cheaply!) to compute which functions need which partitions of the whole context.

            1. 1

              Scope and optimization here are separate questions and I don’t see how they’re related. Regarding scope, I don’t know the full details but I would assume you have to declare the global variables beforehand, so it’s not like you can introduce arbitrary variables into the context. The compiler knows exactly which static addresses are accessible and which are not. Perhaps that answers your question?