Source: me 😛. I tested an earlier version of this.
I trust the source of this message. Thanks, @freddyb!
What is a good practice for making builds reproducible when using babel-preset-env’s relative queries? When trying to debug an issue in an older (but still supported) version of a library, is there a way to build it using historic browserslist data? Perhaps some sort of browserslist lock file?
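For concreteness, here’s the kind of thing I mean (hypothetical version numbers; as far as I understand, browserslist resolves queries like `last 2 versions` against the locally installed caniuse-lite data, so pinning that package — e.g. via yarn’s `resolutions` — would effectively freeze the data):

```json
{
  "devDependencies": {
    "browserslist": "2.4.0"
  },
  "resolutions": {
    "caniuse-lite": "1.0.30000697"
  }
}
```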
Heh, I was going to comment “what’s the point, innerHTML is slow and dangerous and the virtual DOM is what makes react fast and secure in the first place”, but it makes much more sense in the js13kGames jam context.
Using template strings over JSX is also pretty nifty. Looking forward to seeing your game, stas!
Thanks, freddyb! I realize that using innerHTML has its issues and I hope I managed to convey that in the project’s README. The whole thing is a pet project, a what-if experiment, and an excuse to learn more about one-way bindings and the Virtual DOM (by showcasing what happens when it’s not there).
To make innerself slightly more serious, I’ve recently added a rudimentary sanitization function :)
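For the curious, a rudimentary sanitizer of that kind might look something like this (a sketch, not innerself’s actual code — it escapes the characters that the HTML parser would otherwise interpret):

```javascript
// Sketch of a rudimentary sanitization helper (not innerself's actual
// implementation): escape characters that would otherwise be parsed as
// HTML before the value is interpolated into a template string.
function sanitize(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(sanitize('<img src=x onerror="alert(1)">'));
// → &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```

Escaping on interpolation like this neutralizes injected markup, though of course it doesn’t help if untrusted input ends up in the template itself.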
A week has passed and the game is now ready to play at http://js13kgames.com/entries/a-moment-lost-in-time.
Initially I felt very productive using innerself and I was quickly able to add all the views and put the logic in the reducer. In the later stages of development we focused on UI polish and I set out to add transitions between views. As you can imagine, CSS didn’t appreciate the fact that innerself would re-render entire DOM trees when the state changed. I’m not very happy with the result: I ended up using setTimeouts timed precisely with animation events to work around those re-renders.
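In case it helps anyone, the workaround boiled down to something like this (a simplified sketch with hypothetical names; the real code keys the timeouts off the animation events, with durations matching the CSS):

```javascript
// Simplified sketch of the workaround: postpone the innerHTML re-render
// until the outgoing view's CSS transition has had time to finish,
// instead of re-rendering synchronously on every state change.
const TRANSITION_MS = 300; // assumed to match the CSS transition-duration

function renderAfterTransition(render) {
  return new Promise(resolve => {
    // The real app coordinates this with transitionend/animation events;
    // a setTimeout tuned to the same duration is the blunt version.
    setTimeout(() => resolve(render()), TRANSITION_MS);
  });
}

renderAfterTransition(() => '<div class="view">next view</div>')
  .then(html => console.log(html));
```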
I guess this counts as yet another example of a use case that innerself isn’t well suited for. Any time the DOM is stateful (forms, animations, transitions, video, audio), re-rendering by assigning to innerHTML is a bad idea.
this is cool!
for those interested, this article is another interesting exploration of the tradeoffs of a super simple react (not redux) clone:
Thanks for the links; they’re very helpful! The limitation described in the article you linked to is the same that I ran into: just replacing the whole DOM tree on each update, as fast or slow as innerHTML may make it, makes it impossible to create viable interfaces which update every time the user types a character into the UI. It also hurts accessibility.
For a moment I was tempted to try and somehow mark DOM elements which should update on key press events etc. I would then need to also prevent re-renders… which essentially puts us back to square one: two-way bindings. In the end I decided to make this a known limitation of innerself. It’s probably fine to say that you can’t create every imaginable interface in it. I’m less happy about how inaccessible it ends up being.
It makes me wish for a Virtual DOM diffing done natively by the browser.
It looks pretty cool to me! I think the thing I want to know the most (but haven’t really been able to find an answer to) is the performance of using async/await when compared to normal promises. I think there are two dimensions to this. The first, and slightly less useful one at present, is a comparison without any transpiling and relying on the browser native implementation. The second, and much more useful at present, is a comparison using transpiling and a generator runtime like regenerator. Namely, if I want to use async/await today, then I probably need to use regenerator. If I do that, what have I lost? (Other than perhaps good stack traces.)
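For concreteness, this is the kind of pair I’d want to benchmark — the same trivial operation written with plain Promise chaining and with async/await (which Babel rewrites into a generator driven by a runtime like regenerator):

```javascript
// The same logic written both ways, as the unit one would benchmark:
// plain Promise chaining vs. async/await.
function doubledThen(getValue) {
  return getValue().then(v => v * 2);
}

async function doubledAwait(getValue) {
  const v = await getValue();
  return v * 2;
}

const getValue = () => Promise.resolve(21);
Promise.all([doubledThen(getValue), doubledAwait(getValue)])
  .then(results => console.log(results)); // [ 42, 42 ]
```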
I have no idea what the performance using regenerator is like, but I can comment on native performance. Right now, it’s mediocre to pretty bad, so entirely unusable in tight loops. Then again, the same is true for Promise itself, too. But yes, it’s worse for async/await, because that needs not only Promises, but also generators, both of which have fairly fundamental perf issues. In most engines, generators don’t run in the optimizing JIT tier. I think V8 is currently the only exception (but I might be wrong). I’m not sure how much that matters relative to the required trip through the browser’s event loop/task queue.
So for the general case, both Promise and async/await will always have non-negligible overhead. However, there are certain, pretty common, scenarios in which async/await should be able to perform substantially better, to the point of being optimizable into simple fully JIT-compiled loops. SpiderMonkey’s bug 1317481 has more information, but the gist is this: because the resumption after an awaited Promise is fulfilled is guaranteed to be the first frame on the stack, we know that the current task will end upon encountering the await. If in this situation we can ensure that (a) the awaited value is not a Promise or other thenable (a plain value is simply wrapped into an immediately resolved Promise) and (b) no other tasks are enqueued in the microtask queue, we can simply resume execution immediately without an observable difference. (Note that this only holds from the second iteration onwards: the first time the await is encountered, there are other frames on the stack.)
This might seem like it’d only hold for fairly esoteric circumstances, but I’m pretty sure it’ll be a very common scenario. E.g. a ReadableStream that has multiple chunks pending will result in such a tight loop.
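A minimal sketch of the loop shape I mean (hypothetical code; the awaited values are plain, already-available values, so condition (a) above holds):

```javascript
// Sketch of the tight-loop shape described above: from the second
// iteration onwards, resuming after each `await` happens with only this
// frame on the stack, which is the case the proposed fast path targets.
async function consume(chunks) {
  let total = 0;
  for (const chunk of chunks) {
    // `chunk` is a plain value, not a thenable, so an engine could in
    // principle resume immediately instead of round-tripping through
    // the microtask queue.
    total += await chunk;
  }
  return total;
}

consume([1, 2, 3]).then(total => console.log(total)); // 6
```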
(Reply moved from top-level so it’s clear what I’m replying to.)
Re. regenerator performance, if I’m reading http://kpdecker.github.io/six-speed/#test-generator right, the native implementations in recent versions of JS engines are faster than the babel-transpiled ones. In all cases, however, the code using generators is slower than the code rewritten to not use them at all.
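That is, the comparison is roughly between these two shapes — a generator-based sequence vs. the same thing rewritten without the generator machinery:

```javascript
// The two shapes being compared: a generator-based sequence vs. the
// same sequence produced by a plain loop.
function* rangeGen(n) {
  for (let i = 0; i < n; i++) yield i;
}

function rangeLoop(n) {
  const out = [];
  for (let i = 0; i < n; i++) out.push(i);
  return out;
}

console.log([...rangeGen(4)]); // [ 0, 1, 2, 3 ]
console.log(rangeLoop(4));     // [ 0, 1, 2, 3 ]
```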