1. 2

    I fell into this trap before. Some years ago, at my first job, I was in charge of designing and developing a Web broadcast planning tool. I replaced all the native array methods with hand-written forEach/map functions and some lodash ones.

    It was (a bit) faster. But I spent ~1 month doing it.
    The problem is, it added too little value and cost a month of development.

    1. 2

      agree, exactly. I bet there are lots of similar stories out there

    1. 1

      Interesting, I was building one too. Can you share some thoughts about Node’s networking performance? For the two games you mentioned, what is the peak CCU they can handle before they start running into lag?

      1. 2

        Hey, cool project! The two games I built (generals.io and geoarena.io) are a little different in the sense that they aren’t one big arena for all the players. Because of that, the performance characteristics will look a bit different, so my experiences might not be as useful to you. FWIW, on the day generals.io hit #1 on Hacker News, the game had 50k users playing over that 24 hour period and my Node.js server running on a $20 DigitalOcean droplet did okay, but not great. There was definitely some lag, but everyone could play!

        1. 2

          Thanks for making Generals! I spent a few months playing it most days, was a lot of fun. I think it’s plausibly the next best RTS game after Starcraft 1/2 :)

          1. 1

            Glad you enjoyed! I’m a big Starcraft 2 player / fan, so it was a lot of fun to make generals :)

            1. 2

              Nice, me too! Generals felt a lot like practicing keeping up with Zerg creep spread while microing.

      1. 12

        Your “at least it’s not all bad” popup still hijacks my spacebar and prevents me from reading to the end of your post.

        1. 4

          good point, I’ve removed that.

        1. 13

          Just a note about your site: the “At least this isn’t a full screen popup”-popup captures keyboard input, so if you’re scrolling with the keyboard you need to deselect it before continuing. Pretty annoying, IMHO.

          1. 2

            This is definitely one of those websites where disabling JS results in a much better/usable experience.

            1. 1

              hey, thanks for the feedback - you’re definitely right. I’ve removed that.

            1. 3

              I have been using the default webpack config that Vue CLI set up; it has been working well for me. Also, I wonder what the performance penalty of multiple files is now with HTTP/2: on the first request the server can send the HTML and every JS/CSS file at the same time, so you don’t have multiple round trips.

              What seems more efficient to you: downloading 1000 lines of code 10 lines at a time, or downloading 1000 lines of code all at once?

              This is a terrible way to explain something. You can’t download 1000 lines all at once; they all come one bit at a time. Also, vague feelings about speed are not useful: measure the speed before and after the change and then use the results to make your decision. It certainly was true that multiple JS files caused slowdowns, because browsers would only request a limited number of resources at the same time, but I doubt it has much of an effect anymore. If I were writing a blog post and posting it here, I would actually test this claim.
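
              As a rough sketch of what I mean by measuring (run in the browser’s DevTools console; it only uses the standard Resource Timing API):

              ```js
              // List every script the page downloaded and how long each request took.
              const scripts = performance.getEntriesByType('resource')
                .filter((entry) => entry.initiatorType === 'script');

              for (const entry of scripts) {
                console.log(`${entry.name}: ${entry.duration.toFixed(1)} ms`);
              }
              console.log(`total script requests: ${scripts.length}`);
              ```

              Run something like that before and after bundling and compare the numbers, rather than going on intuition.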

              1. 1

                hey, thanks for the feedback, I’m going to tweak the wording to hopefully be clearer

              1. 5

                Count them. That’s 22 script includes.

                Requesting this many scripts was a major network bottleneck, and as a result my site took a long time to load. Web performance matters — its importance has long been well-known and documented. What seems more efficient to you: downloading 1000 lines of code 10 lines at a time, or downloading 1000 lines of code all at once?

                …What?

                I know we need to measure things and not go with intuition on performance, but my intuition is that 22 scripts — in the grand scheme of things — is not particularly many. I mean, the webpage that this blog post is written on makes 77 requests, with the first ~25 of those happening in the first couple of seconds.

                Did he actually measure this “speed problem”? Was downloading 22 scripts verifiably a “major network bottleneck”?

                1. 1

                  I suspect it was a “time to first paint” issue, which would certainly make it feel like what the author describes. I agree that it is an imprecise way to say it though.

                  1. 1

                    good point - I didn’t measure the speed problem but probably should’ve. It’s much less of a problem (if a problem at all) now with HTTP/2, but back in 2016 this was a bit more relevant. Regardless, “major network bottleneck” is an exaggeration and I’ll edit this a bit for clarity. Appreciate the feedback!

                  1. 3

                    With HTTP/2, many small files are not a problem. Just ship ES modules directly to browsers: more granular caching, fewer dev tools.
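
                    A minimal sketch of what I mean (the file path and function name here are made up):

                    ```html
                    <!-- Load the app as native ES modules, no bundler involved.
                         Each file is cached (and re-fetched after a change) independently. -->
                    <script type="module">
                      import { renderApp } from '/js/app.js'; // hypothetical module path
                      renderApp();
                    </script>
                    ```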

                    1. 1

                      yeah, the small files problem was more relevant in 2016

                      1. 1

                        It’s not as big of a problem if you know all of the files up front, but ES modules don’t give you that list without a bundler or bundler-like tool. See this post for why HTTP/2 doesn’t solve the problem: https://engineering.khanacademy.org/posts/js-packaging-http2.htm
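
                        To illustrate the issue: with plain ES modules the browser only discovers an import after it has fetched and parsed the file that contains it, so deep dependency chains become sequential round trips (file names below are made up):

                        ```js
                        // --- main.js (hypothetical) ---
                        // The browser fetches this first, and only then discovers render.js.
                        import { render } from './render.js';
                        render();

                        // --- render.js (hypothetical) ---
                        // Fetched next; only after parsing it does the browser see vdom.js,
                        // so each level of the import graph costs another round trip.
                        import { createElement } from './vdom.js';

                        export function render() {
                          document.body.appendChild(createElement('p', 'hello'));
                        }
                        ```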

                        1. 2

                          Use modulepreload to preload all the module files:

                          https://html.spec.whatwg.org/multipage/links.html#link-type-modulepreload
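
                          Something like this in the page’s head (module paths are made up) tells the browser about the whole graph up front instead of letting it discover imports one fetch at a time:

                          ```html
                          <!-- Preload the whole module graph so everything can be fetched
                               in parallel instead of one round trip per import level. -->
                          <link rel="modulepreload" href="/js/app.js">
                          <link rel="modulepreload" href="/js/render.js">
                          <link rel="modulepreload" href="/js/vdom.js">

                          <script type="module" src="/js/app.js"></script>
                          ```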

                          1. 2

                            Thanks for the pointer! I hadn’t seen this before and haven’t been able to find good information on browser support yet. Do you know how well supported it is?

                            Additionally, it fails to solve two of the problems with just serving ES modules:

                            1. You still need to provide a static list of modules, so you’ll need a tool to convert your dependency graph into that list (a bundler?)

                            2. By splitting everything into multiple files, you lose compression efficiency. Assuming you need everything anyway, shipping a single bundle (or a small number of bundles) will produce the best results.

                            1. 1

                              Yeah, it seems to be a little difficult to find information on. I know there’s support in Chromium, but I’m not sure it’s in the others just yet. I’ve only experimented with this a little on small projects and nothing in production, so I’m still trying to get my head around how it would work when things get larger and more complex! In that case bundling will probably always be a good idea. I also agree with the point about compression: below a certain size it’s probably not a big issue, but there will definitely be a point where you need to consider it.

                              My hope is just that eventually I will be able to use this and avoid complex build steps for smaller projects with a manageable number of modules.

                      1. 3

                        Victor: I wonder what percentage of the data set was used to train the model, and whether the test results (i.e. accuracy, F1 score) are from the previously unseen portion. That would be good to note alongside the results.

                        Also, do you think more accurate results could be achieved by not using an [unordered] bag of words model? For example, would an RNN (or, specifically, LSTM) for sequence classification perform better? Here’s an example of what I mean. It seems like a good portion of the “profane/hate speech” requires more than one word to go over the line, as it were.

                        1. 5

                          hey, good questions. I actually experimented with a lot of different models (including LSTM-based models), and the ones that performed better than the BOW model did so at a huge cost to performance. Since this library is intended to be accurate but also performant, I decided to go with the BOW model because it’s quite robust in many cases while also being extremely fast.

                          The train/test split was 80/20, and the test results are of course on the unseen test data. I followed standard procedures when experimenting.
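
                          For anyone unfamiliar, a bag-of-words model just turns text into word counts and ignores order entirely. Roughly this kind of thing (a toy sketch with a made-up vocabulary, not the library’s actual code):

                          ```js
                          // Toy bag-of-words featurizer: map a sentence to word counts over a
                          // fixed vocabulary. Word order is thrown away, which is why it's fast,
                          // and also why multi-word phrases can slip past it.
                          const vocab = ['you', 'are', 'a', 'nice', 'person']; // hypothetical vocabulary

                          function bagOfWords(text) {
                            const counts = new Array(vocab.length).fill(0);
                            for (const word of text.toLowerCase().split(/\s+/)) {
                              const index = vocab.indexOf(word);
                              if (index !== -1) counts[index] += 1;
                            }
                            return counts; // e.g. fed to a classifier
                          }

                          console.log(bagOfWords('you are a nice person')); // [1, 1, 1, 1, 1]
                          console.log(bagOfWords('person you are'));        // [1, 1, 0, 0, 1] — same words, order lost
                          ```

                          An LSTM does see the order, which is why it can catch those multi-word phrases the question mentions, but in my experiments that came at a real cost in speed.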

                        1. 3

                          I’m rebuilding my personal site/blog from scratch with Gatsby.js and trying to make some more interesting posts. It’s been fun so far, excited to see where it goes :)