1. 4

    Is it me or is Go an odd choice for mobile device development?

    1. 4

      Go is similar to Java, which is used for mobile development. On the other hand, I think the document is indeed pointing out one relevant difference, that Go produces large binaries.

    1. 2

      Similarly, math is about people, not proofs. Thurston’s On Proof and Progress in Mathematics is a classic.

        1. 2

          I see it uses SMT solvers to check the annotations. I know that Z3 is quite impressive, but am interested in how scalable this would be. Does the language design ensure these checks are kept ‘local’ (i.e. adding N definitions/calls adds O(N) time to the build), or can the constraints interact over a longer distance (potentially causing quadratic or exponential time increase)? I’d also like to know if there are common failure-modes, e.g. where some common code pattern can’t be solved without extra annotations.

          For example, the classic “hello world” for dependent types is to define Vector n t (lists containing n elements of type t), and the function append : Vector n t -> Vector m t -> Vector (plus n m) t whose output length is the sum of the input lengths. Woe betide anyone who writes plus m n instead, as they’d have to prove that plus commutes!
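
          As a concrete and purely illustrative sketch of that example, here is roughly what it looks like in Lean 4; the names Vec, plus and append are made up, and the point is only that the definition typechecks when the result length is written plus n m (whose recursion mirrors append’s), but would get stuck on plus m n until you prove commutativity:

            inductive Vec (α : Type) : Nat → Type where
              | nil  : Vec α 0
              | cons : α → {n : Nat} → Vec α n → Vec α (n + 1)

            -- A user-defined addition that recurses on its first argument,
            -- in step with the recursion in `append` below.
            def plus : Nat → Nat → Nat
              | 0,     m => m
              | n + 1, m => plus n m + 1

            -- Typechecks as written; with `Vec α (plus m n)` as the result type,
            -- the definition is stuck until `plus` is proven commutative.
            def append {α : Type} : {n m : Nat} → Vec α n → Vec α m → Vec α (plus n m)
              | .nil,       ys => ys
              | .cons x xs, ys => .cons x (append xs ys)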

          1. 3

            With SMT-based verification, each property is proved separately and assumed true when proving the others, so there’s no problem with locality. The problems are very different from those of dependent types. SMT is undecidable, so unless the tool is very careful about sticking to a decidable logic (like Liquid Haskell is), you’ll very often see the solver just time out with no solution, which is pretty much undebuggable. It’s also difficult to prove anything through loops (you have to write loop invariants), etc.

            1. 2

              In this case, the SMT problem isn’t undecidable: a friend pointed out the write!(solver,"(set-logic QF_UFBV)\n").unwrap(); line to me, which means “quantifier-free with uninterpreted functions and bitvectors”. That’s decidable, just super hard.
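
              For anyone curious what driving a solver over SMT-LIB2 text looks like, here is a hypothetical Rust sketch in the same spirit as that quoted line (it assumes a z3 binary on PATH, and the constraint itself is made up):

                use std::io::Write;
                use std::process::{Command, Stdio};

                fn main() -> std::io::Result<()> {
                    // Spawn Z3 reading SMT-LIB2 commands from stdin.
                    let mut z3 = Command::new("z3")
                        .arg("-in")
                        .stdin(Stdio::piped())
                        .stdout(Stdio::piped())
                        .spawn()?;
                    let solver = z3.stdin.as_mut().unwrap();
                    // Quantifier-free uninterpreted functions + bitvectors:
                    // decidable, though potentially very expensive to solve.
                    write!(solver, "(set-logic QF_UFBV)\n")?;
                    write!(solver, "(declare-const x (_ BitVec 32))\n")?;
                    write!(solver, "(assert (bvult x (_ bv10 32)))\n")?;
                    write!(solver, "(check-sat)\n")?;
                    drop(z3.stdin.take()); // close stdin so the solver answers
                    let out = z3.wait_with_output()?;
                    print!("{}", String::from_utf8_lossy(&out.stdout)); // expect "sat"
                    Ok(())
                }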

              1. 2

                Yeah, if you can get away with quantifier-free, everything is much better. But you can’t do much with C code without quantifiers. zz probably doesn’t support verifying loops.

                1. 2

                  I agree, unless ZZ has its own induction system for loops, using SMT only to prove the inductive hypothesis. It’s a lot of work though.

            2. 2

              Does the language design ensure these checks are kept ‘local’ (i.e. adding N definitions/calls adds O(N) time to the build), or can the constraints interact over a longer distance (potentially causing quadratic or exponential time increase)?

              I’m also interested in this in terms of API stability. If interactions happen at a longer distance it can be difficult to know when you are making a breaking change.

              1. 2

                Z3 “knows” that addition commutes, so that’s no problem. The usual trouble with dependent types is that you define addition yourself, so it is hard for Z3 to see that your addition is in fact addition.

                1. 1

                  Yes, I gave that example for dependent types (the type requiring a different sort of branching than the computation) since I’m more familiar with them.

                  The general question was about the existence of pathological or frustrating cases in this SMT approach (either requiring much more time or extra annotations) arising from very common assertion patterns, or where the “obvious” approach turns out to be an anti-pattern.

              1. 7

                After digging through the Go source code, we learned that Go will force a garbage collection run every 2 minutes at minimum. In other words, if garbage collection has not run for 2 minutes, regardless of heap growth, go will still force a garbage collection. We figured we could tune the garbage collector to happen more often in order to prevent large spikes, so we implemented an endpoint on the service to change the garbage collector GC Percent on the fly. Unfortunately, no matter how we configured the GC percent nothing changed. How could that be? It turns out, it was because we were not allocating memory quickly enough for it to force garbage collection to happen more often.

                As someone who’s not very familiar with GC design, this seems like an absurd hack. That this 2-minute limitation is hardcoded and not even configurable comes across as amateurish. I have no experience with Go – do people simply live with this and not talk about it?

                1. 11

                  As someone who used to work on the Go team (check my hats… on the Cloud SDK, not on language/compiler), I would say that:

                  1. It is a mistake to believe that anything related to the garbage collector is a hack. The people I met who worked on it were far smarter than I am and often had conversations that went so far over my head I may as well have walked out of the room for all I could contribute. They have been working on it a very long time (see the improvements in speed version over version). If it works a particular way, it is by design, not by hack. If it didn’t meet the design needs of Discord’s use case, then maybe that is something that could be worked on (or maybe a later version of Go would have actually fixed it anyway).
                  2. Not providing knobs for most things is a Go design decision, as mentioned by @sanxiyn. This is true for the whole language. I have generally found that Go’s design is akin to “here is a knife that’s just about sharp enough to cut your dinner, but you’ll find it fairly difficult to cut yourself”. When I worked with Java, fiddling with garbage collection was just as likely (if not more likely) to make things worse than it was to make them better. Additionally, the more knobs you provide across the language, the harder it is to make things better automagically. I often tell people to write simpler Go that’s a little slower rather than complex Go that’s algorithmically a little faster, because the compiler can probably optimize your simpler code. I would guess this also pertains to GC, but I don’t know anything about the underpinnings.
                  1. 6

                    One of the explicit design goals of Go’s GC is not to have configurable parameters. Their self-imposed limit is two. See https://blog.golang.org/ismmkeynote.

                    Frankly I think it is a strange design goal, but it’s not amateurism. It’s a pretty good implementation if you assume the same design goals. It’s just that the design goals are plain weird.

                    1. 13

                      I have no experience with Go – do people simply live with this and not talk about it?

                      My general impression is that tonnes of stuff about Go is basically “things from the 70s that Rob Pike likes”. Couple that with a closed language design team…

                      1. 2

                        It is configurable, though. You can set an environment variable to disable GC and then run it manually, or you can just compile your own Go with a bigger minimum interval.

                        Either would be a lot less work than rewriting a whole server in Rust, but maybe a rewrite was a good idea anyway for other reasons.

                        1. 2

                          or you can just compile your own Go with a bigger minimum interval.

                          I’m not sure “rewrite code to change constants then recompile” counts as “configurable”, nowadays.

                      1. 3

                        Changing to a BTreeMap instead of a HashMap in the LRU cache to optimize memory usage.

                        Why would a BTreeMap use less memory in Rust?

                        1. 3

                          Hash maps usually try to have a lot of empty slots for performance, while afaik a BTree will usually just have the necessary nodes, right?
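
                          A rough way to see the “empty slots” point in Rust (a toy sketch, not from the article; it only compares slot counts, and real memory use also depends on per-node overhead and the allocator):

                            use std::collections::{BTreeMap, HashMap};

                            fn main() {
                                let mut hash: HashMap<u64, u64> = HashMap::new();
                                let mut tree: BTreeMap<u64, u64> = BTreeMap::new();
                                for i in 0..100_000u64 {
                                    hash.insert(i, i);
                                    tree.insert(i, i);
                                }
                                // capacity() is how many slots the table reserved to keep its
                                // load factor low; len() is how many are actually used. The gap
                                // is space a BTreeMap, which only allocates nodes for real
                                // entries, doesn't pay for.
                                println!("hash: len = {}, capacity = {}", hash.len(), hash.capacity());
                                println!("tree: len = {}", tree.len());
                            }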

                          1. 1

                            I think it’s more about access patterns than just size. In a hash map the elements are essentially randomly jumbled about, whereas in a BTreeMap they can be sorted by time. The oldest elements will be closer together in memory and O(1) to find, which are both great news for an LRU cache.

                            1. 2

                              They didn’t say BTreeMap is faster, they said BTreeMap uses less memory. In fact, I very much doubt it was faster.

                              1. 0

                                I have no trouble believing it was faster for the workload. After all, that was the point of the article.

                            1. 2

                              Indeed!

                            1. 1

                              Not sure if this is correct, but: I can’t safely update the affected systems, as the connection to the update hosts may be compromised, with signature verification not working either?

                              1. 2

                                SwiftOnSecurity claims that Windows Update is not vulnerable:

                                https://twitter.com/SwiftOnSecurity/status/1217265731152289792

                                1. 1

                                  Thanks for the link! It would be interesting to have some design docs for this.

                                  1. 1

                                    The vulnerability is specific to elliptic curve cryptography. According to Twitter, Windows Update uses RSA as well.

                              1. 9

                                I can’t agree with this more. While doing olin/wasmcloud stuff I have resorted to this slightly insane method of debugging:

                                • debug printf the pointer of a value you want to look for
                                • immediately crash the program so none of the heap gets “contaminated”
                                • pray the value didn’t get overwritten somehow (in my subjective and wholly personal experience Rust does this sometimes, it’s kind of annoying)
                                • dump the webassembly linear memory/heap to a binary file
                                • search for that pointer using xxd(1) and less(1)
                                • grab my little-endian cheat sheet
                                • hopefully derive meaning from it

                                WebAssembly debuggers really need to happen, but until they do we are basically stuck in the stone age of printf debugging, black magick and tarot cards; at least it’s slightly easier to make a core dump. After a few hours, though, you learn to read little-endian integers without needing to break out graph paper.
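
                                For what it’s worth, the first two bullets above look roughly like this in Rust targeting wasm32 (a hypothetical sketch; the function name and the value being inspected are made up):

                                  /// Print the linear-memory offset of a value, then crash immediately
                                  /// so the heap isn't "contaminated" by further allocation before the dump.
                                  fn debug_dump_and_die<T>(label: &str, value: &T) -> ! {
                                      // On wasm32 a reference is just an offset into linear memory, so
                                      // this is the number to search for in the dumped heap with xxd/less.
                                      eprintln!("{}: 0x{:08x}", label, value as *const T as usize);
                                      std::process::abort();
                                  }

                                  fn main() {
                                      let suspect = vec![0xdeadbeefu32; 4]; // stand-in for the real value
                                      debug_dump_and_die("suspect", &suspect);
                                  }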

                                1. 2

                                  I’ve used instrumentation tools like Wasabi for dynamic testing during client assessments (yes, we have clients working in WASM already). Also, we’ve extended Manticore, our symbolic executor, to support WASM, and we have some symbolic debugging-like scripts for it internally.

                                  In general tho, I whole-heartedly agree; we’re seeing more and more of this sort of thing, and the tooling definitely needs to catch up.

                                  1. 2

                                    To be fair, the changes needed on the DWARF/LLVM side are super recent; I believe I saw commits related to this in LLVM as recently as December. The debuggers will catch up, but it takes time for some of this stuff to percolate outwards. I haven’t done any testing myself in the last few months, but I suspect the browsers have experimental support on the way shortly, if not already available. It will take a bit longer for this stuff to make it into, say, lldb, but not that much longer.

                                    The current situation does suck for those of us needing to build stuff with WebAssembly right now, but I keep myself sane by knowing that it’s just growing pains - we’ll get the necessary tooling sooner rather than later.

                                    1. 1

                                      Call me a pessimist, but existing debugging infrastructure might not be good enough for this. We might wanna start with something like AVR debuggers or other things made for Harvard architectures.

                                      1. 2

                                        Presumably there are different needs. The first need is for front-end / in-browser use cases. For that, foo.wasm.debug could effectively include all the debug symbols, a source map, or whatever, and you use the in-browser debugger. That seems fine.

                                        The server side is more problematic, I think, but that’s where your Harvard arch comes in. Provide a manhole cover in your execution environment that can be opened for the purpose of attaching a debugger, and poke and prod with step debuggers, perhaps? You’d still need the debug symbols and such… of course.

                                        1. 1

                                          JTAG into a wasm env; really this just needs a handful of exported fns from the env itself. One could run wasm in wasm and implement it yourself in user space.

                                    2. 1

                                      What if your wasm env could visualize memory in real-time and set breakpoints in any fn call or memory write?

                                      1. 1

                                        Debuggers have two sides, the symbol side and the target side. Breaking/resuming and reading/writing memory are done on the target side, and you can implement them in your WebAssembly environment. But that only lets you do break 123 or print 456, not break function or print variable. The article is mostly about the symbol side.
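
                                        A hypothetical sketch of what that target side could look like as an interface exposed by a wasm environment, in Rust (the trait and method names are made up, not a real API); mapping break my_function or print my_variable onto these raw calls is the symbol side’s job, which is what the article covers:

                                          /// Operations a wasm environment could expose so a debugger can
                                          /// break/resume execution and inspect linear memory.
                                          pub trait DebugTarget {
                                              /// Pause when execution reaches this code offset ("break 123").
                                              fn set_breakpoint(&mut self, code_offset: u32);
                                              /// Run until the next breakpoint or trap.
                                              fn resume(&mut self);
                                              /// Read raw bytes out of linear memory ("print 456" on an address).
                                              fn read_memory(&self, addr: u32, len: usize) -> Vec<u8>;
                                              /// Write raw bytes into linear memory.
                                              fn write_memory(&mut self, addr: u32, bytes: &[u8]);
                                          }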

                                        1. 1

                                          That would solve a lot of problems, but it would also be a lot of work :)

                                          I’m gonna see what I can do though.

                                          1. 1

                                            Here is a veritable zoo of binary visualizers

                                            https://reverseengineering.stackexchange.com/questions/6003/visualizing-elf-binaries

                                            I think the path of least friction is to run wasm3 within the browser so that you can have 100% control over execution. It will be slow (~25x), but it provides first-class access to all facets of execution of the code under debug.

                                      1. 2

                                        This is very cool. Now I wonder which other natural languages we could do this for. Malay might be a good candidate.

                                        1. 3

                                          My guess would be that Korean is another likely candidate, since at least simple sentences can be translated word-for-word from Japanese to Korean (the way you might’ve pretended to write Spanish or French when you were a kid). The Ryukyuan languages might also be good candidates, since I know they’re pretty close to Japanese.

                                          But that might honestly be it, if those even do work. I can think of a lot of other languages with very regular grammars that might be candidates (e.g. Turkish), but for every one I can think of, at least some feature (in the case of Turkish, vowel harmony) messes it up.

                                          1. 1

                                            Korean also has vowel harmony.

                                          2. 1

                                            Chomsky originally tried to do this for all languages, but he started with English. Formal and generative grammars were invented for natural languages, but they kind of fall short of their goal. There’s a reason why computer scientists like formal grammars a lot more than linguists do.

                                            1. 2

                                              Starting with English is either a sign of prudently testing to see if the hardest problem can be solved first or naively thinking that English is straightforward. Given that this was Chomsky, I’d imagine it was the former, although I’ve never seen that explicitly stated anywhere.

                                              1. 1

                                                English is about as complicated as any other language. Chomsky started with English because it’s the language he knows best.

                                            2. 1

                                              As evidenced by the post, you can’t really do this for Japanese (it’s a very limited subset). So limited, in fact, that you might do something similar for English.

                                            1. 1

                                              $915 a month looks downright cheap to me. It looks expensive only relative to other open source projects (such as Zig, at $887 a month), not relative to software work in general.

                                                1. 6

                                                  You should maximize the geometric mean of outcomes, not the arithmetic mean. This is called Kelly betting.
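
                                                  A toy illustration of the idea in Rust (numbers made up, not from the comment): for a repeated bet you want to maximize the expected log of wealth, i.e. the long-run geometric growth rate, rather than the expected value of a single bet.

                                                    // Kelly fraction for a bet paying b-to-1 with win probability p.
                                                    fn kelly_fraction(p: f64, b: f64) -> f64 {
                                                        (p * b - (1.0 - p)) / b
                                                    }

                                                    // Expected log growth per bet when staking a fraction f of wealth.
                                                    fn expected_log_growth(p: f64, b: f64, f: f64) -> f64 {
                                                        p * (1.0 + f * b).ln() + (1.0 - p) * (1.0 - f).ln()
                                                    }

                                                    fn main() {
                                                        let (p, b) = (0.6, 1.0); // 60% chance to double the stake
                                                        let f_star = kelly_fraction(p, b);
                                                        println!("Kelly fraction: {:.2}", f_star); // 0.20
                                                        for f in [0.05, f_star, 0.5] {
                                                            // Betting more than f_star raises the arithmetic mean of one bet
                                                            // but drags the geometric growth rate down (eventually below zero).
                                                            println!("f = {:.2} -> E[log growth] = {:+.5}", f, expected_log_growth(p, b, f));
                                                        }
                                                    }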

                                                  1. 3

                                                    Exactly, Kelly was one of the few that found that. Ole Peters generalized the Kelly criterion with ergodicity economics: https://ergodicityeconomics.com/

                                                  1. 53

                                                    *writes a rant about how developers are petulant little children who care more about “convenience” and “fun” than “performance”*

                                                    *puts it on medium*

                                                    1. 19

                                                      I really hate this kind of response. It’s super-trolly and contributes nothing except making you feel good about yourself for somehow finding the “loophole” that allows you to ignore the valid objections raised in the article.

                                                      We don’t all have to be perfect paragons of virtue at all times in order to be allowed to complain.

                                                      1. 15

                                                        They’re blaming all performance problems on moral failings. These people are too whiny and self-entitled to care about the customers, so they use convenience trash like WEBPACK and BOOTSTRAP. Their language oozes contempt.

                                                        In that context, “you used a bloated website for personal convenience” shows how little they actually thought this through.

                                                        1. 5

                                                          No, it doesn’t. It shows the fact that they didn’t have the time to put up their own blog website and maintain the code for it, which is a PITA. Do you think that the author works for Medium and personally contributes code for their website? All of the people that already agree with the sentiment that “bloat is bad” are going to hoot at your epic comment and meanwhile the average person will look at it and be confused about why you’re such a dick about people not having the resources to maintain their own blog.

                                                          1. 21

                                                            He’s saying that people who trade performance for convenience are bad developers, and then he trades performance for convenience by using Medium. I don’t see anything wrong with trading performance for convenience. I see something wrong with him being an asshole to people who trade performance for convenience and then doing it himself.

                                                            If he doesn’t know how to post his stuff anywhere but Medium, then he probably doesn’t know enough about web development to make sweeping critiques about web developers.

                                                            1. 6

                                                              He or she knows stuff is bloated. He or she posting on Medium does not invalidate his or her knowledge of bloat.

                                                              She or he is not an asshole merely for pointing out that bloat is a problem and we’re all complicit.

                                                              1. 12

                                                                He’s not an asshole for pointing out bloat. He’s an asshole because he lays all blame for bloat squarely on the feet of people who use frameworks, calls them “McDonald’s line order cooks”, and claims anybody using a convenience is doing silicon valley cosplay.

                                                              2. 6

                                                                For basically every developer alive, blogging is not a job, and nobody is spending 40 hours a week doing it. What a ridiculous false equivalence. Some people have families and lives outside of work they must dedicate time to instead of idealistic puritanism, and can yet somehow still criticize the work they see happen on the job. Strange concept, isn’t it?

                                                                1. 4

                                                                  That’s a lot of snark when it’s not at all obvious why work-vs-hobby is a critical distinction for determining whether or not something is eligible for criticism. In particular, software businesses (like hobbyists) also have finite resources (notably developer time) and must make decisions about how to spend those resources. Contrary to the false dichotomy put forth by TFA, these businesses’ interests are generally aligned with their users such that choosing to spend developer time on new/improved features is often a better value for the user than performance. I’m a performance hawk, but even I don’t pretend that performance is the only valuable feature–that’s patently absurd. It turns out everything is tradeoffs, and the people ranting about web performance without acknowledging the tradeoffs aren’t worth listening to.

                                                                  1. 0

                                                                    Did you respond to the wrong comment?

                                                                    1. 1

                                                                      Nope. And I’m not sure what about my comment makes you suspect I was responding to the wrong comment.

                                                                      1. 0

                                                                        I didn’t say anything about businesses having “infinite resources” so I was confused when you suddenly changed topic completely

                                                                        1. 0

                                                                          Not changing the topic; just giving you the benefit of the doubt. I was responding to the most charitable interpretation of your ambiguous comment, because while the most charitable interpretations was still clearly wrong, it is more understandable and less nonsensical than other interpretations. Feel free to clarify your position.

                                                                          1. 0

                                                                            So what you’re saying is that you grafted nonsense onto my statement because people using their time away from work for anything other than being productive is nonsensical to you? I don’t see why I should clarify my position when you’re obviously just being a troll.

                                                            2. 1

                                                              webpack implements tree shaking. It is not trash.

                                                            3. 3

                                                              Hate it or not, it neatly reflects the problem. Easy trumps performant.

                                                              1. 2

                                                                I disagree that this is a trolly response. It’s sarcastic and brief to be sure, but it certainly accomplishes something. It points out that the author of this article is a hypocrite, and even if hypocrisy isn’t the same thing as being wrong, it’s good to know that someone is being hypocritical when evaluating how to treat people who are wrong in the way described.

                                                                1. 1

                                                                  You can make everyone a hypocrite if you try.

                                                                  https://thenib.com/mister-gotcha

                                                                  https://thenib.com/mister-gotcha-vs-the-green-new-deal

                                                                  It’s a cheap and pointless tactic of Status Quo Warriors. Nothing can be changed because we’re all hypocrites for living with things as they are now.

                                                              2. 12

                                                                *writes a rant about how developers are petulant little children who care more about “convenience” and “fun” than “performance”*

                                                                *puts it on medium*

                                                                This is quite a dismissal (appeal to ridicule?). Are people who post on medium not allowed to complain about web performance, simply because they post on medium?

                                                                1. 15

                                                                  They can both be right and be hypocritical; pointing out the hypocrisy isn’t wrong just because they’re right.

                                                                  If /u/hwayne said, “You’re contributing to the problem you claim exists by putting your article on medium, therefore your argument is wrong”, they’d be committing a logical fallacy. They’re not saying that though; they’re agreeing with the author about what the issue is, and pointing out that the author is needlessly contributing to it.

                                                                  1. 9

                                                                    I don’t actually agree with the author, because they didn’t do anything to support their claim. They just said “it’s devs’ faults for using frameworks” and left it at that. This comment has a different explanation:

                                                                    While frameworks add overhead, they are hardly ever the main source of bloat. In my experience, there is always lower hanging fruit than initial page size:

                                                                    • Analytics and trackers
                                                                    • Ads
                                                                    • Huge third party SDKs (e.g. social login or videos with ads/DRM)
                                                                    • Large images
                                                                    • Fonts

                                                                    Only the last two have to do with developers. The others are, unfortunately, the current business model of the web.

                                                                    1. 4

                                                                      If developers didn’t make the decision to use the frameworks and libraries they use, then who did? Spooky ghosts?

                                                                      1. 7
                                                                        1. Are frameworks and libraries the primary cause of web slowness, or is it something else? How much does using frameworks and libraries contribute to slowness versus having tons of trackers and ads?
                                                                        2. How much faster would the website be if they replicated all of the functionality without frameworks and libraries? Is it enough to overcome the extra development time?
                                                                        3. Do devs use frameworks and libraries purely because of convenience, or are there other business constraints? Time to market? Responsiveness? Portability?
                                                                        4. Can the developers significantly improve performance without getting rid of the framework or library? Is the problem “they use a framework”, or “they haven’t had time to optimize their code, period”?
                                                                        1. 2
                                                                          1. Counter-question: have you ever in your life been employed to write client-side Javascript and the requirements didn’t list experience in one of JQuery, React, or Angular?
                                                                          2. Given that the greatest bottleneck for webpages is bandwidth and that tree shaking can only do so much, significantly faster if you’re on even somewhat poor internet (most of the US is) or a slow, congested device (a lot of the world is).
                                                                          3. This is a red herring because you’re assuming that because there are a multitude of justifications for development decisions that the developers making those decisions have no agency, which is absurd.
                                                                          4. Tree shaking has been mentioned many times on the page, but the moment a developer uses a scroll-based effect JQuery plugin or starts requiring DOM to be rendered through complex javascript, they are not only doing so because the architecture of the framework has inspired them to do so, but because the design of those libraries tell developers they don’t have to understand what the framework is doing. The problem is multitudinous, just like the justifications that you think relieve the developers of their agency in their decisions.
                                                                          1. 3

                                                                            Counter-question: have you ever in your life been employed to write client-side Javascript and the requirements didn’t list experience in one of JQuery, React, or Angular?

                                                                            Not the person you were asking, but yes, I have. Back when we called this stuff “DHTML”, and lamented that too many people copy/pasted snippets of JavaScript without understanding what they did.

                                                                            because the design of those libraries tell developers they don’t have to understand what the framework is doing

                                                                            There is no “bare metal” anymore. There is nobody who genuinely understands the entire stack anymore, and probably only a handful of people who even understand any single layer of it.

                                                                            Once, nearly two decades ago, I used to think I did, but even then it turned out I really didn’t. Everything from the JS library/framework to the language runtime to the browser itself to the operating system to the CPU it’s running on is full of too much deep magic for any one person to understand even with a lifetime to study it all. We are at the “if you want to make a pie from scratch, first you have to create the universe” stage.

                                                                            1. 1

                                                                              I mean, I wouldn’t equate the need for web developers to at the very least know the actual Javascript DOM API’s to a fancy for trying to understand “the entire stack” down to bare metal.

                                                                  2. 11

                                                                    I’m not dismissing them because they posted on medium! I’m not dismissing them because they complained about web performance on medium. I’m dismissing them because they say web devs are “an army of McDonalds line order cooks who fancy themselves as Michelin star chefs” on medium. If you’re going to be that contemptuous of web devs, you should at least show an iota of self-awareness about where you’re posting.

                                                                    1. 12

                                                                      1.2 MB transferred, 16.84s. That’s with an adblocker.

                                                                      No, they are not allowed to complain. A blog post of this length easily fits in 200kb, even with the worst CSS framework imaginable included; 10kb would probably be possible.

                                                                      1. 12

                                                                        They can complain all they like. I agree completely that Medium is a technologically subpar and gluttonous medium, but there are clearly non-technical reasons to use it, which (as evidenced) many people feel outweigh its shortcomings.

                                                                      2. 4

                                                                        I think it’s a form of whataboutism. Everyone lives in a glass house, so everyone should shut up and stop complaining, I guess.

                                                                        It’s a really cheap put-down, and I’m with you that it’s a ridiculous dismissal.

                                                                        1. 1

                                                                          There are many who do not like Medium. To read content posted on Medium I use a proxy that will leave only the content. The application is about 10 minutes of effort and the result is good enough - https://github.com/husio/readium

                                                                          Deploy your own instance or use https://readium-demo.herokuapp.com/@CM30/putting-devs-before-users-how-frameworks-destroyed-web-performance-6b2c2a506aab

                                                                          1. 2

                                                                            I think you’re correct to notice the mismatch in the medium and the message, but perhaps the channel was chosen specifically because the folks who may benefit most from the advice would read it on Medium instead of elsewhere.

                                                                            1. 6

                                                                              If they were choosing Medium specifically for that reason, they should have used Medium as an example. There was an essay a while back that did that: it said “you have to read this on Medium” and timed the complaints about medium’s UX to the exact points those UX problems manifested. Anybody else know what I’m talking about? I’m having trouble tracking it down.

                                                                              1. 2

                                                                                That sounds like a slick way of making a point!

                                                                                1. 2

                                                                                  Found it! https://lobste.rs/s/bn30zs/medium_is_poor_choice_for_blogging#c_udeux9

                                                                                  Turns out we both commented on it, haha

                                                                          1. 2

                                                                            The fact that my node_modules folder is > 100MB in size and my rendered assets are less than 80KB per page seems to render this rant moot.

                                                                            1. 7

                                                                              Your particular circumstance is not representative of the wider community and ecosystem.

                                                                              1. 2

                                                                                My point being that, with adequate tree shaking, the framework used shouldn’t matter, as the “compiled” assets should only be big enough to contain the functionality that is actually used. The problem I often see is websites loading MBs of assets only to use a tiny percentage of them overall, which, to be fair to the author of this piece, seems to largely be the point they are getting at.

                                                                                On reflection, calling it a rant feels dismissive; that was not my intent. It’s a well-written article on an important subject that more of us should pay attention to.

                                                                                1. 6

                                                                                  I think this depends a little on defaults and how much effort it is to achieve this. If 90% of the people are “using it wrong” as you say, I’d still say it’s the framework’s fault for having bad defaults or bad docs.

                                                                                  1. 5

                                                                                    The point is that dev fun and user performance are not opposed. Let’s improve tree shaking, so that devs can continue to use fun frameworks and users can get fast web performance.

                                                                                    1. 2

                                                                                      Although in this case people who use/like medium may make the point that it’s also optimized for author fun in writing and publishing, to the detriment of the readers ;)

                                                                              2. 5

                                                                                I don’t particularly enjoy having to pull down 100MB of files in order to generate 80kb, especially when that infrastructure results in loss of time, generation of waste, and damage to the climate.

                                                                                Additionally, it’s remarkably hard to secure such a situation.

                                                                                1. 1

                                                                                  I feel totally uncomfortable with it as well, it’s incredibly wasteful, but people don’t give it a second thought, brushing it off with ‘but hard drive sizes these days’. I just want to add a testing framework (though the same applies to many popular libraries/frameworks) to my project and I end up pulling in 300 dependencies weighing in the hundreds of megabytes (how???). There’s something very wrong with this ecosystem. I guess it is not so far removed from our attitudes towards disposable, single-use plastics and other forms of waste.

                                                                                2. 3

                                                                                  If that 80k does computationally expensive things, then the end result is still going to be, well, slow.

                                                                                  1. 2

                                                                                    From GitLab:

                                                                                    $ du -hs node_modules
                                                                                    689M
                                                                                    $ du -hs public/assets
                                                                                    109M
                                                                                    

                                                                                    Loading https://gitlab.com/gitlab-org/gitlab results in 1.51 MB of assets being downloaded. Without caches that becomes 4.44 MB.

                                                                                    Mind you, this includes images, but the point is the same: the output size will vary greatly per project, but there is definitely a trend of websites becoming heavier every year.

                                                                                  1. 2

                                                                                    That’s some potential for a rebirth of the demoscene. Portability of the abstract machine plus all opportunities for hand-coded optimization for size. I hope soon it will be possible to run such programs without JS wrapping, just load them with <script> and point them to a canvas element.

                                                                                    1. 1

                                                                                      Why is it desirable to get rid of JS wrapping here? I guess it’s a bit more tidy. Anything else?

                                                                                      1. 1

                                                                                        Why use many language when few will do?

                                                                                        Also, it would be nice to do all the relevant Web API things from WASM.

                                                                                        1. 1

                                                                                          Why would it be nice to do all the Web API stuff from WASM (apart from preference)? Since all Web APIs need to be available to JS as well, it would just increase the amount of binding code, not reduce it.

                                                                                          1. 1

                                                                                            Because of the runtime and complexity costs of using 3 systems (WASM, JS, Browser engine (Rust or CPP)).

                                                                                            If you can just create WASM blobs that do it all then there are fewer things to get wrong and you don’t have to context shift to another language.

                                                                                            1. 1

                                                                                              Eh, I meant that you can already do that; no browser modification is needed. My question was: what is the rationale for implementing WASM bindings in the browser, as opposed to using the existing JS bindings in the browser through the WASM/JS interface?

                                                                                              1. 2

                                                                                                I think you’re talking about the rust toolchain wasm_bindgen stuff?

                                                                                                I think that is bad because even though the developer doesn’t need to use JS, the machine still has to (and moving through multiple languages is more expensive) and someone has to write that bindgen infrastructure correctly for each language that compiles to WASM and wants Web API stuff.

                                                                                    1. 5

                                                                                      I am not sure why third party components (as opposed to “from scratch”) are blamed for page size. Isn’t the problem really a lack of tree shaking? Using a big library for a single function is absolutely fine; there is no technical reason that should increase page size.

                                                                                      1. 1

                                                                                        Not sure I understand your argument. Big libraries (presumably in JavaScript) require downloading said libraries, each of which can be hundreds or thousands of KB.

                                                                                        1. 5

                                                                                          JavaScript has a build system so that only required parts are included in the final download. That is called tree shaking.

                                                                                          In other words, the right reaction is “let’s fix bugs in tree shaking”, not “let’s all reinvent wheels from scratch”. The latter will be a disaster.

                                                                                          1. 8

                                                                                            Javascript has a build system

                                                                                            There are tools devs can use to shake these trees, but this doesn’t happen on its own.

                                                                                            1. 1

                                                                                              Thanks for the explanation, I didn’t know that was a thing.

                                                                                        1. 2

                                                                                          Note: the author is a long-time GCC developer working at Red Hat who also implemented MJIT, released with Ruby 2.6.

                                                                                          1. 3

                                                                                            I must point out that the reference genome is just that, and not a majority genome or anything like that. In other words, even if most people have brown eyes, there is no reason for the reference genome to have the brown-eyes genotype: it just happens to in this case. But like any genome, the reference genome also has rare variants, some of them probably even unique.

                                                                                            1. 1

                                                                                              If people’s genomes are going to be encoded as diffs against the reference, then it’d be most efficient if it were a majority genome, to minimize the average diff size.
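
                                                                                              A toy sketch of that encoding in Rust (illustrative only): store each genome as the positions where it differs from the reference, so the closer the reference is to a “typical” genome, the shorter the diffs.

                                                                                                // List the (position, base) pairs where `genome` differs from `reference`.
                                                                                                fn diff_against_reference(reference: &[u8], genome: &[u8]) -> Vec<(usize, u8)> {
                                                                                                    reference
                                                                                                        .iter()
                                                                                                        .zip(genome)
                                                                                                        .enumerate()
                                                                                                        .filter(|(_, (r, g))| r != g)
                                                                                                        .map(|(i, (_, g))| (i, *g))
                                                                                                        .collect()
                                                                                                }

                                                                                                fn main() {
                                                                                                    let reference = b"ACGTACGT";
                                                                                                    let genome = b"ACGAACGT";
                                                                                                    println!("{:?}", diff_against_reference(reference, genome)); // [(3, 65)]
                                                                                                }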

                                                                                              1. 1

                                                                                                Sure, but that’s not what Human Build 37 is. Maybe in the future.

                                                                                            1. 14

                                                                                              An important note, since it is buried at the end: the xor filter is immutable. You can’t add or delete items once it is constructed. (You can’t delete from a Bloom filter either, but you can add. A counting Bloom filter supports both, but with worse overhead.)

                                                                                              1. 2

                                                                                                In theory you can create a Bloom filter that allows removing entries (if you are sure that the entry was previously added): instead of storing bits as flags, you store the count of elements that set each flag. Unfortunately, that enormously increases the size of the filter.
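
                                                                                                A minimal sketch of that idea in Rust (illustrative, not from the post): each slot holds a small counter instead of a single bit, so removal works as long as the item really was inserted.

                                                                                                  use std::collections::hash_map::DefaultHasher;
                                                                                                  use std::hash::{Hash, Hasher};

                                                                                                  struct CountingBloom {
                                                                                                      counts: Vec<u8>, // a counter per slot instead of one bit
                                                                                                      k: u64,          // number of hash functions
                                                                                                  }

                                                                                                  impl CountingBloom {
                                                                                                      fn new(slots: usize, k: u64) -> Self {
                                                                                                          Self { counts: vec![0; slots], k }
                                                                                                      }

                                                                                                      fn index<T: Hash>(&self, item: &T, i: u64) -> usize {
                                                                                                          let mut h = DefaultHasher::new();
                                                                                                          (item, i).hash(&mut h); // derive the i-th hash function
                                                                                                          (h.finish() % self.counts.len() as u64) as usize
                                                                                                      }

                                                                                                      fn insert<T: Hash>(&mut self, item: &T) {
                                                                                                          for i in 0..self.k {
                                                                                                              let idx = self.index(item, i);
                                                                                                              self.counts[idx] = self.counts[idx].saturating_add(1);
                                                                                                          }
                                                                                                      }

                                                                                                      // Only safe if `item` was previously inserted, as noted above.
                                                                                                      fn remove<T: Hash>(&mut self, item: &T) {
                                                                                                          for i in 0..self.k {
                                                                                                              let idx = self.index(item, i);
                                                                                                              self.counts[idx] = self.counts[idx].saturating_sub(1);
                                                                                                          }
                                                                                                      }

                                                                                                      fn maybe_contains<T: Hash>(&self, item: &T) -> bool {
                                                                                                          (0..self.k).all(|i| self.counts[self.index(item, i)] > 0)
                                                                                                      }
                                                                                                  }

                                                                                                  fn main() {
                                                                                                      let mut f = CountingBloom::new(1024, 4);
                                                                                                      f.insert(&"hello");
                                                                                                      assert!(f.maybe_contains(&"hello"));
                                                                                                      f.remove(&"hello");
                                                                                                      println!("after remove: {}", f.maybe_contains(&"hello")); // false
                                                                                                  }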

                                                                                                1. 1

                                                                                                  If you want a filter which supports both adding and removing with log(n) space, take a look at this new data structure: Utreexo https://eprint.iacr.org/2019/611.pdf