1. 97
    1. 38

      It’s written in Zig!

    2. 18

      Wow, using JavaScriptCore instead of v8.

      1. 52

        This makes me really happy. I am a huge fan of the design of JSC. It has a single bytecode that is used at all layers (v8, last time I looked, re-parsed) and a really nice tiered JIT:

        1. The interpreter is written in a portable macro assembly language and so fits in the instruction cache on most CPUs. Each bytecode is executed by dispatching to the relevant block of instructions. The fact that it’s implemented in assembly means that JSC has full control over the stack layout, which makes on-stack replacement (deoptimisation) easy and means that it can quite easily transfer control to the next tier. In particular, this makes it easy to JIT hot loops within cold functions. For example, if you have a loop at the top level of a JS file, the loop will be executed once but the loop body will be executed many times and so you want to be able to replace it with a JIT’d version cheaply, without needing to JIT the whole top level of the file.
        2. The baseline JIT uses the same instruction sequences as the interpreter. For short sequences it inlines them (just copying the block of instructions into the target); for larger bytecodes it inserts direct jumps. This gives a fairly large speedup (usually - in a few corner cases it gets slower because the interpreter fits in i-cache but the JIT’d code doesn’t). Both the baseline JIT and the interpreter record profiling information (especially dynamic type info), which is fed into the next tier.
        3. The data-flow-graph (DFG) JIT uses a continuation-passing-style representation and generates machine code that’s specialised based on the observed types and across functions, with side exits that allow transfer back to the interpreter or baseline JIT when you hit a new dynamic type or control-flow edge that wasn’t optimised for. This tier records a tiny bit of profiling information that lets very hot paths transition to the final tier.
        4. The ‘bare-bones backend’ JIT. This was originally introduced as FTL (Fourth Tier LLVM) and generated LLVM IR from the DFG representation. After some profiling, the authors found that they weren’t getting much speedup from LLVM IR optimisations, but they were getting a fairly big win from the fact that LLVM had a much better register allocator than their speed-optimised one. They replaced LLVM with a simple back end that had a decent register allocator and did a couple of other (fast) optimisations, which dramatically reduced the time taken in this tier and meant that code could be moved to it more eagerly.

        It’s a beautiful architecture and I’d love for it to get the mindshare that V8 has managed to acquire.
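
        To make point 1 concrete, here’s a hypothetical sketch (my own, not from JSC’s docs) of the top-level-loop case: the surrounding script runs exactly once, but the loop body is hot, so you want on-stack replacement to swap just the loop onto a JIT’d tier without compiling the whole top level.

        ```ts
        // Hypothetical hot loop at the top level of a module: the top level
        // executes once, but the body runs millions of times. A tiered VM
        // wants to JIT just this loop (entering the JIT'd code mid-loop via
        // on-stack replacement) rather than the entire top-level function.
        let checksum = 0;
        for (let i = 0; i < 10_000_000; i++) {
          checksum = (checksum + i * 31) | 0; // hot path
        }
        console.log(checksum);
        ```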

        1. 3

          I will say, however, that handling the 32-bit version of the 64-bit JSValue encoding was absolute misery to write in the macro assembler - I mean, it’s a higher level than actual assembly, but only slightly.

    3. 21

      This is really cool and it’s amazing they have this level of performance gain atop Node that already has so many eyes on it.

      It’s a shame proprietary Discord is their only communication option, and proprietary GitHub their only Git mirror. They’re even advertising the Discord in the CLI. Also, the shorthand syntax bun create github-user/repo-name destination favors GitHub over other Git forges rather than staying forge-neutral.

      1. 15

        For the latter issue, the best path I’ve seen is how nixpkgs supports github:u/repo, gitlab:u/repo, sourcehut:~u/repo, etc. to shorthand the popular options without favoring any one of them, while still being flexible enough to keep extending the shorthands.
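
        To sketch what forge-neutral shorthands could look like in code (hypothetical - this is not how Nix or bun actually implement it, and the URL layouts are my assumptions):

        ```ts
        // Hypothetical resolver: the prefix names a forge instead of
        // hard-coding GitHub, and the table can keep growing.
        const forges: Record<string, (path: string) => string> = {
          github: (p) => `https://github.com/${p}.git`,
          gitlab: (p) => `https://gitlab.com/${p}.git`,
          sourcehut: (p) => `https://git.sr.ht/${p}`,
        };

        function resolveShorthand(ref: string): string {
          const [prefix, path] = ref.split(":", 2);
          const toUrl = forges[prefix];
          if (!toUrl || !path) throw new Error(`unknown shorthand: ${ref}`);
          return toUrl(path);
        }

        // resolveShorthand("sourcehut:~u/repo") -> "https://git.sr.ht/~u/repo"
        ```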

      2. 5

        I’d agree, would love a first class Libera IRC channel too.

        1. 1

          I don’t even think it needs to go that far; just mentioning an unofficial room in the docs is often endorsement enough that some users will go join.

    4. 6

      i thought the “bun” stood for a “bunny”, im disappointed :(

      1. 3

        This is by far the worst part of this project.

    5. 5

      I wonder if there’s a comparison with Rome which seems pretty new right now.

      1. 5

        Some overlap but not very similar, I think.

        Bun is a new JS runtime and an all-in-one CLI for package management, compiling, testing and running your code.

        Rome is an all-in-one CLI for formatting and linting JS and TS that intends to also support compiling and testing eventually.

    6. 5

      I really like this project, but I really don’t like the curl | sh pattern of installing things. We should make an effort to make packaging a more universal and easy process for projects like this.

      I even went to do my due diligence and read the shell script, but it was in a minified format that made it difficult to look at. I know I can trivially load it into my editor and replace the semicolons with newlines and read it that way, but I’d rather have an install that works with my package manager. I understand that code installed by package managers isn’t foolproof and has its own issues, but there has to be something better than a curl | sh pattern for it.

      I guess this gets to the bigger problem of properly packaging things for multiple systems without needing to create the packaging for each system by hand. I recently attempted to package an application for Mac and Windows (leaving the Linux users to figure out how to run a binary for themselves) and found it very difficult, requiring more knowledge than I think should be necessary - Windows in particular. Is anyone aware of a system where I can just drop my Windows, Mac, and Linux binaries (for each architecture each supports) in a folder and have the packages generated automatically?

      1. 5

        I’d rather have a shell script that I can curl > install.sh and then less, than add a new package repository to my system-wide settings. I don’t think a system package is any better than curl | sh over HTTP from a security standpoint. A hobby or poorly maintained system package repo is much more complex than a simple 14-line shell script.

        1. 4

          You can list and uninstall system packages.

          1. 4

            Until they’re compromised by malware, and it rewrites the list.

          2. 4

            True, but a system package can also add a zillion dependencies that somehow put the system into a weird state. I learned my lesson with third party packaging on Debian and Redhat already - for something simple like Bun, much better to pop it into ~/prefix/bun than somehow end up with a conflict about what version of OpenSSL should be installed system-wide.

        2. 2

          Problem is, you’re in the one-fifth of people using the program who will take a cursory look at the script, as opposed to the other four-fifths who will simply run curl | sh and not notice their local library has a fake “Free WiFi” MITM installed by some skid.

          1. 9

            How is a random bash script any different from a random .deb that contains a bash script?

            1. 1

              Are .debs not signed? Or is this a .deb from a random website vs the main Debian repos?

              1. 4

                .debs can be signed, but are not in general, so for the most part they’re trusted to exactly the same extent as the repository is. That means that curl | sh over HTTPS has basically the exact same threat model as installing a .deb does, and it always makes me wonder if people who lament the security failings of the former process are happily making use of the latter one. The same doesn’t hold for RPMs, though.

                1. 1

                  In what sense are RPMs different? (It has been a very long time since I dealt with anything other than initial Linux setup - my wife is the one installing terrible bioinformatics software and complaining about the code quality there :))

                  1. 5

                    RPMs are much more likely to be signed than DEBs (where only the repo is usually signed).

                    But both points are moot anyways. If I were to ship malware to you via curl | bash, I might as well do it via a malicious .DEB or .RPM which I have signed with my private key and told you to add the corresponding public key to your configuration.

                    Only, the curl’ed shell script is easily audited, whereas the same isn’t true for a .DEB or .RPM package. Yes, they can be extracted, but while I know the tools needed to inspect a file downloaded by curl, I would have to look up the commands to unpack a .DEB, and I would also need an understanding of the files inside a .DEB to know what gets executed at install time.

          2. 3

            I think much less than 1/5th of people will examine a script before installing it. That also goes for language dependencies, like NPM, PyPI, Bundler, Cargo, Go modules, etc.

      2. 4

        Is your concern about the security implications of running untrusted code? If so, wouldn’t you have the same concern when you actually run the installed program as well?

        1. 2

          On macOS, binaries are by default required to be code signed, which means that the default behaviour requires some real identity from the authors (they have to pay Apple for the signing cert), and - especially if the authors have historically signed the package - if a fake update comes out that isn’t signed, in principle you could notice. The signing requirement can be bypassed, but that again requires extra steps that one would hope protect lay folk.

          Interestingly (for hilarious reasons) you can codesign a shell script on macOS, but the signature isn’t checked - presumably because the code running is the bash/zsh/whev shell which is signed.

          1. 2

            So the solution is to centralize software distribution and make it impossible for people to independently publish software?

            1. 1

              No, though that does come with very large security benefits.

              But a lot of malware relies on users simply double clicking something, and that path is broken by the default - though bypassable - Mac setup.

      3. 2

        Packaging a Mac app has to be done locally on your own Mac because it involves code-signing using your developer credentials.

        If it’s a developer/geek oriented app you might get away without signing it, since your users will probably have enabled running unsigned apps, but here in a thread complaining about insecure installation that doesn’t seem like a good suggestion!

        1. 2

          I really hope they don’t disable code signing requirements, and I hate with a passion these sites that say “just disable this core malware protection to run our app, making you vulnerable to binaries from other sites, not just ours”.

          You can run unsigned apps with the default signing rules: it requires that you know to right-click (context menu) and choose Open, at which point it asks if you’re sure you want to run the app. It really is that simple, and it means a site can’t ship a binary disguised with an image or zip-file icon that silently installs malware when a user “opens” it.

          1. 1

            You can run unsigned apps with the default signing rules:

            I think that’s changed recently…as far as I can tell, recent macOS now says something like “this app is damaged and can’t be run”, with no option to run anyways if it isn’t signed (and further shows a warning if it’s only signed, but not notarized; quite a pain)

            1. 1

              I believe an incorrect signature isn’t bypassable (though obviously you could simply remove the signature if you were malicious?)

    7. 6

      Even my toy OS was 1000 times faster than Linux ;)

      1. 1

        I made an OS with a faster run to completion time than linux :D (though I guess technically so did MS[1])

        [1] https://www.cnet.com/culture/windows-may-crash-after-49-7-days/ I wish I could find some source like the old new thing, but this is the best google gave :-/

    8. 3

      I feel like the root problem in the JS/NPM ecosystem is that we rely on thousands of tiny packages, not enough on web standards, nor on a beefy set of stock platform frameworks. This project doesn’t really do anything about that, it just makes it easier to deal with thousands of packages.

      However, this is still great. Installing thousands of tiny packages faster is a solid practical improvement and would make my workspaces easier to live in. Cool project! Way to pick attainable goals and make a difference.

      1. 8

        not enough on web standards

        Bun implements important web standards that node lacks (fetch, esm, websockets).
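
        For example, a plain ES module like this (just a sketch - it assumes a runtime that exposes the standard fetch and WebSocket globals, and the URLs are placeholders) runs without any runtime-specific imports or polyfill packages:

        ```ts
        // Standards-only code: ESM, top-level await, fetch, WebSocket.
        const res = await fetch("https://example.com/api/health");
        console.log(res.status, await res.text());

        const ws = new WebSocket("wss://example.com/socket");
        ws.addEventListener("open", () => ws.send("ping"));
        ws.addEventListener("message", (ev) => console.log(ev.data));
        ```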

        1. 7

          And Node was scared by Deno into implementing standards it hadn’t implemented before (atob(!) and fetch are two that I know about).

      2. 6

        Speaking as one who’s only recently started with Node and “modern” JS/TS: by far the biggest pain point is dealing with modules and packaging. There are, what, like four different JS module systems, and they’re different between server/dev-side (Node) and client-side. TypeScript source code uses one syntax (ES) but compiles into different ones, and there’s the huge-to-me footgun that the attractive option of compiling to ES modules for the browser doesn’t work because TSC emits module paths that won’t work at runtime. Plus there’s the fun of module search paths at build time vs runtime…
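
        To make that footgun concrete, a minimal sketch (my own reproduction, assuming a sibling math.ts that exports a double function and a tsconfig with "module": "esnext"):

        ```ts
        // main.ts - compiles cleanly, but tsc does not rewrite import
        // specifiers, and browsers resolve ES module paths literally,
        // so the extensionless form 404s at runtime.
        import { double } from "./math";       // breaks in the browser
        // import { double } from "./math.js"; // works: tsc maps .js back to math.ts

        console.log(double(21));
        ```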

        I’ve lost whole days to this stuff. If Bun can combine some of these things, like transpiling and packaging, into one tool, it might make the situation more comprehensible for newbies.

        1. 5

          Yes, a consequence of Node gaining popularity before ES modules were standardized. I think Deno has the right idea, making a clean break and forcing ESM. Node is unfortunately probably stuck with CommonJS unless they force a change.