1. 40

  2. 19

    Thanks for sharing this! I’m the author of Gleam (and this post). Very happy to answer any questions :)

    1. 6

      Thank you for your work on Gleam! It looks really promising, and it’s been great seeing it progress from the sideline.

      Is it easy to integrate it (e.g. writing one module in Gleam) in an existing Erlang + rebar3 project? (Is this documented somewhere?)

      1. 7

        Yes for sure. Currently we don’t have a dedicated build tool so all Gleam projects are rebar3 projects with a project plugin (https://github.com/gleam-lang/rebar_gleam), so compilation of Erlang works as per usual.

        There’s also a mix plugin for Elixir projects (https://github.com/gleam-lang/mix_gleam).

        The tooling is a bit rough-and-ready at the moment, I’m hoping to improve it in the near future.
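
        In practice that looks something like the sketch below: a rebar.config that pulls in the plugin and hooks Gleam compilation into the build. The plugin name and hook keys here are assumptions for illustration; the rebar_gleam README is the authority on the exact configuration.

        ```erlang
        %% rebar.config (sketch) -- keys are illustrative, check the
        %% rebar_gleam README for the exact plugin and hook names.
        {plugins, [
            {gleam, {git, "https://github.com/gleam-lang/rebar_gleam.git",
                     {branch, "master"}}}
        ]}.

        %% Run the Gleam compiler before the Erlang compile step, so the
        %% generated .erl files are picked up by rebar3 as usual.
        {provider_hooks, [
            {pre, [{compile, {gleam, compile}}]}
        ]}.
        ```

        With something like that in place, Gleam modules sit alongside Erlang ones and the rest of the rebar3 workflow is unchanged.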

    2. 4

      Looks really good. I like that the syntax is basically rust/ML, on the Beam runtime. I wonder if there’ll be a way to write reasonably self-contained programs with it (a bit like deno does for javascript), especially since the compiler is in rust?

      1. 5

        You’ll still need the BEAM runtime. It doesn’t matter what language the compiler is written in; the output is what matters.

        1. 2

          Maybe the beam runtime can be linked against a rust-compiled binary? Deno uses bindings to V8 to do the equivalent with javascript.

          1. 2

            I couldn’t say what’s possible there, I’m not a C programmer and I’ve not looked into this. It would be a very cool feature though! Maybe one day.

            1. 2

              I have been meaning to see if Gleam will run on https://github.com/lumen/lumen, a BEAM VM written in Rust that targets Wasm.

              1. 2

                I intend to investigate this in the future. I’ve had some quick chats with the Lumen devs and it sounds like it would work nicely.

                One of the main things I’m interested in is providing additional type information to Lumen so that they can enable type-based optimisations. This would make Gleam on Lumen outperform Erlang on Lumen, which is an exciting idea!

                1. 1

                  What are some mechanisms to deliver type information to the VM? Could it be generalized so that other languages on the BEAM could also benefit?

                  1. 1

                    At present there are no type-based optimisations, AFAIK, on the BEAM or Lumen, but Lumen’s dev team said they may be able to add an API for this at a later date. Gleam and Lumen are both written in Rust, so I could imagine Gleam having a backend that generated type-annotated Lumen IR.

                    1. 1

                      Right, but I was thinking of how to encode the type information in-band so that other typed languages on the BEAM could also take advantage of any possible speedups.

                      Like if a module had a Map(Fn => TypeTuple{}) attached: if it exists, the alternative BEAM implementation (Lumen in this case) could use it. It might even be a way to provide entirely alternate implementations for functions — it could be a Map(Fn => WasmBlob) and implement BIFs.

            2. 1

              There is no fully working BEAM runtime implemented in Rust right now. AFAIK Lumen is still in progress and other implementations didn’t lift off.

              EDIT: And releases (the deployment units in Erlang) often contain the BEAM implementation, so if you build a release for your target machine and the external libraries (libc and OpenSSL) have compatible versions, then it should work as a plain tarball.

              1. 2

                I was more thinking of binding against an existing BEAM runtime, like deno does with V8 :-)

                1. 3

                  The BEAM was not designed to be embeddable like JS engines.

                  If you just want self-contained apps, “releases” are that.
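
                  For instance, a minimal relx section in rebar.config is enough to get a tarball that bundles the Erlang runtime itself. The release name and app list here are made up for illustration:

                  ```erlang
                  %% rebar.config (sketch) -- release name and apps are illustrative.
                  {relx, [
                      {release, {myapp, "0.1.0"}, [myapp, sasl]},
                      {include_erts, true},  %% ship the Erlang runtime inside the release
                      {dev_mode, false}      %% copy files instead of symlinking them
                  ]}.
                  ```

                  Then `rebar3 release` builds the release under _build/, and `rebar3 tar` produces the tarball you copy to the target machine.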

                  1. 1

                    They’re not really self contained in the way that a static binary is, they depend on libraries being installed on the host.

          2. 2

            What’s the appeal of shipping a completely static binary? I don’t think it’s hard to just install Erlang or include it in your package.

            1. 5

              A lot of people say that their favorite feature of Go is that it gives you a single static binary, which is easy to copy around. I think that’s a good argument, particularly after seeing what node_modules and pip directories look like.

              1. 3

                Hmmmm, maybe I’m from a different world here, where “foopkg add libfoo”, or “tar cvf foo.tar foo.js node_modules/” after “npm install”, isn’t a hard thing to do, and shipping libraries/vendoring them if you REALLY hate that is common practice on Windows anyway.

                1. 6

                  If you redistribute your solution (in binary form) into various, separate customer (or internal enterprise) environments running different versions or patches of OSes, would

                  > foopkg add libfoo or tar cvf foo.tar foo.js node_modules/npm install isn’t a hard thing to do, and shipping libraries/vendoring them

                  be as convenient as a single binary distribution?

                  1. 2

                    That approach is possible, but you’d also need to ensure that the right C libraries are installed on the target machine too. It’s all doable but it requires additional work and a degree of technical knowledge compared to “copy this to your computer and double click on it”.

                  2. 3

                    Isn’t this what containers are for these days? Can’t you just ship a Docker image or Flatpak (or the Mac/Windows equivalent) and be done with it?

                    1. 3

                      They do a good job for many use cases but there are limitations. They require more technical knowledge to install and run than a file you can double click on, and there are performance + access implications of running software inside a container.

                    2. 1

                      > after seeing what node_modules […] directories look like

                      On a related tangent, we’re starting to use webpack to bundle files for NodeJS apps at work, which so far seems to be worth it for us. (No minification though, that would be awful.)

                      I’ve seen a couple hundred MB of node_modules result in a 10–20 MB unminified bundle file. Deployment goes faster with one moderately big file than with hundreds of thousands of tiny files. In the process we dropped the need for production to run npm install --production. Also, Azure App Service in particular is really slow at reading lots of small files: bundling reduced start-up time for some services from ~30s to ~5s.

                      So I think bundling things into a smaller number of files makes deployments nicer even when that smaller number isn’t as small as 1.

                      In both the before-bundling and after-bundling states we have been doing installations using a single zip file: what changed is mainly the number of files in it.
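
                      A setup like the one described can be sketched as a webpack config along these lines (the entry name and paths are made up; the essential points are target: 'node' and minification turned off):

                      ```javascript
                      // webpack.config.js (sketch) -- bundle a Node app into one file.
                      const path = require('path');

                      module.exports = {
                        target: 'node',            // don't shim Node built-ins like fs/path
                        mode: 'production',
                        entry: './src/server.js',  // hypothetical entry point
                        output: {
                          path: path.resolve(__dirname, 'dist'),
                          filename: 'server.bundle.js',
                        },
                        optimization: {
                          minimize: false,         // keep the bundle readable, as noted above
                        },
                      };
                      ```

                      Native add-ons and anything loaded with a dynamic require still need care (webpack’s externals option helps there), but for plain JavaScript dependencies this collapses node_modules into the one bundle file.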