1. 14

Some interesting bits from the README

Rome is an experimental JavaScript toolchain. It includes a compiler, linter, formatter, bundler, testing framework and more. It aims to be a comprehensive tool for anything related to the processing of JavaScript source code.

Rome is not a collection of existing tools. All components are custom and use no third-party dependencies.

No external dependencies. This allows us to develop faster and provide a more cohesive experience by integrating internal libraries more tightly and sharing concepts and abstractions. There always exist opportunities to have a better experience by having something purpose-built.

  2. 5

    This seems like a very good direction to explore. The tooling complexity and rate of change are among my main problems with the JavaScript ecosystem. People who get used to it don’t realize just how poor the experience is compared to other toolchains.

    1. 3

      So does FB go with React, Reason, or Rome? I’m not sure how they differ or why FB has 3 different frameworks.

      1. 6

        These are all vastly different projects. You can write React with Reason (ReasonReact). Rome is a suite of tools; it is neither a library for writing web views nor an OCaml-like language that compiles to JS.

        1. 4

          It isn’t currently used at Facebook, according to the author.

          Cf: https://twitter.com/sebmck/status/1108416062414950400?s=20

          1. 3

            Well React and Reason were made by the same guy, who now focuses on integrating the two.

          2. 3

            I’ve loosely followed the author’s tweets about Rome and I’m excited about its ergonomics.

            1. 2

              “It includes a compiler” — what does it compile to? i.e. what backend(s) does it have? I don’t seem to see it mentioned in the readme. Or is it just to an internal AST, for linting etc. purposes? I.e. not a translator?

              1. 2

                I asked this on the orange site and got downvoted but not answered ¯\_(ツ)_/¯

                1. 2

                  Ok, found something akin to an answer therein:

                  It looks like it’s compiling TypeScript/Flow and newer JS features into more standard JS. Part transpiling.
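
                  A rough sketch of what that kind of transpilation looks like (the “compiled” version below is hand-written for illustration; Rome’s actual output will differ):

                  ```javascript
                  // Modern syntax (optional chaining + nullish coalescing), as the
                  // developer writes it:
                  const modern = (user) => user?.profile?.name ?? "anonymous";

                  // Roughly the kind of equivalent a transpiler emits for engines
                  // that lack those features (illustrative, not real tool output):
                  function transpiled(user) {
                    var _profile;
                    return user !== null && user !== undefined &&
                      (_profile = user.profile) !== null && _profile !== undefined &&
                      _profile.name !== null && _profile.name !== undefined
                      ? _profile.name
                      : "anonymous";
                  }

                  console.log(modern({ profile: { name: "Ada" } })); // "Ada"
                  console.log(transpiled(null));                     // "anonymous"
                  ```

                  Both versions behave identically; the transpiled one just avoids syntax that older engines can’t parse.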

                  1. 1

                    Aha, interesting. I wonder if there could be any support for non-JS-based languages there – I might have to poke around and see what their toolchain & APIs look like.

              2. 2

                Rome is not a collection of existing tools. All components are custom and use no third-party dependencies

                Can someone explain why someone would go this route? There must be millions (maybe billions) of hours of coding in the frameworks that this software tries to replace.

                Although this is yet another tool for JavaScript, it could become very good, especially for beginners and for people like me who gave up on catching up with the newest tech in JavaScript. I’m curious.

                1. 6

                  There must be millions (maybe billions) of hours of coding in the frameworks, that this software tries to replace.

                  20 years into programming, some of the best advice I can give is this:

                  When a problem has taken tens of thousands of hours to solve (or more), it is either Hard Research or it is being approached in totally the wrong way.

                  Munging files together is not Hard Research. There is - almost certainly - an alternative formulation of the problem-space to be found which is drastically simpler.

                  TypeScript, on the other hand, is in the Hard Research bucket. Creating a type system that can gracefully interoperate with common idioms from dynamic languages is not straightforward at all.

                  1. 2

                    When a problem has taken tens of thousands of hours to solve (or more), it is either Hard Research or it is being approached in totally the wrong way.

                    I may agree with you when you say “approached in totally the wrong way”, but not in the way you might expect.

                    Here are some core explanations of why JavaScript tooling is a wee bit complex:

                    1. The Web ecosystem moves incredibly fast, due to a combination of innovation, consumer expectations, and commercial pressure.

                    2. Web browsers are quite complicated (they render, they manage a JS virtual machine, they do networking, and they handle sensitive information, all on various devices with various capabilities). Like the Web itself, browsers are expected to “just work” in our world of shifting standards and expectations. I would expect that over the past 10 years they have racked up more cumulative usage hours than probably any other software, except operating systems. [1]

                    3. JavaScript tooling has grown out of (more like busted out of) a quickly-assembled scripting language. This organic growth, in my opinion, means the tooling is, almost by definition, often overextending itself, reaching into new areas it was not designed for.

                    To share my take: we have a Bizarro-World Wide Web. It is complex and a mess. The associated tooling is nothing short of a house of cards that somehow gets redesigned and reinvented very frequently. Somehow it has survived, without any one Superman to credit. (I personally think the engineering work on the JS VM is somewhat superhuman, though. We can all thank those people who struggle against browser quirks and share their work as open source. And, more cynically, we can thank the massive injection of advertiser money that has ensured that all of this will find a way to survive.)

                    So, to bring it back to why I agree, in a sense, but for a different reason… If we knew then what we know now, all of this could be made much better: simpler, more secure, and more sane.

                    Still, I wouldn’t say it “has been approached in totally the wrong way” because that would be judging people of the past based on the knowledge of today. The evolution of the WWW has changed the very way we define the problem and frame what is possible.

                    [1] I’m calling attention to my lack of a citation. Apologies. If it is any help, as I wrote this I thought about: consumer and business software, utility systems, and telephony systems. I may be wrong.

                    1. 2

                      Still, I wouldn’t say it “has been approached in totally the wrong way” because that would be judging people of the past based on the knowledge of today.

                      Whether a given approach is right or not is usually not apparent until substantial work has been done.

                      I think we need to get away - far away - from the idea that “current approaches have serious limitations and work to find better ones is ongoing” is in any way a judgement of the people who invented the current approaches. It’s usually those same people working on better ones.

                  2. 2

                    Well, it’s answered in the README:

                    This allows us to develop faster and provide a more cohesive experience by integrating internal libraries more tightly and sharing concepts and abstractions. There always exist opportunities to have a better experience by having something purpose-built