1. 1

    Looks cool! Is the source code to this project available?

    1. 2

      Yes, both the library on which this is built (https://github.com/felixpalmer/procedural-gl-js/) and this specific implementation (https://github.com/felixpalmer/volcanoes-of-japan) are open source

      1. 1

        you can usually find them if you transform the github.io domain into github.com:

        <user>.github.io/<repo> becomes:

        github.com/<user>/<repo>

        https://github.com/felixpalmer/volcanoes-of-japan/

      1. 1

        For the last 7 years I’ve been working on a WebGL-powered mapping library, which I’ve been providing on a commercial basis via www.procedural.eu. It’s been used by millions of users across different products.

        I recently decided to change the approach and instead provide a new library on an open-source basis, along with a supporting elevation API (www.nasadem.xyz).

        Procedural GL JS is a complete reworking of the original library, with the following features:

        * Stream in standard raster imagery tiles. Supports map tiles from a variety of providers
        * Batteries-included elevation data. Global 3D data coverage courtesy of nasadem.xyz
        * Powerful overlay capabilities. Draw crisp markers and lines
        * Well-thought-out API: complex applications can be built without needing to deal with 3D concepts
        * Great UX and intuitive controls, mouse-based on desktop & touch-based on mobile
        * Tiny file size means the library is parsed fast. Package size is smaller than THREE.js thanks to code stripping
        * Novel GPU powered level-of-detail system. Off-loading to the GPU frees up the main JavaScript UI thread
        

        I’m planning a future series of blog posts about how it works, but for today it’s just the launch. Happy to answer any questions!

        1. 1

          Congrats! The demo runs great and the API does seem simple indeed. With a quick scan I couldn’t find the code of the LOD system. I can see you use an array texture to upload the map images in chunks but I presume that’s not it?

          1. 1

            Correct, the array texture is just an atlas to which I write chunks, to avoid having to bind many different textures on each frame. I had a version working with an actual WebGL2 array texture, but switched to an emulation so as to support WebGL1.

            The code is mostly in https://github.com/felixpalmer/procedural-gl-js/blob/main/src/terrain.js; however, as it is a process that takes place on both the CPU and GPU, the code is naturally split across a number of locations. At a high level, every few frames the terrain is rendered to a separate buffer, outputting the texture error and tile ids to a render target. This is then read and processed by the JavaScript code, which detects tiles that are at the wrong resolution and adjusts them (either splitting them up or combining them with neighbors).

            1. 1

              Thanks for the explanation. That kind of feedback between CPU and GPU must be difficult to do reliably on WebGL :) Performance-wise, I mean. But it looks like you’re doing some manual pipelining there to keep the transfer bandwidth low.

              1. 1

                Yes, it took some time to get working. And still I’m not completely happy, as it sometimes stalls the pipeline, due to the fact that the readPixels operation can’t be done asynchronously. Thus if, prior to the read, the command buffer is longer than usual (e.g. a texture upload has just slowed it down), it leads to a long frame.

        1. 1

          One of the more useful comparisons I’ve read.

          I’ve been using Chef to deploy a reasonably complicated system recently and have found it to be a bit of a struggle. While Chef describes itself as an orchestration tool, I’ve found that this isn’t really what it is good at.

          It seems good at solving the “I have a software stack, and I want to be able to install it without caring about the underlying OS” problem, but not so good at “I have a number of different virtual machines, each performing a different task, and I want to get them talking to each other, scale up and down, and debug when things go wrong”, which in my mind is the difficult bit of orchestration.

          It appears that Chef started as a tool for bootstrapping arbitrary boxes, hence all the abstraction around packages/cron/templating etc., and then tacked on the cloud functionality when the cloud started taking off.

          What we’ve found is that we had to write all the “difficult” code ourselves, while only getting a minimal benefit out of the complicated abstractions Chef provides. Which brings me to my question: do people have any recommendations for a cloud orchestration system that:

          • Integrates well with cloud providers
          • Has a good framework for scaling systems up and down and reclustering nodes (e.g. I have a 4-node Cassandra cluster and want to double it in size)
          • Provides the useful file templating and package setup functions that Chef has to get initial installs working, but without the abstraction bloat of roles/environments/recipes/attributes/databags etc…
          • Provides tools for monitoring/debugging this system when things go wrong
          1. 2

            Great article (and illustrations); however, I’m still not sure how adding this level of abstraction really gives me an advantage when writing applications. Does anyone know of a good resource for explaining this?

            Comparing say Python to the given example:

            with open(raw_input()) as f:
                print f.read()
            

            vs

            getLine >>= readFile >>= putStrLn
            

            Sure, Haskell is more succinct, but Python doesn’t feel far behind. Is the benefit of Functors et al. that one can elegantly include error handling by using the Maybe construct?
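
            To make that last question concrete, here’s roughly what I imagine the Maybe version looking like. This is just my own sketch; readFileMaybe is a helper I’ve made up for the example, so I may well be holding it wrong:

            import System.Directory (doesFileExist)

            -- made-up helper: return Nothing instead of throwing for a missing file
            readFileMaybe :: FilePath -> IO (Maybe String)
            readFileMaybe path = do
              exists <- doesFileExist path
              if exists then fmap Just (readFile path) else return Nothing

            printFileMaybe :: IO ()
            printFileMaybe =
              getLine >>= readFileMaybe >>= maybe (putStrLn "no such file") putStrLn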

            1. 4

              @moses did a good job explaining some of the benefits. These “abstractions” are really generalisations; they work over lots of things.

              So, benefits?

              First of all, your Haskell example is pure and referentially transparent. The compiler can now make those calls non-blocking because it knows about all the effects (and GHC does). Awesome stuff, but there are plenty of resources out there covering the other benefits of purity.

              But anyway:

              printFile :: IO ()
              printFile = getLine >>= readFile >>= putStrLn
              

              We’re working with the Bind/Monad class when we say >>=. In this case we’re working with IO, but what’s interesting is that we can stack Monads around other Monads. Let’s say we want IO calls that, instead of throwing exceptions, return error values:

              -- safeReadFile reports its errors as Strings (yeah, urgh)
              safeReadFile :: FilePath -> ErrorT String IO String
              

              We’re going to assume getLine and putStrLn can never error (of course, a bad assumption) just for illustration:

              safePrintFile :: ErrorT String IO ()
              safePrintFile = lift getLine >>= safeReadFile >>= lift . putStrLn
              

              We’re “lifting” our plain IO actions into the error-aware stack as actions that always succeed. So, put together, safePrintFile returns the first error that is found, otherwise a successful value.

              We’re very easily mixing different “effects” together by stacking them, while staying pure and referentially transparent. Isn’t that cool!? This would be a lot more tricky in Python.
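
              For completeness, here’s one way the whole thing could look as a runnable program. The body of safeReadFile is my own sketch (using Control.Exception.try to turn the IOException into a String error); it’s just one way to fill it in:

              import Control.Exception (IOException, try)
              import Control.Monad.Error (ErrorT, runErrorT, throwError)
              import Control.Monad.Trans (lift)

              -- sketch: catch the IOException from readFile and re-raise it as a
              -- String error in the ErrorT layer
              safeReadFile :: FilePath -> ErrorT String IO String
              safeReadFile path = do
                result <- lift (try (readFile path))
                either (throwError . show) return (result :: Either IOException String)

              safePrintFile :: ErrorT String IO ()
              safePrintFile = lift getLine >>= safeReadFile >>= lift . putStrLn

              -- runErrorT unwraps the stack back to a plain IO (Either String ())
              main :: IO ()
              main = runErrorT safePrintFile >>= either (putStrLn . ("failed: " ++)) return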

              Just one of the benefits of recognising generalisations and using them.

              1. 2

                I’m no functional programming maven, so I’m sure @puffnfresh or one of the other people on Lobsters who knows functional programming well could give you a better answer, but as a Python convert, what I found was that it was really nice to have a generalized version of some of my favorite things in Python. For example, in Python we use list comprehensions, dict comprehensions, and with as a poor man’s do block (or, in Scala, a for comprehension). Python has these built in, but with the monad abstraction, you can suddenly do for comprehensions over many more datatypes. Not just Maybes, or Lists, or Dicts: we can make up our own monads. Just built into Scala, some monads are Either, Future, Traversable, and Option. Some monads that I’ve seen people come up with outside of the Scala standard library include Throwable and ManagedResources. This means that instead of having to rely on the compiler to give you sugar like with, people are empowered to create it themselves.
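
                As a rough Haskell illustration (since that’s the language in the article; the names here are mine, not from any real library): the same do syntax drives Maybe and the list monad alike, and a type you define yourself gets the same sugar once you write its Monad instance.

                safeDiv :: Int -> Int -> Maybe Int
                safeDiv _ 0 = Nothing
                safeDiv x y = Just (x `div` y)

                -- Maybe monad: the block short-circuits to Nothing on the first failure
                quotient :: Maybe Int
                quotient = do
                  a <- safeDiv 10 2
                  b <- safeDiv 100 a
                  return (a + b)

                -- list monad: the exact same syntax now means "all combinations"
                pairs :: [(Int, Char)]
                pairs = do
                  n <- [1, 2, 3]
                  c <- "ab"
                  return (n, c)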

                More specifically, the reason people like things like the IO monad is that it banishes IO, which is inherently impure, as far away from the rest of the program as possible. You can think of the IO monad as gloves we use to handle toxic waste. We prefer to keep things referentially transparent because state makes programs hard to reason about.
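
                A tiny illustration of that separation (again my own sketch; nothing special about these names): the IO in the type is what tells you where the gloves are on.

                -- pure: same input, same output, every time; trivial to reason about
                wordCount :: String -> Int
                wordCount = length . words

                -- the impure edge: the IO in the type marks where effects can happen
                wordCountOfFile :: FilePath -> IO Int
                wordCountOfFile path = fmap wordCount (readFile path)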

                The other nice thing is that code can get more legible when you just declare what you want to do with your data.

                edit tl;dr: To be clear, “with” is something that Haskellers would consider a functional abstraction, but they would pride themselves on being able to express it without it being a special language feature.