1. 6

  2. 8

    I’m never one to advocate for a web stack when there’s an alternative, but I wonder how much of this comes down to using the GPU, rather than anything to do with C++ vs JavaScript. If you use WebGL / WebGPU to do the transitions / animation then I’d expect this to be completely fine in the browser. If you write CPU-side C++ to do the transitions then the RPi 4’s CPU might struggle at 4K.
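
    To make the CPU-vs-GPU point concrete, here is a hypothetical sketch (the function name and layout are mine, not from the thread) of a CPU-side crossfade over raw RGBA bytes. At 4K this inner loop touches roughly 33 million bytes per frame, which is exactly the kind of per-pixel work a Pi 4’s CPU struggles to repeat 60 times a second, while the same blend on the GPU is essentially free.

    ```cpp
    #include <cstdint>
    #include <vector>

    // Linearly blend two frames of raw bytes: t = 0 gives frame a,
    // t = 1 gives frame b. At 3840x2160 with 4 channels this runs over
    // ~33 million bytes every frame.
    void crossfade(const std::vector<uint8_t>& a,
                   const std::vector<uint8_t>& b,
                   std::vector<uint8_t>& out,
                   float t)
    {
        for (size_t i = 0; i < out.size(); ++i)
            out[i] = static_cast<uint8_t>(a[i] + t * (b[i] - a[i]));
    }
    ```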

    1. 2

      I suspect you’re right. My gut feeling is that you could achieve these effects with CSS transitions, and the performance would be good. It probably wouldn’t amount to much code, or take very long, either.

      1. 3

        I tried to do this in the browser with CSS transitions and animations first. Nothing seems to be GPU accelerated on the pi. Performance was really bad and CPU usage very high.

        I didn’t try webgl. At the time I thought it would be easy to do with native OpenGL. It’s just 2 triangles and a texture right?

        It turned out to be a little bit more involved than that but I ended up sticking with it.

        1. 2

          You probably didn’t do anything to ensure the browser is accelerated. Firefox wouldn’t enable it automatically because the GPU is not on the qualified list. For good reason – WebRender is still glitchy on v3d. But you can use the old GL layers backend with layers.acceleration.force-enable, that would run CSS transforms on the GPU just fine. (That backend will be gone eventually but for now it still exists.)

          1. 1

            > I tried to do this in the browser with CSS transitions and animations first. Nothing seems to be GPU accelerated on the pi. Performance was really bad and CPU usage very high.

            Oh, cool, I didn’t see CSS mentioned in your post. I don’t think that CSS effects really turned out to be the big deal they were introduced as, but they have been around for probably ten years. They’re very standard, so I’m surprised they’re not GPU accelerated on the Pi.

        2. 1

          WebGL / WebGPU does tend to be impressively performant and quite viable for many projects, as long as the target is a platform that doesn’t have any trouble running a browser in the first place. In this case you’d still have the overhead of a browser just to get to your app, and even if the platform could run your code just fine, you’ve lost access to all the resources the browser itself is hogging. For the Pi and other embedded platforms that’s often a deal breaker.

        3. 5

          I had a small rant on Twitter about this, but I feel like these kinds of arguments always ignore that the “need for cross-platform things” really comes from companies (mainly Microsoft and Apple) refusing to work on a common standard for application development, forcing the community to build one.

          And then, a few years after that, pushing incompatible changes that force the community to build yet another one. We’re lucky to have strong solutions that stand the test of time (GNOME, Qt, Tk, etc.), but they’ll always lag behind what the platform supports, since the vendors want to nudge people away from anything cross-platform.

          It makes me sad because, even as someone who came after the whole personal-computing push, they’ve successfully turned computers into dumb but powerful terminals that just run an OS within an OS for the sake of capital.

          1. 3

            It’s a little bit weird to claim that native is much harder and then point to OpenGL, a lack of window event callbacks, and CMake. Yes, if your use case depends on telling the GPU what to do explicitly then you’re going to have to deal with OpenGL, and that adds a layer of overhead, but just like in the web world there are higher-level abstractions that make it easier. There are even libraries that let you register callbacks.

            Now onto CMake. Yes, if you come from a web background, pretend that the way the web works is how everything works, and then try to translate that into CMake, you’re going to have a hard time. If you’re developing a native application for Linux that depends on various common OpenGL-related libraries, image loading, imgui, and some random single-header library (eww), then you most assuredly don’t need to bundle the OpenGL-related libraries, at least not if you want anyone to actually use your program.

            Then you just need to reconsider whether you need to pull in a whole git submodule for a 100-line Gaussian blur function you could have rewritten or embedded in your code (keeping the license header). Also reconsider whether it’s worth pulling in all of imgui for two buttons and some text, or whether the immediate-mode design is simple enough that re-implementing such a trivial use case from scratch would take about as much code as the imgui calls already do.
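
            As a rough illustration of how small that dependency really is, here is a hypothetical sketch (names are mine, not from the code under discussion) of the kernel-weight half of a separable Gaussian blur; the rest is just two 1-D convolution passes, one horizontal and one vertical:

            ```cpp
            #include <cmath>
            #include <vector>

            // Build normalized 1-D Gaussian weights of size 2*radius + 1.
            // A separable blur applies them along rows, then along columns.
            std::vector<float> gaussianKernel(int radius, float sigma)
            {
                std::vector<float> k(2 * radius + 1);
                float sum = 0.0f;
                for (int i = -radius; i <= radius; ++i) {
                    k[i + radius] = std::exp(-(i * i) / (2.0f * sigma * sigma));
                    sum += k[i + radius];
                }
                for (float& w : k) w /= sum;  // weights now sum to 1
                return k;
            }
            ```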

            Now all that’s left is to realise that the #ifdefs scattered through the code are completely superfluous: they could trivially be replaced by a single #ifdef in a header file which either prototypes the optional GUI functions or stubs them out as static inlines. Then configure CMake to conditionally compile the actual implementation if ENABLE_GUI is set.
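
            A minimal sketch of that header, with hypothetical function names (gui_init and friends are mine, not from the code under discussion):

            ```cpp
            // gui.h -- the only #ifdef the rest of the code ever needs to see.
            #ifdef ENABLE_GUI
            void gui_init();
            void gui_draw();
            bool gui_wants_quit();
            #else
            // GUI disabled: stub everything out; calls compile away to nothing.
            static inline void gui_init() {}
            static inline void gui_draw() {}
            static inline bool gui_wants_quit() { return false; }
            #endif
            ```

            Call sites then use gui_init() and friends unconditionally; only the real implementation file is compiled conditionally.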

            A few last points worth noting:

            • This code is basically C+ (with a single +), it doesn’t make use of many C++ features.
            • It’s unsafe to catch any exceptions in this code as it would likely result in memory leaks.
            • You probably don’t want to be calling cleanup functions in your escape key handler. Call glfwSetWindowShouldClose and let the main loop handle exiting and cleanup.
            • If you put all this code in one file it would probably be easier to navigate, especially given how tiny most functions and files are. A lot of this code just seems downright over-abstracted for what it is.
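
            The shutdown point can be sketched without any GLFW dependency; this hypothetical flag-and-loop pattern (types and names are mine) is what glfwSetWindowShouldClose plus a glfwWindowShouldClose loop gives you:

            ```cpp
            // The handler only records the request; the main loop is the
            // single place that exits and runs cleanup, so nothing is torn
            // down in the middle of a frame.
            struct Window { bool should_close = false; };

            void on_escape(Window& w)
            {
                w.should_close = true;  // like glfwSetWindowShouldClose(win, GLFW_TRUE)
            }

            int run(Window& w)
            {
                int frames = 0;
                while (!w.should_close) {   // like !glfwWindowShouldClose(win)
                    ++frames;               // render a frame here
                    if (frames == 3)        // simulate Escape on the third frame
                        on_escape(w);
                }
                // cleanup happens exactly once, after the loop
                return frames;
            }
            ```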

            So now we’ve gotten rid of all the cruft and the CMake file is something like 10 lines (or 20 if you rewrite it in straight make, which you might as well do at this point, since you’re no longer relying on any CMake feature).
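
            For illustration, a roughly 10-line CMake file along those lines might look like this (the project and target names are hypothetical, and it assumes GLFW and OpenGL come from system packages rather than being bundled):

            ```cmake
            cmake_minimum_required(VERSION 3.10)
            project(slideshow CXX)

            option(ENABLE_GUI "Build the optional GUI" OFF)

            # Link system packages instead of bundling them.
            find_package(OpenGL REQUIRED)
            find_package(glfw3 REQUIRED)

            add_executable(slideshow main.cpp)
            target_link_libraries(slideshow PRIVATE OpenGL::GL glfw)

            if(ENABLE_GUI)
              target_sources(slideshow PRIVATE gui.cpp)
              target_compile_definitions(slideshow PRIVATE ENABLE_GUI)
            endif()
            ```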

            If all you’ve done all your life is web development with node.js and your first foray into native development is an OpenGL application, what did you expect? Instead of asking someone with the same amount of native development experience as you have web development experience to guide you, you decided to assume that all your experience is automatically applicable, and as a result you hit entirely expected issues. Native development is not easy, but it’s not hard either. You are making it look harder by being unprepared and bringing expectations from a completely different world into the world of native development.

            Finally, I should point out that there is WebGL, and it’s entirely possible this would end up performing fine in a web browser on a Pi if the browser had the right settings tuned to allow hardware-accelerated WebGL.

            1. 1

              > I also had to get very familiar with CMake which as far as I know is the only reasonable way of getting something similar to what you get with npm packages

              CMake is, to be frank, not good. It’s barely better than autotools. It’s not built with layers and layers of separate text macro processing tools, but it’s about the same level of abstraction. It doesn’t really do “convention over configuration”.

              There’s a reason all the freedesktop/etc. projects ignored CMake, but switched to Meson ;)

              1.  

                Meson may be better in some aspects (generally shorter and cleaner-looking build files), but it’s much worse than CMake in others (awful documentation in comparison, at least in my view, and it’s very difficult to make it do certain things; it also lacks CMake’s neat GUI and TUI, and doesn’t have any packaging capabilities).

                GNOME, for example, ended up adding a custom module directly to Meson; it would be horrible to use without it.