1. 16

  2. 7

    This is super sweet, and it certainly feels much better than pretty much any web demo with similar objectives that I’ve seen in the past few years. It’s way past the “neat hack” stage.

    I’m not sure I completely share your optimism about the timeline for web platforms being viable for this sort of application. It seems like half of the tools you mention are either not supported by all browsers, or have just been introduced, and the history of web standards is littered with the bones of cool technologies that allowed us to do amazing things but were dead before they even had a chance to get some traction. Reading all that, it feels like even targeting just one platform with a web interface is kind of hit or miss – you’re still stuck ironing out Safari, Chrome and Firefox quirks (although I suppose if you’re just shipping an app built with e.g. Electron, that’s kind of a moot point?).

    In comparison, implementing high-DPI rendering in a native UI framework like QT5 looks… rather more difficult.

    I haven’t written real-time graphics code with Qt in forever so I’m not sure if I’m reading the docs right, but it… feels to me like it’s actually the opposite? The old LTS docs dwell a lot on non-DPI-aware apps because Qt 5 had to make the big high-DPI leap, but the newer docs make the current state of things a little more obvious. The tl;dr is that you don’t need to detect or configure anything unless you want to override the platform’s native high-DPI settings for whatever reason, and you get to write your drawing logic in device-independent coordinates in the first place (so you don’t need to render a high-res version and scale it down explicitly). Am I getting this wrong?

    That probably doesn’t work everywhere – e.g. if you want to skip Qt and do straight Win32 drawing/painting routines on Windows, which aren’t hi-dpi aware IIRC, or if you want to do OpenGL, in which case you can’t really eschew working in device pixels – but it doesn’t really seem hard?
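
    For what it’s worth, here’s roughly what I mean – a minimal sketch assuming Qt 6 defaults (where high-DPI scaling is always on), not something I’ve battle-tested. The QPainter coordinates are device-independent pixels, and Qt rasterizes them at the platform’s scale factor on its own:

    ```cpp
    // Minimal sketch, assuming Qt 6 (high-DPI scaling enabled by default).
    // All QWidget/QPainter coordinates below are device-independent pixels;
    // Qt maps them to physical pixels using the platform's scale factor.
    #include <QApplication>
    #include <QPaintEvent>
    #include <QPainter>
    #include <QWidget>

    class Scope : public QWidget {
    protected:
        void paintEvent(QPaintEvent *) override {
            QPainter p(this);
            p.setRenderHint(QPainter::Antialiasing);
            // 100x100 logical pixels; on a 2x display Qt rasterizes this at 200x200.
            p.drawEllipse(QRect(10, 10, 100, 100));
            // Only low-level targets (e.g. an OpenGL viewport) need device pixels:
            //   const qreal scale = devicePixelRatio();
        }
    };

    int main(int argc, char **argv) {
        QApplication app(argc, argv);
        Scope w;
        w.resize(320, 240);  // logical pixels as well
        w.show();
        return app.exec();
    }
    ```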

    What does seem hard, and blows my mind, is profiling. Chrome’s web worker profiling tools are something else; I don’t think there’s any native cross-platform tool that comes close to that.

    1. 3

      The only thing I disagree with is naming this “web maximalism”. This isn’t the web; it’s using the (misleadingly titled) “web browser” as an application platform. Which I’m not sure I actually disagree with. It’s exclusionary to everyone that isn’t Google, Apple, or (currently) Mozilla, but if you’re not worried about the future of open access to these applications, it is probably the best place to plonk your code, assuming your users will have the hardware capabilities to render it.

      Maybe future OSes should use Linux to boot directly into a SpiderMonkey userland. I’ll go live with the orangutans at that point.

      1. 2

        Maybe future OSes should use Linux to boot directly into a SpiderMonkey userland. I’ll go live with the orangutans at that point.

        Congrats, you’ve basically just described ChromeOS?

        1. 3

          ChromeOS (as of 2012 when I bought the adorable C720) boots into a Gentoo userland with Chrome as the root window, which isn’t going deep enough. PID 1 should be a JavaScript event loop.

        2. 1

          Maybe future OSes should use Linux to boot directly into a SpiderMonkey userland.

          Why even use Linux? Why not put a WASM / WASI / WebGPU runtime directly on hardware? The Birth and Death of JavaScript is getting closer and closer to reality every year. As modern web tech gains in power, it has to absorb and address concerns that traditional multi-user OSes have had for a very long time, while also solving portability problems that traditional OSes have (often) ignored. WASM is a truly portable and reasonably performant ISA; WebGPU is a truly portable and reasonably performant compute / graphics accelerator API; WASI is a truly portable and reasonably performant syscall ABI… If we lean really heavily into this, isn’t it exciting to think of deploying totally portable containers to essentially any hardware anywhere? (Let me use the spare compute from my RISC-V NVMe controller for some background management tasks.)

          It’s also possibly horrifying (rooting out malware, botnets)… But this future has a lot to recommend it, technically.
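
          As a rough illustration of what “totally portable” could mean (tool names and flags here are my assumptions – wasi-sdk’s clang and wasmtime, say – not a definitive recipe): the same trivial C++ source builds into a .wasm module that any conforming WASI runtime can execute, on any ISA, with no OS-specific code in sight.

          ```cpp
          // hello.cpp – a hypothetical "portable WASI module" sketch.
          // Assumed build/run commands (wasi-sdk and wasmtime installed):
          //   clang++ --target=wasm32-wasi --sysroot=$WASI_SYSROOT -fno-exceptions hello.cpp -o hello.wasm
          //   wasmtime hello.wasm
          // The resulting hello.wasm only speaks the WASI syscall ABI, so it is
          // independent of the host ISA and operating system.
          #include <cstdio>

          int main() {
              std::puts("hello from a portable WASI module");
              return 0;
          }
          ```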

          1. 1

            I only say Linux because it already has drivers for so much hardware. We don’t want to have to rewrite drivers for everything in HardWASM.

        3. 3

          The interesting part of this work is the usage of very modern web tech to power it.

          But why?

          1. 4

            Might want to check out the last few sections of the post :)

            1. 4

              I did. The point is that there is another platform with good, consistent and coherent APIs: your operating system.

              Web browsers are glorified document viewers, and their user experience is not that great. They even have a Back button, which most “web applications” can’t handle well.

              1. 14

                The main thesis of this post is that web browsers are no longer glorified document viewers. The years of improvements and new APIs have made them into excellent app platforms for complex, multi-threaded, graphically-intensive programs. I present the signal analyzer project as a case study since it’s something I actually built myself using the tech.
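
                To make the “multi-threaded” part concrete: the analyzer drives the browser’s worker APIs directly from JS, but as a rough sketch (assumed Emscripten toolchain, hypothetical file name – not the post’s actual code), even plain C++ std::thread code maps onto Web Workers sharing a SharedArrayBuffer in the browser:

                ```cpp
                // workers_demo.cpp – hypothetical sketch, not the analyzer's real code.
                // Assumed build: em++ -O2 -pthread -sPTHREAD_POOL_SIZE=4 workers_demo.cpp -o demo.html
                // Each std::thread below is backed by a Web Worker sharing a
                // SharedArrayBuffer, which requires COOP/COEP headers when served.
                #include <cstdio>
                #include <thread>
                #include <vector>

                int main() {
                    std::vector<std::thread> pool;
                    for (int i = 0; i < 4; ++i)
                        pool.emplace_back([i] { std::printf("worker thread %d running\n", i); });
                    for (auto &t : pool)
                        t.join();  // pre-sized pthread pool lets the main thread block here
                    return 0;
                }
                ```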

                I truly believe that if I were to build a project similar to the one described in the post using native tech (Qt, SDL, etc.), it would take >2x as much time to make and wind up a lower-quality tool. The browser does so much of the heavy lifting for you (as detailed in the post) and provides a universal interface to all operating systems.

                Time will tell who’s right, of course. There’s a lot of stigma and bad practices to overcome from decades of shitty web apps, but I think we now have all the tools we need to do it.

                1. 2

                  The main thesis of this post is that web browsers are no longer glorified document viewers.

                  Why do you think they have a Back button?

                  The years of improvements and new APIs have made them into excellent app platforms for complex, multi-threaded, graphically-intensive programs.

                  Compared to native desktop applications, they eat more RAM, waste more CPU cycles (I never want to read anything about climate change again as long as people deliberately do that), introduce many completely unrelated security problems, and run slower. But it is “so much easier” – and in my opinion, it’s still not worth it.

                  1. 9

                    Also compared to native desktop applications, they run on any consumer OS and most server OSes, in a RAM+CPU footprint that has been commonly available for about a decade.

                    If I squint, I can juuust about see why a product team might optimize for that, especially given the fact that 99 out of 100 users don’t even know there is a difference.

                    1. 5

                      What web browsers have confirmed is that OSes need to offer a system supporting:

                      a) secure application containers

                      b) common, standards-conforming APIs that include a programming language (maybe more than one, as long as they are interfaceable at the source-code level)

                      c) a way to update applications automatically (or on user demand), triggered by the application provider

                      d) optionally, an app directory (e.g. a Play Store equivalent) offered in a common way across platforms and devices

                      e) a common, standards-conforming way to access external, optional devices

                      f) a common way to ‘mesh/integrate’ multiple applications into a user-specific/user-chosen workflow

                      When OSes start providing that, and sort of lift their goals above POSIX APIs, file systems, and systemd levels of interaction – then the switch away from browsers will happen.

                      I also happen to think that OSes are the proper layer to offer the distributed services that today are handled by a combination of Kafka+Kubernetes+Redis+NGINX kinds of tools. When OSes provide those (and more, in a standardized, well-thought-out way, including the multi-host / multi-container coordination part) – the landscape will change in a positive direction.

                      But today we do not have this, and the above is probably another 40-60 years away at our current pace of incremental discovery and misdirection.

                      1. 1

                        I expect this to arrive as a version of the web adapted to running on the host – like node.js, but for whole applications, and heavily leveraging what’s been built for browsers. We’re already starting to see this come alive, with OCI containers for WASM workloads, but I expect to see significantly more in this direction over time. We’ll see…

                      2. 4

                        Realtime audio is one of the niches in which people are still regularly running into CPU and RAM limits in native applications, so your “99 out of 100 users” isn’t accurate in this specific sort of development.

                        1. 2

                          Fair! But 99 out of 100 apps aren’t doing real-time audio :D

                          1. 3

                            But this submission we’re commenting on is, which is why I mentioned it ;)

                        2. 3

                          in a RAM+CPU footprint that has been commonly available for about a decade.

                          Wirth’s Law still works well, I see. Today’s software does not need more RAM than 90s software; it just uses more RAM. People have stopped optimizing their code because “the users have more than enough resources”.

                          Well, they do – unless every single piece of software does that.

                        3. 2

                          I never want to read anything about climate change again as long as people deliberately do that

                          This is hyperbolic.

                          1. 1

                            It is not.

                            1. 2

                              First, ban Bitcoin (and all other proof-of-work cryptocurrencies). Then I’ll consider moderating my CPU usage.

                              1. 1

                                Can’t you just do that voluntarily?

                                1. 3

                                  No, there’s no point as long as worthless heat generation swamps any tiny contribution I could make.