1. 26

  2. 6

    This. This is why independent and different implementations matter. If you understand the web as an application platform you will realize that some browsers should not be able to dictate how it works. And there is little independence out there…

    1. 6

      The problem boils down to something like “who will pay for independence?”

      As in, donations are not a reliable model, and ads are blurring the line of ethics.

      Personally I always thought there’s an audience for a premium browser that includes other cost-saving features (e.g. banning spammy websites, curated suggestions, built-in unblocking for papers, easy download for all multimedia, a built-in search engine that behaves based on your stated preferences rather than your behaviour-revealed preferences, etc.). But the problem is that most of the audience that would understand why such a browser is necessary are the people that need it least.

      One could also hope that some crypto billionaire might just give a 10B endowment to a foundation to build a competitive browser under very strict decentralized supervision, but afaict that hasn’t happened yet.

      1. 9

        TBF Mozilla continues to make it structurally impossible to donate directly to Firefox development for some reason.

        1. 4

          I’d love to pay/donate to Mozilla on a monthly basis, if it could be done in a way that made it clear I’m not in support of their stupid shit.

      2. 3

        Frameworks like GTK and Qt will become widely used in this space, HTML and CSS will become optional

        Please don’t do this if you have a choice. You give up a lot when you throw away semantic HTML, especially accessibility.

        Of course, I know some applications will take this approach; there’s no avoiding it. That’s why one of the planned targets for my new AccessKit project is the browser.

        1. 3

          This is a really interesting view into the future – and I’ve become, oddly, a bit less worried about it as time goes on.

          Google Docs recently announced – and we discussed it here – that they’ll be doing canvas-based rendering of documents. Does this further lock us into the Google monopoly as they now do everything server-side? Possibly, but is it truly any worse than the wasm/minified-JS monstrosities that are modern “web apps”?

          It feels like the web is starting to split: on one side a large amount of data is still following the hypertext vision we’ve traditionally chased, alternative decentralized networks like Mastodon and the Tildeverse are taking root, and hell – we’ve got people writing Gopher pages! In this century! On the other side, we have video games using OpenGL in your browser.

          Stuck in the middle is the last independent browser trying to decide which of the competing standards to chase. Perhaps a hard fork between “the simpler, hypertext web” and “the web as a way to deliver arbitrary, sandboxed applications” helps them decide where they want to spend their more limited resources.

          I dunno. Maybe I’m just the frog that’s been boiled slowly enough that I’m not as worried about this future, but I’m not convinced it’s entirely terrible.

          1. 4

            Does this further lock us into the Google monopoly as they now do everything server-side?

            What do you mean, server-side? My assumption is that the only change here is that the ‘rendering’ step now just calls the canvas draw methods as opposed to generating HTML, along with accompanying changes in how they figure out what user input should do (since now every click will hit the same element). It definitely makes it harder to pull out the textual content from extensions and the like, but I don’t think this really increases lock-in.
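            To make that concrete, here’s a minimal sketch (Google’s actual implementation isn’t public, so the names and the fixed-width layout here are invented) of what canvas rendering plus manual hit-testing looks like: the app has to remember where everything was drawn, because the browser no longer gives you an element per word.

            ```javascript
            // Hypothetical sketch: with canvas rendering there is no DOM element
            // per word, so the app must track layout itself and hit-test clicks
            // manually. Names and metrics here are invented for illustration.

            // A tiny "layout" produced by the renderer: each run of text
            // remembers where it was drawn.
            function layoutRuns(words, x, y, charWidth) {
              const runs = [];
              let cursor = x;
              for (const word of words) {
                const width = word.length * charWidth; // crude fixed-width metrics
                runs.push({ word, x: cursor, y, width, height: 16 });
                cursor += width + charWidth; // one space between words
              }
              return runs;
            }

            // Hit-testing replaces what the DOM normally does for free:
            // map a click coordinate back to the thing that was "clicked".
            function hitTest(runs, clickX, clickY) {
              return runs.find(
                (r) =>
                  clickX >= r.x && clickX < r.x + r.width &&
                  clickY >= r.y - r.height && clickY <= r.y
              ) ?? null;
            }

            // In a browser you would then draw the runs and listen on the canvas:
            //   const ctx = canvas.getContext("2d");
            //   runs.forEach((r) => ctx.fillText(r.word, r.x, r.y));
            //   canvas.addEventListener("click", (e) =>
            //     console.log(hitTest(runs, e.offsetX, e.offsetY)));

            const runs = layoutRuns(["hello", "world"], 10, 20, 8);
            console.log(hitTest(runs, 12, 15)?.word); // "hello"
            ```

            The point being: nothing here is server-side; the client just paints pixels and resolves clicks itself instead of delegating both to the DOM.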

            And I think the problem is that there is no actual hard divide between “simple hypertext” and “arbitrary sandboxed applications”. It’s funny you mention Mastodon, because the default Mastodon frontend is, conceptually, much closer to the application model. Clicking profiles doesn’t navigate you to separate pages, it does an XHR and then renders that in a new ‘column’. It even literally embeds an OCR engine that’s compiled to wasm for generating image captions!

            1. 4

              What I’m unclear on is whether or not there’s an actual split to be talked about.

              As in, you could still support HTML+CSS in a browser-as-VM setup; at most they’d have to include something like an HTML+CSS “special compiler” of sorts that adds some pre-defined rendering logic.

              And I doubt any browsers will want to lose e.g. Wikipedia or the blogosphere. More interestingly though, if JavaScript becomes something closer to Flash (unsafe, redundant, nobody-uses-this-anymore kind of thing) it might prompt minimal-JavaScript websites to switch to being fully static.

              Lobsters might be a good example of this, where (while I can’t speak for the devs) I could see it being easy to port back to raw HTML, and given enough JS-related issues some people might just say fuck it and go back to HTML instead of chasing compatibility. The fact that HTML has added many quality-of-life elements and tags also helps.

              1. 1

                I think a big reason Flash had the (justified) reputation it did is that it fundamentally had less of a sandbox mechanism than JS, and the developers just were not as security-focused as browser developers were/are. And as JS grew more and more powerful, there was less and less reason to use Flash. I don’t see this happening to JS unless something new comes to replace it, and I don’t think this will happen; in particular, I think the ability to just mess around in the web console is incredibly useful, and you just don’t get that with wasm.

                1. 1

                  Wait… I was following you until the last bit.

                  Why don’t you get that with WASM? As in, I get that you don’t get that today but in principle, nobody is stopping an addon that compiles wasm to js and runs it in the console. And/or direct vendor support if it gets popular enough… it’s a bit annoying since you’d have to specify to the browser a path to the compiler of choice (since you’re not going to ship browsers with all of them). But that’s a one-time setup.

                  Or do you mean that people won’t be able to mess with the code of the websites they are using?

                  That also seems untrue, given that at the end of the day WASM just modifies the DOM. You’ll still be able to use the console to add functionality. And most websites, even “puritan” ones à la Lobsters, come with minified JavaScript that’s impossible to mod.

                  There’s a dying breed of dev (e.g. myself) that enjoy shipping readable js to the user, but that’s so niche as not to matter.
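                  The “wasm still goes through the DOM” point can be sketched like this (using a plain-object stand-in for DOM nodes, an invented shape, so it runs outside a browser): whatever produced the tree, any script, including one typed into the console, can still walk and modify it.

                  ```javascript
                  // Sketch: a plain-object stand-in for DOM nodes (invented shape,
                  // so this runs anywhere). The point: once wasm has written nodes
                  // into the document, console scripts can read and modify them
                  // like any other DOM content.
                  function collectText(node, out = []) {
                    if (node.text) out.push(node.text);
                    (node.children || []).forEach((child) => collectText(child, out));
                    return out;
                  }

                  // Pretend this tree was produced by a wasm module's DOM calls.
                  const rendered = {
                    tag: "div",
                    children: [
                      { tag: "h1", text: "Title" },
                      { tag: "p", children: [{ tag: "span", text: "body text" }] },
                    ],
                  };

                  console.log(collectText(rendered).join(" ")); // "Title body text"

                  // In a real console the equivalent is simply, e.g.:
                  //   document.body.innerText
                  // regardless of whether JS or wasm built the page.
                  ```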

            2. 2

              PWAs are underrated due to the portability the browser provides. I’ve been heavily considering an open-source GNU/Linux phone. I know my bank has e-banking, but it looks like it hasn’t been touched in 10-15 years (I can’t even type commas because it’s TIS encoded instead of UTF). There are so many basic features this could provide, like QR scanning, but I can’t even get a UI vaguely optimized for mobile. If PWAs were taken seriously, I could get at least basic support on almost any platform.

              As we push for native apps for performance, the web is still the best first target platform to reach the largest number of people with an acceptable user experience. But in reality, I think marketing and advertising will continue to push for native, since they can escape the sandbox that the browser VM provides us.

              1. 2

                Somebody on the ‘net (I can’t find the link at the moment) took on the challenge of seeing how many levels of qemus running inside qemus he could get.

                Answer: a lot more than you’d ever need.

                Meta answer: just because you can doesn’t mean you should.

                1. 1

                  Think for a second of a future where we can open a browser and run tf/torch, starcraft, sqlite, postgres, clang, brew, wayland, R studio, a bsd or linux VM, octave, the .NET ecosystem, vim, vscode, another browser.

                  NetBSD’s rump kernel has been able to run in a browser for well over a decade (perhaps two?) And I’ve seen Gameboy and Nintendo emulators that run in a browser.

                  Just how complex browsers are these days boggles the mind. They’re also nigh impossible to secure.

                  1. 1

                    NetBSD’s rump kernel has been able to run in a browser for well over a decade (perhaps two?)

                    Can you give some more details here? Do you mean in userspace or as part of the browser’s code?

                    I guess I should have mentioned I was referring solely to userspace. If that’s the case… I don’t really see the reason why you might want to do that. Maybe to run it on in-browser simulated hardware?

                    1. 1

                      NetBSD has pretty good documentation on rumpkernels. Effectively, it’s akin to running the kernel as a program in userspace. Rumpkernels allow one to develop and test new TCP/IP stacks and filesystem drivers, for example. Pretty cool methodology of OS/systems development.

                      1. 1

                        It doesn’t seem to me like it’s running in a web browser, but rather that it’s running the web browser.

                        Relevant section being:

                        This section explains how to run any dynamically linked networking program against a rump TCP/IP stack without requiring any modifications to the application, including no recompilation. The application we use in this example is the Firefox browser. It is an interesting application for multiple reasons. Segregating the web browser to its own TCP/IP stack is an easy way to increase monitoring and control over what kind of connections the web browser makes. It is also an easy way to get some increased privacy protection (assuming the additional TCP/IP stack can have its own external IP). Finally, a web browser is largely “connectionless”, meaning that once a page has been loaded a TCP/IP connection can be discarded. We use this property to demonstrate killing and restarting the TCP/IP stack from under the application.

                        1. 1

                          Years ago, you could run the rumpkernel inside the browser, just like emulating a Gameboy or NES in the browser. Similar to this: https://bellard.org/jslinux/index.html

                          Those demos showing the rumpkernel booting in the browser itself seem to be no longer available on the internet. That’s a shame, because rumpkernels-in-js are awesome.

                          1. 2

                            Those demos showing the rumpkernel booting in the browser itself seem to be no longer available on the internet.

                            Curiosity got me and I think I found a copy of one of those demos: