1. 24

A follow-up to https://gist.github.com/lf94/ad72f1da36fbc965e4a1d4daeb1d6cb3 .

  1.  

  2. 9

    I strongly agree with your original diagnosis of a split web.

    If Google is competent, the web will split into a 95% share that grows increasingly proprietary, reimplementing the same features as before but with increasingly arcane reasons to abandon open standards. For example, if URLs were to hash their domain name and path, ad blocking would become impossible. Since URLs are already de-emphasized in Chrome, this is only a matter of time.

    As the web becomes more proprietary, it’ll become more and more difficult to maintain an open-source browser that provides a good experience with the proprietary web. So those who try to fork open-source browsers will end up trying to live within the 5% web that is committed to not picking up proprietary (i.e. post-standards) features.

    If Google is incompetent, the split will be closer to 50-50. It’ll be easier for people to live in the pre-standards web. Proprietary features will provide insufficient value-add for websites to adopt. The hosted future of AMP will not be significantly more convenient than maintaining your own servers.

    So, I’m hoping Google grows incompetent. But, extrapolating from Microsoft’s Windows/Office past, that’s a real Hail Mary of a hope. The only thing that will break this monopoly is a new frontier opening up, or anti-trust action.

    1. 6

      One thing I need to clarify before anything is my usage of “Web” and “Internet”. I understand the Web exists in the Internet, and the Internet is not the Web.

      Something to be said here about how hard it is to talk about computing when you first need to explain the separation between the car and the road, and the conversation gets sidetracked right there.

      I have a thought piece on the topic in the draft-bin (no worries, you lot have been punished enough by my prose as of late; there are about 20 such pieces, and they tend to simmer for a long time and then be discarded in an act of self-loathing), but here are four quite big exercises that might help.

      1. Think of the Web and its browsers as a gradient that stretches from solid green ‘as a document’ (with certain properties, there are rules for how to reference, compose and modify it) to solid red ‘as an app’ (software – it restructures your view and the contents of the ‘document’, sometimes from your interaction, sometimes seemingly at random). Use the Wayback Machine and walk along that transition and look for key steps/changes where there are ‘jumps’ in this transition, mark them down.

      2. Take the early extreme of ‘as a document’ and look at what it was missing: why couldn’t it stay a document? Think of other ways that could be done and, afterwards, read up on (some) parts of Project Xanadu. The Gemini approach is just regressive and dull. If going through the hoops of something like that, at least explore other ‘documents’.

      3. Take the later extreme of ‘as an app’ and compare it to how native software is loaded and executed, i.e. binary formats, linkers and loaders, and why those exist. Map those concepts to how a web app is loaded.

      4. Take your internalised “what you think is the browser” and jot down what that entails. Break it down into core component categories and features. The browser is a jack of all trades, master of none - but there are certain things it is better and worse at. Compare it to the masters in each category and see what makes them different.

      The extra-credit part is looking back at BBSes, FidoNet and how those could re-emerge in a different package. And skip the boring ‘how it self-organises’ social/political stuff; it erodes the fun of the tech.

      1. 2

        The Paradox of the Sandbox

        A successful sandbox is self-negating; there’s no safety after everyone gets in.

        Operating systems and web browsers are the same thing. To paraphrase Nicholas Nethercote, they’re both “execution environments that happen to have some multimedia capabilities.” In that vein, Google, as an actor, is almost incidental to the steady Venn-diagram overlap of operating system and web browser. Microsoft feared Netscape for the same reason.

        1. 5

          Virtually every sandbox is non-nestable. IMO this is core to the issue: if your sandbox is useful but not nestable, then you eventually need another sandbox inside it.

          Lua contexts are closer to the right kind of thing, but are not as generally useful as I would like.

          1. 3

            Totally! Once everyone is in the same sandbox, someone makes a new sandbox inside the old one, and the cycle begins anew. Because of that, I wonder if sandboxes are cheap, easy, and wrong. One alternative security model I like is capability-based security.

            How do Lua contexts work? I’m not familiar.

            1. 2

              Most programming languages have a single global namespace (e.g. for classes), and if your code calls “fs.Open”, it gets the syscall.

              Lua lets you craft a new namespace and run other code within it. That namespace could have a different definition of “fs.Open”, and it’s 100% transparent. These namespaces are nestable. Code running in a namespace without the network or filesystem defined cannot access those things.
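              In Lua 5.1 this is done with setfenv, and in 5.2+ by passing an environment table to load. Here is a rough Python analogue of the same idea, using exec with a restricted globals dict (run_in is a made-up helper, and unlike Lua’s design, CPython’s exec is famously escapable, so treat this as a sketch of the namespace mechanism, not a real security boundary):

```python
def run_in(env, code):
    """Execute `code` with `env` as its entire global namespace."""
    env.setdefault("__builtins__", {})  # no ambient builtins unless granted
    exec(code, env)
    return env

# Outer context: only `emit` (and `run_in` itself) exist as names.
# `open`, sockets, etc. are simply undefined inside it.
log = []
run_in({"emit": log.append, "run_in": run_in}, """
emit('outer can emit')
# Nested context, built from a subset of the outer one's grants:
run_in({'emit': emit}, "emit('inner can emit too')")
""")

try:
    run_in({}, "open('/etc/passwd')")  # no filesystem name was granted
except NameError:
    log.append("blocked")
```

              The nesting works because an inner namespace can only be assembled from names the outer one already holds, which is the “strict subset” property raised below.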

              1. 1

                If I understand you correctly, a nested context is a strict subset of its surrounding context with no way to jailbreak. Very cool!

                1. 1

                  Yep! The main issue is that it wasn’t designed for untrusted code (e.g. the interpreter isn’t hardened) or non-Lua code (limiting its usefulness). Still very cool.

                  1. 2

                    Another issue, which is tackled directly by communicating-event-loop designs, is how to avoid plan interference, the legendary concurrency bug class. It is important not just to be able to run code with a new context of objects, which can limit its authority to invoke various powers, but also to be able to run the code with a new execution context (a new continuation/thread/etc.), so that the containing code is not denied its own ability to maintain its invariants.

                    I think that some Lua environments handle this, and they do it through communicating event loops just like E.
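                    A minimal Python sketch of that turn discipline (the Vat class is illustrative, named after E’s vocabulary; it is not any real library’s API): each vat drains its inbox one message at a time, so a whole turn completes before the next message can observe its state.

```python
from collections import deque

class Vat:
    """One communicating event loop: messages run one turn at a time."""
    def __init__(self):
        self.inbox = deque()
        self.balance = 0

    def send(self, amount):
        self.inbox.append(amount)  # asynchronous: just enqueue, never call in

    def run(self):
        while self.inbox:
            amount = self.inbox.popleft()
            # The whole turn runs to completion before the next message
            # is dequeued, so no other plan sees `balance` mid-update.
            self.balance += amount

a, b = Vat(), Vat()
for amount in (10, 20, 30):
    b.send(amount)  # a's code never reaches into b synchronously
b.run()
print(b.balance)  # → 60
```

                    Plan interference is avoided because a sender can only enqueue messages; it can never run its own code in the middle of another vat’s turn.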

                    1. 1

                      This is really neat. Feels like a dynamic analogue to effect systems in statically typed FP langs.

                      1. 0

                        FWIW Oil will likely grow this subinterpreter feature, which Lua and Tcl have. (In contrast, there have been many attempts to add it to Python, but the code fundamentally isn’t architected that way.)

                        https://github.com/oilshell/oil/issues/704 (a bunch of links here about Tcl, node.js, and so forth)

                        Use cases:

                        • evaluating untrusted config files (similar to Lua’s original use case)
                        • writing interactive shells in Oil, and separating user state from shell state
                        • maybe: Lua-style “process”-like concurrency with states and threads (not sure if anyone uses this, but it’s in a Lua paper)
                  2. 2

                    Since you said the magic phrase, I should fork the comment thread here to note that capability-based security properties are relatively cheap in formal settings. In particular, the ability to isolate one computation from another is free in all of the pure lambda-calculi.

                    This has an immediate and attractive suggestion for language design, which I want to avoid mystifying: Consider the ability of one object to interfere with another unsuspecting object as an impurity or side effect. That includes function calls! We do have to work to ensure that some objects are sufficiently tame so as to not commit side effects; this is usually called freezing and the resulting objects are not just immutable, but transitively immutable and unable to store private references for any reason. There will be no hidden caches, debugging routines, timers, or other potential side channels.

                    We don’t have to have all objects be frozen. Instead, we can hope that the objects which represent possible behaviors are frozen; this then allows us to instantiate objects we know to be safe, and combine them with those frozen objects, and know that the worst that can happen are the normal Turing-complete things. Specifically, I think that when modules are frozen objects, then code loading can be as (un)safe as the user desires. The user can design their own sandbox, and be confident that the programs inside that sandbox will not be able to import any outside references.
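                    A minimal sketch of that transitive freezing in Python, under stated assumptions: deep_freeze is a made-up helper, and real object-capability systems (e.g. E, or harden() in Hardened JavaScript) enforce this at the language level rather than over plain data.

```python
from types import MappingProxyType

def deep_freeze(value):
    """Recursively wrap containers in read-only views (illustrative only:
    this covers plain data, not arbitrary objects or hidden caches)."""
    if isinstance(value, dict):
        return MappingProxyType({k: deep_freeze(v) for k, v in value.items()})
    if isinstance(value, (list, tuple)):
        return tuple(deep_freeze(v) for v in value)
    if isinstance(value, set):
        return frozenset(deep_freeze(v) for v in value)
    return value

# A 'module' as plain data: once frozen, a loader can hand it to
# untrusted code knowing it carries no mutable side channel.
module = deep_freeze({"name": "math_utils", "exports": {"pi": 3.14159}})

try:
    module["exports"]["pi"] = 3  # read-only all the way down
except TypeError as e:
    print("frozen:", e)
```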

                    Right now, the main design problem is the one that both you and Munroe refer to. The missing solution which will end the cycle is a common interchange format for object references, so that delegations of authority can happen in a truly uniform fashion; this breaks the cycle at Munroe’s northern arrow by suggesting that we can interchange limited authority between existing power structures without requiring both structures to be embedded within a single common context. Of course, literally every power broker on the planet would rather that this not happen, and so instead we are stuck in our current situation, where we must import each proprietary API by hand and integrate its object model into our desired approximation.

                    (This last bit tugs at a philosophical pondering I have been aiming to reasonably justify for some time. If some Alice and Bob have a dispute, then justice should consist of both Alice and Bob being satisfied in the arrangement, regardless of who they are. (This is the famous cake-cutting concept.) But then, if some judge Judy is summoned to adjudicate the dispute and agrees, or if she shows up of her own volition and intercedes, then surely justice should consist of all of Alice, Bob, and Judy being satisfied in the arrangement. Otherwise, Judy may well use some private knowledge to deprive both Alice and Bob of what would otherwise have been equitable.)

                2. 1

                  A successful sandbox is self-negating; there’s no safety after everyone gets in.

                  Isn’t that what the Same Origin Policy is supposed to resolve? That is, if you want more sandboxes, use more domain names.

                  1. 1

                    Yes, and do you think it has been successful? I don’t.