1. 15

  2. 5

    Perhaps a good fit for this web of documents would be IPFS (or any similar technology): each document is immutable, and referencing is based on the content itself (newer versions would have different hashes, or in other words would be different documents).

    I think it checks all the boxes described in the post, but I might be missing something.
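
    A rough sketch of the idea in Python (this is not IPFS’s actual CID scheme, just content addressing in general), showing that an edit necessarily produces a new address:

    ```python
    import hashlib

    # Content addressing in one function: the address is a pure function of the bytes.
    # (Sketch only; real IPFS CIDs involve multihash/multibase encoding, not shown here.)
    def address(document: bytes) -> str:
        return "doc-" + hashlib.sha256(document).hexdigest()

    v1 = b"The original article text."
    v2 = b"The original article text, with a small edit."

    print(address(v1))
    print(address(v2))
    assert address(v1) != address(v2)  # an edited document is a different document
    ```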

    1. 2

      Freenet follows the web of documents idea closely: just immutable HTML and CSS, no JavaScript.

      1. 2

        It’s been a while since I looked at Freenet. Can one access content there via a “normal” web browser and URI?

        1. 2

          You start a daemon and then can access pages from a normal browser using URIs like http://localhost:1234/pages/${somehash}.html

          Most (all?) writes require a Freenet plugin or a separate application to upload content. For example, there are apps for mail, a message board, and uploading a site. Of course this seems odd, because the web has evolved Turing-complete servers and clients, but it puts Freenet in an interesting niche because it has neither.
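
          For reads, a script can do the same thing a browser does against the local daemon. A Python sketch, where the port and hash are placeholders taken from the URI above rather than real values:

          ```python
          import urllib.request

          # Placeholder port and content key; substitute whatever your local daemon actually serves.
          url = "http://localhost:1234/pages/somehash.html"

          with urllib.request.urlopen(url) as response:
              html = response.read().decode("utf-8", errors="replace")

          print(html[:200])  # plain HTML/CSS comes back; there is nothing to execute
          ```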

        2. 1

          I know next to nothing about Freenet, but I browsed some of the documentation just now.

          Is there a starting point for someone who wants something like the old web? Just plain old websites with all kinds of weird and sometimes useful information?

      2. 3

        No scripts of any kind.

        While I browse with JS disabled most of the time, and would like that to be easier, there can be good uses of it in document/non-application contexts. For example, there is currently no way to reliably render mathematics on the web without scripts, as MathML adoption seems to have stagnated, leaving the world reliant on solutions like MathJax and KaTeX.

        I’d also argue that an interactive graph, with a static image fallback, can be a good use of JS in a document context.

        1. 3

          Since OP mentioned Xanadu, I just have to put in my two cents :)

          Moving to static documents is an improvement upon the current web (and, as @dethos notes, using content addressing improves your situation even more, since you can ensure that nobody’s changed the document under you), but the web’s lack of any mechanism to guarantee that content stays static is tied up with a number of seemingly unrelated design problems.

          TBL’s URL model just exposed paths on the server’s filesystem, which meant that if the files got modified, the content at an address would change. The W3C maintained (up until fairly recently, maybe a decade ago?) that changing the content at an address was Very Rude, particularly if you changed it in a misleading way, but it did not enforce this. Of course, everybody started using CGI, and these days most static pages are served up from a database (if you count CMSes and services like Cloudflare). Why did he design it this way? It’s not that content addressing would have been particularly difficult to implement, even in 1990; he just didn’t understand why it was so important.
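
          (As an aside, the contrast is easy to make concrete. A toy Python model, no real servers and made-up names: under path addressing the same address silently starts returning different bytes after an edit, while under content addressing the old address can only ever return the old bytes.)

          ```python
          import hashlib

          # Path addressing: an address names a mutable location on the server.
          path_store = {"/essay.html": b"version 1"}
          before = path_store["/essay.html"]
          path_store["/essay.html"] = b"version 2"      # edit the file in place
          after = path_store["/essay.html"]
          assert before != after                         # same address, different content

          # Content addressing: an address names the bytes themselves.
          content_store = {}

          def put(doc: bytes) -> str:
              key = hashlib.sha256(doc).hexdigest()
              content_store[key] = doc
              return key

          old_addr = put(b"version 1")
          new_addr = put(b"version 2")                   # the edit gets a new address
          assert content_store[old_addr] == b"version 1" # the old reference is still good
          ```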

          The reason he didn’t understand its importance is that he didn’t implement some really key features of hypertext – bidirectional span-to-span linking & transclusion. He didn’t implement these key features because, by using embedded markup, he made them very difficult to implement.

          With embedded markup, it is not necessarily safe to embed a fragment of one document at an arbitrary position in another document. For the same reason, it’s not meaningful to link to an arbitrary chunk of another document. So the web doesn’t support indexing to the byte level, only to the tag level. And if you don’t support that, it’s not such a big deal if bytes change: you’re pointing to entire documents, or to the beginnings of sections, and the content can & probably should get updated over time (so long as the edits are within the spirit of the original intent). Since sections in HTML have to be explicitly named before they can be relied upon as link targets, most of a document remains unindexable, so even though taking a subtree from one document and putting it into another is technically possible, it’s only very rarely done.

          When you have totally static objects with permanent addresses, you can reliably byte-index them, which opens up byte-index-based references like linking & transclusion.
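
          A small sketch of what that buys you, assuming an immutable, content-addressed store (the names here are made up for illustration): a link or transclusion can just be (address, start byte, end byte), and resolving it is always safe because the bytes behind an address never change.

          ```python
          import hashlib
          from typing import NamedTuple

          store = {}  # address -> bytes; immutable by convention, nothing overwrites an entry

          def publish(doc: bytes) -> str:
              addr = hashlib.sha256(doc).hexdigest()
              store[addr] = doc
              return addr

          class Span(NamedTuple):
              addr: str    # permanent address of the source document
              start: int   # byte offset, inclusive
              end: int     # byte offset, exclusive

          def resolve(span: Span) -> bytes:
              # Byte indexing is safe because the bytes behind an address never change.
              return store[span.addr][span.start:span.end]

          essay = publish(b"Hypertext should support span-to-span links and transclusion.")
          quote = Span(essay, 0, 9)                      # the bytes spelling "Hypertext"
          print((b"As the essay says: " + resolve(quote)).decode())
          ```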

          Now: Ted, in a recent video, said that the currently-under-development version of Xanadu will use HTML & index at the subtree level. He’s fought against this for a long time & gotten worn down, but I still think it’s a shame – Xanadu really was the last holdout for proper hypertext.

          1. 1

            Moving to static documents is an improvement upon the current web

            Serious question: why do you believe this?

            1. 1

              See the rest of my comment.

              (tl;dr: rich linking is only possible with permanent addresses)

              1. 1

                OK, one thing - if I publish a document under this hypothetical “true hypertext” system, then realize I want to fix a typo, do I have to generate a new permanent address/hash/whatever?

                1. 1

                  Indeed. It is a new version.

                  Now, most likely, you will do it with transclusion: your new document will be a pointer to all the characters before your typo in the previous version, then the replacement text (or a pointer to it), then a pointer to all the characters after your typo in the previous version. It’s not like you’d be storing multiple copies of the same document.
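
                  Roughly like this, as a toy Python sketch (hypothetical names, nothing Xanadu-specific): the new version is just an edit list of pointers into the old version plus the handful of corrected bytes.

                  ```python
                  # The previously published, immutable version, plus its (placeholder) address.
                  old = b"Hypertext is the fuuture of documents."
                  old_addr = "address-of-old-version"

                  typo_start = old.index(b"fuuture")
                  typo_end = typo_start + len(b"fuuture")

                  # The new version is not a second copy of the document; it is an edit list:
                  # spans pointing into the old version, plus a few replacement bytes.
                  new_version = [
                      ("span", old_addr, 0, typo_start),      # everything before the typo
                      ("bytes", b"future"),                   # the replacement text itself
                      ("span", old_addr, typo_end, len(old)), # everything after the typo
                  ]

                  def materialize(edit_list, fetch):
                      out = b""
                      for piece in edit_list:
                          if piece[0] == "span":
                              _, addr, start, end = piece
                              out += fetch(addr)[start:end]
                          else:
                              out += piece[1]
                      return out

                  # Only one document exists in this toy example, so "fetching" ignores the address.
                  print(materialize(new_version, lambda addr: old).decode())
                  ```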