1. 13

  2. 9

    Tech has a short memory lately, and I would like future implementors to learn not only the lessons of the web but the lessons of pre-web hypertext systems (which often solved problems that the web has yet to address).

    I do wish the author had followed this with lessons from history. I found the requirements list interesting, but I would likely give the requirements more weight if they were tied to the specific lessons that informed them.

    1. 6

      There are a number of issues with these ideas, but there are two in particular I want to draw attention to.

      All byte spans are available to any user with a proper address. However, they may be encrypted, and access control can be performed via the distribution of keys for decrypting the content at particular permanent addresses.

      While perpetually tempting, security through encryption keys has the major drawback that it is non-revocable (you can’t remove access once it’s been granted). As a result, over time it inevitably fails open; the keys leak and more and more people have access until everyone does. This is a major drawback of any security system based only on knowledge of some secret; we’ve seen it with NFS filehandles and we’ve seen it with capabilities, among others. Useful security/access control systems must cope with secrets leaking and people changing their minds about who is allowed access. Otherwise you should leave all access control out and admit honestly that all content is (eventually) public, instead of tacitly misleading people.
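
      For concreteness, here is a minimal sketch of that model and its failure mode, assuming a content-addressed store and the Python `cryptography` package; all names here are illustrative, not from the article:

      ```python
      # Hypothetical sketch: access control by key distribution over a
      # content-addressed store (pip install cryptography).
      import hashlib
      from cryptography.fernet import Fernet

      key = Fernet.generate_key()  # the shared secret *is* the ACL
      ciphertext = Fernet(key).encrypt(b"private document")

      # The permanent address is derived from the (publicly fetchable)
      # ciphertext, so anyone can fetch the bytes, but only key holders
      # can read them.
      address = hashlib.sha256(ciphertext).hexdigest()
      assert Fernet(key).decrypt(ciphertext) == b"private document"

      # The failure mode: this model has no revoke() operation. Once `key`
      # has been distributed, nothing the publisher does can stop a key
      # holder (or anyone the key leaks to) from reading the content at
      # `address`, forever.
      ```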

      […] Any application that has downloaded a piece of content serves that content to peers.

      People will object to this, quite strongly and rightfully so. Part of the freedom of your machine belonging to you is the ability to choose what it does and does not do. Simply because you have looked at a piece of content does not mean that you want to use your resources to provide that content to other people.

      1. 1

        Any application that has downloaded a piece of content serves that content to peers.

        The other issue with this is: what if the content is illegal (classified government information, child abuse material, leaked personal health records, etc.)? There are some frameworks, like ZeroNet, where you can choose to stop serving that content, and others, like Freenet, where you don’t even know whether you’re serving that content. (These come with a speed-versus-anonymity trade-off, of course.)

        I do agree with the idea that any content you fetch you should re-serve by default, maybe with some type of blockchain voting system to pass word along to all the peers when some of the content might be questionable, giving each user a chance to delete it.

      2. 3

        Interesting.

        No facility exists for removing content. However, sharable blacklists can be used to prevent particular hashes from being stored locally or served to peers. Takedowns and safe harbor provisions apply not to the service (which has no servers) but to individual users, who (if they choose not to apply those blacklists) are personally liable for whatever information they host.

        This is something I have given some thought to. I agree with things not being removable; however, who controls the blacklists? That’s an extraordinary level of power. Conversely, blacklists are likely to be reactive rather than proactive, so it’s almost certain that at some point a user will end up hosting something that is illegal in one jurisdiction or another, without even being aware of it. Which is also a problem.
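
        Mechanically, such a blacklist is trivial, which underlines that the hard questions here are about governance rather than code. A minimal sketch, assuming SHA-256 content addresses (the function and variable names are hypothetical):

        ```python
        # Hypothetical sketch: a node checks its subscribed blacklists
        # before storing or serving a blob. Which lists to subscribe to,
        # and who gets to publish them, is the contentious part.
        import hashlib

        blacklist: set[str] = {
            # SHA-256 hex digest of b"test", standing in for a real entry
            "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
        }

        def may_serve(blob: bytes) -> bool:
            """Refuse to store or serve content whose hash is blacklisted."""
            return hashlib.sha256(blob).hexdigest() not in blacklist

        print(may_serve(b"test"))   # False: its hash is on the list
        print(may_serve(b"hello"))  # True
        ```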

        1. 5

          The key to making peer-to-peer work is groups. When everyone has to manage moderation, block lists, illegal content, and encryption by themselves, the overhead makes the network difficult, if not impossible, for most people to use.

          If you base these decisions on groups, much of the overhead can be amortized, such that the cost of using the network is not much more than that of a centralized, managed network like Facebook. Like-minded groups (say, /r/science and /r/chemistry) could collaborate on this to further reduce the workload.

          You also get the benefit that TOFU (trust on first use) is per group, not per individual. This greatly decreases the need for, and importance of, manual certificate verification.
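
          A minimal sketch of what per-group TOFU could look like, assuming Ed25519 group keys and the Python `cryptography` package; the pin store and function names are hypothetical:

          ```python
          # Hypothetical sketch: trust-on-first-use at group granularity.
          # Members pin one key per group instead of verifying individual
          # publishers, which is where the amortization comes from.
          from cryptography.exceptions import InvalidSignature
          from cryptography.hazmat.primitives.asymmetric.ed25519 import (
              Ed25519PrivateKey, Ed25519PublicKey,
          )
          from cryptography.hazmat.primitives.serialization import (
              Encoding, PublicFormat,
          )

          pins: dict[str, bytes] = {}  # group name -> key pinned on first use

          def accept_update(group: str, pubkey: bytes,
                            sig: bytes, blocklist: bytes) -> bool:
              """Pin the group key on first contact, then hold the group to it."""
              pinned = pins.setdefault(group, pubkey)  # first contact: pin
              if pinned != pubkey:
                  return False  # key changed since first use: reject
              try:
                  Ed25519PublicKey.from_public_bytes(pinned).verify(sig, blocklist)
                  return True   # one pin covers every group member
              except InvalidSignature:
                  return False

          # Demo: moderators sign the group blocklist once; members only verify.
          sk = Ed25519PrivateKey.generate()
          pk = sk.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
          update = b"blocked-hashes-v1"
          print(accept_update("/r/science", pk, sk.sign(update), update))  # True
          ```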

          1. 2

            That’s a very nice approach actually. Thanks for explaining!

          2. 3

            To be clear, this is also the same thing that corporations deal with. Safe-harbor rules are basically what you would need here.

          3. 2

            A link has one or more targets, represented by a permanent address combined with an optional start offset and length (in bytes).

            In a UTF-8 world, shouldn’t this be characters rather than bytes?
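
            The distinction matters because byte and character offsets diverge as soon as the text leaves ASCII, and a byte-addressed span can even split a code point. A quick illustration in plain Python:

            ```python
            text = "naïve café"
            data = text.encode("utf-8")

            print(len(text))  # 10 characters
            print(len(data))  # 12 bytes: ï and é each take two bytes in UTF-8

            # A byte-addressed span can end mid-code-point and fail to decode:
            try:
                data[:3].decode("utf-8")  # cuts the two-byte ï in half
            except UnicodeDecodeError as err:
                print(err)
            ```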

            1. 1

              PNG and other binary formats are not exactly UTF-8 (I assume you still want to be able to access images via hyperlinks).

              1. 1

                True, but does it make sense to link to a byte position within an image or binary? In the latter case, perhaps — but might it not also make more sense to link to a character within the display version of the binary?