  1. 9

    I’m reminded of Freenet in the old days (20 years ago? hah). Back then, freesites were getting around the lack of a name service by providing links to as yet unpublished future updates of their websites, wrapped around image tags from that future which would show as broken until the new version was uploaded. That was slow but worked.

    Judging by this article, it’s unclear that IPFS is any further along than Freenet was back then. That can’t be true – does the difference lie in IPFS making stronger guarantees?

    1. 13

      I was very concerned when I saw that IPFS is connected to Filecoin, because while there’s no reason blockchain technology has to involve energy-intensive / environmentally disastrous proof-of-work like Bitcoin, it frequently does. Happily, as far as I can tell Filecoin does not use proof-of-work and is aware of the problems with proof-of-work: https://filecoin.io/faqs/#what-s-the-difference-between-all-these-proofs

      As long as they continue to avoid proof-of-work and keep an eye on their environmental impact, I will cautiously wish IPFS the best.

      Personally I’m more of a fan of Dat and Beaker Browser. We’re already seeing cool, functioning projects come out of Beaker, such as Duxtape; I think the main thing holding Dat+Beaker back is the lack of a mobile browser.

      1. 3

        The author’s claims about poor documentation are, amusingly, bolstered by the problems with his other arguments. For instance, IPNS (despite the name) is really a way to support the kind of mutability he wants (and has nothing much to do with names), and despite the research he’s done he somehow didn’t manage to understand that.

        IPFS really does ‘just work’ in the sense that a sanely-structured website (one that uses no leading ‘/’ & is totally static) can just be given to a single command & become immediately available. Newcomers like OP have an extremely hard time getting anybody to tell them what that one command is.

        1. 3

          You’re right that IPNS is the built-in way of handling mutable references. The author is right that it’s unusable.

          Not only is IPNS slow and unreliable, but entries also have a TTL of 1 day. This requires entries to be re-published at least that often, or else the name becomes unresolvable (which is bad news if it’s hard-coded anywhere, like a DNS record).

          In the short term this is annoying, since it places a constant maintenance burden on the publisher. Keep in mind that we’re talking about static sites being served in a decentralised way from a multitude of unreliable peers, which is otherwise “fire and forget”. Even though this might only require a simple cron job, the site owner might need to set up reliable, single-point-of-failure infrastructure like a VPS just to run that cron job. Perhaps someone could offer this as a service, but that (a) defeats the point of IPNS being distributed and (b) requires handing over private keys. Also worth noting that this is orders of magnitude more maintenance than DNS (my domain name is renewed once a year, not once a day).
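
          As an illustration, here’s a minimal sketch of such a republishing job (the script path and hash file are hypothetical; ipfs name publish is the command that re-signs and re-announces the record):

          ```sh
          #!/bin/sh
          # Hypothetical daily refresh job, run from cron, e.g.:
          #   0 3 * * * /usr/local/bin/ipns-refresh.sh
          # Republishing keeps the IPNS record from expiring;
          # site.hash is assumed to hold the site's current root hash.
          HASH=$(cat /var/www/site.hash)
          ipfs name publish "/ipfs/$HASH"
          ```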

          In the long term this is a race condition waiting to happen. After publishing an update, we might find it gets overwritten by a re-publishing of the previous entry that was happening concurrently. Considering how slow IPNS can be, I can imagine this being quite likely. Avoiding this would require some coordination layer between the updating mechanism and the refreshing mechanism, like a lock file. Whilst the solution is pretty trivial (again, things become trivial if we don’t mind introducing single points of failure), it seems silly to me considering that IPNS itself is meant to be a coordination layer.
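
          A sketch of what that coordination could look like, assuming both the updater and the daily refresher keep the current hash in a shared file (all paths hypothetical):

          ```sh
          # Run the read-hash-and-publish step under an exclusive lock.
          # The updater must also write site.hash and publish while
          # holding the same lock, so a slow refresh can never clobber
          # a concurrent update with a stale hash.
          (
            flock -x 9
            HASH=$(cat /var/www/site.hash)
            ipfs name publish "/ipfs/$HASH"
          ) 9>/var/lock/ipns-publish.lock
          ```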

          a sanely-structured website (one that uses no leading ‘/’ & is totally static)

          That’s a No True Scotsman fallacy, since it implies that using absolute URLs is ‘insane’ rather than ‘a perfectly sensible approach for many circumstances’. One of those circumstances was the author’s site, which is why they encountered problems when using IPFS and needed to write a script to relativise their URLs. It’s absolutely right for the author to document the issues they faced, since I imagine many other people trying to do the same thing will hit the same problem. In fact, I know of another person who encountered exactly this issue and had to write a script to relativise their site’s URLs before it would work on IPFS: me!

          If you want a technology like IPFS to be adopted, I would recommend finding out what problems people encounter with it, and either fixing those within the project (if possible) or else pre-empting the issue by pointing out to new users that they might encounter this problem and providing help (like relativising scripts) for those who do. This is what the author did.

          If you want a technology like IPFS to be adopted, I would not recommend brushing aside commonly-encountered problems without offering any help, or inventing your own definition of “sane structure” and blaming prospective users for not following it. That’s an effective way to alienate people.

          1. 1

            entries also have a TTL of 1 day

            This is new information to me, and indeed would make IPNS unusable.

            That’s a No True Scotsman fallacy, since it implies that using absolute URLs is ‘insane’ rather than ‘a perfectly sensible approach for many circumstances’.

            Using absolute paths is a bad practice because it introduces exactly the kind of fragility the author experiences. Literally any rehosting of the HTML within a subdirectory would break such absolute paths (including trying to view the files locally, without a server). In other words: it doesn’t make sense to blame IPFS for the author’s decision to do linking in an extremely fragile way, any more than it makes sense to blame the developers of xfs when the same pages break viewed straight from the filesystem.

            There are cases when absolute paths are necessary, & where dealing with that fragility makes sense. And, if you choose to do it that way, fine. It’s absurd to make a decision like that and then blame a static file serving tool for failing to automatically work around it.

            1. 1

              Using absolute paths is a bad practice because it introduces exactly the kind of fragility the author experiences.

              If you mean “a bad practice when planning to host a site on IPFS” then I agree; if you mean “a bad practice when making Web sites” then this is the same No True Scotsman fallacy as before.

              it doesn’t make sense to blame IPFS for the author’s decision to do linking in an extremely fragile way

              Who’s “blaming” IPFS? I’m certainly not. It could be construed that the author is “blaming” IPFS, but from my reading they’re just pointing out a common pitfall that they hit, which the project didn’t warn them about or help them to work around (and which less-savvy developers might not be able to work around like they did, or like I did).

              If there is anything to “blame”, it would be attempting to switch out the storage/protocol underlying an existing site from HTTP to IPFS and expecting it to work with no changes. Yet I don’t think the author had such an expectation, and neither did I. We both hit a problem, found a way to work around it, and documented those workarounds (here’s my own blog post on the subject). It would be nice if the IPFS project pointed out the problem and provided a solution (e.g. “if you want to upload a Web site to IPFS, you may find that absolute links are broken; here is a script which will update those links, which you can run before ipfs add to avoid the problem”).
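
              To make that concrete, here’s a naive sketch of what such a script could look like (not an official IPFS tool, and it only handles double-quoted root-relative href/src attributes, so treat it as an illustration rather than a drop-in solution):

              ```sh
              #!/bin/sh
              # Naive sketch: rewrite root-relative links as relative
              # ones before running ipfs add.
              # Usage: ./relativise.sh path/to/site
              cd "$1" || exit 1
              find . -name '*.html' | while read -r page; do
                # Depth of the page below the site root,
                # e.g. ./index.html -> 0, ./posts/a.html -> 1
                depth=$(echo "$page" | awk -F/ '{print NF - 2}')
                prefix=""
                i=0
                while [ "$i" -lt "$depth" ]; do
                  prefix="../$prefix"
                  i=$((i + 1))
                done
                # href="/x" -> href="<prefix>x", and likewise for src
                sed -i "s|href=\"/|href=\"$prefix|g; s|src=\"/|src=\"$prefix|g" "$page"
              done
              ```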

              It’s absurd to make a decision like that and then blame a static file serving tool for failing to automatically work around it.

              Nobody is doing that. This is a straw man.

              1. 1

                The author complained that a website written in a fragile and non-portable way needed to be modified in order to work on IPFS. The same website would also need to be modified in the same way to support most rehosting situations aside from being at the top level of a different domain. In other words, the author’s complaint is that they made a poor decision when writing their website, not anything specifically to do with IPFS.

                IPFS is a filesystem (albeit a distributed network filesystem). When you put a directory on a filesystem (instead of putting the contents of that directory onto the root of the filesystem), that directory has a name, and that name is part of the absolute path. If you fail to put the name of the directory you’re referencing in an absolute path, regardless of what filesystem you are using, your absolute path will be wrong and refer to a different (probably nonexistent) file.
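
                A concrete illustration of that (hashes abbreviated and hypothetical):

                ```sh
                ipfs add -r -w mysite
                # added QmSite... mysite
                # added QmWrap...        (the wrapping directory)
                # index.html now lives at /ipfs/QmWrap.../mysite/index.html,
                # so an absolute link to /css/main.css resolves against the
                # gateway root and points at a different, probably
                # nonexistent, file.
                ```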

                Heavy use of absolute paths is fragile not just in web development but in development generally. Their use in older Windows programs is the cause of much consternation among folks who have a bigger secondary disk than their primary disk (a fairly common situation that elaborate hacks exist to work around). All over the Unix build & command-line world, mechanisms exist specifically to make it straightforward to avoid absolute paths. And the virtual directory ‘..’ is available for use in URLs specifically so that absolute paths can be avoided in them.

                Absolute paths should be avoided, in general. They shouldn’t be used in websites. They shouldn’t be used in software. They are fragile & break normal behavior with regard to directory encapsulation. In cases where they must be used, they should have their prefix configurable so that moving the directory doesn’t break all links.

          2. 3

            What is that one command you mention, if you don’t mind me asking?

            1. 2

              It’s something like ipfs add -r -w $path for a completely static subdirectory (which gives you a single root hash for the whole site).

              With regard to IPNS, you need to have initialized IPNS first & then updating the hash involves some other command. I’ll look it up.

              (Basically, how IPNS works is that it’s a static hash that mutably points to some other static hash for a whole directory. So, you have some IPNS hash that you own, and you periodically point it at the output of running ipfs add -r -w on a modified directory.)
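
              (For completeness, the update command in question is ipfs name publish. A sketch of the full cycle, with illustrative hashes, assuming the node’s default ‘self’ key:)

              ```sh
              # Re-add the modified site, then point the IPNS name at it.
              ipfs add -r -w mysite    # prints a new root hash, say QmNewRoot...
              ipfs name publish /ipfs/QmNewRoot...
              # Readers resolve the stable name to its current target:
              ipfs name resolve /ipns/<your-peer-id>
              ```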

            2. 2

              Is the author’s impression wrong that IPNS is unusably slow, to the extent that updating normal DNS on a site update is more workable?

              Do you happen to have an example of such a website that’s hosted on IPFS?

              1. 3

                Is the author’s impression wrong that IPNS is unusably slow, to the extent that updating normal DNS on a site update is more workable?

                IPNS is not, in my experience, any slower than IPFS.

                IPFS has some performance issues (running a daemon is effectively a slow fork-bomb) and there are difficult-to-avoid propagation issues in any totally-distributed network (with respect to which, IPFS is in my experience better than SSB).

                Do you happen to have an example of such a website that’s hosted on IPFS?

                The obvious example is ipfs.io. There are others, & since I’m sure some lobste.rs users are running them, they’ll probably comment with their own. (I don’t run one because if I did, it wouldn’t get enough traffic to remain persistent if my machine went down.)

                1. 3

                  IPNS is not, in my experience, any slower than IPFS.

                  Please note there’s an issue on go-ipfs with the very title “IPNS very slow”, still open as far as I can see. Personally, I haven’t experimented with IPFS/IPNS recently; I’d be very happy to learn, and to see confirmation somewhere, that things have improved since a year ago (which is roughly when I last experimented with them).

                  1. 1

                    I haven’t used it in a couple of years, so it might have gotten worse since then, or it might not have. I do know the forkbomb issue has gotten worse (to the point where I don’t run the ipfs daemon anymore, because I was finding myself needing to kill & restart it every 12 hours to avoid running out of RAM).

                    The open issue you linked to appears to be a case of what I mentioned earlier – decentralized replication can be slow when you have few peers. IPNS is basically an extra layer of abstraction over IPFS, so if you’ve literally just added something, whatever replication delays occur with IPFS will be doubled over IPNS. It’s like putting an archive of torrents up on BitTorrent where both the archive and each torrent inside it are seeded only by you – until you have a certain number of peers, you’re gonna be slow, and a naive DHT won’t prioritize nearby nodes the way it ought to.

                    In other words, this is a side effect of the main design problem with IPFS in general: that no balancing mechanism exists to ensure peers cache & pin content, so content availability is extremely sensitive to popularity in the short term.
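
                    (The manual counterweight, for what it’s worth, is explicit pinning: any peer that wants content to stay available can pin it, which fetches it and exempts it from local garbage collection.)

                    ```sh
                    # Hypothetical root hash; pinning fetches the content
                    # and keeps it from being garbage-collected locally.
                    ipfs pin add QmSomeRootHash...
                    ```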