1. 21
  1.  

  2. 7

    But—think about this: I don’t have to take on cloud hosting! I don’t need to scale the app! This is a huge relief. URGENT QUESTION: Why are we trying to even do this?

    Erm, yes you do. Unless you can afford to run your computer 24/7 so someone can see the site, or you have maybe 50 other people using Dat who are willing to seed your site.

    The other problem is that Dat is simply a web-only technology; that is, unless the development of libdat has picked up recently from being totally dead. Although you can’t blame them, because last time I looked at the source code there really wasn’t a ‘libdat’ protocol to port: a major chunk of the protocol is built on the back of two Node.js-only libraries that handle DHT swarm communication, and much of it was completely unspecified outside those two implementations. The amount of ‘Dat’ on top of that was simply a port number and a couple of async libraries.

    1. 2

      Agreed that to reliably host a Dat archive, you need to maintain a server, or use a cloud service like https://hashbase.io/.

      The Dat protocol has recently been documented in much more detail, which can be found at https://www.datprotocol.com/ - particularly the “How Dat Works” guide is excellent: https://datprotocol.github.io/how-dat-works/

      Alternate implementations in Rust (https://datrs.yoshuawuyts.com/) and C++ (https://datcxx.github.io/) are in progress. Neither is fully usable yet; the Rust implementation is blocked on the standardization of async APIs in Rust (https://areweasyncyet.rs/). Its lead developer is part of the Rust async working group, so this should all be coming along soon. Currently it’s possible to read the Hypercore data structure, but not to replicate it across the network, as far as I’m aware.
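      Incidentally, the “How Dat Works” guide describes how Hypercore stores its Merkle tree using “flat-tree” indexing: leaves sit at even indices and interior nodes at odd ones, all in one flat array. A minimal Python sketch of that indexing scheme (the function names here are mine, not from any Dat library):

      ```python
      # flat-tree indexing, as described in the "How Dat Works" guide:
      # leaves at even indices, interior nodes at odd indices.

      def depth(i):
          """Depth of node i = number of trailing 1 bits in its index."""
          d = 0
          while i & 1:
              d += 1
              i >>= 1
          return d

      def offset(i):
          """How many same-depth nodes sit to the left of node i."""
          return i >> (depth(i) + 1)

      def index(d, off):
          """Index of the node at depth d and offset off."""
          return (1 << d) - 1 + (off << (d + 1))

      def parent(i):
          """Parent of node i: one level up, half the offset."""
          return index(depth(i) + 1, offset(i) >> 1)
      ```

      With this layout, node 1 is the parent of leaves 0 and 2, node 5 is the parent of leaves 4 and 6, and node 3 roots the first four leaves.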

      1. 1

        As the number of users scale, theoretically the number of users seeding the site also scales, yes?

        This has certainly proven to be true for torrents, where I’ve been able to download even seemingly-obscure torrents at any hour of the day without issue.

        1. 5

          This has certainly proven to be true for torrents, where I’ve been able to download even seemingly-obscure torrents at any hour of the day without issue.

          I have several torrents that get only tens of kilobytes per second from a single peer, and I have encountered numerous others that list several leechers and no seeders. For an example of this, a while back Ctrix handed out USB sticks at his gigs containing previously unreleased music and music that was (used to be?) on his website. That USB was put on filesharing sites, and I’ve seen a magnet link for it. However, the torrent is as dead as a doornail.

          The author says that Dat “truly RESISTS centralization” when it’s the opposite: it openly encourages centralization because of the ‘one writer’ policy. Common sense says that if you can only update a site from one machine (i.e. moving or losing machines causes the site to become immutable), then you’ll run it from a 24/7 hashbase server or something similar.

          Ultimately the question is: Do you want to store a copy of every single site you’ve been to?

          Most users don’t.

          As can be seen with torrenting, it rests in the hands of people who can afford terabyte hard disks, fast internet, and seedboxes. And because what gets seeded is ranked by popularity, the internet equivalent of sites like CNN will get extremely high bandwidth.

          As the number of users scale, theoretically the number of users seeding the site also scales, yes?

          Exactly! So those who can afford servers will be more popular (or conversely, those who are more popular will need servers less), because their site is available 24/7 and they have to spend less on scaling.

          At the risk of sounding over-critical:

          In other words, those who can’t afford seedboxes and fast internet won’t be affected or touched by Dat, while those who already have them will be better off, because they suddenly don’t have to pay as much for those things. This is like the thousands of other efforts by the middle class to aid the ‘poor’ and give ‘power to the people’ – they have no idea what poverty really looks like, they have no experience of being poor, and they haven’t thought about the impacts of what they do past “this will be really cool and might work”.

          1. 3

            They were not that obscure then ;)

            I did try downloading some actually obscure torrents, and sometimes it took several months – if I ever managed to complete them at all.

        2. 2

          And the underlying technology is solid: a binary protocol very similar to Git.[1] (As opposed to Secure Scuttlebutt, which is tons of encrypted JSON.)

          Why is a binary protocol considered solid vs. encrypted JSON? Is SSB not “solid” because it’s JSON? Because it’s encrypted? Because there’s lots of it? I don’t get it. Not to mention the characterisation as “binary protocol very similar to Git” is just wrong. Dat uses protobufs! Git doesn’t!
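          For what it’s worth, the ‘binary’ part of Dat’s wire protocol is, as I understand it, mostly varint length-prefixed protobuf messages. A rough Python sketch of that style of framing – treat it as an illustration of the technique, not Dat’s exact format:

          ```python
          # Varint length-prefix framing, protobuf-style: 7 payload bits
          # per byte, high bit set on every byte except the last.

          def encode_varint(n):
              out = bytearray()
              while True:
                  b = n & 0x7F
                  n >>= 7
                  if n:
                      out.append(b | 0x80)
                  else:
                      out.append(b)
                      return bytes(out)

          def decode_varint(buf, pos=0):
              """Return (value, position just after the varint)."""
              value, shift = 0, 0
              while True:
                  b = buf[pos]
                  pos += 1
                  value |= (b & 0x7F) << shift
                  if not b & 0x80:
                      return value, pos
                  shift += 7

          def frame(payload):
              """Prefix a message with its varint-encoded length."""
              return encode_varint(len(payload)) + payload
          ```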

          1. 3

            SSB is janky because it’s signed JSON, except the signature is stored inside the JSON and must be the last key in the root object (and the signature is of ‘all JSON text up to this signature’).

            Not only is that pointlessly hard to implement for no good reason, it introduces exciting new ways to mess up signature verification (e.g. by verifying up to the signature but then allowing additional keys/values after the signature).
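            To make that footgun concrete, here’s a toy Python illustration of the verification mistake – HMAC stands in for SSB’s actual ed25519 signatures and the serialization is simplified, so none of this is SSB’s real wire format:

            ```python
            import hashlib
            import hmac

            KEY = b"not-a-real-key"  # toy stand-in: real SSB uses ed25519 keypairs

            def sign(text):
                """'Sign' all JSON text up to the signature (HMAC stand-in)."""
                return hmac.new(KEY, text.encode(), hashlib.sha256).hexdigest()

            def make_signed(pairs):
                """Serialize key/value pairs with the signature as the last key."""
                head = "{\n" + ",\n".join('  "%s": "%s"' % kv for kv in pairs)
                return head + ',\n  "signature": "%s"\n}' % sign(head)

            def naive_verify(raw):
                """The footgun: verify the text up to the signature, but
                never check that the signature is actually the *last* key."""
                head, _, tail = raw.partition(',\n  "signature": "')
                return tail[:64] == sign(head)  # 64 = hex sha256 length

            msg = make_signed([("author", "alice"), ("text", "hi")])
            assert naive_verify(msg)

            # Appending keys *after* the signature slips past this check:
            tampered = msg[:-2] + ',\n  "text": "send money"\n}'
            assert naive_verify(tampered)
            ```

            A correct verifier also has to reject any message where the signature is not the final key – exactly the check that is easy to forget.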

            1. 1

              Sure, but the comparison is between “solid: a binary protocol very similar to Git” (which is completely off-base) and “tons of encrypted JSON” – it’s not obvious that a “binary protocol” is somehow better or more ‘solid’ than encrypted JSON. I take issue with the writing being inaccurate in multiple dimensions; I’m not talking about SSB per se.

              1. 2

                Yeah, that’s sloppy writing.

              2. 1

                wow. why don’t they fix this?

                1. 2

                  They are, but it’s slow because there are many, many SSB nodes in production, and most of them have very slow internet connections (one of the main selling points of SSB is for use on e.g. a yacht, or remote farm), so you can’t assume anyone will update their software quickly.

                  In a few years time, there will probably be sensible crypto rolled out across the SSB network.

            2. 1

              How does resolution work? In Beaker, if I type in lobste.rs it seems to resolve to https://lobste.rs, but what if someone hosted a file that was equivalent to dat://lobste.rs?

              Sorry if it’s a trivial question for those in the know, but that’s what immediately comes to mind.

              1. 2

                DNS resolution works either via a DNS TXT record or an HTTPS request to /.well-known/dat. This is specified here: https://www.datprotocol.com/deps/0005-dns/

                An example is https://beakerbrowser.com/.well-known/dat.

                This ensures that only the owner of the domain (or the owner of the page hosted on the domain) is able to publish a Dat site using that name.
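                The body of that /.well-known/dat file is just a dat:// URL containing the archive’s public key, optionally followed by a TTL line. A small Python sketch of a parser for it (the 3600-second default is my assumption; see DEP-0005 for the authoritative details):

                ```python
                import re

                def parse_well_known_dat(body):
                    """Parse a /.well-known/dat response body: a dat:// URL
                    with the archive's 64-hex-char public key on the first
                    line, optionally followed by a TTL=<seconds> line."""
                    lines = body.strip().splitlines()
                    m = re.match(r"dat://([0-9a-f]{64})/?$", lines[0].strip())
                    if not m:
                        raise ValueError("not a well-known dat response")
                    ttl = 3600  # assumed default, not from the spec
                    for line in lines[1:]:
                        if line.strip().lower().startswith("ttl="):
                            ttl = int(line.strip()[4:])
                    return m.group(1), ttl
                ```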

                1. 1

                  If you visit an https:// website that also supports dat://, you’ll see [an] indicator in Beaker’s URL bar

                2. 1

                  IPFS is really cool—but how do I surf it?

                  Uhm… with your normal web browser? There’s an extension for major browsers if you want it to be extra-seamless, and you need either that extension or a local node installed to use it in a truly distributed way, but it even works in normal web browsers via the many public gateways.
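                  Surfing via a public gateway is just a URL rewrite: path-style gateways serve content at https://&lt;gateway&gt;/ipfs/&lt;cid&gt;/&lt;path&gt;. A tiny Python helper as a sketch (the helper name and the example CID are made up):

                  ```python
                  def via_gateway(url, gateway="https://ipfs.io"):
                      """Rewrite an ipfs:// or ipns:// URL to a path-style
                      public-gateway URL a normal browser can fetch."""
                      for scheme in ("ipfs", "ipns"):
                          prefix = scheme + "://"
                          if url.startswith(prefix):
                              return "%s/%s/%s" % (gateway, scheme, url[len(prefix):])
                      return url  # plain http(s) URLs pass through unchanged

                  # e.g. via_gateway("ipfs://<cid>/readme.txt")
                  #   -> "https://ipfs.io/ipfs/<cid>/readme.txt"
                  ```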