I hope so, even if it is just for images/resources, not whole sites.
We could get rid of CDNs, for example; popular content might be delivered more quickly thanks to locality, some level of redundancy could be achieved, server hosting costs could decrease, and backwards countries that censor and block websites (yes, that includes you too, UK and Australia) could be routed around.
Also, isn’t distributed HTTP just a darknet? Darknets exist, but up to now they just haven’t been as popular as the normal HTTP internet. Maybe one day a darknet will become more popular and we’ll have headlines like “Will there be a centralised HTTP?”.
Actually, no; Tor and other onion-routing protocols obscure the path between the client and the server, but there’s still a single authoritative server. Except when that hostname or hidden-service name is actually mapped to a CDN, of course, but as the article says that’s still a single source of authority.
These are orthogonal concerns; onion routing keeps the server from knowing the client’s IP, but not from knowing that there is a user. Distributed serving would keep any single server from knowing that there was ever a request for content, but it doesn’t actually protect the client’s IP from the machine that does serve its request. They could sensibly be used together, if the protocols were designed for that.
Onion routing isn’t distributed, but there are other darknets based on ideas like storing content in distributed hash tables. Freenet got a lot of buzz in the early 2000s, though I haven’t kept up with it since then. There’s also a GNU project, GNUnet.
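For anyone unfamiliar with the idea those projects build on, here’s a toy sketch (Python, single-process and in-memory, so it only shows the content-addressing part, not the actual distribution across nodes): content is stored and looked up under the hash of its own bytes, so no single node is the authority for a name.

```python
# Toy content-addressed store: the core idea behind DHT-based darknets.
# Everything is in-memory here; a real DHT spreads this key space over many nodes.
import hashlib

store: dict[str, bytes] = {}  # stand-in for the distributed key/value space


def put(content: bytes) -> str:
    """Insert content under its own SHA-256 hash and return that hash as its address."""
    key = hashlib.sha256(content).hexdigest()
    store[key] = content
    return key


def get(key: str) -> bytes:
    """Retrieve content by hash; any node holding it could answer."""
    return store[key]


key = put(b"hello, distributed web")
assert get(key) == b"hello, distributed web"
```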
Good to know. I hadn’t heard of those efforts but it makes sense that they exist.
But theoretically you could have multiple Tor “servers” that all index the same content. A client would know or be told about them, so it could pick a random one out of, say, 20 for each request, or switch which “server” it uses every 10 minutes, for example. Hashes could be used to make sure the “server” or Tor exit node actually gives you what you requested. If you just care about making a distributed CDN, the hash could even be the only way to reference a resource, i.e. instead of “get me http://www.google.com/image.png”, if the page it was on also had metadata providing a hash, you could just say “give me resource 5328fdef8384”, whatever that is.
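To make that concrete, here’s a minimal sketch of the fetch-and-verify idea (Python, standard library only; the mirror URLs and the short example hash are made up for illustration, and a real system would use full-length hashes and learn mirrors from a directory): the client picks a random mirror, asks for the resource by hash alone, and refuses the reply unless the bytes actually hash to what it asked for.

```python
# Sketch: fetch a resource by its hash from any of several untrusted mirrors,
# then verify the bytes before trusting them. URLs below are hypothetical.
import hashlib
import random
import urllib.request

MIRRORS = [
    "http://mirror-a.example/res/",
    "http://mirror-b.example/res/",
    "http://mirror-c.example/res/",
]


def fetch_by_hash(resource_hash: str) -> bytes:
    """Ask a randomly chosen mirror for a resource identified only by its hash."""
    mirror = random.choice(MIRRORS)  # could also rotate every 10 minutes, etc.
    with urllib.request.urlopen(mirror + resource_hash) as resp:
        data = resp.read()
    # The mirror is untrusted: accept the bytes only if they hash to what we asked for.
    if hashlib.sha256(data).hexdigest() != resource_hash:
        raise ValueError(f"{mirror} returned bytes that don't match {resource_hash}")
    return data


# Usage (hypothetical, full-length hash): image = fetch_by_hash("5328fdef8384...")
```

Because the hash is both the name and the integrity check, it doesn’t matter which mirror (or exit node) answers; a tampered response simply fails verification and the client can try another mirror.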
Yes, this is how you’d integrate the two technologies. :)