1. 16

This is part 1 of a 4-part series. Here’s

    1. 7

      Awesome, thanks for sharing! I’ve been playing a lot with Yggdrasil lately, and also have gnunet and CJDNS / Hyperboria on my radar.

      1. 1

        Oooh, Yggdrasil looks like fun, I should try it out sometime.

    2. 6

      This is all very exciting technologically, but a point that is rarely discussed is that the web is not centralized out of (pure) evilness on the part of corporate giants.

      The single key reason for centralization, I believe, lies in economics: larger players enjoy gains from scale, are further down the learning curve and in some cases can form network effects (positive externalities).

      It is simply cheaper to run larger data centers, cumulative experience matters a lot in software development, and network effects are evident in market places and social media.

      The risks from concentrating too much power in the hands of a few giant corporations must be balanced against the costs of running smaller tech organizations.

      Of course novel technology can reshape the economics that lead to centralization, but I think most discussions miss the economic causes.

      1. 9

        The network itself remains very important.

        I remember the advent of ADSL in France. Emphasis on Asymmetric DSL. Before that everyone was on dial-up, and cell phones were still fairly uncommon. And since dial-up modems basically tie up your phone line while you’re connected, no one left their connection open indefinitely. Having a server at home was thus a pretty big no-no. The market picked up on that, and concluded people didn’t “want” to have servers at home.

        So when DSL came about, this supposedly “wise” and “efficient” market sacrificed upload bandwidth to give customers more download bandwidth. Dial-up showed people don’t “want” to run servers anyway, and the fact that DSL connections let us use our regular phone at the same time totally won’t change anything. Or so the market thought. And what do you know, the market was right: people still didn’t run servers at home, and it totally had nothing to do with the asymmetry of the bandwidth… you get the idea.

        Starting from there, there was no market for servers, low-power servers, easy to use servers, distributed social networks in a box, photo and video sharing from a home server… you get the idea: had bandwidth been symmetric from the start, peer-to-peer protocols like BitTorrent would have entirely voided the need for all centralised services except search. And it’s not at all certain that the global costs would have been any higher: with centralisation, data tend to travel longer distances, and that’s not free. Besides, pretty much everyone has a router at home that is beefy enough to host all kinds of services. And with peer-to-peer protocols it wouldn’t even need to scale.

        But with the network we have now, I guess I’ll have to let Google and Amazon decide what I want to eat for breakfast…

        1. 3

          I’m not defending market allocation as wise and efficient, I’m pointing to benefits of scale, learning and network effects. There are lots of suboptimal equilibria in economics.

          1. 3

            “suboptimal equilibria” is an interesting way to refer to the horrors of the last century of globalized markets

            1. 1

              It’s a technical term in economics, but there’s a cool example that shows it is not related only to markets.

              If you have two ISPs, each covering part of a graph of nodes, each has the incentive to deliver a packet as quickly as possible to the other ISP. Then the packet goes through the other ISP, possibly making the combined path larger than the shortest path. Both ISPs are worse off if there is no regulation.
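
              To make that concrete, here is a toy sketch of that “hot potato” hand-off (the topology and link costs are made up, and it assumes the networkx library):

                  import networkx as nx

                  G = nx.Graph()
                  G.add_edge("A-West", "A-East", weight=5)    # ISP A has a fast backbone
                  G.add_edge("B-West", "B-East", weight=20)   # ISP B has a slow backbone
                  G.add_edge("A-West", "B-West", weight=1)    # West peering point
                  G.add_edge("A-East", "B-East", weight=1)    # East peering point
                  G.add_edge("src", "A-West", weight=1)       # source is a customer of A
                  G.add_edge("dst", "B-East", weight=1)       # destination is a customer of B

                  # What a single cooperating operator would pick: the globally shortest path.
                  best = nx.shortest_path(G, "src", "dst", weight="weight")
                  print(best, nx.path_weight(G, best, weight="weight"))   # cost 8

                  # "Hot potato": A dumps the packet at its nearest exit (West), so B must
                  # haul it across its slow backbone. Each ISP minimises its own cost, and
                  # the combined path ends up much longer: a suboptimal equilibrium.
                  hot = ["src", "A-West", "B-West", "B-East", "dst"]
                  print(hot, nx.path_weight(G, hot, weight="weight"))     # cost 23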

              I feel the sarcasm in your comment, but I think it is as detrimental to the economics profession as abusing open source contributors is detrimental to the software development profession. For that reason, I would like you to consider the following arguments, because they might convince you.

              (For the record, my general political stance is to the left, and I loathe the egoistical morals that arise strongly in a free market economy.)

              1. that the “horrors of globalized markets” must be compared to alternatives, i.e., non-market economies, for which only bad examples exist nowadays. Soviet economies in the long run collapsed because central planning is really too complex. China can be cited as an example, but it does feature markets. Primitive, pre-agricultural societies are speculated to have featured a better quality of life than today, but hunting and gathering do not scale. That is, the horrors must be compared to the alternative horrors.

              2. that the “horrors of the globalized markets” are mostly attributable to fraud, greed, lack of proper regulations and governance, and not attributable to markets as a mechanism for allocating resources.

              3. that new investments that are more efficient can instantly depreciate older, less efficient investments in capital that is not reallocatable. If you invest in a new factory that is much more efficient, old factories might become worthless before they are paid back. If you hire cheaper workers, more expensive local workers might not be able to find a new occupation; that is one of the negative outcomes of competition, and still characterizable as a suboptimal equilibrium.

              1. 1

                How can any particular act of fraud or greed be described as exterior to that act’s historical social conditions? You argued that “alternatives” to markets are necessarily worse off because of their worst anecdotal/particular features; if there is No True Scotsman for the social conditions associated with markets, why is the same not true in your view for the social conditions associated with non-markets? It also raises the question: do you or do you not believe that the behavior of people is the product of their environment?

                Honestly, I think here not about just “the market” or really even “globalized markets” but seriously speaking I think of the history of capital accumulation instead because it is, in my opinion, not only easier to define and identify discretely than some idealized social system of “capitalism” or an abstract notion of trans-historical market relationships (“a mechanism for allocating resources”) but can be easily understood as particular historical events whose effects, such as materially unnecessary suffering, can be directly observed. I look across that definite history, and I see a violently self-expanding phenomenon, whose blood trails aren’t at all accidental or difficult to correlate. I can come up with a litany of my own views on the particulars, which I definitely prefer to vague conversations about abstract signifiers, but I have yet to be satisfied by any argument to the contrary. No person has (or likely ever could) convince me, for example, the Bhopal Disaster was just accidental to the process of capital accumulation. You may not even disagree with me when I’ve worded my view on this in this way, but that is at least how I see it.

                Additionally, I don’t understand how the cheapening of labor can be seen as a “suboptimal equilibria” of any real market when it appears in measuring our ongoing epoch of industrial capital to be a tendency of all mass forms of waged labor (which become increasingly dominant), with the temporary and localized exception of class struggle, such as through unionization or the passage of beneficial legislation, stalling the ever-falling value of living labor in the face of a ceaselessly growing mass of dead labor.

              2. 1

                that the “horrors of globalized markets” must be compared to alternatives, i.e., non-market economies, for which only bad examples exist nowadays.

                If we stick to totalitarian examples only. Partially non-market economies aren’t that awful. I’m speaking specifically of the public sector amidst an otherwise market economy: administration, police and justice (that you’d find even in a “minimal state”), but also firemen, schools, roads, trains, energy, water, health care, unemployment insurance, retirement plans. I’m sure I’m forgetting some.

                Point is, most US citizens would recoil in horror at the “Socialism” implied by taking all of the above out of the market, and giving it to this inevitably bad state “monopoly”. But it worked in France for decades. Many of our problems come from dismantling this public infrastructure and giving it to the market.

                In fact, whether something should be given to the free market or commanded by the state (or local public institutions) should be decided on a case-by-case basis. For instance, internet provision should probably be a mix of the two: the cables themselves are a big investment, best undertaken by the state (and it should be optic fibre with symmetric bandwidth, since (i) symmetric is Good™ and (ii) symmetric fibre is the cheapest option anyway). To pay for it, the state operator leases the bandwidth in those cables to anyone who would buy those slices. With one very important caveat: flat rates. That setup has been tried in some regions of France, and the result was a plethora of internet providers, both commercial and non-profit. That’s how you make a free and uncontrollable internet.

                1. 1

                  Yes, completely agree: public goods should be provided by the state, including the Internet. The economic analysis is that of positive externalities. A “minimal state” is not good even in theory, because it excludes the possibility of providing goods that are more efficiently provided by a single, internalizing actor.

                  What’s your view on universal basic income?

                  I tend to favor it.

                  1. 1

                    There are two competing views of unconditional income. Basic Income stems from the idea that we are beings of need, and need some unconditional minimum income even if we don’t contribute. It’s mostly a way to lessen the horrors of capitalism without actually ending it. The other view comes from some form of communism or classical anarchism: it’s called qualification-based salary (a mouthful), and it stems from the idea that we are productive beings, and every citizen ought to receive a salary for this productivity to be recognised. It’s part of an alternative to capitalism.

                    I’m personally not sure where to lean exactly. But there’s one thing we must recognise as soon as possible: our current way of life is not sustainable, and if we keep this up we’re facing not just the collapse of our civilization, but possibly even the extinction of humanity before the end of the 22nd century. Whatever we do, we must not cross the hard limits beyond which the Earth will no longer be habitable by humans, and if at all possible, we should at once preserve what can be, and sacrifice what threatens it.

                    The primary cause of that pending doom, namely the way we produce and consume goods, also known as our economy, has a name: Capitalism. It cannot be saved in the long term, and must be sacrificed. Merely trying to rein it in will either not work, or bind it so tightly it might not be called “capitalism” any more. And then we must pursue a life that doesn’t consume as many resources, yet live long and happy lives, and keep hope for a brighter future.

                    I’m not certain how to go about it, but the usual defeatism about capitalism being the only alternative does not help. As is branding any proposal that’s not capitalism as “we’ve tried this before, it was worse”. Thing is, keeping up with capitalism is likely to reduce the world’s population to a tenth of its highest peak in less than 2 centuries. In practical terms this means war, famine, and illness. Mostly the last 2, fuelled by the first. So now we get to pick our poison: change the system and risk some new variation of some horrible totalitarian rule, or not change the system and risk the Malthusian consequences.


                    If you’re still with me so far, and can entertain the idea that we ought to find alternatives to capitalism, we can start thinking how to proceed from there: what do we want? And how do we decide it?

                    One such proposal, that we can call “communism” though it’s very different from what we have seen in the USSR and China, starts with that qualification-based salary. The idea is to give everyone their minimum income at their political majority. Then, as their “qualification” rises, that income would increase up to some maximum. Typical proposals for that maximum range from 3 to 6 times the base salary. Note that in France, less than 10% of the people are paid more than 3 times the minimum wage, and less than 5% are paid more than 6 times that. So a limit of even 3 isn’t that unreasonable.

                    One reason for that qualification, as opposed to an equal allocation for everyone, is that there is work to be done, and some of that work is a combination of painful, boring, dangerous, unrewarding, specialised… one way or another it’s stuff that not everyone will want to do. Raising one’s qualification is an incentive to do that work. Who exactly decides how and when to raise someone’s qualification is an open problem. Some people will inevitably try to game or cheat the system, so as with everything political we’ll need some kind of trustworthy process, checks and balances and all.

                    Then we need to decide what actually needs to be done. Food and shelter, education, infrastructure, whatever we need to live well enough while making sure we do not render our planet uninhabitable. That probably means ditching a few things, such as airliners or short-lasting electronics (no more buying a new iPhone every couple of years). And as much as I love my own VR setup, we may have to stop producing high-performance computers for mere entertainment purposes (as amazing as computers are, they’re one of our most polluting industries).

                    Then we need to decide how it needs to be done. And even though so far I’ve been talking “communism”, we did try command economies, and that didn’t go so well. Instead the workers should probably decide how they work. We would abolish lucrative property (getting dividends just because you happen to “own” the factory people are working in), but keep usage property (those who work in the factory decide how the factory is actually run). This means localizing and distributing what can be. A bakery for instance has a very limited radius of influence. In a densely populated area it would be barely half a klick. The stakeholders of that bakery would then be all the people within its sphere of influence. They could decide the price of bread, and how much the bakery should be subsidised, if at all. And if it turns out the bakery is profitable, great, those profits go back to the community to subsidise something else (possibly including an increase in the bakers’ qualifications to reward their good work).

                    A nuclear plant on the other hand should be managed at the grid level, and that grid (at least in the EU) has international reach. That’s quite a different game. Anyway, this system is bound to be very complex with lots of disputes, and I bet different areas or cultures would make very different choices. But by keeping things local we change one crucial thing: people can meaningfully participate in politics again (the price of bread after all is a meaningful political decision). If we are to get out of the political apathy I see around me (including, let’s not lie, myself), this ability to actually participate is a necessity.


                    What do I think of Universal Basic Income? I’d wager it’s a necessary component of a much, much larger whole.

          2. 3

            You don’t get my point: one huge reason for the benefits of scale to materialise in the first place is the asymmetric nature of the network, and the centralised nature of the web (one web server for many web clients). If we had symmetric bandwidth from the beginning of cable & DSL, coupled with a web that used a peer-to-peer transport network, the story of the internet would have been very different.

            About the other two… yeah, network effects are a big one, but they become less relevant in federated networks such as email (except when the big ones start to discriminate against small servers), and not a problem at all in truly distributed networks such as BitTorrent. I’m sceptical about knowledge transfer within a single company. I’ve been programming for money for 15 years now, and no company I have ever worked with transferred any significant amount of knowledge. Trivia specific to the company or the project, sure, but nothing fundamental beyond what I’ve learned at school or on my own.

            1. 3

              What I think is even more primary than asymmetry of bandwidth is asymmetry of connections. Asymmetry of bandwidth IMO just grew from the fact that regular consumers were not allocated the scarce resource that a public static IPv4 address is. In order to address that scarcity, solutions like NATs started to develop and took the web in a direction where it is not true that if you can connect A to B, then B can also connect to A. Asymmetry of bandwidth seems to me to be a consequence of that. I totally agree though that the internet would look totally different had we lived in a symmetric world. IPv6 offers such a perspective.

              1. 3

                Good point, I got tunnel vision here. Offering a single IP per consumer, and the subsequent need for NAT, is indeed worse than the bandwidth story. I think it stems from similar causes (nobody had a server at home in dial-up times), but I agree the consequences are even worse.

                Now I almost hope IPv4 becomes unusable enough that everyone switches to IPv6. But I’m afraid we’ll just generalise Carrier Grade NAT instead. In my opinion the regulators should step in: “Internet providers must give each customer at least one public, static /64 IPv6 range, with no restriction”.

            2. 2

              Bluntly, the availability requirements of content consumers, in general, cannot be satisfied by people hosting their own content. Hosting requires specialized knowledge and consistent attention that simply can’t be provided by individuals.

              Centralization delegates these complex requirements to a third-party, which can effectively satisfy them, at an organizational level, without burdening the content producer with the details. It’s an abstraction, basically, which models the perspective of its users. Even if all of the technical costs of self-hosting were reduced to zero, people would still choose centralized hosting.

              1. 2

                Bluntly, the availability requirements of content consumers, in general, cannot be satisfied by people hosting their own content.

                What availability requirements exactly? Sure I like YouTube to be available all the time, but I’m certainly willing to tolerate some of my favourite channels going down from time to time. Which is what will happen with self hosting, and that’s fine.

                Hosting requires specialized knowledge and consistent attention that simply can’t be provided by individuals.

                Hosting with 99.99% availability, sure. In practice, 99.9% is more than enough, and easily achieved with an auto-updating box, even with the odd power outage, or even hardware failure if you can replace the parts fast enough.

                Of course, countries with less reliable power will have more problems.

                Even if all of the technical costs of self-hosting were reduced to zero, people would still choose centralized hosting.

                I blame political apathy.

                1. 1

                  Self-hosting on private uplinks delivers p90 zero-nines availability. The set of people capable of operating a one- or two-nines appliance in their home is statistically zero.

                  1. 2

                    Let’s count, shall we?

                    • One nine means 10% unavailability, or less than 37 days per year.
                    • Two nines means 1% unavailability, or less than 4 days per year.
                    • Three nines means 0.1% unavailability, or less than 9 hours per year.
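
                    In script form, the same arithmetic (nothing fancier than the definition of a nine):

                        def downtime_hours_per_year(nines: int) -> float:
                            unavailability = 10 ** (-nines)   # 1 nine -> 10%, 2 nines -> 1%, ...
                            return unavailability * 365 * 24

                        for n in (1, 2, 3):
                            h = downtime_hours_per_year(n)
                            print(f"{n} nine(s): {h:7.1f} hours/year (~{h / 24:.1f} days)")
                        # 1 -> 876.0 hours (~36.5 days), 2 -> 87.6 (~3.7 days), 3 -> 8.8 (~0.4 days)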

                    I have a router at home (the one that provides internet access), and in about 3 years it only went down for a couple of days because the connection was cut off. That’s well over 2 nines, with zero effort on my part. My grandma, who knows nothing about technology, gets similar performance.

                    I also have a NAS that I share with my brother (installed in my home, accessed by both), and so far the only reason for it to go down was when I moved it from one room to another (5 minutes, so this still allows for 5 nines), or when my connection itself went down. Also zero effort beyond the initial setup.

                    Going beyond that will require some distributed backup tech. Something basic like doubling disk usage and sharing with a single trusted friend or family member would solve most problems and add 1 or 2 nines right off the bat. And I bet something fancier like Reed-Solomon codes could deliver similar reliability while sacrificing much less. Of course it needs to be developed, but the maintenance effort? There isn’t much of one.
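
                    As a back-of-the-envelope check of the “one trusted friend” idea (assuming, generously, that the two homes fail independently):

                        def mirrored_availability(p: float, copies: int = 2) -> float:
                            # Unavailable only if every copy is down at the same time.
                            return 1 - (1 - p) ** copies

                        print(mirrored_availability(0.99))   # 0.9999   (two nines -> four)
                        print(mirrored_availability(0.999))  # 0.999999 (three nines -> six)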


                    You keep asserting that people need skill to host stuff at home. They don’t. They just need the right appliance. The ease of use of centralised systems can extend to “host-at-home” systems as well. If you disagree, please name specific skills people would need, and why those couldn’t be addressed by NAS/router vendors.

                    And if I may, the centralised systems aren’t that reliable either: they often take down your content, or even end your account, for reasons that aren’t always under your control. And some content deemed inappropriate to the biggest platforms can’t be hosted there at all. Not for technical reasons of course, but we ain’t ever going to find Bible Black on YouTube even if the authors wanted to.

              2. 1

                Most torrents I’ve downloaded are an order of magnitude more available than anything ever hosted in CDN’s honestly. You could argue that the complexity is offloaded to the tracker but honestly running something like a tracker is dead simple these days with docker and such.

                1. 1

                  Most torrents I’ve downloaded are an order of magnitude more available than anything ever hosted in CDN’s honestly. You could argue that the complexity is offloaded to the tracker but honestly running something like a tracker is dead simple these days with docker and such.

                  To be clear, you’re saying the user-experienced availability of data from a .torrent file — typically magnet links or whatever? — is an order of magnitude higher than the availability of content on CDNs like Akamai, Cloudflare, or Fastly?

                  1. 2

                    Duh, because the files in the CDN are often taken down by their owners for financial or other reasons. If one CDN provider’s network goes down (it happens), the files go down as well, while torrents require every single peer or tracker for a file to fail.

                    1. 2

                      the files in the CDN are often taken down by their owners for financial or other reasons

                      There are two dimensions of availability under discussion here.

                      1. Availability of content which is currently authorized by the owner for consumption
                      2. Availability of content which was once, but is not currently, authorized by the owner for consumption

                      First, let’s scope these categories. How much content do you believe exists in each of them? (This reduces to: how much content do you believe is generally requested, which has been taken down by the producer?) 50/50? 90/10? 99/1?

                      (Secondary question on that point: do content producers not have the right to delete content they once made available? If not, what are the consequences on the sovereignty of data?)

                      Then, let’s define availability. If I request a file at time t0, then availability means it should arrive in full by some time t0+N, else it is unavailable. What’s a reasonable N? “Indefinite” is not a reasonable answer.

                      I claim that a reasonable answer to the ratio-of-content question is at least 99/1, and more likely 99.999/0.001; and a reasonable answer to the availability question is at least O(single-digit seconds), and more likely O(sub-second). Through that framing, I don’t see how torrents would ever provide a better user experience.

                      I’m sure you don’t agree with this framing. But I think that just speaks to the use cases we’re each starting from. The current internet provides roughly 100ms response latency to individual requests. No rational actor will opt in to a system with inferior performance, absent extraordinary circumstance. So it seems to me that no system that doesn’t meet or beat that performance benchmark can be considered viable.

                      1. 2

                        I know the use cases were different. I just personally don’t care about the former (business-oriented) case while understanding it is important for other people who are not me. Agree on the latency problem for smaller files though. Every decentralized system out there is non-ideal in one way or another (and I don’t mean generally, as I don’t believe there is such a thing as a “general” use case). I also think the “centralized”/“decentralized” dichotomy is quite telling in itself due to the natural tendency of capital investments to centralize. Computer systems only care about such a thing insofar as they are created as a part of an economy; there’s nothing technically speaking that says data or computation must be perfectly distributed or perfectly centralized. Distribution is just a tool that’s deeply underutilized due to the practical dominance of market interests in our society.

                        1. 2

                          If we’re talking about systems that are meant to be used by a non-trivial number of human users, then the concerns which become very important, and which generally disqualify decentralized systems, are addressability and availability.

                          Addressability means that users should be able to efficiently interact with the system via identifier(s) that are humane and authoritative. Humane means something like “can communicate it in a spoken conversation” which is satisfied by a URL or phone number, and not by an IP address or content hash or Bitcoin wallet address. Authoritative means that my youtube.com better be the same as your youtube.com (net, at least).

                          DNS is authoritative. Tor, Onion, ENS, etc. are not.

                          Availability means that users can rely on the service to work, to their level of expectation. People expect that a physical bank might be closed from 5pm to 8am, but anything on the internet is expected to work 100% of the time. There’s some tolerance for outage but it’s maybe like 1%, anything more than that and the service is understood as unreliable.

                          A website with a team of on-call engineers distributed across timezones is available. A video hosted on a home internet connection is not.

                          I’m not aware of any way for decentralized systems to satisfy these requirements. Certainly no decentralized system existing today does so.

      2. 3

        It is worth separating centralisation in terms of infrastructure from centralisation in terms of governance and administration when considering economics. Cloud providers pass on a lot of the cost savings from economies of scale and, as long as there is more than one, they will keep being pressured to do so by the market. With modern FaaS and disaggregated storage offerings, you can build things that are a lot cheaper to run than a single VM (let alone a real computer connected to the Internet). You may be limited to 4-5 cloud providers that can host it cheaply, but thousands of users can deploy separately administered instances of whatever the thing is.

      3. 2

        Enormous agreement re: broad misunderstanding of the forces that motivate centralization. And I also totally agree that the economic perspective is underemphasized. But I think, if we’re talking about systems with a non-trivial number of (human) participants, then centralization is both inevitable and necessary, for sociological reasons.

        I may be able to negotiate consensus with my neighbors to decide who will host the yearly block party, but that stops working beyond an approximately Dunbar-number-ish group of participants. If a system needs to accommodate more than that number of users, then it must define abstractions, and operations on those abstractions, to be practical. You can’t get those things without delegation of trust to some kind of authority.

        Concretely: if I want to propose a vegan option for the block party, I can go to 10 neighbors’ houses and make my case; if I want to propose closing the border to Canada, I cannot realistically canvass every citizen in America to make my case. Equivalently, if I want to make some change on Ethereum, I can’t realistically persuade every participant in the open network, I have to go through some governance mechanism, which is necessarily well-known, well-defined, respected by most/all participants — i.e. centralized.

        IMO no matter how you shape the economics, or iterate on the technology — no amount of BFT protocol cleverness, or trust-less encryption shenanigans, or whatever — it’s fundamentally not possible to decentralize systems at scale.

        1. 5

          You’re talking about governance here. When the limitations are more technical, decentralisation is very possible. For instance, there would be no need for YouTube if:

          • Our bandwidth was symmetric from the start.
          • All videos were distributed through BitTorrent or similar.

          Instead everyone would be able to host their videos from their home connection at relatively little cost. A centralised video search service would probably still have emerged, but the only reason hosting itself is centralised is because the web is centralised.

          1. 3

            I don’t buy this argument. It assumes technically capable users who are able and willing to have their machine on at all times. Even most techies don’t want the hassle of having a server in their cupboard (which also requires maintenance to stay secure, and whose hardware will eventually fail).

            You could argue that specifically bittorrent would be able to deal with nodes dropping out and coming back up, but that assumes there are enough people willing to sacrifice disc space and upload bandwidth on Joe Schmoe’s home videos to ensure there’s always someone’s machine online when you want to watch such a video. Don’t underestimate the convenience that hosted solutions offer.

            This is also the same reason more and more goes into “the cloud” even though that’s not technically necessary. There were actually probably more home servers in the late nineties/early zeroes than nowadays.

            1. 5

              It assumes technically capable users who are able and willing to have their machine on at all times.

              Yeah, there’s just one snag: it’s already the case. Right now, my grandma has a machine at home that is always on, and always connected to the internet: her router. Now I’m not asking that people actually administer their home servers. There is such a thing as usable software. The only reason having a server is such a PITA right now is because of that very router, with NAT, firewalls etc… whose configuration interface isn’t under the control of any software provider (another avoidable cause of centralisation: even if I play a 2-player game, the only usable option is hole punching the NAT with the help of a central server… or just using a central server from the start).
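
              For the record, that hole-punching dance looks roughly like this; a minimal sketch only, with a made-up rendezvous address and message format, and symmetric NATs will still defeat it:

                  import socket

                  RENDEZVOUS = ("rendezvous.example.org", 9999)   # hypothetical coordination server

                  def punch_hole(game_id: str):
                      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                      s.bind(("0.0.0.0", 0))
                      # The server learns our public (NAT-mapped) endpoint from this packet,
                      # and replies with the other player's public endpoint as "ip:port".
                      s.sendto(b"register " + game_id.encode(), RENDEZVOUS)
                      data, _ = s.recvfrom(1024)
                      host, port = data.decode().rsplit(":", 1)
                      peer = (host, int(port))
                      s.settimeout(1.0)
                      # Both sides now send at the same time; the outgoing packets open a
                      # mapping in each NAT that lets the incoming packets through.
                      for _ in range(10):
                          s.sendto(b"hello", peer)
                          try:
                              _, addr = s.recvfrom(1024)
                              return s, addr              # direct UDP path established
                          except socket.timeout:
                              continue
                      return None, None                   # fall back to a relay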

              You could argue that specifically bittorrent would be able to deal with nodes dropping out and coming back up, but that assumes there are enough people willing to sacrifice disc space and upload bandwidth

              The upload bandwidth is only a problem because our bandwidth is asymmetric. If it was symmetric, people would naturally upload just as much as they download, supply would naturally equal demand, and it would all scale up and down without any problem. The disk space would not be a problem either: if everyone just uploaded what they were downloading right now (thanks to symmetric bandwidth), they would only “sacrifice” space to store stuff they actually want. And they can stop sharing not long after having downloaded what they need.

              As long as there’s one primary source in the world that is dedicated to distributing something they think is important (such as the videos of my cute cat), anyone who wants to download it can. That part is not much different from the web, actually. It just scales better without imposing all the costs on the distributor.

              Alas, our bandwidth is asymmetric and people have little upload to spare. In this world (our world), you’re correct. My conclusion from this is that asymmetric bandwidth is one of the biggest causes of the internet becoming ever more centralised. That, and the hegemony of HTTP.

              1. 2

                And the router my parents have at home occasionally (once a month?) craps itself and has to be restarted. I know because my off-site backups stop working and I have to ask my dad to go and do stuff with it. We have symmetric bandwidth, or at least high enough upload, in some places and it doesn’t change things: very few people host stuff at home.

                1. 2

                  And the router my parents have at home occasionally (once a month?) craps itself and has to be restarted.

                  Sure. Nothing is perfectly reliable. But thanks to asymmetric bandwidth, there never was a market for 99.99% availability in home routers/servers.

                  We have symmetric bandwidth, or at least high enough upload, in some places and it doesn’t change things: very few people host stuff at home.

                  Again, no market. And even if we solve the asymmetry now, the cloud players are too entrenched by now.

          2. 2

            there would be no need for YouTube if our bandwidth was symmetric from the start [and] all videos were distributed through BitTorrent or similar

            The assertion here is that if we can reduce bandwidth costs to zero, and provide a practical solution to addressability, most people would choose to self-host rather than upload content to centralized services?

            1. 2

              I’m not making any claim about now. I’m making a counterfactual claim about the past: if bandwidth had been symmetric, and we had usable server boxes from the start, most of those centralised services would likely not have risen to power to begin with. (A single box provider probably would have though, just like Microsoft did with Windows. But I think it would have been a lesser problem.)

              Now that they have however, switching away from them is going to be a totally different game. Without some strong regulation (say, mandate adversarial interoperability for stuff like Facebook), I’m not sure this is even possible. They’re too entrenched now.

              1. 1

                if bandwidth had been symmetric, and we had usable server boxes from the start, most of those centralised services would likely not have risen to power to begin with.

                Keeping a server up over time requires continuous attention and specialized knowledge which is unavailable to everyone except a tiny niche of people. These costs dominate the calculus that drives hosting decisions. No amount of UX improvement or bandwidth cost reduction will allow a .mp4 hosted on my mother’s home internet connection to meet user expectations. Two nines, hell, one nine, is uptime that’s only possible with round-the-clock on-call engineering staff. And that’s only possible to provide in centralized organizations. Centralization isn’t some pathological side-effect of market forces, it’s Actually Good™ because it delivers net better results to all stakeholders.

                1. 2

                  Keeping a server up over time requires continuous attention and specialized knowledge which is unavailable to everyone except a tiny niche of people.

                  I’m repeating myself, but you really need to state the specialized knowledge you’re speaking of.

                  I’ve had a router at home for over 15 years now, and the only reason it ever went down was because I was moving to another house, or because the line itself went down. That’s between 2 and 3 nines out of the box, before we even speak of distributed backup. (2 nines means 1% downtime, or 3 days and 15 hours per year). Nobody was ever on call for that. And both my Mom and my Grandma enjoyed similar availability on their own connections (likely a bit more since they haven’t moved in the last 20 years).

                  I’m also not sure why you’re talking about bandwidth costs at all: sure, to deliver stuff fast you need enough bandwidth, but protocols like BitTorrent aren’t limited by the origin’s upload, they’re limited by everyone’s upload. If everyone can download at 1MB/s and upload at 100KB/s, the average torrent will download at 100KB/s. But if both upload and download are capped at 500KB/s, then download will reach 500KB/s. And the beautiful part is that it doesn’t matter whether there’s 1 peer or 1 million: everyone gives a bit of their upload, so increasing a server’s popularity does not increase its bandwidth costs.
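
                  A toy steady-state model of that claim (it ignores seeders that linger, protocol overhead, choking and so on, so the numbers are purely illustrative):

                      def per_peer_rate(n_peers: int, up_kbps: float, down_kbps: float) -> float:
                          # Aggregate download can't exceed aggregate upload, and each
                          # peer is also capped by its own downlink.
                          total_upload = n_peers * up_kbps
                          return min(down_kbps, total_upload / n_peers)

                      # Asymmetric link (1 MB/s down, 100 KB/s up): the swarm crawls at ~100 KB/s.
                      print(per_peer_rate(1_000_000, up_kbps=100, down_kbps=1000))
                      # Symmetric link (500 KB/s both ways): the swarm runs at ~500 KB/s,
                      # and the swarm size doesn't change the result.
                      print(per_peer_rate(10, up_kbps=500, down_kbps=500))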

                  But we got asymmetric bandwidth, so the network as a whole had much less upload capability, and so in our world torrents tend to download fairly slowly. Much slower than something hosted centrally. But that’s only because our network is asymmetric, and therefore optimised for centralisation.

                  Centralisation isn’t an inevitability inscribed in the laws of physics. If anything in fact, I would guess the laws of physics (specifically the speed of light) would likely favour decentralisation, because it allows data to go through shorter paths without necessarily needing CDNs or similar huge cache systems.

                  1. 2

                    Centralisation isn’t an inevitability inscribed in the laws of physics

                    Physics, no. Systems used by humans, yes, it actually is.

                    1. 2

                      How then? Is it something as vague as “people seek power at the expense of others, therefore inequality”? Or do you have actual mechanisms in mind?

                      1. 2

                        It’s not a question of power or equality, it’s a question of human nature.

                        Humans interacting with systems express intent, and evaluate outcomes, invariant of the implementation of those systems. When I send a payment to Visa, I have the domain concepts of “me” and “Visa” and “my payment”. If I fat-fingered the recipient of my payment, it’s absolutely necessary that I can petition a well-defined, singular, authoritative Visa entity to fix that mistake.

                        You can’t assert that Visa is a decentralized abstraction, and so has no singular — centralized — authority that I can address when there is a problem, and thus that my mistake is just too bad and I have to deal with it. That system is inhumane and non-viable.

                        Visa is, to me, a single centralized thing. That makes the singular-Visa model of Visa the truth, the actual reality. Humans define what is real. And we have a basically fixed cognitive capacity. Abstractions are how we extend that cognitive capacity to higher-order outcomes. And abstractions require a minimum set of trust assumptions to be useful and non-leaky. Centralization is effectively the only way to establish those trust invariants.

                        1. 2

                          Sounds like you’re saying that humans are lazy and apathetic, they don’t think of the non-immediate consequences of their actions, and will just reach for the most comfortable solution regardless… And to some extent, you’re right. But is it a reason to give up? I’m okay with you being a cynic, but I’m not ready to sink with you just yet.

                          It’s interesting that you brought up Visa specifically. First, let’s get the technical point out of the way:

                          If I fat-fingered the recipient of my payment, it’s absolutely necessary that I can petition a well-defined, singular, authoritative —

                          1. I think you mean something like a judge.
                          2. You could ask the same of your payment provider even if it was much smaller than Visa. Banking delays are there for this very reason, in fact.

                          Now, it turns out the Visa/Mastercard duopoly is strangling some businesses and non-profits so hard that they’re effectively banning them. One casualty is a good chunk of the porn business. We could be tempted to say good riddance to this immoral den of women’s exploitation, but shouldn’t this be discussed democratically? Another casualty is Wikileaks, the most impactful journalistic organisation of the century, responsible for countless revelations in the entire world. (By the way, Assange is slowly dying at the hand of the USA and England, because of his Pulitzer-worthy work.)

                          It seems to me that a sufficiently diverse set of payment processing offers (in other words a decentralised payment system) is a necessary component of healthy democracies, without which speech isn’t as free as it should be. For similar reasons, a sufficiently diverse set of internet hosting platforms is just as necessary.

                          I concede that there are forces for centralisation. They’re just not overwhelming. At the very least, there is such a thing as anti-trust laws and forcibly splitting companies. Preventing and reversing excessive centralisation may be a hassle, or even difficult, but the alternative is nothing less than the death of democracy and freedom.

                          I’m not quite ready to let that happen.

                          1. 2

                            Now, it turns out the Visa/Mastercard duopoly is strangling some businesses and non-profits . . .

                            You touch on an important dimension of the problem space which deserves attention. This should be fixed, I agree. But you have to address stuff like this while maintaining the essential properties of the underlying system(s). If your solution to porn businesses not being able to accept Visa payments means users who get defrauded of some payment or fat-finger a transfer or forget their master key or whatever don’t have any practical recourse, then that’s not a solution, it’s a regression.

                            I think what this boils down to is a disconnect on the nature of large-scale systems as they exist today. The fundamental properties of these large-scale systems are the result of thousands of years of evolution, thousands of years of toil and experimentation and course correction. I’m no advocate of the status quo for its own sake, but I’m definitely an advocate for understanding the history and rationale for something before advocating change — a Chesterton’s Fence maximalist, let’s say.

                            Why is the centralization fence in the field? Because it’s the natural outcome for large-scale systems. It’s not pathological, it’s inevitable, and Actually Good™. Humans have fixed cognitive capacity, scaling that capacity non-linearly requires abstraction, effective (non-leaky) abstractions require authoritative definitions, and authority requires centralization. Not complicated.

                            1. 1

                              If your solution to porn businesses not being able to accept Visa payments means users who get defrauded of some payment or fat-finger a transfer or forget their master key or whatever don’t have any practical recourse

                              That’s a big if. If I recall correctly, users already had that kind of recourse before everything got centralised that way.

                              Humans have fixed cognitive capacity, scaling that capacity non-linearly requires abstraction, effective (non-leaky) abstractions require authoritative definitions, and authority requires centralization.

                              You lost me at the very last word. The kind of centralisation you should speak of here is a standard, either formal or emergent. One standard everyone agrees on, one standard to save everyone their cognitive resources, yet you can have many providers.

                              Some of the best examples I can think of are IP, TCP, UDP, and BitTorrent. One standard, many implementations, decentralised network. You get the cognitive benefits of centralisation without paying for it with a monopoly.

                              The fundamental properties of these large-scale systems are the result of thousands of years of evolution, thousands of years of toil and experimentation and course correction.

                              Evolution remains a mindless process that doesn’t have the same goals as us humans. I wouldn’t so readily treat it as an authoritative source.

                              Why is the centralization fence in the field? Because it’s the natural outcome for large-scale systems. It’s not pathological, it’s inevitable, and Actually Good™.

                              Standards are good. Monopolies… my political opinion is that they’re bad. And interestingly, we can observe diseconomies of scale too. We’ve seen for instance different branches of the same company lobby governments for opposite bills (I recall one example with Universal).

                              Also, how such huge conglomerates rise isn’t all that pretty. Though there is such a thing as “winner takes all”, once companies exceed a certain size they tend to squash competition with things other than efficiency or serving the customer better. They buy competitors, prevent interoperability, lobby for regulations small players can’t follow… it’s not exactly the free and unbiased competition we’ve been sold.

                              1. 1

                                Some of the best examples I can think of are IP, TCP, UDP, and BitTorrent. One standard . . .

                                These are protocols. Protocols are things defined well below the level of authorities understood by users. No human being gives a shit about TCP or BitTorrent, they care about the things that those things provide.

                                Following…

                                The kind of centralisation you should speak of here is a standard, either formal or emergent. One standard everyone agrees on, one standard to save everyone their cognitive resources, yet you can have many providers.

                                Standards, providers, these things are details which are irrelevant to users. A user expressing the concept of Youtube doesn’t care about any of these things. Youtube is a single conceptual thing, invariant to delivery, protocol, standard, etc. It absolutely doesn’t matter how it’s resolved (via DNS, etc.) or how it’s served (via HTTP, HTTPS, QUIC, etc.) or anything else. It is one abstract thing, existing above those details.

                                The thing the users of the system care about is the higher level, abstract concept of what they express. This is definitionally singular, and definitionally requires authority, which in turn requires centralization. There’s no avoiding this. You can’t decentralize a definition of Youtube.

                                1. 1

                                  The thing the users of the system care about is the higher level, abstract concept of what they express.

                                  I want to see an internet video. Let me click on the button that lets me see internet videos. I want to bypass my country’s Great Firewall. Let me click on the button that lets me bypass that firewall. Got it.

                                  This is definitionally singular,

                                  One button to rule them all. Well, except users around the world won’t click on the same button. They’ll click on their own instance of the button, and depending on culture, languages, personal preference, or disabilities, the ideal button will be a little different. Not exactly singular.

                                  What is singular is the idea that I get access to the same stuff as everyone else. But that doesn’t mean everyone is hosted by the same company. It only means we’re all accessible through the same network.

                                  The Tor network is interesting in this respect, because to reliably give everyone access to the same network (untainted by some Great Firewall), it has to be decentralised. There’s basically one Tor Browser (one button to rule them all I guess), but there are many nodes.

                                  and definitionally requires authority, which in turn requires centralization.

                                  Yes, some authority needs to define the protocols, someone needs to write server/client or peer code, and most people will just use the most popular one. But then again, this protocol and code only needs to give access to the same network.

                                  There’s no avoiding this.

                                  Indeed, there’s no avoiding a single network.

                                  You can’t decentralize a definition of Youtube.

                                  You’re not getting away with a tautology. YouTube is several things, really:

                                  • A video hosting service
                                  • A search & recommendation service
                                  • An ad network.

                                  Unfortunately it’s hard to escape a centralised search & recommendation service, because those are very capital-intensive (they require a significant investment up front). I hate that, but I do have to concede this point.

                                  The ad network… well it’s not that centralised even on YouTube. Many content creators disable ads, or have their videos demonetised, so they rely on “sponsors” instead, and those are quite varied. There are also donation platforms, and we do have very few of them, but I believe the network effects for them are not as strong as the network effects borne out of the need to… just click the button to see internet videos.

                                  The video hosting service however doesn’t need to be centralised in a single company. BitTorrent is proof of this. People need to all be on the same network, they need to be reachable and searchable for sure, but hosting itself (and the rules for hosting) can easily be very decentralised. See PeerTube.

                                  Even the concept of subscribing to a channel has already been demonstrated (and was widely used) with RSS (and then Atom). Even today I get the occasional request that I update my feeds, even though RSS/Atom are supposed to be “dead”. It wouldn’t take much, technically, to write a unified PeerTube client that lets users access all PeerTube servers and watch videos as if everything were hosted in the same place — even though it’s only the same network.
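
                                  A rough sketch of that unified-client idea, merging the feeds of a few channels into one subscription list (the feed URLs are hypothetical; real PeerTube instances expose similar RSS/Atom feeds):

                                      import time
                                      import feedparser  # pip install feedparser

                                      FEEDS = [
                                          # hypothetical channel feeds on two different instances
                                          "https://tube-a.example/feeds/videos.xml?videoChannelId=42",
                                          "https://tube-b.example/feeds/videos.xml?videoChannelId=7",
                                      ]

                                      def merged_subscriptions(urls):
                                          items = []
                                          for url in urls:
                                              for e in feedparser.parse(url).entries:
                                                  when = e.get("published_parsed") or time.gmtime(0)
                                                  items.append((when, e.title, e.link))
                                          items.sort(reverse=True)   # newest first, whatever the server
                                          return [(title, link) for _, title, link in items]

                                      for title, link in merged_subscriptions(FEEDS)[:20]:
                                          print(title, "->", link)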

                                  1. 1

                                    I want to see an internet video.

                                    You want to see an internet video — with a specific identifier X.

                                    The identifier X must be, for all users of the system, no matter where they are, (a) humane (e.g. can be communicated over a phone call); (b) deterministic (i.e. youtube.com/abc cannot ever be one video for me and a different video for you); and (c) available (i.e. fetching youtube.com/abc must succeed 99%+ of the time).

                                    These properties are table stakes. A system which does not satisfy them is non-viable. HTTP URLs work (barely). BitTorrent magnets, IPFS hashes, etc. do not.

                                    Search and recommendation systems translate abstract user query Q to identifier X. Ad networks decorate requests for identifier X with stuff that generates revenue for stakeholders S1, S2, etc. But it all boils down to well-defined resources with the above properties.

                                    The network is an implementation detail, ultimately irrelevant. The user experience is the thing that matters.

                                    I agree that it is theoretically possible for any kind of network, including a decentralized network, to solve this problem. But as far as I’m aware, it is factually impossible to assert an authoritative and verifiable definition of a qualifying identifier X, and practically impossible to offer acceptable levels of availability for the content behind that identifier, without centralization.

                                    If I’m wrong, tell me why! Show me a counter example. I’m honestly eager to be corrected.

                                    1. 1

                                      It seems to me YouTube itself fails to uphold the table stakes you speak of. It blocks videos, sometimes on a country-by-country basis. It arbitrarily unsubscribes people from channels without asking either the subscriber or the author. It has lots of rules for publication that stop authors from publishing many kinds of videos that would otherwise be perfectly legal in the US or the EU. So before we even think of referring to a video, we should think of a system that can reliably publish it.

                                      HTTP URLs work (barely). BitTorrent magnets, IPFS hashes, etc. do not.

                                      YouTube uses random unique identifiers, same as magnet links and IPFS hashes. I mean look at this: https://www.youtube.com/watch?v=l1ujHfWoiOU. Have you ever dictated “l1ujHfWoiOU” over the phone? I tried to, and without a nice monospace font, I never know whether the first character is an uppercase I or a lowercase l. And without those identifiers you do not reliably access even publicly available videos, because search is (i) finicky, (ii) location sensitive, and (iii) some videos are explicitly excluded from search queries (and not just at the behest of the author).

                                      The network is an implementation detail, ultimately irrelevant.

                                      Indeed. I care most about the governance of the network.

                                      That’s the ultimate problem with YouTube. They could use BitTorrent under the hood to cut down their server costs to almost nothing at all, it wouldn’t necessarily change their recommendation system, search queries, or subscription model. People could probably give direct access to their videos even if Alphabet doesn’t want them to, but then we’re back to unreadable hashes.

                                      Still, not all networks can be governed the same way. Good luck decentralising the governance of a single data centre for instance. If we want people to be able to talk to each other without some corporation deciding for everyone, it has to be physically decentralised to begin with. Even here for instance, we’re talking in a public forum handled by a handful of moderators. Bless them for their work, but if one of us gets out of line, we’ll quickly get a reminder that this is not our turf. We wouldn’t have the same restrictions over email, or our respective blogs.

                                      (I’m not saying centralisation is all bad: we gather here in Lobsters mostly of our own accord, and a central-ish governance for the sake of moderation makes a lot of sense here. There are other, more popular forums out there if this one doesn’t suit us. The advantages of moderation and curation are real, and the price is very tame when this is not a near-monopoly like YouTube.)

                                      But as far as I’m aware, it is factually impossible to assert an authoritative and verifiable definition of a qualifying identifier X,

                                      Unless that identifier is a hash. It can’t be copied by hand, so you need computer-aided copy & paste, the ability to click on the link, or a QR-code. Note however that with unreadable hashes comes one little benefit: they can include a certificate or the public key of the publisher. So as long as you get them from a trusted source (a geek friend who is recommending some author, or the bank teller handing you a card with a QR-code on it), what you get is even more authoritative and verifiable than a regular Web URL (which by the way is vulnerable to typo squatting).

                                      It’s just… not readable by humans.
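
                                      To make that concrete, here’s a toy sketch in Python — the dweb:// scheme, the truncated hashes and the placeholder keys are all made up; it’s not any real protocol, just the shape of the idea:

                                      ```python
                                      # Toy sketch (not any existing protocol): an identifier that commits to
                                      # both the publisher's public key and the content, so whatever you fetch
                                      # through it can be checked locally, with no certificate authority.
                                      import hashlib

                                      def make_identifier(publisher_pubkey: bytes, content: bytes) -> str:
                                          key_part = hashlib.sha256(publisher_pubkey).hexdigest()[:16]
                                          content_part = hashlib.sha256(content).hexdigest()[:16]
                                          return f"dweb://{key_part}/{content_part}"   # unreadable, like a magnet link

                                      def verify(identifier: str, publisher_pubkey: bytes, content: bytes) -> bool:
                                          # Anyone holding the identifier can re-derive it and compare.
                                          return identifier == make_identifier(publisher_pubkey, content)

                                      pubkey = b"\x01" * 32                  # placeholder publisher key
                                      video = b"...video bytes..."           # placeholder content
                                      link = make_identifier(pubkey, video)
                                      print(link, verify(link, pubkey, video))
                                      ```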

                                      and practically impossible to offer acceptable levels of availability for the content behind that identifier, without centralization.

                                      There’s something I’ve been wanting to say for quite a while in this thread: the availability of a decentralised system is very different from that of a centralised one. Let’s keep the YouTube analogy:

                                      • A single video may not be available.
                                      • A single channel may not be available.
                                      • YouTube itself may not be available.

                                      As a whole, it is indeed unacceptable for YouTube to have less than 4 nines of availability. If it goes down for more than 0.01% of the time, something is very wrong. But lucky us, decentralised networks basically never go down that way: you’d have to convince every node to shut down, and if they’re diverse enough not even a botched update will do that.

                                      For a single channel though? 3 nines (going down 0.1% of the time) is more than enough most of the time, and for low-stakes stuff we could even go down to 2 nines. I could do it with self hosting at home. My grandma could ask a 3rd party hosting provider, similar to what I do for my website.

                                      And that’s without distributed backup. Sure, applying a distributed backup layer manually is likely to be a pain, or even impossible, but if it’s implemented as part of the hosting solution… I mean, if nodes dedicate a reasonable share of their storage (likely between 5% and 20%) to error correction codes for other people’s videos, 2 nines of availability for any single channel can easily turn into 3 or 4 nines, as if by magic.
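
                                      Rough numbers to show the magic isn’t that magical — the n, k and 0.99 figures below are assumptions of mine, purely for illustration:

                                      ```python
                                      # Back-of-the-envelope sketch: split a channel's data so that any k of n
                                      # fragments (held by independent home nodes) are enough to reconstruct it,
                                      # then ask how often fewer than k of them are reachable.
                                      from math import comb

                                      def availability(n: int, k: int, p: float) -> float:
                                          # Probability that at least k of n independent hosts are up.
                                          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

                                      p = 0.99                        # each home node up "2 nines" of the time
                                      print(availability(12, 10, p))  # 20% parity overhead -> ~0.9998, past 3 nines
                                      print(availability(10, 8, p))   # 25% overhead -> ~0.9999, about 4 nines
                                      ```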

                                      1. 1

                                        It seems to me YouTube itself fails to uphold the table stakes you speak of. It blocks videos, sometimes on a country-by-country basis. It arbitrarily unsubscribes people from channels without asking either the subscriber or the author. It has lots of rules for publication that stop authors from publishing many kinds of videos that would otherwise be perfectly legal in the US or the EU. So before we even think of referring to a video, we should think of a system that can reliably publish it.

                                        I’m not sure why you’d expect any system to operate without regard to relevant laws or regulations. If a video is removed from a hosting provider for e.g. copyright infringement or whatever, that’s (hopefully non-controversially) unrelated to availability in the sense that we’re talking about here.

                                        it is factually impossible to assert an authoritative and verifiable definition of a qualifying identifier X,

                                        Unless that identifier is a hash . . . [which] can include a certificate or the public key of the publisher. So as long as you get them from a trusted source (a geek friend who is recommending some author, or the bank teller handing you a card with a QR-code on it), what you get is even more authoritative and verifiable than a regular Web URL (which by the way is vulnerable to typo squatting).

                                        This notion of trust in the authenticity of the hash — some out-of-band or sidechannel mechanism by which information can be personally or individually verified — doesn’t scale beyond Dunbar’s number. Even if it were conceivable to suggest users would only interact with URLs sent to them from their cryptographically verifiable inner sanctum of close friends (it isn’t) you can’t build anything but trivial systems out of that foundation. Anything that serves meaningful numbers of human users needs to have mechanisms for delegation of trust.

                                        Assume for the sake of the argument that you can trust the hash (you can’t, but). And say that hash includes a signature over the data. For this to provide meaningful assurances, you need, at a minimum, a trusted source of truth for the authenticity of the signature, and reliable defenses against MITM attacks on both the hash and the content. It’s fractal trust assumptions, all the way down.

                                        And then! That hash still needs to be resolved to a physical address from which you can read the actual content. What system does that resolving? How does it work? Specifically, can that system provide anything resembling acceptable latency without trust delegation? (Spoiler: nope, you need caching, caching requires trust, etc. etc.)

                                        the availability of a decentralised system is very different from that of a centralised one . . . As a whole, it is indeed unacceptable for YouTube to have less than 4 nines of availability. If it goes down for more than 0.01% of the time, something is very wrong. But lucky us, decentralised networks basically never go down that way: you’d have to convince every node to shut down, and if they’re diverse enough not even a botched update will do that.

                                        Availability isn’t a theoretical measure of content delivered eventually, it’s a measure of actual user experience. If I open my DecentralWeb browser and paste in a trusted hash, and I don’t see the content I expect in something like 5 seconds at the most, then that content is not available. And if I experience that more than just sporadically, then the DecentralWeb network is down.

                                        It doesn’t matter if a single node in some remote EC2 region, or on some ADSL connection in Marseilles, has the content I’m looking for, if I can’t actually fetch it in a reasonable timeframe. And you just don’t get that sort of availability, at non-trivial scale, out of consumer network appliances attached to home internet connections joining [permissionless] decentralized networks.

                                        2 nines . . . I could do it with self hosting at home

                                        I think you’re making some simplifying assumptions. Sure, if you host a video that gets 1 view a day, nothing really matters. But availability is measured invariant to stuff like request load, internet weather, and so on. If you make some video that goes viral, and suddenly you’re seeing 10k RPS for the thing, your home internet connection will factually not get you any nines of reliability ;)

                                        1. 1

                                          I’m not sure why you’d expect any system to operate without regard to relevant laws or regulations.

                                          My point is, YouTube goes well above and beyond any laws and regulations. It arbitrarily unsubscribes random people from channels. There’s no law that forces it to. It arbitrarily de-lists some videos even though they’re legal. The appeal process for takedown notices doesn’t leave room for any kind of fair use. That kind of thing wouldn’t affect a decentralised system where each author has to be contacted directly about possibly unlawful content, and where each user would be subscribed to each author by an RSS/Atom feed or similar.

                                          This notion of trust in the authenticity of the hash — some out-of-band or sidechannel mechanism by which information can be personally or individually verified — doesn’t scale beyond Dunbar’s number.

                                          Correct… and incorrect.

                                          Users obviously cannot meaningfully trust more than 200 sources directly. In practice they can only track the most important websites they visit, their family and friends, and the physical places they go to that also have an online presence.

                                          On the other hand, it’s very possible to be directly trusted by millions of users. Take banks for instance. People usually go physically to the bank. When they do so, the clerk can give them a little card with a QR-code on it that contains all the information needed to contact & authenticate the bank. Some URL equivalent (except it doesn’t need to be readable), and the public key of the bank’s certificate authority. The source of trust here is “I went to the Bank and I trust that the clerk there gave me a genuine card”. That’s not perfect obviously, but it should be better than the web’s PKI already. More importantly though, people can use the physical interactions they are used to in order to establish trust — or distrust.
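
                                          A hypothetical sketch of what that card could carry and how the app would use it — the field names, the endpoint and the key bytes are all invented:

                                          ```python
                                          # The QR code carries an opaque endpoint plus the bank's key fingerprint;
                                          # the app pins the fingerprint on first scan at the branch.
                                          import json, hashlib

                                          def scan_card(qr_payload: str) -> dict:
                                              # Scanning the card at the bank is the trust anchor: store it as-is.
                                              return json.loads(qr_payload)

                                          def verify_server(pinned: dict, server_pubkey: bytes) -> bool:
                                              # Later connections must present a key matching the pinned fingerprint.
                                              return hashlib.sha256(server_pubkey).hexdigest() == pinned["pubkey_sha256"]

                                          bank_key = b"<bank signing key bytes>"        # placeholder
                                          card = json.dumps({
                                              "endpoint": "dweb://9f2c41d7.../mybank",  # unreadable but unambiguous
                                              "pubkey_sha256": hashlib.sha256(bank_key).hexdigest(),
                                          })
                                          pinned = scan_card(card)
                                          print(verify_server(pinned, bank_key))          # True, no web PKI involved
                                          print(verify_server(pinned, b"impostor key"))   # False, typo squatting gets you nowhere
                                          ```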

                                          Availability isn’t a theoretical measure of content delivered eventually, it’s a measure of actual user experience.

                                          Of course it is. What made you think I was speaking of anything other than the user experience?

                                          • YouTube goes down for 1% of the time? I get nothing 3 days a year or so.
                                          • YouTube channels go down 1% of the time? I get 99% of what I want to see all the time.

                                          Over-simplifying of course, but you can already see that these are two very different user experiences. Hence my conjecture that the temporary availability of any given channel matters much less than the availability of YouTube as a whole. Nobody in their right mind would say YouTube is down because 2 of their favourite channels are unavailable that day. The same reasoning applies to a distributed system.
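
                                          The arithmetic behind those two bullets, with made-up numbers (1% downtime, 50 subscriptions):

                                          ```python
                                          # Two very different ways of being "down 1% of the time".
                                          downtime = 0.01

                                          # Whole-site failure: 1% downtime is a total blackout of roughly...
                                          print(downtime * 365, "days per year with nothing at all")   # ~3.65

                                          # Independent channels: with 50 subscriptions each up 99% of the time,
                                          # you still see ~99% of what you want at any moment, and the odds that
                                          # everything is down at once are negligible.
                                          channels = 50
                                          print(1 - downtime, "expected fraction of channels reachable right now")
                                          print(downtime ** channels, "probability that all 50 are down together")
                                          ```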

                                          Sure, if you host a video that gets 1 view a day, nothing really matters. But availability is measured invariant to stuff like request load, internet weather, and so on. If you make some video that goes viral, and suddenly you’re seeing 10k RPS for the thing

                                          Doesn’t sound like such a big problem to be honest. Assuming a request and its response take up to a kilobyte each, we’re talking something like 80 megabits per second in each direction. It’s high for sure, but not out of reach of most fibre connections, I believe. And if people get the equivalent of a magnet link instead of querying my server directly, I won’t get anywhere close to 10k RPS.
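
                                          The back-of-the-envelope, in case anyone wants to check my one-kilobyte-each-way assumption:

                                          ```python
                                          # Sanity check: sustained 10k requests/s at ~1 kB per request and per response.
                                          rps = 10_000
                                          bytes_each_way = 1_000
                                          print(rps * bytes_each_way * 8 / 1e6, "Mbit/s in each direction")   # 80.0
                                          ```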

                                          Even if we can’t avoid some centralisation there, remember that the entirety of the Pirate Bay was never more than 4 server racks. The most popular torrent tracker in the world, with website, ads, search, torrent files & magnet links, all in 4 racks.

                                          1. 1

                                            YouTube . . . arbitrarily unsubscribes random people from channels.

                                            (YouTube.com has different availability vs. a given YouTube channel)

                                            I don’t believe either of these claims are true. They’re facially nonsensical at a technical level, and would be self-subversive to YouTube as an organization if they were enshrined as any kind of policy. What makes you think otherwise? Links or etc.?

                                            Doesn’t sound like such a big problem to be honest. Assuming a request and its response take up to a kilobyte each . . .

                                            Sustained 10k RPS from arbitrary clients over the internet was my random example, but in fact that’s (a) orders of magnitude less than the traffic levels that viral content triggers in this day and age; and even ignoring that, (b) it’d be bottlenecked not by available bandwidth but by the receiving node’s network stack. A reasonably optimized application (e.g. written in a compiled language by a senior engineer) running on server-class hardware (e.g. Xeon) in a well-connected datacenter (e.g. with redundant links, 1-2 hops from a backbone) can reasonably be expected to serve 10k RPS. A consumer device on a home connection satisfies none of these conditions.

                                            Also, self-hosting doesn’t mean hosting the hash or magnet link for a bit of content that’s served by other peers in a network, it means serving the content directly. But okay, let’s say that when grandma “self-hosts” a video, she’s actually copying the video to an arbitrary number of unknown peers. The immediate question then becomes: how does she delete a video she posted accidentally? This isn’t a nice-to-have, it’s table stakes.

                                            Some URL equivalent (except it doesn’t need to be readable), and the public key of the bank’s certificate authority. The source of trust here is “I went to the Bank and I trust that the clerk there gave me a genuine card”. That’s not perfect obviously, but it should be better than the web’s PKI already. More importantly though, people can use the physical interactions they are used to in order to establish trust — or distrust.

                                            How is the delegation of trust performed when one loads a URL provided by a bank teller via a piece of paper containing a QR code any different from the delegation of trust performed when one types www.bankofamerica.com into the URL bar of their web browser?

                                            1. 1

                                              YouTube . . . arbitrarily unsubscribes random people from channels.

                                              I don’t believe either of these claims are true

                                              I don’t have hard evidence, but I do have several accounts of various channels complaining about various things, including shadowbanning (popular content not shown in the recommendations), blatant biases in the recommendation system (not necessarily for nefarious reasons, but still) and yes, involuntary un-subscriptions. Here’s one example of something a content creator might complain about.

                                              But even if I’m wrong (I admit I’m not very confident in that particular claim), YouTube indisputably does strike or demonetise individual videos and ban whole channels based on things that (i) go beyond the law, and (ii) go beyond their own terms of service. Not necessarily on purpose, but at that scale they are bound to make lots of mistakes. And we’ve all heard stories about their not-so-fine support and appeal process.

                                              (YouTube.com has different availability vs. a given YouTube channel)

                                              I don’t believe either of these claims are true

                                              It wasn’t a statement of fact, it was an example. What if we had partial availability? Doesn’t make sense for YouTube of course, but it does make sense for any distributed system. I mean the Web is not down just because 5% of the websites decided to shut themselves down in protest, is it?

                                              Sustained 10k RPS from arbitrary clients over the internet was my arbitrary example, and in fact is […] bottlenecked not by available bandwidth but by the receiving node’s network stack.

                                              Yeah, I did gloss over this problem even though I was aware of it. There’s one thing though: we have some evidence suggesting that the average Linux network stack is slow. My guess here is that if we were serious about selling low-cost appliances tailored for network loads, we could handle those 10k RPS on much less powerful hardware than our current servers.

                                              It’s not easy to solve, mind you. This is a “company builds new hardware with new OS and gets successful” levels of hard. Technically feasible, good luck with the economics and the network effects. I don’t expect it in the foreseeable future.

                                              Also note that viral content is rare. It only looks frequent because, thanks to availability bias, we see viral content all the time. (Same thing with the lottery: we basically never win, but we often hear about the winners.)

                                              Also, self-hosting doesn’t mean hosting the hash or magnet link for a bit of content that’s served by other peers in a network, it means serving the content directly.

                                              It’s a UX thing. If we can not-really-host content while giving a sufficient illusion that we are hosting it, that’s enough for me. Though in this case I was thinking more along the lines of providing a peer in addition to the magnet link. You do host the file, but if it goes viral the load is still distributed among the peers that are currently downloading it.

                                              The immediate question then becomes: how does she delete a video she posted accidentally? This isn’t a nice-to-have, it’s table stakes.

                                              Those table stakes don’t exist even on YouTube. If someone has copied the video, it’s too late, the cat’s out of the bag. You might get lucky and delete the video before someone else sees the magnet link, same as YouTube, but beyond that you can only hope that people will lose interest or be decent enough to stop exchanging your video.

                                              Defaults matter a lot here. Watching a YouTube video doesn’t retain it by default. Downloading a video on BitTorrent does, making reuploads so much easier. But nothing stops things like PeerTube from retaining videos only long enough to make peering work. If retaining the videos requires users to click the “save” button, most won’t.

                                              1. 2

                                                We have now fully departed the material conversational plane and transcended to a higher dimension, speaking both directly at and entirely past each other, a simultaneous expression and destruction of the self. Glory and pity to us both, amen.

                                                1. 2

                                                  Well, we tried. This still got me thinking, so thank you for having stuck here.

    3. 5

      Reading part 2… How does SSB get an “A” for mutability when its (immutable) append-only log is its most glaring flaw for users? The post says “you’ll need to make users aware that their posts are immutable and cannot be changed”, and yet someone who only reads the first blog post (more likely than reading them all) will be led to believe data on SSB is truly mutable. I know these ratings are silly and for fun, but as a fan of SSB (if not of its quirks regarding the log), it irked me lol.