1. 6
    1. 15

      Don’t use URL shorteners!

      ArchiveTeam, an entity separate from but related to the Internet Archive, has a massive project trying to clean up after URL shorteners. Every time one goes bust, is abandoned, or breaks for any other reason, tens of thousands of links across the web break.

      This is cryptographically interesting, but please - don’t actually use URL shorteners!

    2. 3

      This is an awfully complex solution that, IMO, doesn’t create a more trustworthy environment.

      1. As @kline already mentioned: don’t use URL shorteners. They reduce transparency and web reliability.
      2. It’s not even clear what the use case is. People in low-trust environments who for some reason trust URL shorteners? See (1.)
      3. The whole point of URL shorteners (or if you insist, content linkers) is that they’re lossy. You’ll never know what content they contain without retrieving the content, which might be malicious.
      1. 4

        Okay… I’ll be honest: I was expecting better comments. I don’t mean that as a jab! I’m sincerely surprised by the kneejerk reaction here.

        I’ll answer point 2 first then points 1 and 3 together:

        Regarding 2: The paper points out that the use case is basically anything that takes a short identifier and turns it into a longer thing with a global integrity view. That’s very much not just URL shorteners. URL shorteners are a quick useful demo, but you can apply this to all kinds of things! Mission-critical files. Documents. Text. You name it. The service gives you one short identifier, and then commits a zero-knowledge proof to a smart contract such that any person using the short identifier to retrieve the full payload gets a global authenticity guarantee.

        Regarding 1 and 3: Your qualm with URL shorteners seems to be that they can redirect to malicious URLs. Again, DuckyZip isn’t just about URL shorteners, but if you want to focus on that demo use case, this is actually something that DuckyZip can help solve: before redirecting to any URL, you can obtain not only the full URL and vet it, but also a discrete zero-knowledge proof that it’s the right URL to begin with.
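
        To make that concrete, here’s a minimal sketch of the flow (loudly hypothetical: this is not DuckyZip’s actual construction - a plain SHA-256 commitment in an in-memory log stands in for the zero-knowledge proof and the smart contract, and every name here is made up):

        ```python
        # Hypothetical sketch, not DuckyZip's protocol: a SHA-256 commitment
        # in a globally readable log stands in for the zero-knowledge proof
        # committed to a smart contract.
        import hashlib
        import secrets

        LEDGER: dict[str, str] = {}   # stand-in for the on-chain commitment log
        STORE: dict[str, bytes] = {}  # the service's payload database

        def shorten(payload: bytes) -> str:
            """Store a payload, publish a commitment, return a short identifier."""
            short_id = secrets.token_urlsafe(6)
            STORE[short_id] = payload
            LEDGER[short_id] = hashlib.sha256(payload).hexdigest()
            return short_id

        def resolve(short_id: str) -> bytes:
            """Fetch the payload and check it against the published commitment
            before acting on it, e.g. before following a redirect."""
            payload = STORE[short_id]
            if hashlib.sha256(payload).hexdigest() != LEDGER[short_id]:
                raise ValueError("payload does not match the published commitment")
            return payload

        sid = shorten(b"https://example.org/some/very/long/path")
        print(sid, resolve(sid).decode())  # vet the full URL before redirecting
        ```

        The payload can be a URL, a document, or any mission-critical blob; the point is that everyone resolving the same short identifier checks it against the same global commitment.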

        If you don’t like URL shorteners then by all means, don’t use them – DuckyZip is a low-level protocol with much broader use cases. Less knee-jerking would be appreciated.

        1. 3

          Less knee-jerking would be appreciated.

          More useful examples would be appreciated.

        2. 3

          Your qualm with URL shorteners seems to be that they can redirect to malicious URLs

          The problem with URL shorteners is that they stop operating eventually, because there’s no reason to operate one. Organization-specific shorteners like https://dolp.in/ have much better longevity.

      2. 2

        The whole point of URL shorteners (or if you insist, content linkers) is that they’re lossy. You’ll never know what content they contain without retrieving the content, which might be malicious.

        And, in particular, they support updates. You can keep the stable short URL and redirect it to a new canonical URL when things move.

      3. 1

        Never knowing what content a URL points to without retrieving/parsing/executing it is an intrinsic part of how the web treats a URL as a link, regardless of any additional runtime translation layer, virtualisation, or indirection.

        You have no guarantee that you will retrieve the same exact contents on the next request, or from the same provider; if you share a ‘direct’ link with someone else, they will quite likely still get a different version. For this reason, my ‘link’ sharing among friends is now more often than not print-to-PDF first for anything that isn’t video.

        Even in a world where the URL carried all the state used as input by the content provider, you’d still fight low-level tricks like hosts mapped to round-robin DNS, as well as high-level ones from other tamper-happy intermediaries: how many pages that rely on CDNs actually use SRI [1]?
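
        For reference, SRI pins a digest of the expected resource in the page itself, so a tampering CDN or intermediary gets caught by the browser. A minimal sketch of computing the integrity value (the file and CDN names are illustrative):

        ```python
        # Compute a Subresource Integrity (SRI) value for an asset served
        # from a third party, e.g. a CDN.
        import base64
        import hashlib

        with open("library.min.js", "rb") as f:  # example file name
            digest = hashlib.sha384(f.read()).digest()

        print("sha384-" + base64.b64encode(digest).decode())
        # goes into the page as:
        # <script src="https://cdn.example/library.min.js"
        #         integrity="sha384-..." crossorigin="anonymous"></script>
        ```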

        As such, the shortener doesn’t fundamentally change anything - the weakest part of the link sets the bar. If anything, you could layer your own shortening service on top of this to provide further guarantees. Having one sanctioned by archive.org that >also< syncs with the Wayback Machine >and< provides a signed, Firefox Pocket-style, offline-friendly version would improve things, at the expense of yet another round of copyright and adtech cries - the sweetest-tasting of tears.

        [1] Christoph Kerschbaumer, “Enforcing Content Security by Default within Web Browsers”, 2016 IEEE Cybersecurity Development (SecDev), Boston, MA, USA, Nov. 2016, pp. 101–106. doi:10.1109/SecDev.2016.033

        1. 1

          I believe there to be a fundamental difference between domains that may redirect users anywhere and domains that one can inspect, recognize, and vet in advance. I also consider link transparency to be a fundamental building block of the web’s trust model.

    3. 3

      Cool idea, but in practice, wouldn’t content-addressed solutions (say, BitTorrent) be more straightforward? It’s standard lore that we don’t always get to have cute short URLs in a global namespace with billions of participants and quadrillions of interesting files.
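
      To spell out what content addressing buys you, a minimal sketch (names made up): the identifier is a digest of the content itself, so any copy fetched from any peer is self-verifying - it’s just not short.

      ```python
      # Content addressing, the idea behind BitTorrent-style systems: the
      # identifier is a hash of the content, so the identifier itself
      # verifies whatever copy you retrieve, from whichever peer.
      import hashlib

      def content_address(data: bytes) -> str:
          return hashlib.sha256(data).hexdigest()

      data = b"one of quadrillions of interesting files"
      addr = content_address(data)  # 64 hex chars: verifiable, but not cute or short

      retrieved = data  # imagine this copy came from an arbitrary, untrusted peer
      assert content_address(retrieved) == addr  # verified against the identifier itself
      ```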

    4. 1

      It requires checking by the recipient that they received the honest payload, yes?

      https://www.rfc-editor.org/rfc/rfc6920 seems like a more accessible, offline-capable option (it can run local-network-only, requires no contract system, and the data can be verified as soon as it’s received). It gives you something like nih:sha-256-32;53269057;b, and it’s inherently portable: if a server doesn’t have a file, it can proxy through to another server and duplicate it, etc. Clients can switch servers, since ni:/nih: is made to work without an authority.
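
      For the curious, a sketch of building those names (hedged: the check-digit routine is my reading of the ISO/IEC 7064 MOD 16-3 scheme that RFC 6920 references - it reproduces the trailing b of the example above, but verify against the RFC before relying on it):

      ```python
      # RFC 6920-style names: ni: carries the full base64url digest, while
      # nih: is a human-speakable form with a truncated hash and an
      # optional check digit.
      import base64
      import hashlib

      def ni_name(data: bytes) -> str:
          digest = hashlib.sha256(data).digest()
          b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
          return "ni:///sha-256;" + b64

      def nih_check_digit(hex_digits: str) -> str:
          # Assumed ISO/IEC 7064 MOD 16-3: weight each hex digit by powers
          # of 3 (mod 16); the check digit makes the weighted sum, check
          # digit included, divisible by 16.
          total = sum(int(d, 16) * pow(3, len(hex_digits) - i, 16)
                      for i, d in enumerate(hex_digits))
          return format(-total % 16, "x")

      def nih_name(data: bytes, bits: int = 32) -> str:
          hex_digits = hashlib.sha256(data).digest()[: bits // 8].hex()
          return f"nih:sha-256-{bits};{hex_digits};{nih_check_digit(hex_digits)}"

      assert nih_check_digit("53269057") == "b"  # matches the example above
      ```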

      My short form was uM73gznF; the capitals make it a bit hard to verbalize, which would be one major use case for shorter URLs. ’Course this is resolvable.