1.

    Wow, pretty crazy that 95+% of the traffic is garbage!

    1.

      I was really hoping to learn more about this.

    2.

      The paper mentions only the benefits of this approach and no drawbacks at all: are there any?

      If the root-level file is so small (MBs), and if this approach was even mentioned in the original RFC (see note 4), I wonder why this wasn’t proposed before.

      1.

        > Finally, as we discuss in §3, there is no requirement that all 22K records in the root zone file be stored in the recursive resolver’s cache. Rather, the records could be read as needed from the root zone file. In this case, the caching requirement of our approach is the same as when obtaining TLD records from the root nameservers. Further, as a simple test we wrote a Python script to extract all records related to a given TLD from the standard compressed root zone file. Over 1,000 trials the script takes an average of 37 msec to extract all records that pertain to a random TLD. This is similar to network round-trip times and so even a rudimentary scheme should not slow DNS lookups. Finally, there are clearly additional steps that would make the process faster—e.g., loading the root zone into a database or creating a single file for each TLD.

        Putting the root zone data into something like sqlite, and just caching on demand, does seem like it would be a great solution.
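
        A minimal sketch of that idea, assuming a local copy of the published root zone file (https://www.internic.net/domain/root.zone); the file names, table layout, and helper names below are illustrative assumptions, not anything from the paper:

        ```python
        import sqlite3

        # Index every record in the root zone by the TLD its owner name
        # falls under, so later lookups are point queries rather than
        # scans of the whole file.
        def build_index(zone_path="root.zone", db_path="rootzone.db"):
            db = sqlite3.connect(db_path)
            db.execute("CREATE TABLE IF NOT EXISTS records (tld TEXT, line TEXT)")
            db.execute("CREATE INDEX IF NOT EXISTS records_tld ON records (tld)")
            db.execute("DELETE FROM records")  # rebuild when the zone refreshes
            rows = []
            with open(zone_path) as f:
                for line in f:
                    fields = line.split()
                    if not fields or fields[0].startswith(";"):
                        continue  # skip blank and comment lines
                    # Rightmost label of the owner name: "com." -> "com",
                    # "a.gtld-servers.net." -> "net". (Records for the root
                    # itself, owner ".", end up keyed under "".)
                    tld = fields[0].rstrip(".").rsplit(".", 1)[-1]
                    rows.append((tld, line.rstrip("\n")))
            db.executemany("INSERT INTO records VALUES (?, ?)", rows)
            db.commit()
            return db

        # On-demand lookup: roughly what a resolver would run on a cache miss.
        def lookup(db, tld):
            return [row[0] for row in db.execute(
                "SELECT line FROM records WHERE tld = ?", (tld.rstrip("."),))]

        db = build_index()
        print("\n".join(lookup(db, "com")[:5]))  # e.g. NS and DS records for .com
        ```

        One wrinkle a real implementation would have to handle: the glue addresses for a TLD's nameservers often live under a different TLD (e.g. a.gtld-servers.net for .com), so after the first query the resolver would do a second lookup for the NS targets.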