  1. 12

    This is the first truly compelling post I’ve read about DoH. In particular, these parts:

    It does however make it possible to track an application from IP address to IP address because this TLS Resumption session ID is effectively a cookie that uniquely tracks users across network and IP address changes.

    Wow. Great catch.

    I’ve always felt the primary value of DoH was the authenticity and integrity, not the secrecy. As the author points out, there are plenty of other ways to snoop a hostname from traffic. And I’ve heard vague things about DNSSEC leaving much to be desired. To me, someone relatively uneducated about DNSSEC, DoH seemed like a step forward in that regard.

    They’re absolutely correct about TLS session resumption. Those tickets last for hours or days, and can absolutely be reused across different networks. Normally your machine resumes TLS sessions all the time, so you’re already exposed to identification through them; off the top of my head, email and messenger clients seem particularly susceptible. But DoH users would have a single, easily identifiable, frequently used, long-lived session. You’d easily be able to track them over time.

    Regular DNS can be correlated with individual connections: a DNS request for example.com followed quickly by a TLS connection to example.com is rather obvious. But as the author again points out, that doesn’t provide much value. Individual DNS queries cannot be correlated with each other through a NAT gateway; DoH queries can, via TLS session tickets. A sufficiently dedicated snoop could correlate packets on a DoH session with new TLS sessions, thereby correlating all the TLS sessions of a particular user. You lose the group anonymity you have with plain DNS.

    I’m not sure how viable such an attack would be in practice, nor how useful it would be compared to other attacks, but it’s an attack vector nevertheless.
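
    The mechanism itself is easy to demonstrate with openssl s_client, though; a rough sketch (the hostname and file path are only examples):

        # First connection: save the session ticket the server hands out.
        openssl s_client -connect cloudflare-dns.com:443 -sess_out /tmp/ticket.pem </dev/null

        # Later, possibly from a different network: present the saved ticket.
        # A "Reused" line in the output means the server linked the two
        # connections - exactly the cross-network identifier described above.
        openssl s_client -connect cloudflare-dns.com:443 -sess_in /tmp/ticket.pem </dev/null | grep -E '^(New|Reused)'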

    Now I’m left wondering why some people don’t like DNSSEC, and whether their complaints are worse than stateful DNS sessions.

    Similarly, using lists of known malware associated domain names, it is very possible to cheaply block devices from accessing botnet infrastructure.

    In principle, you could do the same with DoH by running your own DoH resolver. In practice, there are no good ways to advertise your DoH server. With DNS you can still block queries regardless of what resolvers clients use on your network. That power also facilitates censorship, but VPNs can circumvent censorship. DNS-based malware blocklists, though far from foolproof, still provide value to network administrators.
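
    For your own machines, at least, pointing Firefox at a resolver you run is just a couple of prefs; a user.js sketch (the endpoint is hypothetical):

        // user.js: send Firefox's DNS queries to your own DoH endpoint
        user_pref("network.trr.mode", 3);  // 3 = DoH only, no cleartext fallback
        user_pref("network.trr.uri", "https://doh.example.net/dns-query");

    The hard part is every other device, application, and guest on the network.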

    I don’t have the data or domain knowledge to weigh the value of DoH’s censorship-resistance against DNS’s utility in network security. I’d love to hear the perspective of a seasoned network admin.

    1. 3

      Now I’m left wondering why some people don’t like DNSSEC, and whether their complaints are worse than stateful DNS sessions.

      Another reason for organizations to dislike DNSSEC is that it allows enumeration attacks. The problem is that many organizations have a big copier/printer set up on the network, reachable through some DNS name. If you can simply enumerate that organization’s zone, you can easily find such a machine and use it as the next step in your attack.

      1. 1

        Interesting. How does DNSSEC allow enumeration attacks that aren’t possible on plain DNS?

        1. 1

          Interesting. How does DNSSEC allow enumeration attacks that aren’t possible on plain DNS?

          Older versions of DNSSEC use a system (NSEC records) in which each record carries a pointer to the next DNS record in the zone; among other reasons, this is done to prevent forged denials for fake subdomains. Following that pointer chain allows enumeration. Modern versions (NSEC3) shield this somewhat by pointing to a hash of the next record instead, but the problem still exists, since the hashes can be brute-forced offline.
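
          Roughly, the walk looks like this with dig (zone and names hypothetical); each denial-of-existence answer hands you the next real name:

              # Ask a signed zone for a name that doesn't exist.
              dig +dnssec nosuchname.example.com A

              # The AUTHORITY section of the reply contains an NSEC record like:
              #   alpha.example.com. 3600 IN NSEC beta.example.com. A RRSIG NSEC
              # i.e. "nothing exists between alpha and beta" - so query beta
              # next, and repeat until the whole zone has been walked.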

          You can read more on Wikipedia and follow the sources from there.

      2. 3

        Now I’m left wondering why some people don’t like DNSSEC, and whether their complaints are worse than stateful DNS sessions.

        Simple: DNSSEC requires extra maintenance. You’d have to set up some procedure to re-sign your DNS records every three months or so.

        There’s also the risk that if you fail to update those keys in time, or make a mistake in the signing procedure, your entire site becomes unreachable.
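
        For what it’s worth, newer name servers can automate the re-signing; a minimal BIND 9 sketch (zone name hypothetical):

            // named.conf: have BIND re-sign records as signatures near expiry,
            // instead of relying on a cron job around dnssec-signzone.
            zone "example.com" {
                type master;
                file "db.example.com";
                auto-dnssec maintain;  // keep RRSIGs fresh automatically
                inline-signing yes;    // sign a copy; the zone file stays unsigned
            };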

        You’d also have to upgrade all the DNS middleware that runs everywhere on the internet.

        That’s mainly why a lot of people and organizations, including Google, don’t like DNSSEC.

        In my opinion, however, DNSSEC is just as inevitable as IPv6. You can delay its rollout, but one day you simply cannot do without it.

        1. 1

          I see. So it’s not any particular property of the protocol, just the same inconveniences around support and adoption as IPv6?

          Is signature maintenance any more difficult than HTTPS certificate maintenance?

          1. 3

            I see. So it’s not any particular property of the protocol, just the same inconveniences around support and adoption as IPv6?

            I was oversimplifying here, but in an oversimplified world, yes: it means you have to replace just about every home-router-like endpoint in a network if you want DNSSEC to function properly, just like with IPv6.

            In reality, the deployment of DNSSEC is easier than the deployment of IPv6: the number of devices/servers in a network that function as DNS resolvers is vastly smaller than the number of devices running IP. Still, if DNSSEC is ever to function properly, every device with DNS-resolver capabilities needs to be replaced or upgraded, so you’d still need to replace all the home routers.

            Another option, if a provider has access to the configuration of the home routers, is to turn off the routers’ DNS capabilities and have them hand out the IP addresses of the provider’s DNS servers over DHCPv4/v6 to all other devices in the network.

            Is signature maintenance any more difficult than HTTPS certificate maintenance?

            In all honesty? No, it’s not.

            However, I’ve observed in practice that it’s very hard for many system administrators to get the certificate configuration of their servers right.

            If you mess up the configuration of your server, you’ve only messed up on one server, but if you mess up signing with DNSSEC, it’s not limited to just one server, it’s your entire domain that goes offline. There is a business risk here that you’ll need to consider as well.

            Ironically I don’t think this is the biggest obstacle to DNSSEC adoption. I think the biggest obstacle is the fact that many developers and sysadmins need to be trained and educated on the topic of DNS and DNSSEC. It generally costs a lot of money to hire someone who points them to the right textbooks.

            1. 2

              If you mess up the configuration of your server, you’ve only messed up on one server, but if you mess up signing with DNSSEC, it’s not limited to just one server, it’s your entire domain that goes offline. There is a business risk here that you’ll need to consider as well.

              Perhaps the most obvious issue with this (to me) is that you can’t use email at your domain (no MX records) to arrange for someone to fix it.

              1. 1

                Perhaps the most obvious issue with this (to me) is that you can’t use email at your domain (no MX records) to arrange for someone to fix it.

                That also depends on how the domain is set up.

                DNS has this thing called a TTL on all of its records. For most web servers the TTL is set to 3600 (one hour), while MX records usually have a much larger value (one week is not uncommon). This means the old MX records will probably still be sitting in some cache while the A records might not be.
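
                You can see those TTLs in the second column of dig output (values here are illustrative); caching resolvers count them down:

                    dig example.com A    # e.g.  example.com.  3600  IN  A  93.184.216.34
                    dig example.com MX   # MX TTLs are often much longer, e.g. 604800 (one week)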

                Besides: the person(s) in charge of handling your DNS records should know what they are doing, and at least have read a textbook on the matter, so that this doesn’t happen!

                1. 2

                  The person in charge of handling those records is currently me.

                  DNS records have a TTL, but DNSSEC signatures do not. The MX records will most likely expire at the exact same time as the webserver records - when the signature timestamp arrives.
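
                  You can see this with dig: an RRSIG carries its own expiration and inception timestamps (YYYYMMDDHHMMSS), independent of the TTL in the second column. Output shape only, values illustrative:

                      dig +dnssec example.com A | grep RRSIG
                      # example.com. 3600 IN RRSIG A 13 2 3600 20190915000000 20190825000000 12345 example.com. <signature>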

                  However, I might not be at my computer the second the records expire - someone has to contact me somehow. Hopefully they know, somehow, not to use email (which otherwise works fine) to do that - otherwise they could be waiting some time for a response (assuming I forgot to modify my monitoring/alerting when I enabled DNSSEC to also check signatures).

                  In order to publish new records, I might have to log in to a site like Cloudflare, which will - as part of a normal login from a new device - send you an email with a link to confirm you really own the account.

                  None of this is a deal-breaker, but it is going to require code changes (and culture changes) at every part of the stack.

                  1. 1

                    None of this is a deal-breaker, but it is going to require code changes (and culture changes) at every part of the stack.

                    Absolutely. Also, set different TTLs for different types of records. How long in advance do you know you are going to pull that mail server out? More than one week? If so, set the TTL to one week, because this saves you a lot of trouble down the line if something does fail.

                    But you might just want to use some other mail provider for e-mail traffic that is this critical to what you are doing.

                    1. 2

                      You keep talking about TTLs - I don’t see how they are relevant. After the date specified in your DNSKEY record, no DNS server can give a valid answer to any query (eg MX), regardless of TTLs. That’s a troublesome failure mode.

                      1. 1

                        After the date specified in your DNSKEY record, no DNS server can give a valid answer to any query (eg MX), regardless of TTLs. That’s a troublesome failure mode.

                        Actually, no. Define time A as DNSKEY_EXPIRY minus 10 seconds, with TTL = 3600. If my computer (or a caching resolver) requests a record at time A, I will have a validated and verified record on my computer for the next hour (or whatever the TTL is set to). This holds throughout the entire DNS system, which means that all systems that recently looked up the relevant records will continue to work until A+TTL.

                        So every system that has the records in cache will continue to work up to A+TTL; but it is also true that after DNSKEY_EXPIRY, systems that do not have the records in cache will reject all new responses from the DNSSEC-enabled servers or from caching resolvers.

                        1. 2

                          That assumes that downstream DNS resolvers follow the spec.

                          This… has not been my experience, particularly with MX records.

                          1. 1

                            I totally agree with you there.

                            I only wanted to point out that this can still give you a fairly large window (that is, if you set the TTL high enough) in which a large part of the infrastructure will still be working.

        2. 1

          From what I understand, a swarm proxy would be a good idea: it would make requests to the actual DoH server on behalf of a pool of clients, thus preventing the DoH server from tracking individual users.

          1. 2

            From what I understand, a swarm proxy would be a good idea: it would make requests to the actual DoH server on behalf of a pool of clients, thus preventing the DoH server from tracking individual users.

            But why would I trust a swarm proxy, which can just as well track my every move online, over my own router’s or ISP’s, or heck, even Google’s or OpenDNS’s caching resolvers - all of which have well-established and easily verifiable trust anchors?

            If you cannot trust your ISP, Google, or OpenDNS, you have a bigger problem that DoH cannot solve.

            What you are suggesting is exactly the “techbro” thing the author is warning against.

            1. 1

              This swarm proxy can be p2p (browser to browser), not necessarily hosted.

              1. 1

                So you want to decouple the lookups from DoH? If that’s the case, it makes sense.

                However: DNS is often used in the biggest DDoS-attacks. How are you going to make sure that your p2p swarm-proxy does not get involved with that kind of mischief?

                1. 1

                  How are you going to make sure that your p2p swarm-proxy does not get involved with that kind of mischief?

                  The simplest approach, I think, is to limit the number of requests from a client to, say, 100 per second.
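
                  Resolvers already ship knobs for exactly this; for example, Unbound’s per-client limit (value illustrative):

                      # unbound.conf: cap queries accepted per client IP address
                      server:
                          ip-ratelimit: 100        # max queries/second per source IP
                          ip-ratelimit-factor: 0   # 0 = drop everything over the limit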

        3. 9

          The arguments in this article mostly share the same premise: this isn’t a complete fix, and it adds extra centralisation. So let’s deal with the centralisation, because I feel this is the main issue with Mozilla’s implementation.

          As somebody who lives in the UK, it’s no secret that my ISP is already actively engaged in reading and modifying my plaintext DNS queries.

          In fact, Mozilla have agreed not to turn on DoH by default in the UK after the IWF publicly claimed that encrypting DNS would facilitate spreading child pornography. Think of the children!

          While I’d prefer not to send all this data to Cloudflare, the fact is that unless I run my own DNS server, ultimately I am always going to be sending my DNS requests to an untrusted third party.

          For me and other users in the UK, DNS-over-HTTPS is an improvement, even with the faults of this implementation.

          In the US, am I correct in thinking that it’s the other way around - there’s no (tin foil hat moment - no public) program forcing ISPs to filter or report DNS queries? Meaning that if you must trust someone, your ISP is as good a bet as Cloudflare or better? Genuine question - IIRC this was shot down along with SOPA, but I’m not 100% sure of the state of it.

          All of this by way of saying that I wish we could change the focus of this conversation. How do we keep the benefits of DNS-over-HTTPS and avoid unnecessary centralisation? What steps can Mozilla take, or can we take as private individuals, to fix this implementation?

          1. 3

            I believe that the main problem is that Mozilla/Firefox has a long history of not presenting users with necessary choices. It doesn’t ask you for a preferred search engine on profile creation. Neither does it ask you if everything you type in the address bar should be sent to that search engine (usually Google). And, quite obviously, it doesn’t ask you for a preferred DoH resolver.

            All of this means that the vast majority of people will feed data to these preselected companies, because Firefox intentionally avoids asking its users. You can of course change all of these things afterwards in the settings, but many people - quite rightfully, in my opinion - worry that most users will not touch those settings at all. What’s odd is that this discussion now arises as an issue around DoH, rather than as a general problem that should be addressed on its own and that extends well beyond Mozilla’s preferred DoH resolver.

            1. 2

              Excellent point. At least for search, Mozilla has a bad incentive, in that their primary revenue stream is Google paying for the privilege of being a default provider.

          2. 5

            This article hits many of my concerns, the biggest being that half-done jobs that become popular/used end up becoming permanent. Part of the push for DoH is “getting people to agree to encrypted DNS is hard!”, but that ignores the very real possibility that an incomplete/partial DoH implementation and strategy ends up the same way (more so because it eschews consensus in favor of trying to force it down our throats).

            I run my own DNS server on site for caching and local in-house dynamic DNS - if Mozilla proceeds with their experiment, I’ll probably figure it out only after wondering why resolution of external sites has become slower and internal names don’t resolve at all.

            1. 2

              Following the write-up from Cambridge, I’ve added a zone file, served only to internal clients, that points to 127.0.0.1 for A and ::1 for AAAA, which effectively blocks use-application-dns.net.

              If you use BIND, you can use this tutorial. Be mindful of the UTF-16 garbage that comes along when copy/pasting the type master; line.

              I’d like to support DNS over TLS and DNS over HTTPS using Cambridge’s doh101 server. But I don’t have the time atm and Firefox’s chicanery doesn’t help.

              1. 2

                Actually, according to the documentation[0], I don’t think routing use-application-dns.net to localhost will work as intended.

                The way I read it, you need to define use-application-dns.net but return NO A/AAAA records.

                0: https://support.mozilla.org/en-US/kb/canary-domain-use-application-dnsnet
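
                A BIND equivalent of “defined, but with no A/AAAA records” would be serving an empty zone, i.e. one containing only SOA and NS records (the db.empty file name follows Debian’s packaging convention; a sketch, not tested against Firefox):

                    // named.conf: answer the canary domain from an empty zone, so
                    // A/AAAA queries get NOERROR/NODATA instead of real records.
                    zone "use-application-dns.net" {
                        type master;
                        file "/etc/bind/db.empty";  // SOA + NS only
                    };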

                1. 3

                  For Unbound:

                  # disable DoH
                  # See: https://use-application-dns.net/
                  # See: https://support.mozilla.org/en-US/kb/configuring-networks-disable-dns-over-https
                  local-zone: use-application-dns.net always_nxdomain
                  
                  1. 2

                    The procedure I outlined above results in a SERVFAIL from both gateway and internal clients:

                    (cpython37) InvincibleReason:~$ nslookup use-application-dns.net
                    Server:		192.168.1.1
                    Address:	192.168.1.1#53
                    
                    ** server can't find use-application-dns.net: SERVFAIL
                    
                    (cpython37) InvincibleReason:~$
                    

                    Perhaps it’s only working by accident. I’m not going to stake my reputation on my knowledge of BIND, only on the effective result, which preserves my split-horizon DNS.

                    1. 2

                      YAY! glad it works!

              2. 3

                I was thinking of running a local proxy that first checks some local hosts lists (like one from EnergizedProtection) and, if the domain doesn’t match, forwards the query to one of a set of predefined DoH servers.
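
                Unbound can get close to that out of the box; a sketch (DoT rather than DoH, and the addresses and paths are only examples):

                    # unbound.conf: answer blocklisted names locally, forward
                    # everything else over TLS to an upstream resolver.
                    server:
                        tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt
                        include: /etc/unbound/blocklist.conf  # e.g. local-zone: "ads.example" always_nxdomain

                    forward-zone:
                        name: "."
                        forward-tls-upstream: yes
                        forward-addr: 1.1.1.1@853#cloudflare-dns.com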

                1. 2

                  We need a local (stub) resolver that does DNSSEC and DoT/DoH over Tor.

                  It is the only combination that provides confidentiality and integrity.

                  1. 1

                    No we don’t.

                    There is a staggering amount of traffic manipulation going on in the Tor network. It’s not a rare sight to see SSL encryption dropped on certain websites that would otherwise have SSL enabled, and it’s also not uncommon to see the contents of websites manipulated so that you fill out a few captchas for various nefarious purposes.

                    If you need integrity, you should stay as far away from Tor as possible.

                    1. 6

                      None of that matters with DNS over HTTPS, which will simply never run over port 80 no matter what the intermediate network does.

                      1. 0

                        It does matter, because attacks at Tor exit nodes that strip the SSL connection are a real thing.

                        Who knows what future attacks on SSL, DoH, or Firefox itself will be discovered? Why would you use something like this, which depends on just one technology, when the problems are more or less solved by a different and clearly superior technology in terms of scaling - one that, above all, has no single point of failure or compromise except for the domain you are connecting to?

                        1. 2

                          It’s possible that some Tor exit node will exploit a zero-day in TLS, though we’re basically talking about the next Heartbleed at that point.

                          But you need something a lot harder than the usual SSL strip, which works by intercepting port 80 connections and filtering out the HTTPS-upgrade redirect. The widespread traffic manipulation you’re talking about is mostly based on a trick that only works if the initial connection is over plain-text HTTP. DoH never is.

                      2. 3

                        There is a staggering amount of traffic manipulation going on in the Tor network

                        “staggering” is a bold claim to make, given that malicious exit nodes can be identified. Please provide evidence to back that claim.

                        Besides, both DNSSEC and DoTLS are designed specifically to prevent manipulation of the DNS traffic.

                        1. 1

                          I have no evidence, other than practical experience with the developer tools in Brave Browser while using it in Tor mode.

                          Furthermore, I am not going to do an elaborate study into this, but I’ll pose you a valid question: “Do you trust a network that is mainly used for things people would rather keep hidden with something as critical as DNS lookups?”

                        2. 2

                          There is a staggering amount of traffic manipulation going on in the Tor network.

                          (emphasis mine)

                          No there’s not. There’s a very disappointingly large amount of bad behavior on the part of servers receiving connections they can identify as coming from Tor (e.g. Cloudflare forcing captchas on connections from exit nodes), but if you’re aware of significant manipulation occurring within the network, you’ll have to cite it, as I haven’t heard of any, and most ways of manipulating circuits are easily detected.

                          1. 1

                            Not in the network per se, but the exit nodes are known to manipulate traffic from time to time. Security researchers have demonstrated time and again that it’s easy to mess with traffic coming out of exit nodes. I don’t see why DNS traffic would be exempt from this if it becomes one of the (inevitably) larger services on the Tor network.

                            1. 2

                              I like my TLS connections messed with. Tested security is good security :)

                      3. 2

                        Should be merged into this; it’s a copy of the PowerDNS post (see the end of the linked post):

                        https://lobste.rs/s/sno4wu/centralised_doh_is_bad_for_privacy_2019

                        1. 2

                          Thank you @skade. I’ve merged 7c6bc1 into sno4wu.

                        2. 2

                          Until recently everyone happily set their DNS to 8.8.8.8 and did not worry at all that all of it was broadcast all over the place. Now that it’s leaking less, it’s suddenly bad. Ugh.

                          1. 19

                            I never, ever, used Google DNS. So your assumption that this is suddenly a problem people care about is not correct since some of us have understood for a while how terrible an idea it is to consolidate all DNS queries from internet users to one for-profit company.

                            1. 1

                              We’ve had problems with 8.8.8.8 interfering with our CDN by being significantly further away than the local DNS servers.

                              If you have a bad provider playing tricks with your DNS, sure, one of these may be a lifesaver. If your provider doesn’t hijack the DNS, then they’re usually the best choice, given both the latency as well as CDN locality considerations.

                            2. 1

                              Hey, VPN providers? Just offer a DoH service. I’d be happy to pay for it in a competitive market. Really want to gain my trust? Get regular privacy audits from a trustworthy third party.

                              1. 1

                                This is a great article. TBH, none of these things come as a surprise to me; however, it’s amazing how many folks buy into these authoritarian technologies, introduced straight out of the corporate coffers with an obvious conflict of interest, and see them as an improvement when in fact they’re a huge downgrade for the whole industry.

                                The TLS resumption and tracking across IP addresses is a very good point, too. I’m glad that ad blocking by DNS is mentioned as well (it’s more reliable than browser-based blocking, probably much faster, and more cross-platform to boot).