It’s unfortunate the blog post doesn’t mention this, but if you’re already routing all your DNS queries through dnsmasq, you can mitigate the exploit until all your packages and such are updated by setting dnsmasq’s edns-packet-max to 1024.
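For reference, that’s a one-line change in dnsmasq.conf (1024 mirrors the value suggested above; pick whatever limit suits your setup):

```
# Cap the EDNS0 UDP packet size dnsmasq will advertise and accept.
edns-packet-max=1024
```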
Does that only apply to UDP packets? The man page seems to imply that’s the case, while the CVE says large TCP responses are also a problem.
I’m trying to wrap my head around how this can be exploited.
1. While browsing the internet, someone gets your browser to make a DNS lookup for a specially formatted domain. This special domain triggers the RCE in getaddrinfo() on your machine.
2. Someone gets your webserver to make a DNS request. If any machine in the DNS lookup chain is compromised, it can serve a special response and get an RCE. This is somewhat mitigated if you trust your ISP/DNS, right?
3. Someone gets your webserver to make a DNS request to a server that they control, or a website that they control… similar to the browser case.
Is this accurate? Any suggestions for mitigating the browser case for everyone at a large company, besides waiting for the Apple software update?
From what I could tell, the vulnerability is triggered if a DNS server sends two big UDP packets to you. So you can be protected by being behind a reasonable recursive DNS server that will truncate big responses - I couldn’t tell from the article if that’s a special option for a server, or if it’s something a normal “good” server will do.
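To make “truncate big responses” concrete, here’s a rough Python sketch of what a resolver does at the packet level when an answer exceeds its UDP limit (the function name and the 512-byte limit are illustrative, not from the article; per RFC 1035 the TC bit tells the client to retry over TCP):

```python
import struct

# The "truncated" (TC) bit in the DNS header flags field (RFC 1035, section 4.1.1).
DNS_TC_FLAG = 0x0200

def truncate_response(response: bytes, max_udp: int = 512) -> bytes:
    """Sketch: if a UDP answer exceeds max_udp, cut the packet down and
    set the TC bit so the client falls back to TCP for the full answer."""
    if len(response) <= max_udp:
        return response
    # Bytes 0-1 are the transaction ID, bytes 2-3 the flags.
    (flags,) = struct.unpack("!H", response[2:4])
    new_flags = struct.pack("!H", flags | DNS_TC_FLAG)
    return (response[:2] + new_flags + response[4:])[:max_udp]
```

A resolver behaving like this never hands its clients an oversized UDP payload, which is why sitting behind one helps here.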
So I don’t think it’s the first one there - the problem is not in a specially formatted domain name, it’s in the handling of the protocol.
Since it’s protocol-layer, does that mean that if you’re the evil operator of a coffee shop, you don’t even need to induce your customers to make any particular DNS request? You can just respond with evil packets to any query?
So setting my DNS to 8.8.8.8 doesn’t fix things, because any machine on the route from me to 8.8.8.8 can send back an evil response. Though it does help a little by not defaulting DNS lookups to “whatever the router wants to use”?
That agrees with my reading. There’s a suggestion buried in the middle that you can mitigate this with a firewall.
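For anyone chasing that firewall suggestion: the rule that was floating around drops oversized UDP DNS replies at the host. Sketch only — the cutoff of 541 is an assumption (512 bytes of DNS payload plus roughly 28 bytes of IP/UDP headers), and it will break legitimate large EDNS0 responses:

```
# Drop UDP replies from port 53 whose total length exceeds ~512 bytes of payload.
# (IP + UDP headers add ~28 bytes, hence the 541 lower bound.)
iptables -A INPUT -p udp --sport 53 -m length --length 541:65535 -j DROP
```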
That’s my understanding, yeah. But I’m neither a security nor a DNS expert.
I couldn’t tell from the article if that’s a special option for a server, or if it’s something a normal “good” server will do.
I think a few servers default to 4k these days (RFC5625 recommended apparently).
# this is the max UDP size unbound will use in responses to clients
max-udp-size: <number>
    Maximum UDP response size (not applied to TCP response). 65536
    disables the udp response size maximum, and uses the choice from
    the client, always. Suggested values are 512 to 4096. Default is
    4096.

# this is what unbound tells upstream authoritative servers is the max size,
# when asking for a result
edns-buffer-size: <number>
    Number of bytes size to advertise as the EDNS reassembly buffer
    size. This is the value put into datagrams over UDP towards
    peers. The actual buffer size is determined by msg-buffer-size
    (both for TCP and UDP). Do not set higher than that value.
    Default is 4096 which is RFC recommended. If you have
    fragmentation reassembly problems, usually seen as timeouts, then
    a value of 1480 can fix it. Setting to 512 bypasses even the most
    stringent path MTU problems, but is seen as extreme, since the
    amount of TCP fallback generated is excessive (probably also for
    this resolver, consider tuning the outgoing tcp number).
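Putting those two knobs together, a mitigation along the lines of the dnsmasq suggestion above might look like this in unbound.conf (1024 is an assumed value chosen to match the dnsmasq example, not something the man page recommends):

```
server:
    # Cap UDP responses sent to clients.
    max-udp-size: 1024
    # Advertise a small EDNS buffer to upstreams so they truncate early.
    edns-buffer-size: 1024
```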
It looks like there’s a possibility that it’s exploitable through a normally configured DNS cache server, which would be really bad - it expands the attack surface from “anyone on your network” to “anyone who can make you look up any domain name.” But people sound cagey, maybe it’s not for sure? Maybe details are not public yet? https://threatpost.com/magnitude-of-glibc-vulnerability-coming-to-light/116296/
According to this tweet https://twitter.com/NLnetLabs/status/700253478115999745, nsd4 and unbound are not affected. nsd4 cannot be used as a caching server, so you cannot hide an affected implementation behind it.
Seems like a thorough description. After reading this and Dan’s blog post, it seems like only a matter of time before we see a public PoC.
Maybe it’s time to move on to saner libc’s like musl.
Take a look at their code and compare it to glibc’s. The only reason glibc is even used today is that the code has been reviewed like crazy; even so, I’m sure it has a lot of critical bugs remaining.
To all the Rust fans: please don’t take glibc’s code as a reference for what C code looks like. I’m glad only FSF code and a few others’ looks like that.