There is a whole category of amplification attacks like this. DNS has been a good source because (especially with DNSSEC) the response can be a lot bigger than the request, so if you fake the request’s source address the DNS resolver will send back much larger packets.
There are two kinds of amplification:
a smurf attack amplifies the number of packets;
amplifiers such as chargen or DNS multiply the size of packets.
In both cases it’s easy to get amplification factors of 200 or more.
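For a rough sense of the size-amplification case, here’s a minimal sketch using the dnspython library that compares the wire size of a DNSSEC-enabled query with the wire size of the response. The resolver address and query name are just illustrative choices, not anything from the thread above.

```python
# Measure the size-amplification factor of a DNSSEC query against a
# resolver. Illustrative only: the resolver and name are placeholders,
# and the UDP response may be truncated by the advertised EDNS payload.
import dns.message
import dns.query

RESOLVER = "9.9.9.9"                 # any open resolver you may query
NAME, RDTYPE = "ietf.org", "DNSKEY"

query = dns.message.make_query(NAME, RDTYPE, want_dnssec=True)
response = dns.query.udp(query, RESOLVER, timeout=5)

qsize = len(query.to_wire())
rsize = len(response.to_wire())
print(f"query {qsize} bytes, response {rsize} bytes, "
      f"amplification {rsize / qsize:.1f}x")
```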
Packet count attacks are extra spicy because routers often have a lower packets-per-second capacity than their bits-per-second capacity. It’s even worse if the attack traffic targets a router’s slow path: routers often have feeble CPUs compared to their backplane performance, so they can be easily overwhelmed by the right kind of traffic – ICMP is a common weak point.
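Some back-of-the-envelope arithmetic shows why small packets are the problem: at minimum Ethernet frame size, a 10 Gbit/s link carries nearly 15 million packets per second. The figures below are standard Ethernet overheads, not anything from the comment above.

```python
# Packets per second at line rate with minimum-size Ethernet frames.
LINK_BPS = 10e9        # 10 Gbit/s link
FRAME = 64             # minimum Ethernet frame (bytes)
OVERHEAD = 20          # preamble + inter-frame gap (bytes)

pps = LINK_BPS / ((FRAME + OVERHEAD) * 8)
print(f"{pps / 1e6:.2f} Mpps at line rate with {FRAME}-byte frames")
```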
There are plenty of lower-factor amplifiers. You can craft a collection of NS records to turn DNS’s iterative resolution algorithm into an amplifier. TCP is an amplifier too, because servers will retransmit their SYN+ACK response several times if the client never replies.
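As a rough illustration of the TCP case, assuming a server that retransmits its unanswered SYN+ACK the Linux default number of times (net.ipv4.tcp_synack_retries = 5), one spoofed SYN turns into half a dozen packets at the victim:

```python
# Amplification from SYN+ACK retransmission, assuming Linux defaults
# and no TCP options (20-byte IPv4 header + 20-byte TCP header).
SYN_SIZE = 40
SYNACK_SIZE = 40
RETRIES = 5                        # net.ipv4.tcp_synack_retries default

packets_out = 1 + RETRIES          # initial SYN+ACK plus retransmissions
bytes_out = packets_out * SYNACK_SIZE
print(f"1 spoofed SYN -> {packets_out} SYN+ACKs, "
      f"{bytes_out / SYN_SIZE:.0f}x bytes reflected at the victim")
```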
Modern protocols (last 10 or 15 years) are designed to avoid amplification. QUIC is the best example: the client has to pad its initial packet to be larger than the server’s response, and the client is responsible for connection retries.
Rachel talks about source address spoofing as if it is a thing of the past, but sadly it is not. There is an RFC known as BCP38 that says networks should be configured to prevent it, but it’s basically ignored. It requires a lot of effort to set up correctly if there is even slightly complicated multihoming involved. This is a perennial topic of complaint in the ISP ops community.
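For what BCP38 asks of an edge network, here’s a toy model (nothing to do with any real router CLI): packets arriving on a customer-facing interface are dropped unless their source address falls within the prefixes assigned to that customer. The interface names and prefixes are invented for illustration.

```python
# Toy BCP38-style ingress filter: accept a packet only if its source
# address belongs to the prefixes expected on the arrival interface.
from ipaddress import ip_address, ip_network

EXPECTED = {
    "cust-eth0": [ip_network("192.0.2.0/24")],
    "cust-eth1": [ip_network("198.51.100.0/25")],
}

def accept(interface: str, source: str) -> bool:
    src = ip_address(source)
    return any(src in prefix for prefix in EXPECTED.get(interface, []))

print(accept("cust-eth0", "192.0.2.7"))    # True: within the customer's prefix
print(accept("cust-eth0", "203.0.113.9"))  # False: spoofed source, dropped
```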
More fundamentally, I think it’s a mistake to base a global network protocol on addresses. If the network were based on paths instead of addresses then, instead of trusting every client to be honest about source addresses, each packet’s return path would be constructed as the packet traverses the network and would necessarily be correct. Destination addresses require essentially every router to have a map of essentially the whole network, which gives the design a low scaling limit: if it were properly scalable then the personal area network of your phone and its tethered devices would connect to the network using the same protocols as an ISP.
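To make the path idea concrete, here’s a toy model of what I mean (purely illustrative, not any deployed protocol): each hop that forwards a packet records the port it arrived on, so the return path is built by the network rather than claimed by the sender.

```python
# Toy model of path-based return routing: hops append themselves as the
# packet travels, so the reply route cannot be spoofed by the sender.
from dataclasses import dataclass, field

@dataclass
class Packet:
    payload: str
    return_path: list[str] = field(default_factory=list)

def forward(packet: Packet, hop: str, inbound_port: str) -> Packet:
    # The router records where the packet actually came from, not
    # anything the sender claimed.
    packet.return_path.append(f"{hop}:{inbound_port}")
    return packet

p = Packet("hello")
for hop, port in [("home-router", "lan0"), ("isp-edge", "cust7"), ("core-1", "if3")]:
    p = forward(p, hop, port)
print(p.return_path)   # a reply retraces these hops in reverse
```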
Packet count attacks are extra spicy because routers often have a lower packets-per-second capacity than their bits-per-second capacity
Presumably this is even more true for things like the Smurf attack, where the origins of the reflected packets are all different and so limits on packets-per-host don’t apply.
Modern protocols (last 10 or 15 years) are designed to avoid amplification. QUIC is the best example: the client has to pad its initial packet to be larger than the server’s response, and the client is responsible for connection retries.
That’s nice. How variable is the response size? Can IoT devices try a small request and a larger one if needed, or do they need a large initial packet? I’m starting to look a bit (a very small bit so far) at QUIC for CHERIoT because it has a couple of potential memory reductions:
The TCP bit of the network stack is pretty big and includes a pile of buffering, whereas UDP can trivially do zero-copy and is smaller.
The TLS layer needs to buffer inbound data for, potentially, multiple packets because TLS over TCP doesn’t know about packets and the receiving stack can’t pass on the data until it’s got to a boundary where it has an HMAC to check. In contrast, QUIC is encrypted per-packet and so can have less buffering.
Needing larger transmit buffers may be less optimal.
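A rough sketch of the receive-buffering difference, with invented but typical sizes (16 KiB maximum TLS record, ~1460-byte TCP segments, ~1200-byte QUIC packets):

```python
# Worst-case bytes a receiver must hold before it can verify and
# release data: a whole TLS record over TCP, versus one QUIC packet.
import math

MSS = 1460                 # typical TCP segment payload (bytes)
TLS_RECORD = 16 * 1024     # maximum TLS record size (bytes)
QUIC_PACKET = 1200         # typical QUIC packet size (bytes)

tls_buffer = TLS_RECORD
segments_held = math.ceil(TLS_RECORD / MSS)
quic_buffer = QUIC_PACKET

print(f"TLS over TCP: up to {tls_buffer} bytes ({segments_held} segments) buffered")
print(f"QUIC:         about {quic_buffer} bytes buffered per packet")
```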
More fundamentally, I think it’s a mistake to base a global network protocol on addresses
As I recall, this was one of the big disagreements with the design of IPv6 (are addresses routes?), and the consensus ended up in the opposite direction, though I’m not sure I understood the rationale.
How variable is the response size? Can IoT devices try a small request and a larger one if needed, or do they need a large initial packet?
The client has to send 1200-byte packet(s) before the connection is established. The server must not send more than 3x the number of bytes back to the client before it has verified the client isn’t spoofed. The main difficulty is the size of the server’s certificate. There’s more about the complications in RFC 9000 section 8.1.
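Here’s a minimal sketch of the server-side accounting that RFC 9000 section 8.1 describes; it’s an illustrative model, not a real QUIC implementation.

```python
# Anti-amplification budget: before the client's address is validated,
# the server may send at most 3x the bytes it has received from it.
class AddressValidationBudget:
    AMPLIFICATION_LIMIT = 3

    def __init__(self) -> None:
        self.received = 0
        self.sent = 0
        self.validated = False

    def on_datagram_received(self, size: int) -> None:
        self.received += size

    def can_send(self, size: int) -> bool:
        if self.validated:
            return True
        return self.sent + size <= self.AMPLIFICATION_LIMIT * self.received

    def on_datagram_sent(self, size: int) -> None:
        self.sent += size

budget = AddressValidationBudget()
budget.on_datagram_received(1200)   # client Initial, padded to 1200 bytes
print(budget.can_send(3600))        # True: within 3x of bytes received
print(budget.can_send(3601))        # False: would exceed the limit
```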
the design of IPv6 (are addresses routes?)
As far as I know, there was a lot of discussion about how addresses are split into host and network parts, how network parts are allocated (hierarchically, geographically, …), and about decoupling endpoint identity from the route used to reach it. This eventually became LISP, the Locator/Identifier Separation Protocol (terrible name). But it is just addresses with more steps: the location is a globally routable network prefix which is still limited to a few million in total globally and still spoofable. And unlike 30 years ago, we know that endpoint identity must be cryptographic in nature and it must appear random to on-path snoops.
1997 .. “vintage”? I feel old.
Going by American car insurance classifications, it should be a “classic” network attack:
https://americancollectors.com/articles/vintage-vs-classic-vs-antique-cars/