I wasn’t aware iptables could be used to simulate packet loss like this. I would have turned to tc qdisc or whatever the command is.
iptables is only useful for very simplistic loss simulation.
The iptables statistic module supports random drops and dropping every nth packet, but real-world packet loss is usually correlated and bursty. tc shines when you need a loss model that more accurately matches what you’d see in the wild (and also when you need to simulate delays, duplicated packets, reordered packets, etc.).
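To make the contrast concrete, here is a rough sketch of the two approaches (the interface name eth0 and the percentages are just placeholders, not values from the article):

    # iptables: drop each incoming packet independently with 80% probability
    iptables -A INPUT -m statistic --mode random --probability 0.8 -j DROP

    # iptables: drop every 3rd incoming packet deterministically
    iptables -A INPUT -m statistic --mode nth --every 3 --packet 0 -j DROP

    # tc/netem: 30% loss where each drop is 75% correlated with the previous
    # one, which clusters the losses into bursts
    tc qdisc add dev eth0 root netem loss 30% 75%

    # netem can also layer on latency and jitter in the same rule
    tc qdisc change dev eth0 root netem delay 100ms 20ms loss 30% 75%

The iptables variants decide each packet's fate independently (or on a fixed cycle), while netem's correlation parameter is what gives you the clustered, bursty behaviour.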
For sure, it would be unusual to see 80% packet loss in the real world. More likely, it would flip-flop between 100% and 0%. I was just testing where the limit is. Bursty packet loss would be an interesting follow-up experiment.
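One crude way to approximate that flip-flop behaviour is to toggle a netem rule from a loop (a quick sketch; eth0 and the outage/recovery durations are arbitrary):

    # alternate between a total outage and a clean link
    tc qdisc add dev eth0 root netem loss 0%
    while true; do
        tc qdisc change dev eth0 root netem loss 100%   # outage
        sleep 5
        tc qdisc change dev eth0 root netem loss 0%     # recovery
        sleep 25
    done

If toggling by hand feels too crude, netem also ships a Gilbert-Elliott loss model (loss gemodel ...) that produces statistically bursty loss without the loop.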
Now thinking about the bursty scenario: leaving out --retry-delay and letting curl fall back to its exponential back-off might allow more requests to eventually succeed. But the use case in the blog article is heartbeat messages to a monitoring service. Each request tells the server "Checking in, I'm still alive!". For this use case, if the client is experiencing severe packet loss, maybe it shouldn't report itself as healthy. In other words, trying to get the request through at any cost isn't always the correct thing to do.
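For reference, the two variants would look roughly like this (the URL and the numbers are placeholders, not taken from the article; --retry, --retry-delay and --max-time are real curl options):

    # fixed delay between retries: gives up relatively quickly in a long outage
    curl --retry 5 --retry-delay 2 --max-time 30 https://monitoring.example.com/heartbeat

    # without --retry-delay, curl doubles the wait after each failed attempt
    # (1s, 2s, 4s, ...), so it keeps trying further into a bursty outage
    curl --retry 5 --max-time 30 https://monitoring.example.com/heartbeat

Which one is right depends on the point above: for a heartbeat, failing fast and letting the monitoring service notice the silence may be the more honest behaviour.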