It’s not clear why the graphs have an x-axis in seconds, since that doesn’t appear to be mentioned as part of the experiment. Also, why does it take up to 300 seconds to warm up to a steady state?
We should have made that clearer. In all rate graphs, the load gradually increases from 0 to 4M packets/s (2M when testing with a server) over a period of 10 minutes.
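For the curious, a minimal sketch of what such a linear ramp could look like in Python. This is an illustration, not the actual load generator used in the experiment; `send_packet` is a hypothetical stand-in for the real transmit call:

```python
import time

RAMP_SECONDS = 600       # 10-minute ramp, as described above
PEAK_PPS = 4_000_000     # 4M packets/s (2M when testing with a server)

def send_packet():
    """Hypothetical stand-in for the real transmit call."""
    pass

start = time.monotonic()
sent = 0
while (elapsed := time.monotonic() - start) < RAMP_SECONDS:
    # Instantaneous target rate grows linearly from 0 to PEAK_PPS.
    # Integrating the ramp gives the cumulative packet count that
    # should have been sent by time t: PEAK_PPS * t^2 / (2 * RAMP_SECONDS).
    target_total = PEAK_PPS * elapsed * elapsed / (2 * RAMP_SECONDS)
    while sent < target_total:
        send_packet()
        sent += 1
```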
I’d guess that’s how long an instance takes to launch, including successful cloud-init execution.
Shouldn’t it stay at zero and then shoot up? They’re all perfectly linear.
SYN, SYN/ACK, and ACK TCP segments will be 64 bytes on the wire, which is also the smallest Ethernet frame a driver will send (the nicer ones pad the payload with zeroes if a packet is shorter). This is a good lower bound to use for stress-testing, as the connections-per-second rate will be influenced by the system’s ability to process such short packets.
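To spell out the arithmetic behind the 64-byte figure (my own breakdown, which also reconciles it with the 54-byte packets mentioned later in this thread):

```python
# Header sizes for a TCP segment with no payload and no TCP options.
ETHERNET_HEADER = 14   # dst MAC + src MAC + EtherType
IPV4_HEADER = 20       # minimal IPv4 header
TCP_HEADER = 20        # minimal TCP header
FCS = 4                # frame check sequence, appended by the NIC

headers = ETHERNET_HEADER + IPV4_HEADER + TCP_HEADER   # 54 bytes
MIN_FRAME_NO_FCS = 60  # Ethernet minimum; shorter frames are zero-padded

frame = max(headers, MIN_FRAME_NO_FCS) + FCS
print(frame)  # 64 -- the smallest frame carrying a bare TCP segment
```

Note that real SYN segments usually carry TCP options (MSS, window scaling, and so on), so in practice they can be somewhat longer than this minimum.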
It is also good to know there is a PPS budget on EC2 instances. What I find curious is that kernel-bypass solutions are able to go way over this limit, and Amazon published an ENA driver for DPDK for higher loads. With the budget seen here, such a kernel-bypass solution seems useless.
I updated the post with results for 0-byte payloads (54-byte packets).
Yes, trying DPDK would be interesting. At the packet rates reported in our blog post, kernel packet handling seems to be perfectly sufficient.