Seems like an interesting project. I would probably reach for wget (it can read a file of URLs to fetch) or a combination of curl and xargs (or GNU parallel) before trying a bespoke tool like this, though. That said, the X-Cache-Status statistics are neat, if you need that.
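For the wget route, a sketch along these lines should do it, assuming the URLs sit one per line in a file called urls.txt (the file name is just an example):

    # fetch every URL listed in urls.txt and discard the response bodies
    wget -q -O /dev/null -i urls.txt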
That’s what I thought.
When looping through a file with a few hundred thousand entries with bash/curl, I had a throughput of ~16 requests/second, while cache_warmer was easily able to do >500 req/s.
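For context, I mean a loop of roughly this shape (urls.txt standing in for the real file), where every iteration spawns a fresh curl process and therefore a fresh connection:

    # one curl process and one new connection per URL -- slow
    while read -r url; do
        curl -s -o /dev/null "$url"
    done < urls.txt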
Thanks for the hint, I should probably add that to the post.
Indeed, looping via bash would be slow due to not reusing the connection. With a carefully crafted xargs you should be able to get multiple URLs onto the same command line (e.g. curl url1 url2 url3...), and curl /should/ then reuse the connection. If curl had a ‘read URLs from a file’ parameter it would be quite a bit easier to script, but alas it currently does not.
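Roughly this kind of invocation, as a sketch (the batch size, parallelism and urls.txt file name are arbitrary):

    # hand curl up to 500 URLs per invocation, 4 invocations in parallel;
    # within one invocation curl can keep connections alive between requests
    xargs -n 500 -P 4 curl -s < urls.txt > /dev/null

The -n batch size is a trade-off: larger batches mean more connection reuse per curl process, at the cost of spreading the work less evenly across the -P workers.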