At my employer we have an Elasticsearch cluster that we’re always trying to make perform faster. My gut feeling is that the biggest gains come from optimizing queries, then application settings, then maybe JVM settings, and that tuning Linux is at the bottom. But I’ve never actually tried to tune Linux.
What kind of difference can tuning Linux make for the performance of an application?
It depends on the application’s characteristics, but you can often get significant improvements by simple changes to the default networking and disk parameters (e.g., picking a more appropriate disk scheduler algorithm for the workload). It’s worth taking a look at for a high-intensity, specialized workload like a load balancer, database, or storage server.
Cool, I didn’t even know there were different disk schedulers to choose from. I’ll have to do a bit of research 😀
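In case it helps anyone looking into this: the active scheduler is exposed per block device under sysfs, so you can see what the kernel offers and switch between them at runtime. Here is a minimal Python sketch, assuming a device named sda (check lsblk for yours) and root privileges for the write:

    #!/usr/bin/env python3
    # Minimal sketch: show the I/O schedulers available for a block device
    # and which one is active. The device name "sda" is an assumption.
    from pathlib import Path

    device = "sda"
    sched = Path(f"/sys/block/{device}/queue/scheduler")

    # The file lists every scheduler the kernel offers for this device,
    # with the active one in square brackets, e.g. "[mq-deadline] kyber bfq none".
    print(sched.read_text().strip())

    # Writing a scheduler name (as root) switches to it for this device only.
    # Uncomment to try; whether it helps depends entirely on the workload:
    # sched.write_text("mq-deadline")

The change only applies to that device and doesn’t survive a reboot, so once you’ve settled on one that suits the workload you’d normally persist it with a udev rule or a kernel boot parameter.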
I really enjoyed this as a quick summary of in-the-trenches Linux performance monitoring. So often I run into a random misbehaving host and need to decide if it is hardware or software, and if it is exhaustion of a resource, which resource.
I come from the old days of netstat and top and need some of these new counter-based metrics for the newer kernels. I do wish that each of these were being ported to macOS and BSD, as I am becoming more and more unfamiliar with the command-line tools on that side of the fence.
Mac OS and Solaris use dtrace (not sure about the other BSDs) to gather those metrics. dtrace was also written by Brendan Gregg, the person giving this presentation. I’ve played a bit with dtrace (since we use Solaris at work) and I wish it was available under Linux because there’s so much you can do with it. Want to profile? dtrace can do it, without a special build. How long is read() taking? Which process do you care about? And what file descriptor?
Why did Apple prevent running dtrace without disabling SIP?
That I do not know, but I suspect it might have something to do with “security”.
He did not write dtrace, he wrote the book and the DTraceToolkit.
Ah, my mistake then.