It’s great that the author managed to improve things. But Rails seems more limited by Amdahl’s law than by per-request throughput.
Isn’t it accurate to say all computing is limited by Amdahl’s law? I’m not sure how that fits into a story about optimizing runtime performance, largely through reducing object allocations. If you do less work, Amdahl’s law doesn’t really apply, does it?
I’m not sure if I’m just “whoosh” missing your point, though. Can you explain?
My point is that the limitation of Rails is how few requests it can handle concurrently. Many people aren’t even running double digits of concurrent requests per machine. Compare that to something like Erlang, where you can handle hundreds concurrently. So a 10% improvement in latency buys you basically nothing against the actual bottleneck in Rails.
I had the opposite impression: that the bottleneck in a Rails app is quite likely to be in the Rails code and not the database, etc. Speeding up Rails by 11% seems likely to correlate directly with handling ~12% more clients.
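The 11% → ~12% figure follows from the fact that, for a fixed pool of serial workers, throughput is the reciprocal of latency. A back-of-envelope sketch (numbers are illustrative, not measured):

```ruby
# If each worker serves requests one at a time, requests/second per
# worker is 1 / latency, so an 11% latency cut raises throughput by
# 1 / 0.89 - 1, i.e. roughly 12%.
old_latency = 1.0
new_latency = old_latency * (1 - 0.11)   # 11% faster
gain = old_latency / new_latency - 1     # relative throughput increase
puts (gain * 100).round(1)               # => 12.4
```

This assumes the whole request is spent in Rails; if only part of it is, the gain shrinks accordingly, which is the crux of the disagreement below.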
I disagree with your impression. In my experience, the individual request latency of a Rails application is acceptable; it just takes so much hardware to scale because the concurrency model is so poor.
Hmm. If a request has a latency of say 100ms, where is that time spent?
I’m assuming most of that time is in Rails, so now the request only takes 88ms. Win.
But if Rails is only responsible for 10ms (now ~9), then our new request time is still ~99ms. Not a win.
Thoughts on what representative numbers might be?
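The scenario above is Amdahl’s law in miniature: the overall speedup is capped by the fraction of the request actually spent in the code you optimized. A small sketch with illustrative (not representative) numbers:

```ruby
# Total latency after speeding up only the Rails portion of a request.
# `speedup` is the fractional improvement to the Rails-side time
# (0.12 here, matching the ~12% figure discussed above).
def new_total(total_ms, rails_ms, speedup: 0.12)
  rails_ms * (1 - speedup) + (total_ms - rails_ms)
end

puts new_total(100, 90)   # mostly Rails: 100ms -> 89.2ms, a real win
puts new_total(100, 10)   # mostly I/O:   100ms -> 98.8ms, barely moves
```

So the question of “representative numbers” really is the whole argument: the value of the optimization depends almost entirely on what fraction of a typical request is Rails CPU time rather than database or network wait.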
I don’t disagree with you that Rails is slow. But in the experiences I’ve had with it, I would rather be able to handle 1000 concurrent connections with 100ms latency than 10 concurrent connections with 10ms latency on the same infrastructure.
I would say that Rails is intentionally not architected to solve the problem of concurrent requests. This article barely touches on concurrency, except to say that the memory savings may enable you to add a worker to your Heroku dyno with no penalty. I agree that the process-per-request model that most Rails apps are deployed under scales poorly, but in the context of a single request I think a 10% improvement in latency is significant for not having to change any of your application code. For some problems that 10% may mean little, but for others it’s a nice boost.
Ignoring speed limits, if a software update made your car 10% faster that doesn’t solve the problem of a driver and passenger having different destinations, but you’re still going to get both people to where they want to go faster than you did before. When there’s one person in the car, that’s going to make them pretty happy. It’s not necessarily on the table to convert the car into two motorcycles.
Really poor analogy (I sincerely hope somebody rips it to shreds), but converting people from Rails to Phoenix isn’t really a valid solution to schneems’ problem. I think you can get a lot from this article even if you ignore that we’re talking about Rails; I like reading about people’s tools and approaches to profiling applications/frameworks.
Haha, I think using “intentional” there is a bit generous. Rails’ architecture is due to technological limitations of Ruby, not careful thought.
I would argue, though, that a 10% reduction in latency is unlikely to be noticeable to users. Being able to run another request handler is great, but we’re still talking about going from a handful per machine to a handful + 1. It’s unlikely to save much in costs.
As I said in my first post, I think it’s great the author managed to get some performance improvements and I don’t mean to put it down, but a 10% reduction of poor is still poor.