Whatever PaaS offering you are using, this is a very common capacity-planning question: how many dynos/instances of my app do I really need in order to support x concurrent users at any given time? Auto-scaling and elastic load balancers are awesome, but you still need to know what you are up against. With the ruby gem from blitz.io, it's super easy to iterate and find out for yourself before you go live!
Deploying on Heroku
We used this really simple Sinatra app on Heroku: dyno-capacity.heroku.com, which waits 250ms before it returns the response. This simulates a database query or similar work to introduce artificial latency into each request. Given this delay, we expect to get the dreaded H11 – Backlog too deep or H12 – Request timeout errors from Heroku. These errors are indications that your app doesn't have enough capacity to handle that many users.
get '/' do
  # Default to a 250ms delay if no ?delay= parameter is given
  delay = (params[:delay] || 0.25).to_f
  sleep delay
  'blitz-me!'
end
When we now access this page, we can see the logs reflecting the time taken by the app to respond (line breaks added for clarity):
2011-05-25T05:11:14+00:00 heroku[router]: GET dyno-capacity.heroku.com/ dyno=web.1 queue=0 wait=0ms service=260ms bytes=181
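If you want to pull the service time out of router lines like this programmatically (say, to chart it across runs), a simple regex does the trick. This is just a sketch over the log line shown above:

```ruby
# Extract the service time (in ms) from a Heroku router log line.
line = '2011-05-25T05:11:14+00:00 heroku[router]: GET dyno-capacity.heroku.com/ ' \
       'dyno=web.1 queue=0 wait=0ms service=260ms bytes=181'

# Capture the digits between "service=" and "ms"
service_ms = line[/service=(\d+)ms/, 1].to_i
puts service_ms # => 260
```

Note the 260ms here is the 250ms artificial sleep plus a little framework overhead.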
The optional delay parameter lets us change the response time of the app, so we can see how the supported number of users changes when the app takes 100ms, 250ms, 500ms, etc. to respond. Getting this app up and running is simple with Heroku's git workflow:
git push heroku master
Measuring the number of dynos you need
Let’s say you want to support 1,000 concurrent users on your app with a 1% tolerance on the error rates. The overall loop looks like this:
[ 1, 2, 4, 8, 16, 32 ].each do |dynos|
  heroku.set_dynos 'my-app', dynos
  run-a-load-test
  break if percent-errors < 1
end
Turns out with the ruby gem from blitz.io, the real code is not that far from the pseudo-code above!
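To make the shape of that loop concrete, here is a runnable sketch. The `set_dynos` and `run_load_test` lambdas are stand-ins invented for illustration (the made-up error-rate formula simply halves errors each time dynos double); in the real tool they would call the Heroku API and kick off a blitz.io rush:

```ruby
# Stand-in for the Heroku API call that scales the app.
set_dynos = ->(app, dynos) { puts "scaling #{app} to #{dynos} dynos" }

# Stand-in for a blitz.io load test: returns a fake percent-error rate
# that improves as we add dynos (purely illustrative numbers).
run_load_test = ->(dynos) { 16.0 / dynos }

needed = nil
[1, 2, 4, 8, 16, 32].each do |dynos|
  set_dynos.call('my-app', dynos)
  percent_errors = run_load_test.call(dynos)
  if percent_errors < 1
    needed = dynos
    break
  end
end

puts "#{needed} dynos keep the error rate under 1%"
```

With the fake numbers above the loop settles on 32 dynos; against a real app the load test results decide where it stops.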
Check out dyno-blitzer hosted on GitHub.