Let’s Talk About Your Performance
Do you know how long your customers are going to wait for a page load?
By: TR Jordan
May 5, 2013 03:00 PM
Do you know how long your customers are going to wait for a page load? More importantly, how long are you making them wait right now?
Back in the misty eons of time, it was easy to measure the performance of your application. You'd grab a stopwatch, load up your web application, and see what happened. If it was slow, you'd look at the mess of PHP, HTML, and CSS you'd crammed into index.php and make sure you weren't using bubble sort anywhere. In these modern times, you typically take a few extra steps.
And of course you still want to look at the performance of the Ruby/Python/PHP code that you wrote, with some insight into how that performance relates to the web framework you’re currently using.
You need to know what's slow before you can make any informed decisions about which changes to make to your system or your code. You need to measure your web application's performance, preferably broken down by software layer and by machine, and available to you in real time. Once you have this data, you can start digging into it.
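Measuring by layer can be as simple as wrapping each layer's calls in a timer. Here is a minimal sketch in Python; the `timed` context manager and the layer names are hypothetical stand-ins, not any particular monitoring library's API:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Accumulate per-layer timings for the current request.
layer_times = defaultdict(float)

@contextmanager
def timed(layer):
    """Record wall-clock time spent in a named software layer."""
    start = time.perf_counter()
    try:
        yield
    finally:
        layer_times[layer] += time.perf_counter() - start

# Usage inside a request handler:
with timed("db"):
    time.sleep(0.01)   # stand-in for a database query
with timed("php"):
    time.sleep(0.02)   # stand-in for application code

print({layer: round(secs, 3) for layer, secs in layer_times.items()})
```

In a real deployment you would ship these per-layer numbers to a collector rather than printing them, but the shape of the data is the same: one duration per layer, per request.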
Then Look at Your Data
This plot is called a timeseries. There are a couple of things worth noting about this picture. First, it's not a straight line. Even if you have tons of data and everything is going swimmingly, you'll see a bit of jumpiness. Second, it's not a very rich visualization. You've taken everything you know about all your servers and all your code in the last day and compressed it down to 96 data points. You could compute the average word of Moby-Dick, too, but that would rather miss the point.
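Where does 96 come from? A day of traffic averaged into 15-minute buckets. A quick sketch of that compression, with made-up per-second latencies standing in for real request data:

```python
# Synthetic stand-in data: one (timestamp_seconds, latency_seconds) pair
# per second for a full day.
latencies = [(t, 2.0 + (t % 7) * 0.1) for t in range(86400)]

BUCKET = 15 * 60  # 15 minutes, in seconds

buckets = {}
for t, latency in latencies:
    buckets.setdefault(t // BUCKET, []).append(latency)

# One average per bucket: a full day collapses to 24 * 4 = 96 points.
points = [sum(v) / len(v) for _, v in sorted(buckets.items())]
print(len(points))  # 96
```

Every request that landed in a bucket contributes to exactly one number; everything else about its distribution is thrown away.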
Since we started from the same data, the top of the line is still our average request latency, but now we can see where each request is spending its time. In this graph, we can actually see that not only is most of the time spent in PHP, but PHP is also the most variable layer.
This cuts out a whole class of potential upgrades and changes. Upgrade the DB? Add an index to a table to speed up some queries? Add RAM to the memcache servers? None of that seems necessary; all the action here is in PHP. Even if we reduced the time spent in every other layer to zero, we would see only a 40% decrease in latency at best.
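The arithmetic behind that 40% ceiling is straightforward: if PHP holds roughly 60% of the total, zeroing out every other layer only saves the remaining 40%. A sketch with assumed numbers (the 2-second total and the 60% share are illustrative, read off a chart like the one above):

```python
# Assumed split for illustration: PHP takes ~60% of a 2s average request.
total_latency = 2.0    # seconds (assumption)
php_share = 0.6        # PHP's fraction of the total (assumption)

best_case = total_latency * php_share        # every other layer at zero
savings = 1 - best_case / total_latency
print(f"best-case latency cut: {savings:.0%}")  # best-case latency cut: 40%
```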
This chart breaks down how often we've served up requests at a given latency. If you compare it to the timeseries chart above, it looks fairly different. About 80% of these requests finish in under 2 seconds, even though the timeseries seems to hover right around 2 seconds. The culprit here is the long tail: a request that takes 10 or 15 seconds to finish pulls the average up far more than a request that finishes in 0.01 seconds pulls it down.
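You can see the pull of the long tail with a few lines of arithmetic. The numbers below are invented, but the shape matches the chart: most requests are fast, a few are very slow, and the mean lands nowhere near the typical request:

```python
# 80 fast requests and 20 slow ones -- an exaggerated long tail.
fast = [0.5] * 80    # seconds
slow = [10.0] * 20
latencies = fast + slow

mean = sum(latencies) / len(latencies)
latencies.sort()
p80 = latencies[int(0.8 * len(latencies)) - 1]   # 80th percentile

print(f"mean={mean:.2f}s, 80th percentile={p80:.2f}s")
# mean=2.40s, 80th percentile=0.50s
```

The mean sits at 2.4 seconds even though four out of five requests finished in half a second, which is exactly the gap between the timeseries and the histogram.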
Even more interesting, there are actually two different populations here: a tight cluster at 0.1s, then a larger, broader hump at 0.4s. This information was lost in the simple timeseries, and even a stacked area chart couldn't separate the two. On the other hand, a single histogram could never tell you whether you've improved. You'd have to compare two histograms, and even then, you'd have to squint pretty hard to figure out where the shape had changed.
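Bucketing makes the two populations jump out in a way no single average can. A toy histogram over made-up latencies, in milliseconds:

```python
from collections import Counter

# Made-up latencies: a tight cluster near 100ms and a broader hump
# near 400ms -- two populations that one average would blur together.
samples_ms = [90, 100, 110, 100, 350, 380, 400, 410, 420, 450]

# 100ms-wide buckets.
hist = Counter(ms // 100 * 100 for ms in samples_ms)

for bucket in sorted(hist):
    print(f"{bucket}-{bucket + 99}ms {'#' * hist[bucket]}")
```

The average of these samples falls between the two humps, in a latency range where almost no real request actually landed.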
One way to get around this problem is to plot a bunch of histograms over time. We’ll keep time on the X axis, left to right, but at each point, let’s plot a distribution. In this graph, we’ll put latency (the X axis of the histogram) on the Y axis and color each point in proportion with the amount of data we see there.
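Under the hood this is just a two-dimensional histogram: bucket by time on one axis and by latency on the other, then color each cell by its count. A sketch with synthetic data (the bucket sizes and the request stream are assumptions for illustration):

```python
from collections import Counter, defaultdict

# Synthetic (timestamp_seconds, latency_ms) pairs: one request every
# 5 seconds for an hour, with latencies spread between 100 and 399ms.
requests = [(t, 100 + (t * 37) % 300) for t in range(0, 3600, 5)]

TIME_BUCKET = 900   # 15-minute columns
LAT_BUCKET = 100    # 100ms rows

grid = defaultdict(Counter)
for t, lat in requests:
    grid[t // TIME_BUCKET][lat // LAT_BUCKET] += 1

# Each column is one histogram; laying the columns side by side over
# time is the heatmap, with counts mapped to color intensity.
for col in sorted(grid):
    print(col, dict(sorted(grid[col].items())))
```

Each column preserves the full distribution at that moment, so both the double hump and a transient spike survive the compression that killed them in the plain timeseries.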
A picture is worth 1000 words:
This is just the data from memcache, which typically has very short calls. Just like the above histogram, we can see a double-humped distribution at 0.6ms and 1.3ms. On the other hand, we can also see abnormalities such as the traffic spike just after 6:15PM that caused the latency to temporarily increase.