AJAX Performance Part 1: I Love Your Website, but it’s so Slow!

One of the main goals of our latest release was to improve the overall performance of the user interface, so we’re starting an intermittent series today on the dev blog talking about what we learned along the way.

First, we had to recognize (channeling Steve Souders of YSlow and High Performance Web Sites fame) that the largest part of our performance problems was on the client side, and that the problems were most severe in IE6.

We set about trying to optimize client performance independent of network latencies and server query times. For the most part, this meant reducing the time the browser spent running our JavaScript. One reason this is particularly important is that the browser does not do anything else while JavaScript is running; the UI is completely locked up. No events, no back button, no browser menus.
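
To see just how thoroughly the browser locks up, consider a contrived busy loop (our own illustration, not code from our site); while it runs, clicks, scrolling, and repaints all queue up and the page appears frozen:

	// Busy-wait for five seconds. Because JavaScript runs on the
	// browser's single UI thread, nothing else -- no clicks, no
	// repaints, no back button -- happens until the loop finishes.
	var end = new Date().getTime() + 5000;
	while (new Date().getTime() < end) {
		// spin
	}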

If you do a lot of heavy processing on the client side in JavaScript, this can be a real problem since it causes visible delays and makes a web application generally clunky and unresponsive. Accepting that there were times when our website was sluggish, we set out to improve the overall performance of our user interface. More about how we figured out where to start after the jump.

When thinking about performance, it’s important to avoid the trap of “premature optimization”; that is, you don’t want to start optimizing something before you know whether it actually contributes a significant portion of the running time. That’s not to say you shouldn’t be thinking about performance from the very start; you should! Just don’t try to squeeze out the very last bit before you know whether it matters. This notion is justified and formalized by Amdahl’s Law, which relates the fraction of the overall running time a given segment of code contributes to the maximum possible speedup to be had by optimizing that segment. Therefore, the first thing to do is take some measurements and figure out what the major sources of poor performance actually are.
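
To make that concrete, here is a quick sketch (our own illustration; the numbers are made up): if a segment accounts for a fraction p of the total running time and you speed it up by a factor of s, Amdahl’s Law says the best overall speedup you can achieve is 1 / ((1 - p) + p / s).

	// Maximum overall speedup from optimizing a segment that accounts
	// for fraction p of the total running time by a factor of s.
	function amdahlSpeedup(p, s) {
		return 1 / ((1 - p) + p / s);
	}

	// A segment that is only 10% of the running time can never buy you
	// more than ~1.11x overall, no matter how fast you make it:
	amdahlSpeedup(0.10, 1000000); // ~1.11
	// A modest 10x win on a segment that is 90% of the running time
	// yields ~5.26x overall -- measure first, then optimize the big piece:
	amdahlSpeedup(0.90, 10);      // ~5.26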

Since JavaScript is single-threaded, it is (theoretically) easy to measure the runtime of various methods (more on the gotchas in a future post). We began with a fairly obvious manual, brute-force binary search through all of our JavaScript to find out which portions were causing the greatest slowdown. This is as simple as doing some Date arithmetic and continuing to narrow your search by investigating the portions of code that take the most time. The following example demonstrates how this might work:

We create an empty div my_debug_div somewhere in our page to contain our timing output. In our JavaScript code we take a few timing measurements to get an overall look at what might be causing slowdowns:

	var myTimer1 = new Date();
	doStuff();
	doMoreStuff();
	var myTimer2 = new Date();
	doEvenMoreStuff();
	andStillMoreStuff();
	var myTimer3 = new Date();
	document.getElementById("my_debug_div").innerHTML =
		"time1: " + (myTimer2 - myTimer1) + "ms";
	document.getElementById("my_debug_div").innerHTML +=
		"<br>time2: " + (myTimer3 - myTimer2) + "ms";

Let’s say my_debug_div contains the following output after our JavaScript runs:

	time1: 10000ms
	time2: 15ms

Well, it looks like the culprit is either doStuff() or doMoreStuff(), since those are the two things that happen between myTimer1 and myTimer2. Now let’s dive deeper:

	var myTimer1 = new Date();
	doStuff();
	var myTimer1a = new Date();
	doMoreStuff();
	var myTimer2 = new Date();
	// ...
	document.getElementById("my_debug_div").innerHTML =
		"time1: " + (myTimer1a - myTimer1) + "ms";
	document.getElementById("my_debug_div").innerHTML +=
		"<br>time2: " + (myTimer2 - myTimer1a) + "ms";

Now let’s say the output is:

	time1: 9970ms
	time2: 30ms

Well, it looks like doStuff() is the problem. We can now continue this methodology inside the function doStuff() to see if there is anything we can do about the slowdown. Admittedly, there are fancier ways of profiling code (dojo.profile, JSLex), but when trying to figure out where on earth the time is going, we found the easiest way to stay sane was to add as little complexity as possible; a tiny helper like the one sketched below can cut down on the boilerplate.

Through this timing methodology, one of the offenders we found on the Redfin website was the scrollable list below the map. For this release we switched to TurboGrid, which gave us some significant performance gains; you can read more about this in Sasha’s earlier post. There were many other offenders, though, which we will get to in later installments.
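
If you find yourself sprinkling these measurements everywhere, it helps to wrap the boilerplate in a small helper. Here is a minimal sketch (timeIt is our own hypothetical name, not something from the examples above):

	// Time a single function call and append the result to the debug div.
	function timeIt(label, fn) {
		var start = new Date();
		fn();
		var elapsed = new Date() - start; // elapsed milliseconds
		document.getElementById("my_debug_div").innerHTML +=
			label + ": " + elapsed + "ms<br>";
	}

	timeIt("doStuff", doStuff);
	timeIt("doMoreStuff", doMoreStuff);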

In the next part of our performance series, we’ll see that sometimes it’s okay to be lazy.

Discussion

  • Eric

    When are you going to have a site that is iPhone compatible? Dumb it down; just prices and descriptions would be enough for me. Keep up the great site!

  • Mike Young

    Hi Eric,

    We hear ya! We’ve been waiting for one particular vendor/partner to support Safari, and we’re confident that it’ll be in our next major site update (think: mid-December or early January). Supporting Safari (Mac or iPhone) is a P1 feature for us.

    And thanks for the kind words.

    Cheers,
    -mike
    CTO, Redfin