Prefetching Web Content: Trials and Tribulations

Stoyan is totally right and I’m totally wrong (see his comment below, which reads “The thing about google maps you load is that it’s an html page. When you load html page in object tag it’s as if you put it in an iframe. It includes all markup and extra css/js/img resources.”) My test was incorrect. I was testing with a Google Maps URL, but I should have been testing with a Google Maps API URL. I can’t explain how I used the wrong URL; I THOUGHT I copied that URL directly from our web site, but apparently not.

I’m sorry for the mistake and any confusion it may have created.

[The full content of the original post follows below.]

There has been a lot of discussion of resource prefetching for HTTP, but unfortunately all of the alternatives I’ve seen have problems.

What Is It?

Prefetching resources can greatly enhance the perceived performance of your web apps. A common use of prefetching is to download images that will be used later. For instance, the Aardvark page might link to the Bonobo page, and the Bonobo page might display a large image of Jerry the Bonobo. If a user were reading the Aardvark page, it might be good for the browser to download Jerry’s image in preparation for when the user clicks on the link to the Bonobo page. When the user clicks through to the Bonobo page, the browser can render the image without having to download it, so Jerry’s photo appears very quickly.

Prefetching resources can also HURT performance when done wrong. In the worst case, the browser might download Jerry’s photo before downloading content for the Aardvark page, so a user who wants to see aardvarks is delayed. And if the photo isn’t properly cached, the browser might have to go get Jerry’s photo AGAIN when the user goes to the Bonobo page.


Firefox Native Support

Firefox has a special feature just for this: including <link rel='prefetch' href=''> in your HTML tells Firefox to download the photo of Jerry when it gets some free time. Firefox will NOT make the user wait for any content on the CURRENT page. Pretty nifty.
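Concretely, the markup on the Aardvark page might look something like this (the image URL is hypothetical):

```
<!-- Hint to Firefox: fetch Jerry's photo when the browser is idle -->
<link rel='prefetch' href='/images/jerry-the-bonobo.jpg'>
```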

Unfortunately, it doesn’t work in other browsers.

Worse, it triggers a BAD bug in Chrome 9, which will prefetch the resource but then refuse to use it on the next page AND NOT TRY TO GET A NEW COPY. The second page will just barf. At least that’s what happens when the resource is Javascript. NOTE: This seems to be fixed in Chrome 10 (and users are auto-upgraded), but wow, what a nasty bug.

One workaround would be to detect the User Agent on the server and return browser-specific HTML. For Firefox, you could use link prefetching, and for other browsers you could skip it. This is bad for a couple of reasons. First and most obviously, you don’t get the benefits of resource prefetching in other browsers. Second, if the content varies by browser, it’s hard to cache the page well (e.g. in a CDN).
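As a sketch of that workaround (the function name and the naive User-Agent check are my own illustration, not anyone’s production code), the server could emit the prefetch tag only for Firefox:

```javascript
// Illustrative server-side UA sniffing: return browser-specific prefetch
// markup for a resource URL. The Firefox regexp here is deliberately crude.
function prefetchMarkup(userAgent, url) {
    // Only Firefox honors <link rel='prefetch'>, so only Firefox gets the tag.
    if (/Firefox/.test(userAgent)) {
        return "<link rel='prefetch' href='" + url + "'>";
    }
    // Everyone else gets nothing, so they lose the prefetch benefit entirely.
    return "";
}
```

Note that because the output now depends on the User-Agent header, a CDN would have to vary its cache on that header, which fragments the cache badly.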

Custom Javascript

Stoyan Stefanov describes a better workaround on his blog. He advocates including some client-side Javascript that’ll include the resources in a browser-specific manner: via dynamic Images in IE, and via dynamic Objects in other browsers.
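A sketch of the technique Stoyan describes (the function name is mine; see his blog for the real code) looks roughly like this:

```javascript
// Download each URL without executing it: dynamic Images in IE,
// dynamic zero-size <object> elements in other browsers.
function prefetchResources(URLs, isIE) {
    for (var i = 0; i < URLs.length; i++) {
        if (isIE) {
            // IE: an Image fetches the resource but never runs it
            new Image().src = URLs[i];
        } else {
            // Other browsers: an <object> fetches the resource
            var obj = document.createElement('object');
            obj.data = URLs[i];
            obj.width = 0;
            obj.height = 0;
            document.body.appendChild(obj);
        }
    }
}
```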

Others have jumped on this bandwagon. The estimable Steve Souders uses this approach in his ControlJS library. Further, it’s been adopted by YUI.

Unfortunately, this approach is (slightly) broken.

Stoyan wrote a great test case, and his code works perfectly for it. In particular, his “page 1” will download BUT NOT EXECUTE a Javascript file (1.sleep.expires.js).

I extended his test case slightly by adding an additional resource: a link to Google Maps. Unfortunately, including Google Maps breaks this code. The code from Google Maps is executed when prefetched (in Firefox and Chrome). This is worse than it might seem. First, Google Maps hasn’t been properly initialized, so it throws errors (and has other bizarre effects; in some cases it creates a hidden iframe and sets focus to that frame!). Second, the parsing and execution of the Google Maps code takes a while, and the browser is frozen during that time. It can be hard to tell that the code is executed, but you can verify it with a debugging proxy like Charles. If you hit the page, you’ll see that your browser downloads a LOT of content from Google, a lot more than we intended to prefetch.

Our Approach

This is tricky: how do you get prefetching in multiple browsers without execution and without server-side logic? We ended up using Stoyan’s approach for IE, using the ‘native’ approach in Firefox (via a dynamic iframe), and giving up on other browsers.

The Javascript looks like this (using the Dojo library):

var prefetchURLs = [ /* URLs of the map resources to prefetch */ ];

dojo.addOnLoad(function() {
    // Wait a bit after onload so prefetching doesn't compete with
    // the current page (the delay value here is illustrative)
    setTimeout(function() {
        redfin.util.prefetchMapURLs(prefetchURLs);
    }, 1000);
});

redfin.util.prefetchMapURLs = function(URLs) {
    if (dojo.isIE) {
        // IE: download (but don't execute) via dynamic Images
        for (var i = 0; i < URLs.length; ++i) {
            new Image().src = URLs[i];
        }
    } else if (dojo.isFF) {
        // Firefox: load a hidden iframe whose HTML uses <link rel='prefetch'>
        var iframe = document.createElement('iframe');
        iframe.name = 'ifrm_prefetch';
        iframe.style.display = 'none';
        iframe.src = '/prefetch-urls';
        document.body.appendChild(iframe);
    }
    // Other browsers: no prefetching
};

For Firefox, the /prefetch-urls link returns HTML that uses the standard Firefox technique (<link rel='prefetch'>, as discussed above). Firefox properly defers the download until it is not busy, it doesn’t run the Javascript, and so on. For Internet Explorer, we wait a bit after onload and then download the content using Stoyan’s approach. For other browsers, we don’t prefetch. As other browsers start to support <link rel='prefetch'>, it will be easy to add support for them.
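For example, the HTML served at /prefetch-urls might look something like this (the file names are hypothetical):

```
<!-- Loaded into the hidden iframe on Firefox; nothing here ever executes -->
<html>
  <head>
    <link rel='prefetch' href='/js/maps-bundle.js'>
    <link rel='prefetch' href='/images/jerry-the-bonobo.jpg'>
  </head>
  <body></body>
</html>
```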