Whenever I out myself as a member of the Redfin search team to someone who has used Redfin, one of the first questions I get is, “so why do you guys use the Microsoft Map? Why didn’t you choose Google?”. The full answer is a bit long, but the short answer is easy: speed.
Every few months, we tested Google Maps against Virtual Earth, and the result was always the same: Google’s script and tiles loaded considerably faster, but Virtual Earth was as much as four times faster at adding a ton of items to the map. Since our user interface can add up to 500 houses to a map at a time, we just felt like Google couldn’t give us the performance we needed. To be fair, part of VE’s speed was due to a bulk-add feature that we had lobbied them to add, but it worked well, and so we put Google aside, wistfully eyeing those speedy script and tile load times.
A few months ago, though, we started contract renegotiations with Microsoft, and we decided to give Google Maps a closer look. One of my colleagues, the brilliant Dan Fabulich of Selenium fame, figured out that we could code our own custom GOverlay to make Google Maps display items much faster than it had previously. The question then became: how hard would it be to port our site from one platform to the other? And would it be worth it to do so?
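The speed win from a custom overlay comes from batching DOM work: instead of creating one marker object per house, the overlay can build a single HTML string for all the pins and inject it in one shot. Here's a minimal sketch of that batching core; the names `buildPinsHtml` and `project` are illustrative, not our actual code, and a real GOverlay subclass would wrap this inside its `redraw` method.

```javascript
// Sketch of the batching idea behind a custom GOverlay (Maps API v2).
// A real overlay subclasses GOverlay and implements initialize/redraw/
// remove; this shows only the pure-logic core: turning N houses into one
// HTML string, so the browser does one innerHTML assignment and one
// reflow instead of N marker insertions.
// `project` maps a house's lat/lng to pixel coordinates, as the map's
// projection would inside redraw(). All names here are hypothetical.
function buildPinsHtml(houses, project) {
  var parts = [];
  for (var i = 0; i < houses.length; i++) {
    var px = project(houses[i]);
    parts.push(
      '<div class="pin" style="position:absolute;left:' + px.x +
      'px;top:' + px.y + 'px;">$' + houses[i].price + '</div>');
  }
  // One joined string for the whole batch -- the win over adding
  // hundreds of individual marker objects one at a time.
  return parts.join('');
}
```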
To answer those questions, I took off from work one Friday, determined to see what a meth-fueled* weekend of coding could produce. I started demolition of the codebase Friday night and worked the rest of the weekend, from wake until sleep, to get something up and running. By the end of the weekend, I had a working prototype with most of the major features of our site.
The most shocking thing to me was how relatively easy the transition was, but that turned out to be mostly a testament to how little of the map API we actually use. There are a lot of features of Virtual Earth that we’ve never touched: traffic, 3D, directions, geocoding, and the native zoom and pan controls, just to name a few. All we do is tell the map to go somewhere, tell it to draw some houses, and once in a while tell it to draw a polygon representing a neighborhood. In return, we expect the map to send us an event when the user drags or pans it. Both VE and Google handle these basic needs deftly, and their APIs aren’t wildly different. Most notably, Google Maps and Virtual Earth use the same numbering scheme and heights for their zoom levels, so we didn’t have to worry about URLs with zoom levels suddenly looking wrong on Google.

The most complicated parts of our application turn out to have nothing to do with the map: parsing the massive data structures we retrieve via XHR, locking the UI at certain moments to prevent the user from dragging mid-search, providing back-button support, and making sure we load our page resources in the most network-efficient way.
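That narrow surface is worth spelling out, because it's what made the port tractable: everything our UI asks of either vendor fits behind a handful of calls. A hedged sketch of that surface, as a vendor-neutral adapter with a fake backend (the interface and names are mine for illustration, not our actual code):

```javascript
// The handful of map operations described above, expressed as a
// vendor-neutral adapter. A Virtual Earth or Google Maps backend would
// implement these same four calls; everything else (traffic, 3D,
// directions, geocoding) goes unused. This fake backend just records
// calls; all names are hypothetical.
function FakeMapAdapter() {
  this.center = null;
  this.overlays = [];
  this.moveHandlers = [];
}

// "Tell the map to go somewhere."
FakeMapAdapter.prototype.goTo = function (lat, lng, zoom) {
  this.center = { lat: lat, lng: lng, zoom: zoom };
};

// "Tell it to draw some houses."
FakeMapAdapter.prototype.drawHouses = function (houses) {
  this.overlays.push({ type: 'houses', items: houses });
};

// "Once in a while, draw a polygon representing a neighborhood."
FakeMapAdapter.prototype.drawPolygon = function (points) {
  this.overlays.push({ type: 'polygon', points: points });
};

// "Send us an event when the user drags or pans."
FakeMapAdapter.prototype.onUserMove = function (handler) {
  this.moveHandlers.push(handler);
};
```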
So, how did it feel as a programmer to move from VE to Google? Overall, it was great; there were a bunch of times where it felt like Google’s Map API just worked better or was better organized than VE’s. None of them were large issues, but they added up. As examples:
- Google provides both a movestart event and a dragstart event, so you can easily distinguish map moves caused by programmatic calls from map moves caused by the user. Doing this in VE has often proved painful for us.
- Fewer things happen in a timeout in Google Maps, which was a pattern that caused us considerable pain in VE.
- Resizing seems like a less heavyweight operation in Google Maps, since the map doesn’t move by default when you resize.
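For context on that first bullet: when an API fires one generic move event for both programmatic and user-initiated moves, you end up tracking the distinction yourself with a flag. Roughly, the bookkeeping looks like this; it's a sketch of the general pattern, not our actual VE code, and Google's separate events make it unnecessary.

```javascript
// Flag-based workaround for a map API that fires the same move event
// whether our code moved the map or the user dragged it. The app sets
// the flag before any programmatic move; the move handler consumes it.
// All names are illustrative.
function MoveTracker() {
  this.pendingProgrammatic = false;
}

// Call immediately before any programmatic pan/zoom.
MoveTracker.prototype.beforeProgrammaticMove = function () {
  this.pendingProgrammatic = true;
};

// Call from the map's generic move handler: returns true when the move
// came from the user, and resets the flag either way.
MoveTracker.prototype.classifyMove = function () {
  var userMove = !this.pendingProgrammatic;
  this.pendingProgrammatic = false;
  return userMove;
};
```

The fragility is obvious: every programmatic move in the codebase has to remember to set the flag first, which is exactly the kind of bookkeeping that made this painful in VE.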
There were of course a few issues with Google Maps (including at least one troubling bug we found in both maps.google.com and the maps API), but the overall experience was silky smooth. And when we did come across problems, we found that, while the Google API docs were less thorough than comparable VE docs, there were many, many more forum and blog posts to help us find the answer.
After my weekend code-a-thon, I put the GMaps prototype up on a test server in our QA labs, and Dan ran a few of our standard performance tests against it. The first thing he found was that the map page consistently fired its onload event 2.5 seconds faster, seemingly due to the size of the GMaps script and the fact that it’s served from a CDN. Next, he ran our absolute worst-case-scenario test (driven by Selenium, naturally), in which we add 500 houses to the map. In IE6, we saw a speedup of about 10 seconds. Even in a fairly fast browser like Firefox 3, the speedup was 3 seconds. And that was on top of the 2.5-second load speedup.
On the web, times like that are just gold; there’s no way you can ignore a several-second speedup. Even though we still had a lot of i’s to dot and t’s to cross, I knew at that moment that we were going to make the transition.
My prototype seemed about half done and had taken two days to code; it took another two weeks of the search team’s time to get it up to production quality. The difference between a product that was coded in a weekend and basically works and one that is coded and tested and works reliably for hundreds of thousands of people is massive. As Glenn is always reminding us, 95% done is at best half done.
Anyway, hope you all like the end result, and I hope you find using Redfin that much snappier. My thanks to all the folks who worked on the transition, especially Dan, Navtej, Jane, and Jim. If you have any questions about the transition, feel free to ask in the comments.
* If you’re reading, Mom, no, it wasn’t literally meth-fueled.