five by five, in the pipe

A little more than eighteen-hundred days ago, I and many others held our breath as the much-anticipated Firefox 1.0 was released to the world. A million downloads in the first week pushed our server infrastructure to the brink, and left me reeling: we had come so far from the days of Netscape 6 and the drive to Mozilla 1.0. Our message of a better browser experience, exemplified by the security and performance and personalization and open source and standards-friendliness of Firefox, had found a welcoming audience.

We faced, then, a daunting series of challenges: shifting focus to our most promising product (Firefox) while maintaining the energy and contribution of the Mozilla community; making the project sustainable over the long term, within the inviolable parameters of our mission; navigating new waters of commercial-non-profit-hybrid-community-mainstream-competitive software. We’ve had success at all of those so far, by my lights, though surely not without our bumps and scrapes.

The world is very different today than it was when Firefox was born. Microsoft has rebuilt its browser team, and released two major updates to its browser — at the time, I counted IE7 as one of Mozilla’s greatest achievements. Two other software Goliaths, Apple and Google, have joined the browser fray with gusto. Where once only Opera dared to tread, the browsing experience is now seen as a defining characteristic of a mobile phone, and we are ourselves getting ready to rock it.

Even in this savagely competitive environment, Firefox and Mozilla continue to thrive. Of our 330 million users world-wide, more than 100M joined in the last year, and 30M in the last two months alone. We’ve continued to grow incredibly even since the latest competitor entered the scene, because we’ve continued to relentlessly improve Firefox and the web in ways that matter to people around the world. Every day we, along with our essential mirror partners, ship almost twice as many Firefox downloads as we did in that incredible release explosion five years ago.

In January, I’ll have been involved in Mozilla for a dozen years. It has been a lot of work and a lot of fun, a professional and personal opportunity that I think makes me one of the luckiest software professionals ever to whine about their debugger. Thank you to everyone who has helped make Firefox what it is today, and what it will be tomorrow. There’s lots more to do, but please take at least a few minutes today to sit back and relish the impact you’ve had on the web, and on the people who use it.

updating the update, as it were

I made an update to my WPF timeline post, but I wanted to make sure that the correction was seen by people who may not revisit that post.

The SRD blog post which revealed that Firefox users were also exposed to the IE vulnerability was published on Tuesday, not Monday. The post is labelled as having been published Monday, and a timeline that included that date survived review by Microsoft, but it was nonetheless an error that I published, so I’ll own it. To the best of my knowledge, the SRD post which informed us and the world of the Firefox exposure was published on Tuesday, after the patch and bulletins were first made available to Windows users.

You guys all about ready to have this thing entirely behind us? Yeah, me too. Me too.

.NET Framework Assistant blocked to disarm security vulnerability

I’ve previously posted about the .NET Framework Assistant add-on that was delivered via Windows Update earlier this year. It’s recently surfaced that it has a serious security vulnerability, and Microsoft is recommending that users disable the add-on if they have not installed IE patch MS09-054.

Because of the difficulties some users have had entirely removing the add-on, and because of the severity of the risk it represents if not disabled, we contacted Microsoft today to indicate that we were looking to disable the extension and plugin for all users via our blocklisting mechanism. Microsoft agreed with the plan, and we put the blocklist entry live immediately. (Some users are already seeing it disabled, less than an hour after we added it!)

Updated to reflect updates to Microsoft’s blog post. Also, the add-on was confirmed to not be a vector for the vulnerabilities, so it was removed from the blocklist. The plugin is still blocked pending more information about patch deployment rates; work is underway to make the blocking overridable to accommodate enterprises and sophisticated users who know they have installed the IE patch.

dealing with the .NET ClickOnce add-on

As a number of people have reported, a recent update to Microsoft’s .NET Framework resulted in an add-on being installed into Firefox. Shortly after this patch was released through Windows Update, we were in contact with Microsoft to see how to resolve this issue, as we were hearing directly and indirectly from users that they wanted to uninstall the add-on, and were unable to do so through the Firefox Add-on Manager.

Until recently, removing this add-on from Firefox required that users manually edit the registry, but I’m pleased to report that Microsoft has made available a downloadable patch, and has now added it to the knowledge base article on the topic. Once this patch is applied, the add-on can be uninstalled per-user. (On Windows 7 Release Candidate, the add-on is already the fixed version, at least in my own testing.)

The add-on that was delivered through Windows Update is not compatible with Firefox 3.5, so we’re still trying to figure out how to make sure that 250M-or-so users aren’t confused or — worse — scared off of the upgrade when they are informed that this add-on will be disabled. I’ll report back when we know how that’s going to work, hopefully before Firefox 3.5 is released!

[Edit: removed reference to "disabling".]

it’s full of bits

Deb’s excellent post about Firefox 3’s bookmarking system hit Digg today, on our shared server, which reminded me that I needed to install some WordPress caching software.

[Graph: big network spike around 9AM Eastern]

No sweat; wp-super-cache, I thank you.

year of the Gecko

Stuart put up a great post today describing the results of our intensive focus on memory use in Firefox 3 (and followed up, after many requests from commenters on his blog and elsewhere, with a graph including Safari and Opera). The memory gains are great, and they cover all sorts of improvements: leak fixes, allocator changes, new facilities to eliminate classes of troublesome entrainment, and better cache management.

It’s a time-honoured programming tradeoff that using more space speeds you up, but that’s not what happened here: our memory-reduction regimen actually made us faster in a lot of cases by making us more cache-friendly and by side-effects like using a better allocator. And we didn’t stop there, dropping the hammer on major performance gains in rendering and JavaScript as well, and leaving us as of today right at the top of tests like Apple’s SunSpider.

Productivity and feature wins in Firefox-the-application are really coming together as well, with the AwesomeBar leading many people’s lists of favourite new features. It really has changed the way I use the web, and I feel like everything I’ve ever seen is right at my fingertips. Add to that the great strides in OS integration and theming for Mac and Linux and it really is shaping up to be the best browser the web has ever known.

I’m obviously excited; this feels like exactly the right sort of everything-coming-together that should be in the air on the cusp of the 10th anniversary of the original source release. It hasn’t been an easy ride, especially pre-Firefox, and nobody on the project takes our success so far for granted — which makes it all the more satisfying to see years of investment pay off in a fantastic product.

Other people are excited too, from users and journalists to extension developers and companies looking to add web tech to their products. In the mobile arena especially we’re seeing a ton of excitement about the gains in speed and size. A lot of people aren’t yet used to thinking of Mozilla as a source of mobile-grade technology, but they weren’t used to thinking of us as a major browser force either. It’s fun to break the model.

Fast, small, cross-platform, industry-leading stability, solid OS integration, excellent standards support, excellent web compatibility, great security, ridiculously extensible, a productive app platform, accessible, localized to heck and back, open source from top to bottom: it’s a great time to be building on top of Gecko, and Firefox 3 is just the beginning. Wait until you see what we have in store for the next release…

why update add-ons now?

With Firefox 3 still a couple of months away, it would seem reasonable to wonder why we’re encouraging add-on developers to get their add-ons updated for Firefox 3 already. For most add-on developers, it will indeed be a pretty quick process to update to the new chrome layout, a new API or two, and test it out, but we want people to start on that process now nonetheless. There are two reasons for this, in my mind:

  • The kinds of people who test our betas and give us great feedback are the kinds of people who have a bunch of extensions installed, and not having their favourite extensions work makes it much less pleasant for them to do in-depth testing.
  • If there is a hard problem found when updating an add-on, we want to know about things we can do on the Firefox side to make it easier, in time for those changes to safely get into the release stream. Waiting until the Firefox RCs are out would mean that we have a lot, lot less room to maneuver when it comes to resolving any problems found.

So please, take a moment to start updating your add-on this weekend, and let us know if you need help. Operators, in the Special Forces sense, are standing by.

leaking, growing, and measuring

(This post started small, but got bigger as I noticed more things that aren’t necessarily as obvious to my readers as they are to me, with respect to our process and software. So it grew over time, oh ha ha! It’s almost 1AM, so I will not be editing it further this evening! I might post a summarized version at some point in the future, or I might not.

And then I edited it because Dave pointed out that it sounded like I was saying that other browsers necessarily suffered similar fragmentation woes, which wasn’t my intent. Indeed, the main point of the post is that there can be many possible causes for a given symptom, and that the popular theories (e.g. “massive memory leaks”) may not prove correct.)

I’m going to share some non-news with you: Firefox has memory leaks. I would be shocked to discover that there were any major browser that did not have memory leaks, in fact. Developers in complex systems, be they browsers or video games or operating systems, fight constantly against bad memory behaviours that can cause leaks, excess usage, or in the worst cases even security problems.

(As an aside, it’s still quite, quite common to read articles which reference this long-in-the-tooth post from Ben as the “Mozilla development team” denying that there are leaks in Firefox. You would have a hard time getting any developer to say that there are no leaks in Firefox, and indeed the post in question says, in its second sentence, that Firefox has leaks. You do not need a secret nerd decoder ring here to interpret the text, just basic literacy. Also, it’s no secret that Ben hasn’t been active in Firefox development for quite some time, so for people to point at an article that’s thinking hard about what it would like for its second birthday, rather than actually contacting any of the rather visible and accommodating developers of today — well, it just feels kinda sloppy to me.)

So, Firefox has leaks, and Firefox uses a lot of memory in some cases. A student of logical fallacy will no doubt have no difficulty setting development priorities: to reduce the amount of memory used by Firefox, fix all the leaks. In this case, though, a student of Mencken can happily triumph over the student of fallacy, for even with multifarious leak fixes we would still see cases where Firefox’s “used memory” was quite a bit higher than leaks could account for.

Let me now take you on a journey of discovery. Measuring leaks — contra identifying their root causes or fixing them — is actually quite simple: you count the total amount of memory that you ask the operating system for (usually via an API called malloc), you subtract the amount of memory that you tell the operating system you’re done with (usually via free), and if the number isn’t zero when your program exits, you have a leak. We have a ton of tools for reporting on such leaks, and we monitor them very closely. So when we see that memory usage can go up by 100MB, but there are only a few kilobytes leaked, we get to scratching our heads.
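The bookkeeping described above can be sketched in a few lines. This is a toy illustration of the counting idea only, not Mozilla’s actual leak-detection tooling, and all the names in it are invented:

```python
# Toy leak accounting: track bytes handed out minus bytes returned.
# A nonzero balance at exit means something leaked.

class TrackingAllocator:
    def __init__(self):
        self.outstanding = 0   # bytes allocated but not yet freed
        self.live = {}         # block id -> size

    def malloc(self, size):
        block = object()       # stand-in for a real memory block
        self.live[id(block)] = size
        self.outstanding += size
        return block

    def free(self, block):
        self.outstanding -= self.live.pop(id(block))

alloc = TrackingAllocator()
a = alloc.malloc(1024)
b = alloc.malloc(4096)
alloc.free(a)
# We "forgot" to free b, so the balance at exit reports a leak.
print(alloc.outstanding)  # → 4096
```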

Schrep, our intrepid VP of Engineering and sommelier, was doing just this sort of head-scratching recently, after he measured some surprising memory behaviour:

  • Start browser.
  • Measure memory usage (“Point 1”).
  • Load a URL that in turn opens many windows. Wait for them to finish loading.
  • Measure memory usage (“Point 2”).
  • Close them all down, and go back to the blank start page.
  • Measure memory usage again (“Point 3”).
  • Force the caches to clear, to eliminate them from the experiment.
  • Measure memory usage again (“Point 4”).

You might expect that the measurements at points 1 and 4 would be the same, or at least quite close (accounting for buffers that are lazily allocated on first use, for example). You might, then, share the surprise in what Schrep found:

[Table: memory usage measured at Points 1–4]

(You can and should, if you care about such things, read the whole thread for more details about how things were measured, and Schrep’s configuration. It also shows the measured sizes for a number of browsers after this test as well as at startup with some representative applications loaded. You may find the results surprising! Go ahead, I’ll wait here!)

So what does cause memory usage to rise that way, if we’re not leaking supertankers worth of memory? Some more investigation ruled out significant contribution from the various caches that Firefox maintains for performance, and discovered that heap fragmentation is likely to be a very significant contributor to the “long-term growth” effects that people observe and complain about. Heap fragmentation is a desperately nerdy thing, and you can read Stuart’s detailed post if you want to see pretty pictures, but if you’ve ever opened a carefully packed piece of equipment and then tried to put it all back in the box, you’ve experienced something similar: if you take things out and put them back in different orders, it’s hard to get everything to fit together as nicely, and some space gets wasted.
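The box-packing analogy can be made concrete with a toy first-fit heap. This is a deliberately simplified model, not Gecko’s real allocator (it doesn’t even coalesce adjacent holes): after freeing every other small block, the heap has 64 bytes free in total, but no single hole can satisfy a 64-byte request, so the heap grows anyway.

```python
# Toy first-fit heap illustrating fragmentation.

class ToyHeap:
    def __init__(self):
        self.size = 0          # how far the heap has grown
        self.holes = []        # (offset, length) of freed gaps
        self.blocks = {}       # offset -> length of live blocks

    def alloc(self, n):
        for i, (off, length) in enumerate(self.holes):
            if length >= n:                      # first fit into an old hole
                self.holes[i] = (off + n, length - n)
                if self.holes[i][1] == 0:
                    del self.holes[i]
                self.blocks[off] = n
                return off
        off = self.size                          # no hole fits: grow the heap
        self.size += n
        self.blocks[off] = n
        return off

    def free(self, off):
        self.holes.append((off, self.blocks.pop(off)))

heap = ToyHeap()
blocks = [heap.alloc(16) for _ in range(8)]      # 8 small blocks: 128 bytes
for off in blocks[::2]:
    heap.free(off)                               # free every other one
heap.alloc(64)            # 64 bytes sit free, but only in 16-byte holes
print(heap.size)          # → 192: the heap grew despite the free space
```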

The original design for Gecko placed an extremely high premium on memory efficiency. The layout code is littered with places where people did extra work in order to save a few kilobytes here or there, or to shave a few bytes off a structure. If you compute the classic malloc/free running total I mentioned above, I think you’ll find that Gecko typically uses a lot less memory than its competitors. But, as I hope I’ve made at least somewhat clear here, there’s more to managing the memory impact of an application than simply balancing the checkbook and keeping your structures lean. When and how you allocate memory can be as important in determining the application’s “total memory footprint” as the things that are simple to theorize about, or more so. And making sure that you’re measuring the same things that users are seeing is key to focusing work on the things that will bring them the maximum benefit, in the shortest time. We’re working now on ways to reduce the effects of heap fragmentation, just as we’ve invested in fixing leaks and improving our tools for understanding memory consumption and effects, and the outlook is quite promising.

The real punch line of this for Firefox users is that Firefox 3 will continue to improve memory behaviour over long-term usage, and you’ll soon be able to try it out for yourself with the upcoming Firefox 3 beta. Beta 1 won’t have the benefits of the work on fragmentation reduction, but many testers are already reporting dramatically improved memory consumption as well as significant performance gains. We’re never satisfied with the performance of Firefox, just as we always seek to make it more secure, more pleasant to use, and nicer to smell.

relevance, your honour?

The search engine business is a tough one. People are generally pretty bad at knowing how to phrase queries to give them what they want, to say nothing of dealing with spelling mistakes and synonyms and stemming, and you have to do all that work basically instantaneously. The relevance of search results might be the only thing more important than performance in determining if users will stick with your particular product, or make the trivial switch to another one.

So I was pretty surprised to discover how, er, idiosyncratic the search results were on Live Search for what I — perhaps naively — think of as a pretty straightforward query.

When searching for “Firefox”, the user might want to find the home page for the product, or a description of the history of the project, or maybe even a review of the software. Both Yahoo and Google give you some mix of that, with what seem to me to be pretty reasonable orderings of results.

The Live Search results are a little more difficult for me to understand, since they have the Silverlight developer FAQ as the first result, then an article about cross-site scripting, then an article about ASP.NET, and then the Wikipedia page about Firefox. You have to go to the 8th entry to get the product’s home page, well below the fold on my machine at least. I’ve saved off the results, in case you disbelieve me, or for some reason can’t reproduce them yourself.

Maybe Live Search users really are a different breed, if that’s what they would be most likely to want when searching for Firefox; a ballsy market-differentiation move by Microsoft, if so.

(Canadians don’t call their judges “Your Honour”, and Americans don’t spell honour that way, so the title of this post is a somewhat impossible reference, but I figure you’ll let that slide.)

justin timberlake is a web data ninja

Someone made the mistake of asking, and I couldn’t find the old email I wrote on the topic, so I’m going to inflict upon you how I think that the whole dealing-with-web-data should probably break down in terms of information flow. My thinking here is heavily influenced by a very popular treatise on problem deconstruction, of course.

Step the first: you cut a hole in the page.

To work with page data, we need to find the data. We can do that using heuristics (like various webmail systems do to identify dates for calendar integration, or the auto-linkifying of URLs that is so common), using explicit metadata like microformats, or even letting the user select something on which to focus our data-detection powers.

Step the second: you put the data in the box

Once we’ve found and analyzed the data of the moment, we probably want to bin it (in the statistics sense, not the adorable British accent sense) into a broad classification like “date”, “place”, “person”, “photo”, “event”, etc.

Step the third: you give him or her the box

Once we’ve determined the type and value of the datum in question (I like to use words like “datum” to cover up my insecurity about a lack of academic credentials), we can then present it to the user so that they can send it to a web service, poke a helper app, turn it into HTML on the clipboard for them to paste in their blog, or annotate the page in their Places store with the juicy tidbits. The work on improved content handling in Firefox 3 will give us some useful primitives here, I have reason to hope.
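The first two steps can be sketched with simple regex heuristics. The patterns and categories below are purely illustrative; real data detectors (and real date handling in particular) are far hairier than this:

```python
import re

# Illustrative detectors only: find candidate data, binned by kind.
DETECTORS = {
    "url":   re.compile(r"https?://\S+"),
    "date":  re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),   # ISO dates only
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def detect(text):
    """Steps one and two: cut the data out of the page, put it in a bin."""
    found = []
    for kind, pattern in DETECTORS.items():
        for match in pattern.finditer(text):
            found.append((kind, match.group()))
    return found

page = "Firefox 1.0 shipped 2004-11-09; see http://example.com or mail press@example.org today."

# Step three (presentation) is where the interesting UI questions live;
# here we just print what was found.
for kind, value in detect(page):
    print(kind, value)
```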


Every now and then, someone asks me “what are microformats? why do people use them? do they smell nice?” My first instinct is to say “well, google can tell you that” but it turns out that it’s really pretty likely that you’ll end up on the microformats site, where they will tell you this:

Designed for humans first and machines second, microformats are a set of simple, open data formats built upon existing and widely adopted standards.

There is then a link to “learn more” about microformats, by which I think a reasonable person might assume that they mean “learn anything”, because that description is sort of equivalent to describing Firefox as “a piece of software that is built from C++ and JavaScript”.
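For the curious, here’s roughly what that tagline leaves out. An hCard is a microformat for contact information that reuses plain HTML with agreed-upon class names (vcard, fn, org, and friends); the sketch below pulls fields out of one with Python’s stdlib HTML parser. The parser is deliberately minimal, and the contact details are made up:

```python
from html.parser import HTMLParser

class HCardParser(HTMLParser):
    """Deliberately minimal: grabs the text of fn/org/tel elements."""
    def __init__(self):
        super().__init__()
        self.current = None   # hCard field we're collecting text for
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        for name in ("fn", "org", "tel"):
            if name in classes:
                self.current = name

    def handle_data(self, data):
        if self.current and data.strip():
            self.fields[self.current] = data.strip()
            self.current = None

card = '''<div class="vcard">
  <span class="fn">Jane Hacker</span>,
  <span class="org">Example Widgets</span>
</div>'''

p = HCardParser()
p.feed(card)
print(p.fields)  # → {'fn': 'Jane Hacker', 'org': 'Example Widgets'}
```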

But then, I don’t think that talking about microformats specifically is really the right way forward, and I think that the microformats dudes would agree that microformats are a means to an end.
