counting still easy, critical thinking still surprisingly hard

Another security study making the rounds today in which someone who purports to know a lot about analyzing security — whose blog tagline, in fact, cautions that “we should try not to simplify [security] to the point of uselessness” — has decided that a product becomes less secure when the developer fixes and discloses vulnerabilities that they find in-house. What Jeff Jones, a director of Security Strategy at Microsoft, has done is simply count the number of fixed vulnerabilities reported by each of Microsoft and Mozilla, grouping them by labelled severity.

What could be simpler? Perhaps nothing. What could be more useless? Again, perhaps nothing.

You can only count what the vendor wants you to see

If Mozilla wanted to do better than Microsoft on this report, we would have an easy path: stop fixing and disclosing bugs that we find in-house. It is well known that Microsoft redacts release notes for service packs and bundles fixes, sometimes meaning that you get a single vulnerability “counted” for, say, seven defects repaired. Or maybe you don’t hear about it at all, because it was rolled into SP2 and they didn’t make any noise about it.

We count every defect distinctly. We count the ones that Mozilla developers find in-house. We count the things we do to mitigate defects in other pieces of software, including Windows itself and other third-party plugins. We count memory behaviour that we think might be exploitable, even if no exploit has ever been demonstrated and the issue in question was found in-house. We open our bugs up after we’ve shipped fixes, so that people don’t have to take our word for our severity ratings.

While Microsoft’s senior technical staff are trying to get severity ratings dialed down (unsuccessfully; kudos to MSRC for sticking to their guns), we are consistently rounding our severities up when there’s any doubt at all.

More fixes means less security?

Even if the scales were the same, and we were living in a parallel universe in which Microsoft even approached Mozilla’s standards of transparency and disclosure, the logic is just baffling: Jeff is saying that Mozilla’s products are less secure than Microsoft’s because Mozilla fixed more bugs. By that measure, IE4 is even more secure, because there were no security bugs fixed in that time frame; bravo to Microsoft for that!

I use Microsoft’s software products myself; I’m typing this on a machine that’s running Vista, in fact. Not only am I pretty upset that we see Microsoft referencing this report without disclosing that it was written by a Microsoft director of Security Strategy, but I’m also concerned for my own safety. Do people in charge of security strategy at Microsoft really believe that aggressively concealing the count of fixes that do make it out makes a product more secure? Shouldn’t they be trying to fix more bugs, rather than writing reports that would “punish” them for actively improving the security of their users rather than hoping that defects aren’t found by someone who they can’t keep quiet?

Microsoft should be embarrassed to be associated with this sort of ridiculous “analysis”. We don’t pretend that hiding the rate of fixes improves our users’ security in any way, and we never will. We’re transparent and aggressive in dealing with security issues, and 130 million Firefox users are safer for it every day.

with vacations like this, who needs work?

I took a day off today to get some stuff done around the house, and to put some new drives in the server that hosts this blog, among other wonders.

Once we got into the server hosting space, which was mildly exciting, things went downhill in a hurry:

I had 4 drives in my backpack, and 4 drives in the machine, but only capacity for 6 drives of this size (3.5″). I did not have the right kind of screwdriver to manipulate the front panel of the machine — and to fit the 5th and 6th drives, even, I would have to remove the CD-ROM and media-bay stuff. And get more sleds.

While I was removing the CD-ROM, and discovering that the chassis probably wanted an adapter of sorts to enjoy these new drives at all, I managed to stick my left pinky a little into the whirling maw of pain that is a chassis fan. That little mother was really spinning, because the impact sent a blade flying off the fan, and sent me to St. Mike’s to get a tetanus shot and have a doctor inspect my now-oddly-hamburger-like pinkytip.

So now I need:

  • 2 drive sleds,
  • possibly, some conversion kit to go from ‘4 x 3.5″ drives plus media bay’ to ‘6 x 3.5″ drives without media bay’ on a Dell PowerEdge 2950,
  • two new chassis fan modules,
  • someone to work the shift, control and command keys on my computer for a little while,
  • a picture of the blood spatter on the sticker that says “don’t put your finger in here when the power’s on, moron”.

And I’m totally going to need another day off at some point soon, after this mess.

(Oh, I also lost my book somewhere today, possibly at lunch, so I had to buy another copy.)

what makes firefox 3

A couple of articles today are reporting that we’re going to ship Firefox 3 with 80% of our current blocker list still remaining to be fixed, which has caused quite the kerfuffle in our little corner of the internet. It appears to be an honest mistake, since a set of meeting notes did include that prediction, along with other elements that mention other approaches to the Firefox end-game, but it’s not our intent to cut Firefox blockers from the fix list against a hard numerical target or fixed deadline. As Matt Asay has noted, we’ve already demonstrated with this product cycle that we don’t roll that way.

At some point, of course, the number of “bugs we’ll ship with” will hit 100%, unless we manage to produce the first piece of bug-free software I’ve ever worked with, but even with such numerical truisms aside, the picture here isn’t as simple as it seems. “Bug” in our world — as with every software shop I’ve ever worked in, to be honest — includes desired feature improvements, optimizations, basically everything in the gap between “how the software is” and “how someone would like the software to be”. Because of history and some tool limitations, and because we now have a larger set of people triaging blocker nominations than we ever have before, the “blocking” flag doesn’t always strictly mean “we would not ship Firefox 3 if this specific bug isn’t fixed”. It can also mean “we should look at this in more detail before we ship” or “we’d like to focus developers on this set of bugs” or “don’t forget to do something (release note, document workaround, reach out to site authors, etc.) here before we ship”.

Sometimes, of course, it definitely does mean “we really should not ship without this bug fixed”, which is the most common understanding of “blocker bug”. Over time, our impressions of the severity of something can change, up or down, as usage on the web changes, or features get deferred (meaning that “mandatory” platform changes to support the features are no longer mandatory). Some things that we thought were blockers at one point may well be evaluated not to be later on; we reserve the right to change our minds, as must all learning people and organizations, but we’re not going to do that on a strictly numerical basis, and certainly not on the basis of some system dreamed up by a 16th century doctor.

Of course, we want to get Firefox 3 out to users soon, because there are tens of thousands of improvements there: better support for web standards, speed and memory improvements, great new productivity features, safety and security features, straight-up bug fixes, lots of UI polish, and powerful new APIs for extension developers. But we also need — which trumps the “want” of soon, as you would expect — to make sure that we ship a product that’s good enough for a quarter-billion users (on our current growth curve, we could easily see that many people using Firefox during Firefox 3’s lifetime), that’s worthy of the name Firefox, and that we’re all proud to send into the world. Many of us worked on Netscape 6, so we take this pretty seriously.

Mike Schroepfer, our VP engineering, isn’t dogmatic about many things, which is one of the reasons he’s so good at his job. But he’s pretty damned unequivocal that we’re not going to ship until we’re done, as you can read here and elsewhere:

1) We are driven by quality, not time. We want Firefox 3 to be something that we are all proud of. This means features that delight users and the same or higher quality than previous releases. “Quality” includes performance (Tp/Ts/TDHTML/etc), footprint, web compatibility, regressions, and general fit and finish. Having said that, we want to move the web forward and are in a competitive market. So we should converge on a release as fast as possible.

4) We’ll release betas until we complete our regression work and incorporate feedback from wider-scale testing. Before we release the final beta, performance (specifically Ts, Tp, Tdhtml, Txul, and any other benchmarks we add to the main tinderboxes) will be as good as or better than 1.8. We should strive for improved Tp and Tdhtml scores vs. 1.8.

  • When will the last Beta ship?
    • As soon as it is ready (see #4 above)

There’s nothing new or changed here, other than an unfortunate mixup in some meeting notes and that more people than ever before are watching what we do and how we do it. That reporters are reading our meeting minutes to track the project indicates that what we’re doing is important to a lot of people, and that makes us more motivated to focus on quality than before, and not at all motivated to push out a release to meet some arbitrary deadline. We’re in this for the long game; years from now, when the release date is nothing more than nerd trivia, people will remember what Firefox 3 did for them, and how well it worked. It’s going to be awesome, even if you have to wait.

Update: Matt Asay has posted a follow-up article. It’s all good.

correction: Mozilla, StopBadware and Google

Earlier this week, Chris Soghoian posted an interview he did with me about Mozilla and our relationship with Google, as well as some other security topics. I’d had some concerns with a previous article of his, and I was glad that he was willing to take the time to discuss them with me.

In the course of that conversation, though, I misled him about how the upcoming malware list is managed; I had mistaken StopBadware’s role as review and removal channel for them having editorial control over the list in general, which is not the case. Maxim Weinstein of StopBadware contacted me to correct my misunderstanding, for which I’m grateful, and has provided a great explanation on the StopBadware site:

Mozilla, Google, and StopBadware are all expected to play a role in ensuring that the needs of both users and web site owners will be addressed in Firefox 3. Mozilla is working with Google to provide a list of potentially harmful URLs that will be used by Firefox to warn users before they browse to a site that may contain malware. This data comes from Google’s own scanning and research, not from StopBadware, as reported. (Our Clearinghouse allows users to search for a site to see if it is currently on Google’s warning list.)

StopBadware’s role will be (as it is now) to ensure that users and web site owners receive as much information as possible about the warning and to provide a transparent review process to assist site owners in understanding why a site was flagged and/or notifying Google that it has been cleaned.

By working together, we help protect users from potentially dangerous web sites while ensuring that owners of legitimate sites have a way to understand the warnings, clean up their sites, and remove the warnings.

I think this is a really good model that combines the scalability we need to protect more than 130 million users, the breadth of detection necessary to keep up with the modern pace of attacks, and a transparent and neutral process for reviewing and clearing sites that have been fixed. I just wish I’d understood it properly before speaking with Chris!

on evangelism

I was going to write about the latest addition to our team, but then I realized that I still haven’t actually posted about what the Evangelism team does here at Mozilla, and we’ve been together for about 3 months now.

Here’s what we’re set up to do:

  • Help people understand Mozilla
  • Help people connect productively and enjoyably with the Mozilla project
  • Help Mozilla communicate “with itself”
  • Help people understand, build, and benefit from the open web

I’ll elaborate on each of those in upcoming posts, because I think they’re each important and interesting, but the core concept that we’ve been using to organize our thoughts so far is one of “stories”. What should the story of a new contributor getting involved be like? What’s the story of the performance work happening in Firefox 3? What is our story on standards, specifications, and interoperability? What story do we need to tell to explain to people why accessibility and cross-platform technologies are important? Stories aren’t always in narrative form — though when they can be, it’s often both fun and inspiring — and they’re definitely not intended to be fictional. We might tell stories that are aspirational, describing what we want the experience to be for someone who wants to help web sites become more compatible with all browsers, but we won’t tell lies. We’ll make mistakes, and we’ll change our minds, and we might have to simplify to make things understandable, but we won’t lie to people. We don’t need to — the true things about Mozilla are fantastic — and we just aren’t good at it. Mozilla is a project that is defined by openness and candor, and fighting that nature is neither productive nor viable in the long term.

We’re going to work on helping identify and refine important stories, as well as finding ways to make some aspirational stories come to life, but we’re also going to help people tell their Mozilla stories. From a peek inside the history of Mozilla’s test automation to what’s involved in a briefing with a reporter, we want people to understand what all is going on with the Mozilla project. Helping the right stories get to the right audiences — within the community and beyond — is a big part of the value we need to provide, and we’re ready to get started.

Evangelism isn’t just about saying nice things, since productive attention to challenges and mistakes is a critical part of improving how we work, and it’s not about making everyone love Mozilla. It’s about making sure that the great things Mozilla does are visible to the world, encouraging people who share our values, and helping our unique project work together better. It’s not going to be how everyone else does evangelism, but after nearly a decade of working on Mozilla I’m used to us just being kinda different. We’ll overlap in some areas with marketing, public relations, and other activities — if we do our jobs well, we’ll probably overlap with just about every part of the project! — but we’re concerned with helping out and not with defining territory.

If you’ve been around the project for a long time, the term “Evangelism” probably makes you think of “technical evangelism”, which was the term used for people doing outreach to sites that were not compatible with web standards, and helping/encouraging the site owners to repair their problems. That’s a very small subset of the sorts of evangelism that we’re starting to work on, and to be frank we’re unlikely to do much of that technical evangelism directly, though we’re definitely interested in helping organize and support those who do. (In fact, we have some ideas about that very topic!)

Next up: meet the MoCo Evangelism team.

two things you should read

One thing about work: Stuart has a truly excellent post about memory work in Firefox. It’s the sort of post I’ve been wanting to figure out how to write for some time, and he just plain-out nails it.

One thing about not-work: my lovely sister Steph is featured in an article in the Montreal Gazette, which — in spite of the strange headline and conspicuous lack of photo — I found quite enjoyable.

server switch

I finally got off my butt and moved my blog to the new machine that has PHP and MySQL of appropriately recent vintage to actually run a supported version of WordPress; if you’re reading this, all went quite well.

leaking, growing, and measuring

(This post started small, but got bigger as I noticed more things that aren’t necessarily as obvious to my readers as they are to me, with respect to our process and software. So it grew over time, oh ha ha! It’s almost 1AM, so I will not be editing it further this evening! I might post a summarized version at some point in the future, or I might not.

And then I edited it because Dave pointed out that it sounded like I was saying that other browsers necessarily suffered similar fragmentation woes, which wasn’t my intent. Indeed, the main point of the post is that there can be many possible causes for a given symptom, and that the popular theories (e.g. “massive memory leaks”) may not prove correct.)

I’m going to share some non-news with you: Firefox has memory leaks. I would be shocked to discover that there were any major browser that did not have memory leaks, in fact. Developers in complex systems, be they browsers or video games or operating systems, fight constantly against bad memory behaviours that can cause leaks, excess usage, or in the worst cases even security problems.

(As an aside, it’s still quite, quite common to read articles which reference this long-in-the-tooth post from Ben as the “Mozilla development team” denying that there are leaks in Firefox. You would have a hard time getting any developer to say that there are no leaks in Firefox, and indeed the post in question says, in its second sentence, that Firefox has leaks. You do not need a secret nerd decoder ring here to interpret the text, just basic literacy. Also, it’s no secret that Ben hasn’t been active in Firefox development for quite some time, so for people to point at an article that’s thinking hard about what it would like for its second birthday, rather than actually contacting any of the rather visible and accommodating developers of today — well, it just feels kinda sloppy to me.)

So, Firefox has leaks, and Firefox uses a lot of memory in some cases. A student of logical fallacy will no doubt have no difficulty setting development priorities: to reduce the amount of memory used by Firefox, fix all the leaks. In this case, though, a student of Mencken can happily triumph over the student of fallacy, for even with multifarious leak fixes we would still see cases where Firefox’s “used memory” was quite a bit higher than leaks could account for.

Let me now take you on a journey of discovery. Measuring leaks — contra identifying their root causes or fixing them — is actually quite simple: you count the total amount of memory that you ask the operating system for (usually via an API called malloc), you subtract the amount of memory that you tell the operating system you’re done with (usually via free), and if the number isn’t zero when your program exits, you have a leak. We have a ton of tools for reporting on such leaks, and we monitor them very closely. So when we see that memory usage can go up by 100MB, but there are only a few kilobytes leaked, we get to scratching our heads.
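
The bookkeeping described above can be sketched in a few lines. This is a toy illustration, not Mozilla’s actual leak tooling: a wrapper that tracks bytes requested minus bytes released, so anything left over at exit is a leak.

```python
# Toy leak bookkeeping: count what malloc hands out, subtract what
# free gives back, and report the difference at exit.

class TrackingAllocator:
    def __init__(self):
        self.live = {}            # block id -> size, for blocks not yet freed
        self.total_allocated = 0
        self.total_freed = 0

    def malloc(self, size):
        block = object()          # stand-in for a real memory block
        self.live[id(block)] = size
        self.total_allocated += size
        return block

    def free(self, block):
        size = self.live.pop(id(block))
        self.total_freed += size

    def leaked_bytes(self):
        # Non-zero at program exit means a leak.
        return self.total_allocated - self.total_freed

alloc = TrackingAllocator()
a = alloc.malloc(1024)
b = alloc.malloc(4096)
alloc.free(a)                  # b is never freed: a 4096-byte leak
print(alloc.leaked_bytes())    # → 4096
```

Real tools hook the allocator itself rather than wrapping it, but the arithmetic is exactly this simple, which is why the 100MB-growth-with-kilobytes-leaked result is so puzzling at first glance.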

Schrep, our intrepid VP of Engineering and sommelier, was doing just this sort of head-scratching recently, after he measured some surprising memory behaviour:

  • Start browser.
  • Measure memory usage (“Point 1”).
  • Load a URL that in turn opens many windows. Wait for them to finish loading.
  • Measure memory usage (“Point 2”).
  • Close them all down, and go back to the blank start page.
  • Measure memory usage again (“Point 3”).
  • Force the caches to clear, to eliminate them from the experiment.
  • Measure memory usage again (“Point 4”).

You might expect that the measurements at points 1 and 4 would be the same, or at least quite close (accounting for buffers that are lazily allocated on first use, for example). You might, then, share the surprise in what Schrep found:

[Table of memory usage measured at Point 1, Point 2, Point 3, and Point 4; the measured values are in the linked thread.]

(You can and should, if you care about such things, read the whole thread for more details about how things were measured, and Schrep’s configuration. It also shows the measured sizes for a number of browsers after this test as well as at startup with some representative applications loaded. You may find the results surprising! Go ahead, I’ll wait here!)

So what does cause memory usage to rise that way, if we’re not leaking supertankers worth of memory? Some more investigation ruled out significant contribution from the various caches that Firefox maintains for performance, and discovered that heap fragmentation is likely to be a very significant contributor to the “long-term growth” effects that people observe and complain about. Heap fragmentation is a desperately nerdy thing, and you can read Stuart’s detailed post if you want to see pretty pictures, but if you’ve ever opened a carefully packed piece of equipment and then tried to put it all back in the box, you’ve experienced something somewhat similar; if you take things out and put them back in different orders, it’s hard to get everything to fit together as nicely, and some space gets wasted.
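
The box-packing analogy can be made concrete with a toy first-fit heap model (an illustration I made up, not Gecko’s allocator): after freeing every other block, half the heap is free in total, yet no single hole is big enough for a moderately large request.

```python
# Toy heap: a fixed arena with first-fit allocation and no hole
# coalescing, to show how interleaved frees strand usable space.

HEAP_SIZE = 1000

class ToyHeap:
    def __init__(self, size):
        self.holes = [(0, size)]   # (offset, length); start as one big hole
        self.allocs = {}           # offset -> length

    def malloc(self, n):
        for i, (off, length) in enumerate(self.holes):
            if length >= n:
                # Carve the request out of the first hole that fits.
                if length == n:
                    del self.holes[i]
                else:
                    self.holes[i] = (off + n, length - n)
                self.allocs[off] = n
                return off
        return None                # no single hole big enough

    def free(self, off):
        n = self.allocs.pop(off)
        self.holes.append((off, n))

    def free_bytes(self):
        return sum(length for _, length in self.holes)

heap = ToyHeap(HEAP_SIZE)
blocks = [heap.malloc(100) for _ in range(10)]  # fill the heap exactly
for off in blocks[::2]:                         # free every other block
    heap.free(off)

print(heap.free_bytes())   # → 500: half the heap is free...
print(heap.malloc(300))    # → None: ...but no hole is bigger than 100 bytes
```

The stranded 500 bytes count as “used memory” from the operating system’s point of view, even though the malloc/free ledger balances; that gap is exactly what the fragmentation work aims to shrink.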

The original design for Gecko placed an extremely high premium on memory efficiency. The layout code is littered with places where people did extra work in order to save a few kilobytes here or there, or to shave a few bytes off a structure. If you compute the classic malloc/free running total I mentioned above, I think you’ll find that Gecko typically uses a lot less memory than competitors. But, as I hope I’ve made at least somewhat clear here, there’s more to managing the memory impact of an application than simply balancing the checkbook and keeping your structures lean. When and how you allocate memory can matter as much as, or more than, the things that are simple to theorize about in determining the application’s “total memory footprint”. And making sure that you’re measuring the same things that users are seeing is key to focusing work on things that will be of maximum benefit to them, in the shortest time. We’re working now on ways to reduce the effects of heap fragmentation, just as we’ve invested in fixing leaks and improving our tools for understanding memory consumption and effects, and the outlook is quite promising.

The real punch line of this for Firefox users is that Firefox 3 will continue to improve memory behaviour over long-term usage, and you’ll soon be able to try it out for yourself with the upcoming Firefox 3 beta. Beta 1 won’t have the benefits of the work on fragmentation reduction, but many testers are already reporting dramatically improved memory consumption as well as significant performance gains. We’re never satisfied with the performance of Firefox, just as we always seek to make it more secure, more pleasant to use, and nicer to smell.

meanwhile, in the ecosystem

Right before Hallowe’en, Songbird 0.3 hit the wires, giving people an updated look at what the ‘nest denizens are planning in their webified music player. Right after Hallowe’en, Flock 1.0 arrived, featuring their “social” spin on the web browsing experience. Those teams have obviously worked hard and long to bring new and exciting things to the open web, and, not to take anything away from that work, these apps are also things that the rest of the Mozilla community should feel some pride in. Mozilla has always insisted on very liberal licensing of our technology in no small part so that people can innovate in different directions at the same time. Sometimes those innovations can come back into the shared code, sometimes they inspire other work, and sometimes they help generate experimental results that everyone can use to improve their own products and projects.

Are relations between all the different application developers and technology hackers and community members as great as they could be? No, though I think we’re all working to improve them as we learn how, and I think we’re getting better all the time. Our baseline openness helps a ton, and gives us a ridiculous amount of visible — though not always easy to digest — history of what the project has done, and why. We’re going to hear more and more about openness of platforms, technologies, organizations and processes as that becomes something that developers and users come to expect from the people they work with; I think the world and the web would be in a much better place if more of the players were open in ways that transcended specifications and publication of finished works. But then, I would think that.

Chris Pine from Opera on ES4 decimals

Chris Pine, a great guy who works at Opera on their JS engine (and who once lived with Sid Meier, I was awestruck to learn) has written an article about the decimal arithmetic available in the proposed ES4 draft. Decimals are pretty handy for applications where you need to…well, you should just go read the article; he does a better job than I would. I only got to meet Chris once, at a fateful ECMA meeting last spring, but from reading even this one article I’m sure you’ll understand why I’d love to meet up with him again.
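
For the impatient, the class of problem that decimal arithmetic solves can be shown in a couple of lines. I’m using Python’s decimal module here as a stand-in, since ES4’s syntax isn’t something you can run today; the behaviour it illustrates is the same one Chris’s article covers.

```python
# Binary floats can't represent many base-10 fractions exactly, which
# bites applications like money handling; a decimal type keeps the
# arithmetic exact in base 10.
from decimal import Decimal

print(0.1 + 0.2)                        # → 0.30000000000000004 (binary float)
print(0.1 + 0.2 == 0.3)                 # → False

print(Decimal("0.1") + Decimal("0.2"))  # → 0.3 (exact in decimal)
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # → True
```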

Also of interest to many will likely be this phrase: “JavaScript 2 (supported in future versions of Opera, but not right at the moment.)” Go Opera!