ten

Ten years ago today I was fortunate enough to participate in a tremendous event: the release of the Netscape Communicator source code as the first Mozilla source drop. Mitchell has posted a great summary of the past ten years, and I’m sure there will be many tales of woe and glory recounted over the course of the day, but I wanted to take a quick moment, before heading to the office to get ready for a special 10th Anniversary edition of Air Mozilla Live, to say thanks to a couple of people, thereby ensuring that I give incredible offense to those I miss in my rush.

Thanks to Mitchell and Brendan for being my Mozilla mentors, and helping me learn an immense amount over the last decade about how code, people, law, and mission intersect. Thanks to Frank Hecker for seeing an opportunity to maximize where all around him saw risk to mitigate. Thanks to Alan Cox, Miguel de Icaza and others for holding my hand during my first forays into open source, long before I would ever have a chance to make my own small mark on that world. Thanks to Brendan and Clayton for bringing me to Netscape, and for supporting my pre-Mozilla push to make the JS engine open source; I was pleased to be overtaken by events.

Thanks to the hundreds of friends and thousands of colleagues I’ve gained over the past decade — a third of my life! — who have made this ride go, and who have made it wonderful.

I’ll admit that I’m a little verklempt right now.

meat

People keep asking me to blog this, so I shall. It’s really exactly what Alton Brown would tell you.

Needed:

  • Meat, such as rib roast or lamb leg
  • Roasting pan
  • Probe thermometer with temperature-based alarm
  • Kosher salt, black pepper (you can add more spices, obv.)
  • Just a soupçon of oil

Steps to reproduce:

  • Bring roast to room temperature (optional, but helpful!)
  • Set oven to 250F.
  • Lightly (lightly) oil the meat — it should glisten, but there should be no dripping.
  • Rub with salt and pepper and whatever other spices you can’t resist.
  • Put probe in the centre of the meat, set the alarm for 118F
  • When the oven hits 250F, turn it to 200F, and put the meat in
  • When the probe alerts you to the 118F-ness of the meat, take the pan out and tent
  • Turn oven to 500F
  • Take the probe out: you’re not gonna need it any more, and some of them don’t really enjoy 500F
  • When the oven gets to 500F, wait 15 mins to let the walls of the oven get good and hot. (You can actually wait a long time at this stage, right up to your level of comfort with food sitting out of the oven.)
  • Put the roast back in and let it go for about 15 mins, such that a pleasant crust develops.
  • Take the roast out of the pan, tent to let it rest ~15 mins.
  • Meanwhile, you have pan juices and 15 mins to kill. Let your conscience guide you.

The reduction of heat from 250F to 200F when the roast goes in is key. It’s the difference between “perfect medium-rare with a 1/2″ of medium around the edges” and “medium-rare at the centre, medium for the outer third”.
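(For anyone who thinks in Celsius, a quick conversion sketch I’m adding for convenience; the temperatures are the ones above, and the rounding is mine.)

    // Rough Celsius equivalents for the recipe's temperatures: C = (F - 32) * 5/9.
    function toCelsius(f) {
      return Math.round((f - 32) * 5 / 9);
    }

    [118, 200, 250, 500].forEach(function (f) {
      console.log(f + "F is roughly " + toCelsius(f) + "C");
    });
    // 118F ≈ 48C (probe alarm), 200F ≈ 93C (slow roast),
    // 250F ≈ 121C (preheat), 500F ≈ 260C (crust blast).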

the missed opportunity of acid 3

Ian Hickson released his Acid 3 test a little while ago, and though there have been a couple of changes recently, it’s pretty much settled down in its final form. There’s been a lot of discussion of how different browsers do on the test, and it’s clear that some projects are really buckling down to get to 100 and the pixel-perfect rendering. You might ask why Mozilla’s not racking up daily gains, especially if you’re following the relevant bugs and seeing that people have produced patches for some issues that are covered by Acid 3.

The most obvious reason is Firefox 3. We’re in the end-game of building what I really do believe is the best browser the web has ever known, and we expect to be putting it in the hands of more than 170 million users in a pretty short period of time. We’re still taking fixes for important issues, but virtually none of the issues on the Acid 3 list are important enough for us to take at this stage. We don’t want to be rushing fixes in, or rushing out a release, only to find that we’ve broken important sites or regressed previous standards support, or, worse, introduced a security problem. Every API that’s exposed to content needs to be tested for compliance and security and reliability, and we already have some tough rows to hoe with respect to conflicts with existing content as it is. We think the late-stage patches we are taking are worth the test burden, often because they help make the web platform much more powerful, and reflect real-web compatibility and capability issues. Acid 3’s contents, sadly, are not as often of that nature.

Ian’s Acid 3, unlike its predecessors, is not about establishing a baseline of useful web capabilities. It’s quite explicitly about making browser developers jump — Ian specifically sought out tests that were broken in WebKit, Opera, and Gecko, perhaps out of a twisted attempt at fairness. But the Acid tests shouldn’t be fair to browsers, they should be fair to the web; they should be based on how good the web will be as a platform if all browsers conform, not about how far any given browser has to stretch to get there.

The selection of specifications is also troubling. First, there is the requirement that a specification had to be suitably finished by 2004, meaning that only standards finalized during the darkest period of web stagnation were eligible: standards that predate the active revival of the web platform through work like the WHATWG’s <canvas> specification. While this did protect us from tests requiring the worst of SVG’s excesses (SVG 1.2, with support for such key graphical capabilities as file upload and sockets, was promoted to CR in 2007), it also means that the test includes @font-face, a feature so poorly thought of that it was removed in CSS 2.1. I can think of no reason to place such time-based restrictions on specification selection other than perhaps to ensure that there has been time for errata to surface — but in the case of CSS, those errata are directly excluded! Similarly, the WHATWG’s work to finally develop an interoperable specification for such widely-used things as .innerHTML was excluded. (If the test is about “fairness”, I could see not wanting to target specifications so new that people just hadn’t had time to catch up to them, but again I think such a test should be built on more long-term criteria than lining up the starting blocks for a developer sprint. We could be using Acid tests as an “experience index” for the web, if you will.)

I also believe that the tests should focus on areas where the missing features can’t be easily worked around; SMIL’s capabilities are available to interested authors via an easy-to-use library, and if Hixie could stomach digging around in the SVG specification I wish he’d spent his time on things like filters or even colour profiles, whose absence is much harder to work around.
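To make the “easy to work around” point concrete, here’s a rough sketch of scripting a SMIL-style animation by hand; it isn’t any particular library, and the element id is invented for illustration:

    // Approximating <animate attributeName="r" from="10" to="40" dur="1s"> on an
    // SVG circle (id "dot" is hypothetical) with a plain timer, no SMIL required.
    var circle = document.getElementById("dot");
    var start = Date.now();
    var duration = 1000; // one second, like dur="1s"
    var from = 10, to = 40;

    var timer = setInterval(function () {
      var t = Math.min((Date.now() - start) / duration, 1);
      circle.setAttribute("r", from + (to - from) * t);
      if (t === 1) {
        clearInterval(timer);
      }
    }, 16); // roughly 60 updates per second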

People who misread my disappointment or our lack of last-minute Acid-boosting patches as a lack of commitment to standards would do well to study our history as a project. For more than a decade, we’ve been dedicated to support of standards, and improvement of those standards, even though it has often been painful to do so. In 1998 we threw away our entire layout engine in order to rebuild on one that provided, we believed, a better basis for the new standards of the day: DOM, CSS, etc. We’ve changed our implementations of <canvas>, of globalStorage, of JavaScript — all to track changes in standards, and to avoid conflicting with them. Last night, we disabled cross-site XHR because we weren’t certain which way the spec was going to go, and if, e.g., we ship something that doesn’t send cookies, we would constrain where the spec could go by building developer expectations about that behaviour. (These developer expectations about what sorts of requests can be triggered from cross-domain content are basically the entire reason that we need special work for cross-site XHR mashup technology.) We will fix standards compliance bugs; it’s what we do. But we won’t fix them all with the same priority, and I hope that we won’t prioritize Acid 3 fixes artificially highly, because I think that would be a disservice to web developers and users. Where Acid 3 happens to test something that we believe is important to fix, we will of course pursue it: surrogate pair handling or some of the selector bugs seem like good candidates.
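For context, this is the kind of request at issue; the URL is invented, and the comment marks exactly the open cookie question described above:

    // A page on example.com making an XMLHttpRequest to a different site.
    // The unsettled spec question: does this request carry the other site's cookies?
    // Whatever a browser ships first becomes what developers come to expect.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "http://api.example.org/data", true); // hypothetical cross-site URL
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        console.log(xhr.responseText);
      }
    };
    xhr.send(null);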

Acid 3 could have had a tremendous positive effect on the web, representing the next target for the web platform, and helping developers prioritize work in such a way as to maximize the aggregate capabilities of the web. Instead, it feels like a puzzle game, and I can easily imagine the developers of the web’s proprietary competitors chuckling about the hundreds of developer-hours that have gone into adding another way to iterate over nodes, or twiddling internal APIs to special case a testing font. I don’t think it’s worthless, but I think it could have been a lot more, especially with someone as talented and terrifyingly detail-oriented as Ian at the helm.
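(For the curious, “another way to iterate over nodes” means something like DOM Traversal’s NodeIterator; a minimal sketch, assuming a browser that implements it:)

    // DOM Traversal's NodeIterator walks the same tree that childNodes and
    // getElementsByTagName already expose; hence "another way to iterate".
    var it = document.createNodeIterator(document.body, NodeFilter.SHOW_ELEMENT,
                                         null, false);
    var node;
    while ((node = it.nextNode())) {
      console.log(node.tagName);
    }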

Maybe next time.

[Update: the WebKit checkin to special-case Ahem wasn’t one that used a private OS X API; it was one that used an internal CoreGraphics API on Windows. I should have been reading more closely; apologies to the WebKit folks.]

year of the Gecko

Stuart put up a great post today describing the results of our intensive focus on memory use in Firefox 3 (and followed up, after many requests from commenters on his blog and elsewhere, with a graph including Safari and Opera). The memory gains are great, and they cover all sorts of improvements: leak fixes, allocator changes, new facilities to eliminate classes of troublesome entrainment, and better cache management.

It’s a time-honoured programming tradeoff that using more space speeds you up, but that’s not what happened here: our memory-reduction regimen actually made us faster in a lot of cases by making us more cache-friendly and by side-effects like using a better allocator. And we didn’t stop there, dropping the hammer on major performance gains in rendering and JavaScript as well, and leaving us as of today right at the top of tests like Apple’s SunSpider.

Productivity and feature wins in Firefox-the-application are really coming together as well, with the AwesomeBar leading many people’s lists of favourite new features. It really has changed the way I use the web, and I feel like everything I’ve ever seen is right at my fingertips. Add to that the great strides in OS integration and theming for Mac and Linux, and it really is shaping up to be the best browser the web has ever known.

I’m obviously excited; this feels like exactly the right sort of everything-coming-together that should be in the air on the cusp of the 10th anniversary of the original source release. It hasn’t been an easy ride, especially pre-Firefox, and nobody on the project takes our success so far for granted — which makes it all the more satisfying to see years of investment pay off in a fantastic product.

Other people are excited too, from users and journalists to extension developers and companies looking to add web tech to their products. In the mobile arena especially we’re seeing a ton of excitement about the gains in speed and size. A lot of people aren’t yet used to thinking of Mozilla as a source of mobile-grade technology, but they weren’t used to thinking of us as a major browser force either. It’s fun to break the model.

Fast, small, cross-platform, industry-leading stability, solid OS integration, excellent standards support, excellent web compatibility, great security, ridiculously extensible, a productive app platform, accessible, localized to heck and back, open source from top to bottom: it’s a great time to be building on top of Gecko, and Firefox 3 is just the beginning. Wait until you see what we have in store for the next release…

that’s when I shoot the water cannon

Taras-and-company’s ridiculous progress continues apace, in pursuit of tools to let us master — rather than be mastered by — the scale and…uniqueness of our codebase. One great milestone is the addition of build-time static checking to the mozilla2 build process. If the enforcement opportunities presented by static-checking.js are insufficient, then this guy in Atlanta may have something to help us.

[link: robot vigilante]
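(For a flavour of what these build-time checks look like, here’s a toy sketch; the analysis passes are written in JavaScript, but the hook name and rule below are invented for illustration and aren’t taken from the real static-checking.js.)

    // Hypothetical analysis pass run over each function definition the compiler
    // sees during the build; the hook name and rule are illustrative, not Mozilla's.
    var warnings = [];

    function processFunction(decl, body) {
      // Invented rule: functions whose names suggest they can fail
      // probably shouldn't be declared to return void.
      if (/^(Maybe|Try)/.test(decl.name) && decl.returnType === "void") {
        warnings.push(decl.loc + ": " + decl.name +
                      " looks fallible but returns void");
      }
    }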

credit where it’s due

There’s going to be a lot of talk this week coming out of MIX, about IE8. Early reports are interesting, if still often hidden behind “NOT PUBLISHED” links on MSDN, but the most interesting thing I’ve seen yet is their change in IE8 rendering mode default. The specific change is a good one, but even more than that I think it’s very promising as a matter of process.

The original decision to make IE8 default to matching IE7’s legacy rendering mode was made in secret, secret even from many of the web experts in the organizations linked to from Dean’s post. Once the conversation was opened to input from the rest of the world’s experts on web content compatibility, they were able to get to a much better decision, and happily that’s reflected in the updated plans for IE8. I hope that this case helps Microsoft understand more generally that significant web-compatibility decisions are too important to be left to closed groups: the web is simply too big, with too many stakeholders, for that to be a workable path to success. (Not that simply operating under “a standards body” is sure to avoid such secrecy, since those bodies can be similarly closed and do similar damage, but they at least tend to have more diverse viewpoints, which is a half-measure of some value.)

I was also heartened that they were able to make such a change after announcing it. We’ve all heard before that the IE team wasn’t able to talk about their plans until they were very certain, because people build businesses on those plans and will be harmed if changes are made after an announcement. The conspicuous lack of bankruptcies attributable to WinFS being dropped from Vista aside, the fact that they were able to listen to “global” feedback and make a significant change based on it gives me hope that a new, more open process may be beginning here. Bravo and thanks to Microsoft for listening genuinely and making a change that I think will have a very positive effect on standards-based content on the web.

Some of the items listed in the “IE8 Readiness Toolkit” look pretty interesting, and in at least one case (the “XDomainRequest” API) seem pretty close to the subjects of some recent discussions in standards groups and various open projects about solving similar problems. I’m not sure if Microsoft proposed their API or semantics to that group, or shared the design thinking that went into their choices, but I’m permitting myself to hope anew!
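For comparison’s sake, here’s roughly what that looks like; the shape is taken from the early IE8 descriptions and may well change before release, and the URL is invented:

    // IE8's proposed XDomainRequest, as sketched in the MIX-era material.
    // The standards-group discussions aim at the same problem through the
    // existing XMLHttpRequest object plus server-side opt-in headers instead.
    var xdr = new XDomainRequest();
    xdr.onload = function () {
      console.log(xdr.responseText);
    };
    xdr.onerror = function () {
      console.log("cross-domain request refused or failed");
    };
    xdr.open("GET", "http://api.example.org/data"); // hypothetical URL
    xdr.send();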

(In light of my previous post about compatibility liability, I was also very interested to see this titbit:

[...] we do not believe any current legal requirements would dictate which rendering mode a browser must use [...]

It’s much more likely that they’re referencing the Opera suit than that they’re talking about Microsoft representatives’ previous claims of lawsuit risk stemming from changes in new product versions, but it made me smile nonetheless.)