another step forward for open video on the web

Today, Google announced that it is joining Mozilla and Opera in exclusively supporting open video codecs (to wit, WebM and Theora) in its Chrome browser.

It’s a great move, and one we at Mozilla are obviously glad to see. It’s been a great first 8 months for WebM: multiple browser implementations, hardware support, an independent implementation from ffmpeg, performance improvements, support from lots of transcoding services, and content growth on the web. Organizations like Google, Mozilla, Opera and others who really believe in the importance of unencumbered video on the web are putting their products where our mouths are, and the web is going to be stronger and more awesome for it.

Congratulations and thanks, Google.

onward, nimble monkey

Busy times, busy times. You guys see this browser from Google? A few people sent me that link today; definitely interesting stuff. They have a spiffy new JS engine with some genuinely cool ideas in it; I’m enjoying reading through the code, and I think I’m learning some Smalltalk along the way.

Since we landed TraceMonkey 10 (ahem) days ago, we’ve been working mostly on stability and bug fixes, as is usual for the early-but-promising stage of new tech. Jesse’s fuzzers have been as deliciously useful as always, and we’ve been able to make some small performance improvements along the way. Things are looking pretty good.
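
For the curious, the core trick of fuzzers like Jesse’s is to generate random-but-plausible programs from a small grammar and throw them at the engine until something breaks. What follows is only a sketch of that loop (the real tools have far richer grammars and watch a debug shell for crashes and assertion failures), written in TypeScript for readability:

```typescript
// A toy expression fuzzer: build random snippets from a tiny grammar,
// eval() them, and flag anything that fails unexpectedly. The real tools
// have much richer grammars and watch a debug shell for crashes and
// assertion failures; this only shows the generate-and-run loop.

const atoms = ["0", "1", "x", "NaN", '"s"', "[]", "({})"];
const unary = ["-", "!", "typeof ", "void "];
const binary = ["+", "-", "*", "/", "%", "==", "===", "<", "&&", "||"];

function pick<T>(xs: T[]): T {
  return xs[Math.floor(Math.random() * xs.length)];
}

// Recursively build a random parenthesized expression up to a given depth.
function genExpr(depth: number): string {
  if (depth <= 0 || Math.random() < 0.3) {
    return pick(atoms);
  }
  return Math.random() < 0.5
    ? `(${pick(unary)}${genExpr(depth - 1)})`
    : `(${genExpr(depth - 1)} ${pick(binary)} ${genExpr(depth - 1)})`;
}

for (let i = 0; i < 1000; i++) {
  const src = `var x = 1; x = ${genExpr(4)}; x;`;
  try {
    eval(src);
  } catch (e) {
    // None of these operators should throw on these operands, so anything
    // caught here is worth a closer look.
    console.log("interesting:", src, e);
  }
}
```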

Pretty soon we’ll be able to dive back in and work on more optimizations, and perhaps even write a paper or two. (We’ve learned a lot in the past couple of months, and it would be great to share those lessons with others who might be looking to make dynamic languages fast.) I have lots to keep me busy, but I’m hoping I’ll be able to sneak off and dabble here and there. Don’t tell my boss.

correction: Mozilla, StopBadware and Google

Earlier this week, Chris Soghoian posted an interview he did with me about Mozilla and our relationship with Google, as well as some other security topics. I’d had some concerns with a previous article of his, and I was glad that he was willing to take the time to discuss them with me.

In the course of that conversation, though, I misled him about how the upcoming malware list is managed; I had mistaken StopBadware’s role as the review and removal channel for editorial control over the list in general, which is not the case. Maxim Weinstein of StopBadware contacted me to correct my misunderstanding, for which I’m grateful, and he has provided a great explanation on the StopBadware site:

Mozilla, Google, and StopBadware are all expected to play a role in ensuring that the needs of both users and web site owners will be addressed in Firefox 3. Mozilla is working with Google to provide a list of potentially harmful URLs that will be used by Firefox to warn users before they browse to a site that may contain malware. This data comes from Google’s own scanning and research, not from StopBadware, as reported. (Our Clearinghouse allows users to search for a site to see if it is currently on Google’s warning list.)

StopBadware’s role will be (as it is now) to ensure that users and web site owners receive as much information as possible about the warning and to provide a transparent review process to assist site owners in understanding why a site was flagged and/or notifying Google that it has been cleaned.

By working together, we help protect users from potentially dangerous web sites while ensuring that owners of legitimate sites have a way to understand the warnings, clean up their sites, and remove the warnings.

I think this is a really good model that combines the scalability we need to protect more than 130 million users, the breadth of detection necessary to keep up with the modern pace of attacks, and a transparent and neutral process for reviewing and clearing sites that have been fixed. I just wish I’d understood it properly before speaking with Chris!
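
If you’re curious what the client side of such a system looks like, the general shape is a locally held list of hashed bad-site entries that the browser consults before loading a page. The sketch below shows only that general shape, with made-up entries; it is not the actual protocol or list format that Firefox and Google use:

```typescript
import { createHash } from "crypto";

// Toy local-blocklist lookup: keep hashes of known-bad host/path entries
// and consult them before navigating. Purely illustrative; the real list
// has its own canonicalization, format, and update protocol.

function sha256(s: string): string {
  return createHash("sha256").update(s).digest("hex");
}

// Hypothetical downloaded list of bad-site hashes.
const badHashes = new Set([sha256("malware.example.com/")]);

function looksBad(url: string): boolean {
  const { hostname, pathname } = new URL(url);
  // Check the full host+path first, then the bare host.
  return (
    badHashes.has(sha256(hostname + pathname)) ||
    badHashes.has(sha256(hostname + "/"))
  );
}

console.log(looksBad("http://malware.example.com/downloads/setup.exe")); // true
console.log(looksBad("http://example.org/"));                            // false
```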

relevance, your honour?

The search engine business is a tough one. People are generally pretty bad at phrasing queries to get what they actually want, to say nothing of spelling mistakes, synonyms, and stemming, and you have to handle all of that basically instantaneously. The relevance of search results might be the only thing more important than performance in determining whether users will stick with your particular product, or make the trivial switch to another one.
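
To make “stemming” concrete: it’s the business of folding word variants onto a common root, so that a query for one form can match documents containing another. Real engines use proper stemmers plus synonym and spelling models; this crude suffix-stripper is only meant to show the flavour:

```typescript
// Crude stemming: lowercase the word and strip one of a few common
// English suffixes. Real stemmers (Porter, Snowball) are considerably
// more careful; this just illustrates the idea.

const suffixes = ["ingly", "edly", "ing", "ies", "ied", "ers", "er", "ed", "es", "s"];

function crudeStem(word: string): string {
  const w = word.toLowerCase();
  for (const suffix of suffixes) {
    if (w.endsWith(suffix) && w.length - suffix.length >= 3) {
      return w.slice(0, w.length - suffix.length);
    }
  }
  return w;
}

// "browsing", "browsers" and "browsed" all reduce to "brows", so a query
// for one can match pages containing the others. Note that "query" stays
// as-is while "queries" becomes "quer", which is exactly the sort of case
// a real stemmer has to handle properly.
console.log(["browsing", "browsers", "browsed", "query", "queries"].map(crudeStem));
```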

So I was pretty surprised to discover how, er, idiosyncratic the search results were on Live Search for what I — perhaps naively — think of as a pretty straightforward query.

When searching for “Firefox”, the user might want to find the home page for the product, or a description of the history of the project, or maybe even a review of the software. Both Yahoo and Google give you some mix of that, with what seem to me to be pretty reasonable orderings of results.

The Live Search results are a little more difficult for me to understand, since they have the Silverlight developer FAQ as the first result, then an article about cross-site scripting, then an article about ASP.NET, and then the Wikipedia page about Firefox. You have to go to the 8th entry to get the product’s home page, well below the fold on my machine at least. I’ve saved off the results, in case you disbelieve me, or for some reason can’t reproduce them yourself.

Maybe Live Search users really are a different breed, if that’s what they would be most likely to want when searching for Firefox; a ballsy market-differentiation move by Microsoft, if so.

(Canadians don’t call their judges “Your Honour”, and Americans don’t spell honour that way, so the title of this post is a somewhat impossible reference, but I figure you’ll let that slide.)

measure twice, cut once

I’ve been looking at web-site metrics services and tools lately, for projects both personal and Mozilla-related. So far, the one that looks best is the Google Analytics system, but the privacy policy they use for it is just the general one for the Google web site, and that kinda sucks.

Anyone out there have recommendations for other such packages? I’m especially interested in the geographic-breakdown aspect of GA, the ability to export the data in usable forms like XML and CSV, and kinda the hosted-service aspect of it, particularly for my own personal use. If the privacy policy were more, mmm, protective, I’d certainly be willing to pay a reasonable amount in exchange.
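
For what it’s worth, the sort of thing I want to do with an export really is dead simple. Assuming a hypothetical CSV dump with date, country, and visit-count columns (a made-up format, not anything GA is promising), the geographic rollup is a few lines:

```typescript
import { readFileSync } from "fs";

// Roll up visit counts by country from a hypothetical analytics export
// with lines of the form: date,country,visits. The format is invented
// for illustration; the point is that a plain CSV makes this trivial.

function visitsByCountry(path: string): Map<string, number> {
  const totals = new Map<string, number>();
  const lines = readFileSync(path, "utf8").trim().split("\n");
  for (const line of lines.slice(1)) { // skip the header row
    const [, country, visits] = line.split(",");
    totals.set(country, (totals.get(country) ?? 0) + Number(visits));
  }
  return totals;
}

// Print countries in descending order of visits, e.g.:
//   ts-node rollup.ts visits.csv
const sorted = [...visitsByCountry(process.argv[2]).entries()].sort(
  (a, b) => b[1] - a[1]
);
for (const [country, visits] of sorted) {
  console.log(`${country}\t${visits}`);
}
```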

google should acquire google

Been playing around with some googlebits lately, like Google Reader, and I also noticed the other day that the big G is publishing a newsletter for librarians. (A wise move, says I, but that’s another post entirely.)

Both those things are OK, nothing really special but nothing embarrassing either. Except!

I can’t use Google Reader to search the posts in my subscription list, and there is no way (google group, RSS, anything that I can see) to actually subscribe to the Google Librarian News!

Alas.

(I’m also trying out this performancing extension, but talking too much about my findings there would be very very close to work, and this is a vacation week, dammit.)

(Regret the error: nemo points out in a comment that there is indeed a “subscribe” link at the bottom of the Librarian News. I got nuthin’.)