why facebook?

[I haven't started yet, and what I present here is based on things that are public knowledge, via press or F8 presentations or Facebook's own posts. My impressions are also informed by direct conversations, of course.]

As I’ve mentioned before, I’m going to start as an Engineering Director at Facebook some time in November (specific timing is up to the INS). I’m really really excited about it for a number of reasons, even though it means relocating to California. A number of people have asked why I chose to go to Facebook, so I decided to write some of the reasons down.

One reason is that Facebook is probably the most web-influential company in the world on that side of the wire. They’ve consistently invested in the web, from their mobile-client approach, to their APIs, to various tools and whatnot. I have unfinished business with the web myself, and Facebook is a great place for me to continue to have influence over how it evolves.

Another is that the engineering culture at Facebook is simply spectacular. It’s obvious that they’ve invested in it very heavily, from bootcamp and development tools to the testing and deployment model, and it has clearly paid off. It’s going to be a very cool thing to be part of, especially since the world of web-delivered services is so different from the client-side-software one in which I’ve spent the last 6 years.

The third reason is that Facebook’s management team is perhaps the best in all of software right now; Ben Horowitz agrees. (Mozilla operates in such a different way that I wouldn’t really know how to compare, but I’m sure they won’t take offense.) I’m really looking forward to learning a ton working with them (including a very good friend of mine) as well as the other amazing people at FB that I’ve had a chance to meet. In looking around the company while discussing a possible position, I didn’t see anything I didn’t want to work on, or anyone I didn’t want to work with, which was unique in my job-hunting experiences.

And finally, I am by no means an expert on social software and how it can connect people through the web. It’s obvious that personal connections, recommendations, and other shared experiences are going to be central to how the web looks in five, ten, twenty years. I think there’s an enormous opportunity for me to contribute to that, and learn a ton; I think Facebook’s vision of what the web can be is pretty exciting, and will be exciting to help build.

I think Mozilla is a great place, and I would recommend it strongly as a place to work (or a place to volunteer, as I plan to keep doing); it’s unique in the world of software, and changes you forever. I’m thrilled to now go to Facebook, another great place, and see what I can do to change the world again.

approaches to performance

[This post doesn't have links to anything, and it really should. I'm a bit pressed for time, but I'll try to come back later and fix that.]

Important: I no longer work for Mozilla, and I haven’t yet started working for Facebook, so what I write here shouldn’t be taken as being the stance of those organizations.

Platforms always have to get faster, either to move down the hardware stack to lesser devices, or up the application stack to more complex and intensive applications. The web is no exception, and a critical piece of web-stack performance is JavaScript, the web’s language of computation. This means that improvements in JS performance have been the basis of heated competition over the last several years, and — not coincidentally — an area of tremendous improvement.

There’s long been debate about how fast JS can be, and whether it can be fast enough for a given application. We saw it with the chess-playing demo at the Silverlight launch (for which I can’t find a link, sadly), we saw it with the darkroom demo at the Native Client launch, and I’m sure we’ll keep seeing it. Indeed, when those comparisons were made, they were accurate: web technology of the day wasn’t capable of running those things as quickly. There’s a case to be made for ergonomics and universality and multi-vendor support and all sorts of other benefits, but it doesn’t change the result of the specific performance comparison. So the web needs to get faster, and probably always will. A number of approaches to this are being mooted by various parties, specifically around computational performance.

One approach is to move computationally-intensive work off to a non-JS environment, such as Silverlight, Flash, or Google’s Native Client. This can be an appealing approach for a number of reasons. JS can be a hard language to optimize, because of its dynamism and some language features. In some cases there are existing pieces of non-web code that would be candidates for re-use in web-distributed apps. On the other hand, these approaches represent a lot of semantic complexity, which makes it very hard to get multiple interoperating implementations. (Silverlight and Moonlight may be a counter-example here; I’m not sure how much they stay in sync.) They also don’t benefit web developers unless those developers rewrite their code to the new environment.

Another approach is to directly replace JS with a language designed for better optimization. This is the direction proposed by Google’s Dart project. It shares some of the same tradeoffs as the technologies noted above (easier to optimize, but complex semantics and requires code to be rewritten), but is probably better in that interaction with existing JS code can be smoother, and it is being designed to work well with the DOM.

A third approach, which is the one that Mozilla has pursued, is to just make JS faster. This involves implementation optimizations and adding language features (like structured types and binary arrays) for more efficient representations. As I mentioned above, we’ve repeatedly seen JS improved to do things that were claimed to be impossible on performance grounds, and there are still many opportunities to make it faster. This benefits not only new applications, but also existing libraries and apps that are on the web today, and the people who use them.
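
To make the “binary arrays” point concrete, here is a minimal sketch of typed-array code; the sizes and values are illustrative only, not taken from any particular engine or demo. Because every element has a fixed size and type, an engine can lay these buffers out much the way a C compiler would, instead of boxing every value.

    // Raw, fixed-size storage, viewed as specific element types.
    var buffer = new ArrayBuffer(16);        // 16 bytes of untyped storage
    var bytes = new Uint8Array(buffer);      // view those bytes as unsigned 8-bit ints
    var samples = new Float32Array(4);       // or allocate a typed view directly

    for (var i = 0; i < bytes.length; i++) {
      bytes[i] = (i * 37) & 0xff;            // stays an 8-bit integer, no boxing
    }
    samples[0] = 0.5;                        // stored as a 32-bit float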

Yesterday at the SPLASH conference, the esteemed Brendan Eich demonstrated that another interesting milestone has been reached: a JavaScript decoder for the H.264 video format. Video decoding is a very computationally-intensive process, which is one reason that phones often provide specialized hardware implementations. Being able to decode at 30 frames a second on laptop hardware is a Big Deal, and points to a new target for JS performance: comparable to tightly-written C code.

There’s lots of work left to do before JS is there in the general case: SIMD, perhaps structural types, better access to GPU resources, and many more in-engine optimizations that I’m sure are underway in several places. JS will get faster still, and that means the web gets faster too, without ripping it out and replacing it with something shinier.

Aside: the demonstration that was shown at SPLASH was based on a C library converted to JS by a tool called “emscripten”. This points towards being able to reuse existing C libraries as well, which has been a selling point for Native Client thus far.

As Brendan would say, always bet on JS.

a three-dimensional platform

Firefox now includes, on all desktop platforms, support for a technology known as WebGL. WebGL allows web developers to use accelerated 3D graphics, including textures and shaders; it exposes the same capabilities used by games like Doom 3 and (optionally) World of Warcraft, and virtually every game that runs on the iPhone, OS X, Android or Linux.
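
For the curious, here is a rough sketch of what asking for those capabilities looks like from a page; the canvas id is hypothetical, and some builds expose the context only under the older “experimental-webgl” name used as a fallback below.

    // Request a WebGL context from a <canvas> element (id is hypothetical).
    var canvas = document.getElementById("scene");
    var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");

    if (gl) {
      gl.clearColor(0.0, 0.0, 0.0, 1.0);   // opaque black
      gl.clear(gl.COLOR_BUFFER_BIT);       // hardware-accelerated clear
    }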

Security professionals, including Context IS, have discovered bugs in the specification (related to cross-domain image loading) and in Firefox’s specific implementation. Both are being addressed, as with security problems in any other technology we ship, but recently the conversation has turned to inherent security characteristics of WebGL and whether it should be supported at all by browsers, ever.

I think that there is no question that the web needs 3D capabilities. Pretty much every platform has or is building ways for developers to perform low-level 3D operations, giving them the capabilities they need to create advanced visualizations, games, or new user interfaces:

  • Adobe is building 3D for Flash in a project called “Molehill”, about which they say: “In terms of design, our approach is very similar to the WebGL design.”
  • Microsoft is doing something similar with Silverlight 5, where they’re bringing XNA Graphics to Silverlight 3D.

Adding new capabilities can expose parts of the application stack to potentially-hostile content for the first time. Graphics drivers are an example of that, as are font engines, video codecs, OS text-display facilities (!) and image libraries. Even improvements in existing capabilities can lead to new types of threats that need to be modelled, understood, and mitigated. We have a number of mitigations in place, including a driver whitelist that’s checked daily for updates; this seems similar to the driver-blocking model used in SL5, based on what information is available. Shaders are validated as legal GLSL before being sent to the driver (or to Direct3D’s HLSL compiler), to avoid problems with drivers mishandling invalid shader text. We’re also working with the ARB and driver vendors on extensions to OpenGL which will make the system even more robust against runaway shaders and the like.
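
From the page’s point of view a shader is just a string handed to the API, which is what makes that validation step possible. Here is a small sketch of where it kicks in, reusing the gl context from the earlier snippet; the GLSL source is illustrative.

    // Hand GLSL source to WebGL; it is validated before reaching the driver.
    var vs = gl.createShader(gl.VERTEX_SHADER);
    gl.shaderSource(vs,
      "attribute vec3 position;\n" +
      "void main() { gl_Position = vec4(position, 1.0); }");
    gl.compileShader(vs);

    if (!gl.getShaderParameter(vs, gl.COMPILE_STATUS)) {
      // invalid shader text is rejected here rather than passed through
      console.log(gl.getShaderInfoLog(vs));
    }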

Microsoft’s concern that a technology be able to pass their security review process is reasonable, and similar matters were the subject of a large proportion of the discussions leading to WebGL’s standardization; I also suspect that whatever hardening they applied to the low-level D3D API wrapped by Silverlight 3D can be applied to a Microsoft WebGL implementation as well. That Silverlight supports Mac as well, where these capabilities must be mapped to OpenGL, makes me even more confident. The Microsoft graphics team seems to have done a great job of making the D3D shader pipeline robust against invalid input, for example. (The Windows Display Driver Model in Vista and Windows 7 is a great asset here, because it dramatically reduces the “blast radius” of a problem at the driver level. This likely explains the difference between default-enable and default-disable for WDDM/non-WDDM drivers in SL5’s 3D support. It’s not yet clear to me what the model will be for SL5 on OS X.)

It may be that we’re more comfortable living on top of a stack we don’t control all the way to the metal than are OS vendors, but our conversations with the developers of the drivers in question make us confident that they’re as committed as us and Microsoft to a robust and secure experience for our shared users. Web developers, like all developers, need good 3D support, and — as with Flash and Silverlight — browser implementers will need to be careful and thoughtful about how to expose that functionality securely on all operating systems.

hey web-talker, talk web to me

Mozilla is hiring a technical evangelist to help the world get the most out of the web. It’s a position with a scope as broad as the web itself, bringing Mozilla’s keen sense of the web to people learning, building and pushing the envelope.

If you can write code and prose, listen as well as you explain, and you want to spend your days changing the web for billions of people, peep this link.

another step forward for open video on the web

Today, Google announced that it is joining Mozilla and Opera in exclusively supporting open video codecs — to wit, WebM and Theora — in its Chrome browser.

It’s a great move, and one we at Mozilla are obviously glad to see. It’s been a great first 8 months for WebM: multiple browser implementations, hardware support, an independent implementation from ffmpeg, performance improvements, support from lots of transcoding services, and content growth on the web. Organizations like Google, Mozilla, Opera and others who really believe in the importance of unencumbered video on the web are putting their products where our mouths are, and the web is going to be stronger and more awesome for it.

Congratulations and thanks, Google.

free as in smokescreen

The web is full of headlines today like this one from MacRumors: “MPEG LA Declares H.264 Standard Permanently Royalty-Free”. It would be great if they were accurate, but unfortunately they very much are not.

What MPEG-LA announced is that their current moratorium on charging fees for the transmission of H.264 content, previously extended through 2015 for uses that don’t charge users, is now permanent. You still have to pay for an H.264 license if you want to make things that create or consume it, or if your business model for distributing it is direct rather than indirect.

What they’ve made permanently free is distribution of content that people have already licensed to encode, and will need a license to decode. This is similar to Nikon announcing that they will not charge you if you put your pictures up on Flickr, or HP promising that they will never charge you additionally if you photocopy something that you printed on a LaserJet. (Nikon and HP are used in the preceding examples without their consent, and to my knowledge have never tried anything as ridiculous as trying to set license terms on what people create with their products.)

H.264 has not become materially more free in the past days. The promise made by the MPEG-LA was already in force until 2015, has no effect on those consuming or producing H.264 content, and is predicated on the notion that they should be controlling mere copying of bits at all! Unfortunately, H.264 is no more suitable as a foundational technology for the open web than it was last year. Perhaps it will become such in the future — Mozilla would very much welcome a real royalty-free promise for H.264 — but only the MPEG-LA can make that happen.

being open about being closed

I saw an article float by on the newscurrent yesterday, in which Adobe evangelist James Ward talks about misconceptions about Flex. This one definitely caught my eye:

Flash Player is 100% Proprietary.

The core of Flash Player is the Tamarin Virtual Machine, which is an open source project under Mozilla. While the SWF file format is not fully open, it is documented by the community on osflash.org. There are numerous open source products that read and write SWF files. The Flash Player’s product direction has traditionally been heavily influenced by the community and their needs. The core language for Flash Player is an implementation of ECMAScript 262, which is the specification for JavaScript. Flex also uses CSS for styling of components / applications. Adobe AIR uses web standards as the basis for desktop applications as well as Open Source technologies like Tamarin, Webkit, and SQLite.

We’ll ignore the wordsmithing genius involved in choosing “100% proprietary” as the misconception to address, and even let the non sequiturs about open source elements in other Adobe products go without much comment. Normally those things would rankle a bit, but in the light of the other gems present, I can’t really be bothered.

Gem the first:

The core of Flash Player is the Tamarin Virtual Machine, which is an open source project under Mozilla.

It is indeed true that Tamarin is a major piece of Flash Player 9, as it’s what runs the ActionScript language — but not the objects of the API! — and we’re quite glad that Adobe opened it up for us to develop together into an implementation of JS2. But I think it’s pretty misleading to imply that the majority of Flash is provided by Tamarin: it’s quite possible to have Flash that doesn’t use ActionScript, but everything relies on the implementation of the Flash VM itself — with its own bytecodes, graphics semantics, object model implementation and data management. The Flash VM is a huge piece of engineering, and one that Adobe has not opened up at all, though Flash artists have been clamoring for years to know more about the platform they invest so heavily in, to say nothing of people wanting to bring Flash capabilities to other operating systems and devices. Ryan Stewart also seems to think very little of the Flash VM, as he wrote back in October:

But I look at Adobe and what we’ve done. We’ve open sourced the guts of our runtime, the virtual machine. All that’s left is pretty much just the proprietary codecs and we’ve even addressed that somewhat with H.264 support.

All that’s left is pretty much the most significant piece of the Flash runtime, I would say, which is the engine that drives the complex graphics, provides the interaction model for ActionScript, manages data, and provides access to multimedia input and output. When they’re marketing Flash as a platform, Adobe likes to list out all the amazing graphical and video capabilities, but when they don’t want people to think too hard about the fact that writing to Flash is committing yourself to a proprietary platform, I guess they aren’t such a big deal. Ryan’s comments on that very post indicate how important he thinks the graphical capabilities are for “RIAs”, but I suppose it’s more convenient to minimize them when people start asking questions about the mystery meat they baked into their applications.

Adobe doesn’t even permit people to use the specification to implement anything other than FLV tools, though the specification is written and published — just be sure you don’t read it, because then you’re tainted and can’t work on an open implementation of Flash, or open tools, etc.

Which brings us to gem the second:

While the SWF file format is not fully open, it is documented by the community on osflash.org.

This is quite the brazen comment. “The community” here are people who reverse-engineered the behaviour of Flash so that they could write tools to make the Flash Player’s platform more valuable, while Adobe’s license terms tried to stop them! They have put themselves in legal jeopardy in some jurisdictions (and Adobe has in the past had people arrested for producing tools that manipulate their license-protected technology) and James has the nerve to call them “the community” and indicate that their work is a remedy for Adobe simply not being willing to remove the field-of-use restrictions on their existing documentation. (I haven’t read the specifications, and I won’t, because I was warned off of them when I was working on gameswf, precursor to gnash.)

Mike Chambers called Mozilla’s announcement of Prism “disingenuous”, I think largely because he misunderstood the difference between “a more convenient way to launch a web app” and “a way to build non-web apps using the same technologies”. Maybe more about that craziness later, but for now I’ll be interested to see to what extent Mike’s concern for disingenuity extends to his own colleagues.

meanwhile, in the ecosystem

Right before Hallowe’en, Songbird 0.3 hit the wires, giving people an updated look at what the ‘nest denizens are planning in their webified music player. Right after Hallowe’en, Flock 1.0 arrived, featuring their “social” spin on the web browsing experience. Those teams have obviously worked hard and long to bring new and exciting things to the open web, and not to take anything away from that work, but these apps are also things that the rest of the Mozilla community should feel some pride in. Mozilla has always insisted on very liberal licensing of our technology in no small part so that people can innovate in different directions at the same time. Sometimes those innovations can come back into the shared code, sometimes they inspire other work, and sometimes they help generate experimental results that everyone can use to improve their own products and projects.

Are relations between all the different application developers and technology hackers and community members as great as they could be? No, though I think we’re all working to improve them as we learn how, and I think we’re getting better all the time. Our baseline openness helps a ton, and gives us a ridiculous amount of visible — though not always easy to digest — history of what the project has done, and why. We’re going to hear more and more about openness of platforms, technologies, organizations and processes as that becomes something that developers and users come to expect from the people they work with; I think the world and the web would be in a much better place if more of the players were open in ways that transcended specifications and publication of finished works. But then, I would think that.

we’re from javascript, and we’re here to help

More than a decade ago, JavaScript ushered in a transformation of the web browser from simple navigator of pages to platform for universal applications. In the intervening years, JavaScript has been standardized as ECMAScript, revised twice in that context to include things like exceptions and regular expressions, used as the basis for languages like ActionScript, and embedded in everything from web servers to DVD players to video games — and it’s become the most widely-used programming language in the world. It’s the big dog on the web, you might say.

What does one do for an encore, looking back at a decade of one’s language being taken to places scarcely imagined, used to build billions of dollars of value, and employed by millions of programmers around the world? I would like to find out myself some day, but for Brendan Eich, the father of JavaScript, the answer is clear: you make it better.

For the last couple of years, language researchers, application developers and web browser developers in the ECMA TG-1 committee — and Brendan, of course! — have been working to craft an evolution of the JavaScript language to remove headaches that modern developers are hitting and take advantage of the lessons learned over those years of watching people use and learn and love (and hate) the language. The result of that group’s work, and of the feedback of many developers contributing in the open, will be finalized soon as ECMAScript Edition 4. It represents a huge amount of effort, and the distilled wisdom of literally hundreds of people. And it represents an amazing opportunity for web developers to take their own software to the next level of power, performance, correctness and clarity.

What will developers have to do to take advantage of these things? Virtually nothing, to start: because of the relentless focus on compatibility and interop between JS1 objects and code, they will in many cases have things just start working faster and better because the authors of their favourite toolkits and tools have made use of new capabilities in the language. The specification has been improved to be clearer and easier to reason about, and had many bugs fixed, so things will work more consistently and it will be easier to use tools to manipulate JS programs reliably. The type system improvements give authors more control over how JS’s powerful dynamism affects their scripts, and will let them better preserve their important invariants, for security and correctness and performance reasons all.
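
Purely as an illustrative sketch of the kind of optional annotations and classes the ES4 drafts have discussed (the names and syntax here are drawn from draft proposals and may change before the edition is finalized, and today’s JS1 engines won’t accept them yet), annotated code looks roughly like this:

    // Draft-style ES4 annotations; illustrative only.
    class Point {
        var x: double = 0.0;
        var y: double = 0.0;
        function Point(x: double, y: double) {
            this.x = x;
            this.y = y;
        }
        function magnitude(): double {
            return Math.sqrt(x * x + y * y);
        }
    }
    var origin: Point = new Point(0.0, 0.0);

Untyped JS1 code can keep treating a Point as an ordinary object; the annotations are there for the author, and the engine, when the extra guarantees are wanted.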

And if you as a developer find that you’re hitting a limitation of JS1 that you want to get beyond, for reasons of performance or scale (developer-count as well as line-count or object-count), or because you want to use some of the new features to streamline your code, you can add those elements to your code incrementally. You don’t need to switch all your scripts at once, or worry about what happens when you pass objects between your JS1 widget library and JS2-enhanced animation library. The language developers have been worrying about that on your behalf for a long time, and they take these issues very seriously. Breaking the web is not an option, and there have been great (if grim) examples of what happens when languages with large installed bases forget that compatibility matters.

There’s a lot to talk about when it comes to JS2, and there are definitely a lot of new features and goodies for developers to adopt as they choose to. With JS1, Brendan and others managed to bring functional concepts, first-class functions and other relatively advanced language features to a straightforward and “newbie-friendly” language, and JS2 will bring more of the same accessible power to the millions of people out there who are making great software, large and small, in JavaScript.

I’m going to be writing more about JS2 in the coming days and weeks, because I think it’s one of the most exciting things coming to the web, and it brings new things in a way that I feel is very web-like indeed: incremental, compatible features based on real-world experience, developed in a collaborative standards environment with a pretty decent (though not yet perfect) level of openness. Look for more on JS2’s type system, where you’ll see JS2 available, more details about compatibility with JS1, and other neat things about the next version of the web’s scripting language. It’s gonna be fun.

talk to me, baby

Start your week by telling us what you think!

You have before you two quick opportunities to help Mozilla improve its documentation and show people how great the web is. Let them not pass you by, for regret will linger long after the hangover from Sunday’s partying has left you.

Opportunity the first: help Deb collect great examples of the power of the open web, by linking your favourite web stuff in her blog post. Bonus points for stuff that doesn’t require extra plugins, points may not be redeemable in your jurisdiction, no purchase necessary.

Opportunity the second: tell Sheppy which of the hundreds of documents on MDC you think are the most important, over in his blog post.
