approaches to performance

[This post doesn't have links to anything, and it really should. I'm a bit pressed for time, but I'll try to come back later and fix that.]

Important: I no longer work for Mozilla, and I haven’t yet started working for Facebook, so what I write here shouldn’t be taken as being the stance of those organizations.

Platforms always have to get faster, either to move down the hardware stack to lesser devices, or up the application stack to more complex and intensive applications. The web is no exception, and a critical piece of web-stack performance is JavaScript, the web’s language of computation. This means that improvements in JS performance have been the basis of heated competition over the last several years, and — not coincidentally — an area of tremendous improvement.

There’s long been debate about how fast JS can be, and whether it can be fast enough for a given application. We saw it with the chess-playing demo at the Silverlight launch (for which I can’t find a link, sadly), we saw it with the darkroom demo at the Native Client launch, and I’m sure we’ll keep seeing it. Indeed, when those comparisons were made, they were accurate: web technology of the day wasn’t capable of running those things as quickly. There’s a case to be made for ergonomics and universality and multi-vendor support and all sorts of other benefits, but it doesn’t change the result of the specific performance comparison. So the web needs to get faster, and probably always will. A number of approaches to this are being mooted by various parties, specifically around computational performance.

One approach is to move computationally-intensive work off to a non-JS environment, such as Silverlight, Flash, or Google’s Native Client. This can be an appealing approach for a number of reasons. JS can be a hard language to optimize, because of its dynamism and some language features. In some cases there are existing pieces of non-web code that would be candidates for re-use in web-distributed apps. On the other hand, these approaches represent a lot of semantic complexity, which makes it very hard to get multiple interoperating implementations. (Silverlight and Moonlight may be a counter-example here; I’m not sure how much they stay in sync.) They also don’t benefit web developers unless those developers rewrite their code to the new environment.

Another approach is to directly replace JS with a language designed for better optimization. This is the direction proposed by Google’s Dart project. It shares some of the same tradeoffs as the technologies noted above (easier to optimize, but complex semantics and requires code to be rewritten), but is probably better in that interaction with existing JS code can be smoother, and it is being designed to work well with the DOM.

A third approach, the one that Mozilla has pursued, is simply to make JS faster. This involves both optimizing implementations and adding language features (like structured types and binary arrays) that allow more efficient representations. As I mentioned above, we’ve repeatedly seen JS improved to do what was claimed to be impossible in terms of performance, and there are still many opportunities to make it faster. This benefits not only new applications, but also the existing libraries and apps that are on the web today, and the people who use them.
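To make the binary-arrays point concrete, here’s a minimal sketch (mine, not code from any engine or demo) of what typed arrays buy you: a Float32Array gives the engine a dense, untagged buffer of numbers rather than a generic array of boxed values, so a hot loop over it can compile into much tighter machine code.

```js
// Illustration only: dense, typed storage for numeric data.
// A Float32Array holds raw 32-bit floats, so the engine doesn't
// have to box each element or check its type inside the loop.
function average(samples) {
  var total = 0;
  for (var i = 0; i < samples.length; i++) {
    total += samples[i];
  }
  return total / samples.length;
}

var samples = new Float32Array(1024);    // fixed-size, typed buffer
for (var i = 0; i < samples.length; i++) {
  samples[i] = Math.sin(i / 64);         // fill with some sample data
}
console.log(average(samples));
```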

Yesterday at the SPLASH conference, the esteemed Brendan Eich demonstrated that another interesting milestone has been reached: a JavaScript decoder for the H.264 video format. Video decoding is a very computationally-intensive process, which is one reason that phones often provide specialized hardware implementations. Being able to decode at 30 frames a second on laptop hardware is a Big Deal, and points to a new target for JS performance: comparable to tightly-written C code.

There’s lots of work left to do before JS is there in the general case: SIMD, perhaps structural types, better access to GPU resources, and many more in-engine optimizations that are, I’m sure, underway in several places. JS will get faster still, and that means the web will get faster too, without ripping it out and replacing it with something shinier.

Aside: the demonstration shown at SPLASH was based on a C library converted to JS by a tool called “emscripten”. This points towards being able to reuse existing C libraries on the web as well, which has been a selling point for Native Client thus far.
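To give a flavour of what such a conversion looks like, here’s a small hand-written sketch in the style of compiler-generated JS (my own illustration, not actual emscripten output): the C heap becomes a typed array, and pointers become integer offsets into it.

```js
// Hand-written sketch of the style of JS a C-to-JS compiler emits:
// one big typed-array "heap", with C pointers represented as byte offsets.
var HEAP8 = new Int8Array(1024 * 1024);
var HEAPU8 = new Uint8Array(HEAP8.buffer);

// Roughly what a C function like
//   void brighten(unsigned char *buf, int len, int delta)
// might become; buf is just an integer offset into the heap, and the
// |0 coercions keep the values in integer range for the JIT.
function _brighten(buf, len, delta) {
  buf = buf | 0; len = len | 0; delta = delta | 0;
  for (var i = 0; i < len; i = (i + 1) | 0) {
    var v = (HEAPU8[(buf + i) | 0] + delta) | 0;
    HEAPU8[(buf + i) | 0] = v > 255 ? 255 : v;   // clamp to a byte
  }
}
```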

As Brendan would say, always bet on JS.

what’s next

I’ve now finished up at Mozilla, after a hectic and heart-rending last two weeks. I’m going to take October off to recharge, but I’m already quite excited about what I’m doing next.

On Nov 7, I’ll be starting at Facebook, where my journey will begin with the amazing Facebook engineering bootcamp. It’ll be a pretty different experience for me, no doubt. Of course, my thinking about the web was very much shaped by my experiences at Mozilla and, without speaking for my soon-to-be employer, that thinking is likely a large part of why they were interested in me in the first place.

Throughout my conversations leading up to my decision to join Facebook, I was consistently impressed by the depth and passion of the people I spoke with, from the executive team to the former Mozilla intern who gave me a coding test. I’m going to have a lot to learn, since I haven’t done large-scale computing in half a decade, but I’m really looking forward to pushing the web forward from the other side of the wire.

But yeah, first: 5 weeks of vacation, the longest I’ve had since I started professional software stuff in 1992!