approaches to performance

[This post doesn't have links to anything, and it really should. I'm a bit pressed for time, but I'll try to come back later and fix that.]

Important: I no longer work for Mozilla, and I haven’t yet started working for Facebook, so what I write here shouldn’t be taken as being the stance of those organizations.

Platforms always have to get faster, either to move down the hardware stack to lesser devices, or up the application stack to more complex and intensive applications. The web is no exception, and a critical piece of web-stack performance is JavaScript, the web’s language of computation. This means that improvements in JS performance have been the basis of heated competition over the last several years, and — not coincidentally — an area of tremendous improvement.

There’s long been debate about how fast JS can be, and whether it can be fast enough for a given application. We saw it with the chess-playing demo at the Silverlight launch (for which I can’t find a link, sadly), we saw it with the darkroom demo at the Native Client launch, and I’m sure we’ll keep seeing it. Indeed, when those comparisons were made, they were accurate: web technology of the day wasn’t capable of running those things as quickly. There’s a case to be made for ergonomics and universality and multi-vendor support and all sorts of other benefits, but it doesn’t change the result of the specific performance comparison. So the web needs to get faster, and probably always will. A number of approaches to this are being mooted by various parties, specifically around computational performance.

One approach is to move computationally-intensive work off to a non-JS environment, such as Silverlight, Flash, or Google’s Native Client. This can be an appealing approach for a number of reasons. JS can be a hard language to optimize, because of its dynamism and some language features. In some cases there are existing pieces of non-web code that would be candidates for re-use in web-distributed apps. On the other hand, these approaches represent a lot of semantic complexity, which makes it very hard to get multiple interoperating implementations. (Silverlight and Moonlight may be a counter-example here; I’m not sure how much they stay in sync.) They also don’t benefit web developers unless those developers rewrite their code to the new environment.

Another approach is to directly replace JS with a language designed for better optimization. This is the direction proposed by Google’s Dart project. It shares some of the same tradeoffs as the technologies noted above (easier to optimize, but complex semantics and requires code to be rewritten), but is probably better in that interaction with existing JS code can be smoother, and it is being designed to work well with the DOM.

A third approach, which is the one that Mozilla has pursued, is to just make JS faster. This involves implementation optimizations and adding language features (like structured types and binary arrays) for more efficient representations. As I mentioned above, we’ve repeatedly seen that JS can be improved to do what is claimed to be impossible in terms of performance, and there are still many opportunities to make JS faster. This benefits not only new applications, but also existing libraries and apps that are on the web today, and the people who use them.
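
As a rough illustration of what I mean by more efficient representations (a toy example of mine, not code from any engine), typed/binary arrays give the engine a flat, unboxed layout that a JIT can turn into a tight loop:

```js
// A plain Array can hold values of any type, so elements may be boxed or
// stored sparsely. A typed array is a packed block of a single numeric type.
var samples = new Float32Array(1024);   // 1024 packed 32-bit floats

function scale(buf, factor) {
  // Every element is known to be a float32, so the JIT can emit a tight
  // loop without per-element type checks or boxing.
  for (var i = 0; i < buf.length; i++) {
    buf[i] *= factor;
  }
  return buf;
}

scale(samples, 0.5);
```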

Yesterday at the SPLASH conference, the esteemed Brendan Eich demonstrated that another interesting milestone has been reached: a JavaScript decoder for the H.264 video format. Video decoding is a very computationally-intensive process, which is one reason that phones often provide specialized hardware implementations. Being able to decode at 30 frames a second on laptop hardware is a Big Deal, and points to a new target for JS performance: comparable to tightly-written C code.

There’s lots of work left to do before JS is there in the general case: SIMD, perhaps structural types, better access to GPU resources, and many more in-engine optimizations that are underway in several places I’m sure. JS will get faster still, and that means the web gets faster still, without ripping and replacing it with something shinier.

Aside: the demonstration that was shown at SPLASH was based on a C library converted to JS by a tool called “emscripten”. This points towards being able to reuse existing C libraries as well, which has been a selling point for Native Client thus far.
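
To give a rough flavor of the general technique (a hand-written sketch, not actual Emscripten output): the C heap becomes one big typed array, and C pointers become integer offsets into it.

```js
// Hand-written sketch of the compile-C-to-JS approach: a typed array plays
// the role of the machine heap, and "pointers" are byte offsets into it.
// Real Emscripten output is generated and considerably more involved.
var HEAP32 = new Int32Array(1024 * 1024);

// Roughly what `int sum(int *arr, int n)` might become: `arr` arrives as a
// byte address, and `>> 2` turns it into an index of 32-bit elements.
function _sum(arr, n) {
  var s = 0;
  for (var i = 0; i < n; i++) {
    s = (s + HEAP32[(arr >> 2) + i]) | 0;   // `| 0` keeps s a 32-bit integer
  }
  return s;
}
```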

As Brendan would say, always bet on JS.

57 comments to “approaches to performance”

  1. niall
    entered 28 October 2011 @ 9:37 am

    First, in case there’s any doubt, I’m a fan of improving the performance of all the language implementations, JS included. It can only be a good thing that the implementations are improving. Not being in any way a web programmer, some grains of salt are required for my opinions, but it seems to me that pushing for a monoculture is both a bad idea and fruitless in the long run..

    Anyway..

    • in addition to FPS, a vital metric to look at would be CPU usage (and energy efficiency) of the JS version. Assuming the JS engine sees the kernels early, it should settle into reasonable efficiency pretty quickly, but this is something to look at even with native implementations

    • regarding seeing the kernels, it’d be interesting to look at jitter in the decode process, if any, as the JIT is required to do some new work

    • eventually, we’ll want hardware supported decode to be the norm for energy reasons

    • my first (mental) question when someone is surprised by something like this is “is this surprising? why is it surprising?”. Given reasonably small code running for a long time, I’d expect, from the last decades of work on jit compilation, that the decode process would actually run pretty efficiently on modern machines even starting from a dynamic language like JS. This is particularly true if the demo was run on a modern Intel platform, for example, where the processor is incredibly well engineered to be pretty tolerant of code that isn’t hand optimized to within an inch of its life

    Still, despite such questions, it’s great that perf is getting to this point.

  2. entered 28 October 2011 @ 9:44 am

    Niall,

    I agree that for video decode there are good reasons to use hardware assistance. That’s true for software implementations in C as well, though, so the goal here is “how close can JS get to C performance?” more than “is it optimal to implement all of a video decoder in JS/C/C#/Java?”

    Thanks for your thoughtful comment!

  3. entered 28 October 2011 @ 10:27 am

    I think there’s another approach, similar to approach 1, which doesn’t feel quite so weird as moving code into Silverlight. You can build somewhat higher level primitives for Javascript that aren’t environments in their own right. Typed Arrays are an example of this, or DOMCrypt. Even if Javascript were as fast as C or C++ for doing crypto, we already have better (not just performance) crypto libraries in these other languages; by providing this code to Javascript we can make Javascript essentially just as performant. Then there are other arguments, like Niall’s, that native support for high-level operations (like video decoding) can mean native integration. Of course these libraries also add their own problems, like backward compatibility, implementation differences, even compatible implementations with substantially different performance.

    WebSQL was this kind of hybrid approach (in this case exposing the nicely optimized SQLite library), but these issues sunk it. Though perhaps WebSQL and that general approach deserve to be more carefully considered and the problems with the approach addressed, rather than discarded in favor of something that didn’t exist (IndexedDB) and that therefore, having no users, could suffer no criticism.

  4. Eli Savransky
    entered 28 October 2011 @ 10:41 am

    First of all, the need for faster programming platforms is clear, and it will always be.

    Should JS be improved by implementation, enhanced or replaced? I won’t comment on this, since I am not an expert.

    The biggest shift in platform development I’ve seen in recent years (and maybe in my entire career) is the change from generic HW plus SW that takes the common HW and tries to use it as best it can, to HW and SW renegotiating tasks on every platform, in every generation, and growing closer together across many different products. The video decode case is maybe a bad example, but it is indicative of the confusion: building fast video decode in SW is a good idea only for extremely niche applications. Another example of this is GPU acceleration. Others (in both directions): WebCL, software radios, etc.

    My point is: no matter what the solution is for faster web SW environment, make sure 1) it is well integrated with the shifting HW capabilities that exist in your target HW platform (is “target HW platform” heresy in the web community? if it is, my apologies) and 2) push the HW guys to build the right accelerators to make your SW faster.

    Eli

    Ps: reading my own post, it sounds like a HW guy on a SW thread. Maybe after all I AM a HW guy…

  5. entered 28 October 2011 @ 10:42 am

    I think in some cases that’s worth approaching, as with typed arrays or for things with very well-specified semantics like cryptography. Whether it’s actually a performance win is hard to predict: in some cases calling into native code can provide worse performance, if the marshalling overhead is sufficient, or if it inhibits aggressive optimizations because the JIT can’t see through the calls. We’re finding that a JS-hosted DOM is in many cases much faster than a traditional C++-reflected-into-JS approach for these reasons. But at the broader platform level I certainly agree — we should let the <video> tag reach into acceleration hardware, and obviously we have WebGL as a demonstration of reflecting such things. I don’t really see this as comparable to Option 1, because it doesn’t require ripping and replacing — only incremental use of the new capability — and it leaves the computation in JS. I’m really talking about computation performance here; whole-platform performance is another, much larger topic.

    (IMO the fatal problem with WebSQL was that there was no compatible 2nd implementation, and nobody was signed up to try and reverse engineer sqlite behaviour in sufficient detail to make one. Not even its strongest advocates! It also lacked a usefully-broad test suite, which is a related problem. It may be that revisiting the decision and rolling those dice is OK, but I think it would be hard to get there without Microsoft on board as well.)

  6. entered 28 October 2011 @ 10:45 am

    I think your thoughts on how to evolve the broader web platform are exactly correct: in addition to making software computation fast, we need to provide access to specialized acceleration for performance, and additional device capabilities as well. I think that between JS engine work, things like WebCL and River Trail, and the work underway in “Boot To Gecko”, there are promising avenues for all of them.

  7. azakai
    entered 28 October 2011 @ 11:02 am

    Ian Bicking said:

    Even if Javascript was as fast as C or C++ for doing crypto, we already have better (not just performance) crypto libraries in these other languages; by providing this code to Javascript we can make Javascript essentially just as performant.

    That’s true, but the downsides to that approach are (1) you need to make sure that code is cross-platform, (2) you need to audit the code for security, (3) if you call the code a lot for short processing, you will have a lot of overhead in type conversion between JS and native code.

    Still, that approach makes sense in some cases. In others, compiling the existing code to JS might be better. And of course writing from scratch makes sense sometimes too.

  8. Jeffrey
    entered 28 October 2011 @ 12:16 pm

    I think the big difference between Emscripten and Native Client is the promise of running at about 95% the speed of compiled code. Sure, the Broadway decoder is impressive, but I think the reason they are working on a from-scratch JS implementation is that it’s not comparable to the optimizations in Android’s original C version. This also speaks to them testing it on laptop hardware as opposed to the smartphone hardware its native counterpart was running on.

  9. entered 28 October 2011 @ 12:37 pm

    At the last Google I/O (2011) the V8 team mentioned they’ve done all the major optimizations they could and the focus should now be on DOM/CSS/etc to get more performance. I’m guessing that’s why they have the Dart effort, to change the language to build a faster VM on top but maintain legacy support.

    So from this point, how much faster can current JS get? Does Mozilla’s JIT/VM approach have a higher performance potential than V8?

    I think the major problem is the improvement speed of the web. Mobile OSs are rapidly advancing in features, speed and users, while web and its minions are complacent. They point to the old victory of open Web over closed AOL/Prodigy and pretend the situation is the same now – “always bet on JS.”

    JS and the open web have a lot of political clout but I wouldn’t bet on the technology improving rapidly when the decision making process is mired in politics. The open web can remain a dominant platform for a decade even if its technology stagnates. It’s not much of a bet.

    A major feature of Native Client is that it allows a part of the web stack to be removed from the political circus. Look at CoffeeScript’s popularity and usefulness. What if its creator had had to ask Google/MS/Mozilla for their point of view? It would never have happened. A piece of the stack, JavaScript, was flexible enough to avoid the open-argument circus and be tested in production. What if it were more flexible?

    That’s what Native Client allows, to agree only on parts where we can’t handle disagreement technologically and have more experimentation with the rest.

  10. njn
    entered 28 October 2011 @ 1:14 pm

    “At the last Google I/O (2011) the V8 team mentioned they’ve done all the major optimizations they could and the focus should now be on DOM/CSS/etc to get more performance.”

    Just wait until Type Inference and IonMonkey combine to make Firefox faster than Chrome on many JS benchmarks — I’m sure they’ll re-evaluate that opinion when that happens.

  11. Robert O'Callahan
    entered 28 October 2011 @ 1:22 pm

    I don’t know where people get the idea that the improvement speed of the web is a problem. The open Web is advancing in features and speed more rapidly than ever in its history. Far more resources are being spent improving browsers than ever before. Browser update cycles are much faster than ever before.

    Here’s a laundry list of features that are relatively new and will be supported by all major browsers when IE10 is released:

    * HTML5 Forms, Drag&Drop, parsing, History, AppCache
    * DOM touch events, orientation events, geolocation, File API, Workers
    * WebSockets, EventSource
    * CSS hyphenation, columns, 2D and 3D transforms, gradients, transitions and animations, flexbox layout, fonts (WOFF)
    * Accelerated 2D canvas
    * WebGL (actually not Microsoft, but they’ll crack)
    * JS Typed Arrays, generally much faster JS engines

    And here’s a list of features that are available in some browsers now or become available over the next year or two:

    * JS language improvements (WeakMap, modules) and performance improvements
    * IndexedDB
    * CSS regions, grids, exclusions, OpenType control, element() images
    * Custom accelerated CSS/SVG filters
    * Fullscreen, MouseLock
    * SPDY, CSP
    * MediaStream, WebRTC, PeerConnection
    * Some kind of audio processing

    Those lists are partial, just the stuff off the top of my head. There’s so much happening it’s impossible to keep track of it all.

  12. entered 28 October 2011 @ 1:47 pm

    It seems like so much is happening because of the marketing. Mobile OSs have all these features and more, and they had them years ago. That’s why browser makers started to compete on features; the popularity of the App Store must have been scary.

    The ultimate test is whether you can take all these features and build apps that are equal in capability to the most CPU-intensive desktop or even iPad apps, like Maya, Final Cut, 3D renderers, and the current answer is no. Will this answer change when Type Inference and IonMonkey are ready? No, right? But they’ll continue to market small incremental performance improvements, add features from normal OSs, convincing developers to stay away from closed-source evil. And that might be good enough to maintain platform dominance and legacy support at the same time.

    Forgoing JavaScript and accessing all HTML5 apps through Native Client would skip the incremental speed improvements and provide more performance years sooner. The marketing for NaCl at Google I/O was sad to watch as they demonstrated the C++ programming experience. But languages like Go and Rust show you don’t have to go through hell to program for a fast language implementation.

  13. azakai
    entered 28 October 2011 @ 2:06 pm

    Detrus said:

    At the last Google I/O (2011) the V8 team mentioned they’ve done all the major optimizations they could and the focus should now be on DOM/CSS/etc to get more performance. I’m guessing that’s why they have the Dart effort, to change the language to build a faster VM on top but maintain legacy support.

    So from this point, how much faster can current JS get? Does Mozilla’s JIT/VM approach have a higher performance potential than V8?

    First thing, Mozilla’s TI is already faster than V8 on many benchmarks. For example, TI is about twice as fast as V8 on the h264 decoder. When Mozilla’s IonMonkey lands and complements TI, that difference will be significantly bigger.

    Of course on other benchmarks V8 is faster. But the point is that V8 is already not the top JS engine – just one of the top ones – and SpiderMonkey has work in the pipeline that will make it much faster. So it is clear that there is plenty of room to work on V8 and improve it. In fact I am quite sure the V8 team is working in secret on such things – they don’t want to lose in the speed competition.

    Second, there are a lot of other ideas for optimizing JS that are not in production JS engines yet. For example, see the Tachyon project and the very interesting ideas there.

    Bottom line, there is absolutely no reason – theoretical or practical – that JS cannot run at the speed of Java, for the type of code that is heavily speed-dependent, such as an h264 decoder.

  14. entered 28 October 2011 @ 3:23 pm

    So why is it taking so long to pursue these ideas in Tachyon and optimize JS JIT/VMs in general? Why did it take so long to implement PyPy for Python? How long did it take to optimize Java’s VMs and how long will it take to optimize them again for mobile hardware constraints to catch up to Obj-C efficiency?

    From what I gather it’s very difficult to build these VMs. Then the work from one effort, like PyPy for Python, is not easily transferable to JS VMs? Every language needs its own VM with its own long road to optimization. I appreciate the idea that high-level languages can produce faster code than low-level languages in theory, but in practice there are simpler approaches.

    What will happen to the current JS VMs when we’ll want better concurrency support? Something to keep up with Rust and Go and take full advantage of multi core hardware? Won’t they have to keep working on another set of optimizations for that because the language would have to change?

    Also these high level languages like Python/Ruby/JS are not that expressive. It seems like you can be almost as expressive in a language that can also handle low level systems programming, like Go and Rust. Business interests make it worth the effort to invest in VMs for popular languages, but NaCl combined with a language that considers performance and expressiveness from the start looks like a simpler and more flexible technical solution overall.

    What kind of progress can we make on expressiveness anyway without some AI system like here http://blog.wolfram.com/2010/11/16/programming-with-natural-language-is-actually-going-to-work/ ?

  15. azakai
    entered 28 October 2011 @ 4:31 pm

    So why is it taking so long to pursue these ideas in Tachyon and optimize JS JIT/VMs in general? Why did it take so long to implement PyPy for Python? How long did it take to optimize Java’s VMs

    It takes time. It took many, many years to get Java speed close to C. JS has been worked on for far less time than Java has!

    NaCl combined with a language that considers performance and expressiveness from the start looks like a simpler and more flexible technical solution overall

    1. NaCl cannot run dynamic languages like JavaScript and Lua as fast as their native implementations can. And NaCl has various other technical limitations that make it problematic.

    2. NaCl is not and will not be on the iPhone, iPad, or basically any non-Google device.

  16. Brendan Eich
    entered 28 October 2011 @ 5:53 pm

    Of course the demo is a sequential stunt, and we are working on parallel hardware utilization from JS for IQ, IDCT, MC, and colorspace conversion. At SPLASH I also demo’ed RiverTrail (see my blog for details and links). But what the demo shows is how much upside we have from here by ||izing in JS.

    Note that Huffman decoding is inherently sequential.

    NaCl boosters here are frankly kidding themselves if they think Pepper is coming to other browsers, ever (never mind years from now). Meanwhile JS is everywhere and getting stronger and faster, including in its ability to use || execution units safely. What would you bet on?

    /be

  17. entered 28 October 2011 @ 7:07 pm

    Well, admittedly, NaCl and Pepper are taking some time as well, but they should be able to run apps fast before JS reaches Java speed. That is, if development continues; in light of the Dart VM, it’s hard to tell what the focus will be.

    In the next three years I’d bet on JavaScript kludging along close to its current form, but beyond that it’s hard to say. Mobile OSs are today’s upstarts, like the Web and JavaScript were to Java and desktops in the ’90s. Would you have bet on Java then? What about Windows? It’s still too early to call; Java might get a significant share of front-end programming through Android.

    As far as betting on technology, always bet against the incumbent in the long term. Even if JS/HTML lasts for 15 more years, it won’t be as dominant, like Java and Windows today.

    Hopefully some technology will gain dominance on technical and UX merit, whether it’s some VM/language, LLVM/sandbox/language, or centralized security review. But it would be a shame if NaCl were remembered as a technically better could-have-been because of politics and business.

  18. entered 28 October 2011 @ 8:33 pm

    Detrus: stop the poor-NaCl tiny violins, hearts are not bleeding. It’s not “politics and business” keeping Pepper from being adopted — it’s simple economics. Pepper is a mess, a giant, growing, unbounded API set specified only by code on chromium.org. No sane post-sophomore software business would undertake to reverse engineer it into a different browser than Chrome and then keep up on that treadmill — ignoring any competitive gaming and natural first-mover advantages that Google might exploit.

    I’ve heard otherwise level-headed people dream of NaCl allowing them to project their bespoke, multifarious server-side native code (including databases and language VMs) onto client computers via NaCl targeting a Pepper-equipped browser. That does not pass basic econ 101 tests either. My iPad’s battery is low, I’ve used up too much memory and flash filesystem space — now what? You cannot pretend client machines are commensurate with your provisioned server rackspace.

    NaCl for safer plugins? Sure, and perhaps safer C++, but with native apps one can, as you say, use even better, safer, and even concurrent languages (e.g. Rust). But not transmitted over the web to arbitrary browsers and client machines.

    /be

  19. Jeffrey
    entered 29 October 2011 @ 4:43 am

    I wonder why browser makers don’t support other scripting languages besides just Javascript. It seems a shame to have so many languages available on the server but not the client.

    I’m sure there has to be some technical reason. Like supporting 4 languages means you have to optimize that many virtual machines or something. Still, maybe there’s a way to get around that using LLVM or something. There just has to be a better way than transcompiling to Javascript.

  20. halyavin
    entered 29 October 2011 @ 11:10 am

    LLVM is not managed like JVM or .NET. You can’t execute LLVM code safely in the browser. LLVM code may have buffer overflows that can damage any data in the process. The only way to execute LLVM safely is inside Native Client or some other sandbox.

  21. entered 29 October 2011 @ 1:30 pm

    Jeffrey: you might well think “There just has to be a better way than transcompiling to Javascript.” But realistically, there isn’t. LLVM is not going to be embedded in browsers and exposed to content. It’s a fine ahead-of-time compiler framework, Mozilla is using it for Rust and Emscripten, but it is nowhere near safe enough, or small enough, or otherwise ready to be put in browsers.

    For my analysis of “JAVASCRIPT – Y U NO HAVE BYTECODE”, see slides 17-22 of my SPLASH keynote slideshow available at http://www.slideshare.net/BrendanEich/splash-9915475.

    /be

  22. niall
    entered 29 October 2011 @ 2:46 pm

    Brendan: while I do not yet have a horse in the race, I’m compelled to comment to serve as a devil’s advocate for multiculturalism and stand against the tendency of the old guard to argue for the incumbent. Please pardon my polemic.

    I don’t believe that, realistically, Javascript need be the only language target. Holding up LLVM as the only alternative would be to create a false dichotomy. (I realize there are various projects which make it a common talking point as an alternative but that is not sufficient to make us limit our thinking).

    Can Javascript be extended to new domains? Undoubtedly. It’s hardly basic research to implement algorithm X in Javascript; as useful as that might be to illustrate areas of improvement in the implementations. Nor is it terribly surprising to see it extended with, for example, various typed arrays for efficiency or cross-compilation to data-parallel targets such as OpenCL, or with APIs to access underlying hardware mechanisms such as video etc. I’m not against any of that, and actually it’s great to see such work. By all means, people should enthusiastically tack more legs onto the octopus (not a bad thing), and have fun doing it.

    I have only a passing familiarity with Javascript, but if I look at it as a compiler target, I immediately start to wonder. I wouldn’t be surprised if my questions are erroneous, or if you have tip-of-the-tongue responses to all of these with various existing proposals or future standards work. I’ll return to that shortly but first a random sampling..

    • why is the type system so sparse, I need integers and different sizes of floating point scalars for my language

    • are tail calls properly handled?

    • just how much of my runtime must I write in Javascript and have users potentially download each time my program runs?

    • why can I not pass explicit types for my values to the implementation, have them cheaply checked (far more cheaply than inferring again on each execution), and efficient code generated with such prior knowledge? Note that I’m remaining agnostic here about whether the programmer types the types or whether some ahead of time toolchain fills them in. The question is whether they can be transmitted.

    • can I in fact express the results of any optimizations I do, in valid and efficient Javascript? I’m ignorant but perhaps I cannot even express an irreducible loop in Javascript were I to optimize a piece of code aggressively and introduce a loop with multiple entry points.

    I also immediately begin to worry about Javascript implementation techniques I’d rather sidestep. E.g. are you using the, by now traditional, inline caching techniques that tend to leave pages of memory with both write and execute permissions to avoid costly repeated page protection changes?

    • Will I suffer from jitter inducing dynamic detection of types, and subsequent re-compilation and re-optimization, as the implementation repeatedly re-discovers things I could have told it in the first place?

    • How much energy will be wasted on mobile devices running my application that I could have avoided otherwise with knowledge I cannot transmit?

    • Do you not agree that compilation targets like this have built in advantages for the lead language? That would be a typical complaint about VMs repurposed from say Java, and it applies all the more when the interface is the lead language itself!

    In any event, I’m sure there are many answers to the above. But, the thrust of my question is why a monoculture is good? Yes, a common basis is great, but not to the total exclusion of alternatives with equal footing. Please, tear down this wall. Is the best we can do truly to say “Gentlemen, you may have any code format you like, so long as it’s Javascript”? (As my friends realize I can hardly pass up an opportunity to mash up historical misquotes).

    Most applications have yet to be written. Most programmers have yet to come to the web as a platform.

    Are we so lacking in ideas and resources that we cannot take some of the energy and build a common small and fast platform that levels the playing field for multiple languages without baking in a chosen high-level language?

    Alternatively, perhaps you can convince me that Javascript is the Right Thing, and convince a crusty old compiler/systems nerd like me (hey, I’m in my 30s) to throw some weight behind it :-)

  23. entered 29 October 2011 @ 4:13 pm

    Niall: I cited LLVM only because Jeffrey brought it up. I made no false dichotomy, so please don’t try to put one in my mouth.

    I’m not sure you viewed the Emscripten or River Trail demos, so I’ll just try to respond to your questions and hope you can find all the evidence to support my answers. This isn’t about “politics” or “culture” except in a reductive sense I find useless.

    I’m a realist. If there were a multi-language VM strategy that could get cross-browser standardization, I would support it. There isn’t one in sight other than JS as the bytecode and VM, and this is the case as far as I can tell for solid economic (thermodynamic) reasons.

    “why is the type system so sparse, I need integers and different sizes of floating point scalars for my language”

    See typed arrays and the ES.next binary data (wiki.ecmascript.org/doku.php?id=harmony:binary_data) proposal. The storage types are here in all modern browsers save IE, and I bet real money MS will do typed arrays and binary data. The rest (intermediate and local variable types) can be inferred by advanced JITs.

    “are tail calls properly handled?”

    Proper tail calls are normative in ES.next: wiki.ecmascript.org/doku.php?id=harmony:propertailcalls — yay!

    “just how much of my runtime must I write in Javascript and have users potentially download each time my program runs?”

    Potentially a lot, if your source language is not co-expressive with JS but has demanding semantics that require the full Turing tarpit of a compiler and custom runtime to implement.

    But for C and C++, as Emscripten demonstrates, and for many languages nearby to JS (Python, Ruby, Lua), hardly any runtime code is required. CoffeeScript is “just syntax”. The runtime helpers it requires are simply inlined at low cost into class declaration expansions, and they will go away in ES6 (e.g. <| relieves CoffeeScript from having to fake class-side inheritance by copying).
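
    To be concrete about what those helpers look like, here is a paraphrase (from memory, not the compiler’s verbatim output) of the __extends shim CoffeeScript inlines for a class declaration; the for-in copy is the class-side inheritance fakery mentioned above:

    ```js
    // Paraphrase of CoffeeScript's inlined __extends helper (not verbatim).
    var __hasProp = Object.prototype.hasOwnProperty;
    function __extends(child, parent) {
      // Fake class-side inheritance by copying "static" properties.
      for (var key in parent) {
        if (__hasProp.call(parent, key)) child[key] = parent[key];
      }
      function ctor() { this.constructor = child; }
      ctor.prototype = parent.prototype;
      child.prototype = new ctor();
      child.__super__ = parent.prototype;
      return child;
    }
    ```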

    “why can I not pass explicit types for my values to the implementation, …. The question is whether they can be transmitted.”

    Anders wisely made .NET bytecode untyped (Java bytecode is partly typed). .NET IL types are inferred from storage. JITs do this already for JS. Again the issue of “more types” can arise, and I noted at JSConf.eu that optional guards (wiki.ecmascript.org/doku.php?id=strawman:guards) for the basic machine types supported by typed arrays and binary data could be helpful.

    Or aggressive type inference may win — it’s a race, we don’t know for sure which will win, and we don’t want guards to take over JS as “type annotations” that everyone adds “for performance”. Experience with AS3 in Flash shows this can actually slow code down, and definitely burn programmer productivity on stupid over-annotation.

    “can I in fact express the results of any optimizations I do, in valid and efficient Javascript? ….”

    Please read Alon’s Emscripten paper: https://github.com/kripken/emscripten/blob/master/docs/paper.pdf?raw=true.

    “pages of memory with both write and execute permissions …”

    Yes, but we are also doing JIT-spray defenses. There’s no free lunch but we survived pwn2own along with Chrome. More work needed, security never done, but there’s no putting the inline-caching genie back in the bottle. See also iOS Safari and code-signed WebView embeddings.

    “… re-compilation and re-optimization, as the implementation repeatedly re-discovers things I could have told it in the first place?”

    Not if you don’t thrash the cache. Stable types can be inferred or profiled quickly. The problem case is not the one you pose (“told it in the first place?”) but the more generic code whose type sets evolve unpredictably. That’s a JS problem, not a compiled-from-typed-languages problem.
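
    (A toy illustration of the difference, nothing more: the first call site keeps a stable type set and stays in optimized code; the second keeps widening it and forcing recompilation.)

    ```js
    // Stable: add() only ever sees small integers, so inferred/profiled types
    // hold and the engine can keep running its optimized code.
    function add(a, b) { return a + b; }
    for (var i = 0; i < 1000000; i++) add(i, 1);

    // Unstable: the same function fed an evolving type set (number, string,
    // object) forces the engine to widen its assumptions and recompile.
    add(1, 2);
    add("a", "b");
    add({ valueOf: function () { return 3; } }, 4);
    ```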

    “How much energy will be wasted on mobile devices running my application that I could have avoided otherwise with knowledge I cannot transmit?”

    Good question, some academic research here, promising results. We’re hooking up power meters. JS does well with stable types, but of course H.264 sequential code is a bounding exercise.

    The sweet spot is using the SSE, AVX, NEON, etc. instructions and of course the GPU. Then JS is battery friendly, as good as C++ in practice at runtime, and much much safer and better for programmer productivity. Don’t take my word for it, though. This is something that the market will sort out quickly.

    “Do you not agree that compilation targets like this have built in advantages for the lead language?” If you mean JS-like languages trans-compile best to JS, that’s obviously so. If you mean C and C++ do not, see the demos and stay tuned.

    JS is not a “monoculture” in the sense of a single implementation as specification for a de-facto standard, e.g., IE in the early 2000s. It’s a multiply-implemented ISO standard with multiple open source implementations, a technical commons.

    That there are no other equally secured (security never done, remember, but JS is way ahead of practical competition — see my “J word” slide) web content general purpose programming languages natively supported in browsers is a fact. I’ve analyzed why this is so and likely to remain true.

    But since JS is a usable target language, and getting better, there’s no primary source language monoculture. CoffeeScript is proof of that.

    “Are we so lacking in ideas and resources that we cannot take some of the energy and build a common small and fast platform that levels the playing field …”

    Who is “we”? Google could probably afford to do it, although they would never manage to support all popular languages equally well (that’s a Utopia). Microsoft might afford something like what Google can afford, but also fail to “level the playing field” in practice.

    Other vendors could not afford this asymptotic-to-utopia task, and after version 1 it would get ugly fast. But that assumes a miracle: getting to v1 among all the browsers.

    I think you’re making a category mistake: treating JS as lead language. It’s hardly that. It is simply good enough to survive and suffice, such that displacing it is very hard. A new monopoly could do that (talk about monoculture) over a long enough interval, but JS will be around for many years.

    Rather, I suggest you look hard at JS as target language (since per CoffeeScript, GWT, Dart, and the very long list I showed that Jeremy Ashkenas compiled on github at https://github.com/jashkenas/coffee-script/wiki/List-of-languages-that-compile-to-JS, it is indeed a target language), and see where it falls down. I would be interested in your feedback.

    Since I am a realist, if I see a new monopoly emerge with enough power to displace JS, I’ll be the first to work productively — both to help the new language along if my help is wanted, and to fight the new monopoly power per the Mozilla mission. There’s no contradiction there.

    In the mean time, “always” (hyperbole, as in that movie) bet on JS.

    /be

  24. entered 29 October 2011 @ 4:25 pm

    Reading this thread on HN http://news.ycombinator.com/item?id=2982256 there’s a level of paranoia and anger expressed that seems overblown. Politics seems to be clouding what would be technical decisions were it not for the standards circus.

    The reason I boost NaCl or whatever provides native performance is because it makes standards less relevant. Developers should not have to wait for the old boys’ club to agree on most things. If it were not for jQuery and efforts like it simplifying the nasty API decisions, the JS developer community would look very different today. It probably would have never taken off and you wouldn’t have to worry so much about legacy support!

    But seriously, having native performance in the browser has many implications: not only can you implement features you’d otherwise have to pray for – like h.264/JPG5, non-generic compression, etc. – but you can spend far less time reading flamewars out of frustration and talk to computers instead.

    Standard bodies market themselves as open, but to most developers browsers are pretty similar to proprietary platforms. Few outside developers participate in WebKit, Chromium, Firefox, etc., because of the simple economics: it’s impossible to keep up with their large teams. It’s also impossible for no-name devs to demand functionality without writing some hack, popularizing it and hoping our browser overlords will notice and implement the feature properly. What will really make things open is giving developers the ability to implement the feature soup without your approval, and NaCl/Dart are ways toward that, in theory.

    From the outside it looks like Google is the main force pushing for performance. V8, NaCl, Dart, are all big leaps or attempts at big leaps, all potential avenues to making the web an OS on par with the others. By comparison Mozilla’s stance looks complacent. Firefox has been catching up to Chrome for years and if it weren’t for Chrome’s helpful kick in the ass it would still be a shitty browser that didn’t take performance and UX seriously.

    From Brendan on HN: “Even if any means are justified toward the end of improving web programmer productivity over what JS affords today, Dart represents a specific, clear and negative judgment on the Harmony work in Ecma, a judgment that I believe will be shown to be a mistake. It can’t possibly help us make faster progress in TC39 — it’s at best a distraction and at worst a break in trust among the members.”

    Reading Google’s Dart memo and seeing it only from the perspective of fragmentation and back-stabbing mainly makes sense when you don’t have performance and UX as top priorities. You believe the judgement of Google’s celebrated VM engineers is a mistake or a business-motivated lie. I’m not sure who to believe, but Mozilla has its own interests, like declining market share, and may not want to play another round of catching up, if you want to turn up the paranoia meter. I don’t know what the big deal is with a separate legacy and modern VM if it speeds up development of the modern VM and makes JavaScript as fast as Java even a year earlier. I don’t like Dart’s Java taste, but it’s cleaner than many of the recent JS additions and can be prettified by CoffeeScript in the worst case.

    I don’t know which approach is best but language performance should be the highest priority because it helps alleviate the stagnation caused by the standards circus.

    tl;dr Performance now please

  25. Sam Tobin-Hochstadt
    entered 29 October 2011 @ 4:38 pm

    Niall: you write: “Are we so lacking in ideas and resources that we cannot take some of the energy and build a common small and fast platform that levels the playing field for multiple languages without baking in a chosen high-level language?”

    But this is not a question of ideas and resources, it’s a fundamental problem in systems and language design. No one has created a universal runtime that “levels the playing field”. The JVM did not, .NET did not, LLVM (which is not a runtime but a compiler intermediate format) did not. And how could they? What platform could equally support Go, Smalltalk, Haskell, Prolog, Scheme, and JavaScript?

    From a different angle, why is JavaScript necessarily a worse platform to develop a language for than X86? JS is safe, comes with a garbage collector built-in, and modern JITs will make dumb compilers perform decently. Listing the drawbacks of X86 as a platform is left as an exercise to the reader. :)

  26. entered 29 October 2011 @ 6:44 pm

    Detrus: “Politics seems to be clouding what would be technical decisions were it not for the standards circus.”

    Are you ignoring chronology on purpose? That HN thread predates Google uncloaking Dart, so “politics” were all we had to go by, based on the leaked Dash memo. What “technical decisions”, pray tell, were clouded then, other than by Google’s choice to keep Dart secret till this month?

    “You believe the judgement of Google’s celebrated VM engineers is a mistake or a business motivated lie.”

    Who said anything about lying? I did not. Dart is mostly a Lars Bak retention package as far as I can tell. Google’s vaunted “engineer-run” culture does not mean that Science Decides. In the battle to make the web incrementally better, trying to force a new client-side language into browsers is way down on the agenda.

    Maybe as a server-side Closure/GWT replacement, but that’s not all that was pitched (or even initially developed — it looks like the Dart-to-JS compiler was relatively recent).

    BTW, Lars is a great hacker and knows a lot about VMs, but he doesn’t know all about JS, and he doesn’t want to. He doesn’t like it. Dart’s Strongtalk-ish semantics may be easier to optimize if all you throw at JS is Self with hidden classes + the HotSpot client compiler, i.e. modern V8 with Crankshaft. As we’re showing at Mozilla, there are other ways to optimize harder.

    What I maintain was two-faced about the leaked Dash memo, in its own words, was the toxic brew of “we support JS in public” mixed with “develop Dart in private, target it in Chrome from web apps”. That’s not how open web standards development proceeds, even if the source code is eventually released. And in fact Google has pulled people off of JS standards work to focus on Dart and other proprietary moves.

    So Google is acting more like Microsoft, no surprise. But let’s have no hypocrisy about “open standards” and open-washed source as a lame excuse.

    As for Mozilla and our market share vs. Chrome’s, we certainly have our own bugs to fix, but you’re deluded if you believe that lack of a native Dart VM is among them. Or lack of NaCl/Pepper. I have no competitive pressure to implement either of these from actual users — just from Google sock puppets and fanboys.

    Chrome’s growing share has nothing to do with it having Google-only web language or protocol support. It has a lot to do with terrific Flash integration, and with Chrome’s successful (compared to Mozilla’s) “war on jank” (which is browser-kernel jank, process isolation is irrelevant).

    And it has a great deal to do with a marketing/advertising budget in the middle nine figures US$. Yes, >$300M, I’ve heard. That sure helps when crossing the market share chasm, more than any particular technical innovation. Big bucks subsidized opt-out/default-browser bundling deals don’t hurt either.

    ” I don’t like Dart’s Java taste, but it’s cleaner than many of the recent JS additions…”

    What JS additions? There haven’t been any comparable to what’s in Dart. ES5 (originally ES3.1) was no-new-syntax, all APIs. Sounds like you’re blowing smoke.

    “… and can be prettified by CoffeeScript in the worst case.”

    Oh right, you’ll be writing CoffeeScript which targets JS — semantics and syntax — but it’ll magically target a native Dart VM. In what browser? With what changes to adapt to the different semantics? I think you’re just making up nonsense here. I know Jeremy Ashkenas and communicate frequently enough that I can say this is about as likely to happen as Pepper in all browsers.

    What you dismiss as “the standards circus” is how browsers interoperate in general. Reverse-engineering proprietary standards is to be avoided. It is inefficient and error-prone and even slower than prototype-and-standardize as modern Ecma and WHAT-WG and some W3C groups do.

    Yes, standards bodies are always half-broken. I know this better than you because I led Mozilla’s technical community to launch Firefox to take back share from IE when no one thought that possible, and I co-founded the WHAT-WG, which eventually forced the W3C to recharter the HTML working group. But now that competition is back, proprietary moves are anti-social and counterproductive.

    Standards are thus stagnated mainly to the extent that market players don’t bring their innovations in time to get into the next round. Google chose not to do that with Dart. In doing so, they “help alleviate” nothing.

    If you’re not trolling me, you seem to be pinning your hopes on power moves by Google forcing de-facto “better is better” standards like Dart on other browser vendors.

    That’s consequentialist or morally blind at best. Ignoring morality, it did not work so well last time with Microsoft, and it has been a mixed bag with Apple (where at least they’re honest about what’s open and for the web, and what is closed native sauce), although some of the fault lies with others on the CSS WG not standardizing -webkit-* quickly.

    Pinning fanboy hopes is a risky bet at this point since Google doesn’t have enough market power to force a standard. The likely outcome is fragmentation and even less coherence and coordination via standards.

    This is the non-paranoid point that I made crystal-clear on that HN thread. Here it is again since you missed it or chose to ignore it.

    /be

  27. niall
    entered 29 October 2011 @ 7:27 pm

    Brendan: no false dichotomy in your mouth intended. Hence my attempt to hedge by pointing out that it’s a common talking point but that /we/ should not limit our thinking to it. Also, I welcome your answers, but do note that I expected Javascript advocates (which I certainly do not view as a pejorative) to have them ready. Perhaps I failed in my note to take enough care to stress that I have no horse in the race but am kibitzing from the sidelines. Also, mea culpa on the political overtones — not my intention to divert us in that direction despite my (perhaps poorly) chosen tone.

    Anyway, back to your technical content. I’m reordering slightly to give a better flow of ideas; no offense or misquotation intended, and any mistakes are mine (I find blog comments a suboptimal mechanism for this kind of conversation and am not sure Mr. Shaver appreciates my crudding it up in this style).

    Thank you for the links and references to proposed extensions. I’m familiar with the River Trail demo (I saw it at IDF and have had several private demos of similar technology). Someone had mentioned the Emscripten llvm-to-JS compiler to me too (probably cdleary).

    I’ll start with types as they are near and dear to my heart. Adding typed arrays is certainly a most welcome step in the right direction. As are the optional guards. You point out:

    BE: “Anders wisely made .NET bytecode untyped (Java bytecode is partly typed). .NET IL types are inferred from storage.”

    Anders et al. did a good job on the initial design but the evolution didn’t stop there. Far from it. Subsequently, it was improved by Syme and Kennedy to integrate parameterized types and polymorphic methods into the type system and the intermediate bytecode language (including instruction changes and additions). The sophisticated support at that level enabled not just C# but arguably richer languages such as F# to be implemented efficiently and safely without resorting to, for instance, the tradeoffs due to relying on compilation by type erasure (i.e. throwing away type information as in the typical implementation of Java generics). There seems little doubt that this improved it as a compiler target. It is not that I intend to hold this design up as the be all and end all but it does serve as an educational example. There, of course, have been many other excellent works on intermediate languages.

    I’m sure you’re also familiar with Steve Lucco et al’s work at Colusa Software prior to the acquisition by Microsoft on the Colusa Virtual Machine “Omniware”. They had a RISC style load-store machine, with infinite integer and floating point registers, and augmented with quite complex addressing modes, memcpy style instructions. The implementation used software fault isolation. The transmission format was compressed. All in all, the result was small programs and near native execution coming from languages such as C. This is the opposite end of the spectrum from MSIL, essentially giving an abstract processor from which we must build up as usual when compiling higher-level languages.

    niall: “can I in fact express the results of any optimizations I do, in valid and efficient Javascript? ….”

    BE: “Please read Alon’s Emscripten paper: https://github.com/kripken/emscripten/blob/master/docs/paper.pdf?raw=true.”

    At first blush, it appears that Emscripten emulates memory in a similar fashion, using a Javascript array as the machine heap, and compiles branching to switching on a label to emulate the control flow. This of course works perfectly fine but isn’t as efficient. Plus presumably the implementation itself needs to be taught more tricks to make this perform better despite the extra semantic indirection (I realize I’m not phrasing that thought very clearly).
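
    To check that I’m describing the same thing (my own paraphrase of the pattern, not real compiler output), the branching-as-label-switch emulation I have in mind looks roughly like:

    ```js
    // Paraphrase of the "switch on a label" control-flow emulation: structured
    // loops and gotos become a label variable dispatched inside a loop.
    function sumUpTo(n) {
      var label = 1, i = 0, total = 0;
      while (true) {
        switch (label) {
          case 1:                      // loop header
            label = (i < n) ? 2 : 3;
            break;
          case 2:                      // loop body
            total = (total + i) | 0;
            i = (i + 1) | 0;
            label = 1;
            break;
          case 3:                      // exit
            return total;
        }
      }
    }
    ```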

    And the document does answer my question as to the impact of optimizations. If the document is still current, Emscripten cannot handle all possible LLVM code, and one example is where an optimizer has converted 2 adjacent stores into a larger store. This is another example of the kind of thing I was curious about. My own preference would be to have a target that did allow such optimizations.

    C (and to a lesser extent C++) require relatively little in the way of runtime support, the other examples being rather “co-expressive” to use your phrasing. I shall follow up on some of the other language frontends to see how extensive the code they need to add for runtime is. I’m particularly interested to see where/if gyrations were required for source languages with sophisticated type systems; the Microsoft experiments certainly suggest adding support in the IL to support such languages was a worthwhile thing to do.

    niall: “… re-compilation and re-optimization, as the implementation repeatedly re-discovers things I could have told it in the first place?”

    BE: “Not if you don’t thrash the cache. Stable types can be inferred or profiled quickly. …”

    For some particular hot code in the current execution. I was assuming code was not cached between executions (e.g. between browser restarts) and also was leading into the mobile device question: the rediscovery costs need to be multiplied by number of users too for a full energy accounting.

    BE: “Good question, some academic research here, promising results. We’re hooking up power meters.”

    Now that is extremely interesting. If there is somewhere I can follow up on that please do let me know.

    BE: “The sweet spot is using the SSE, AVX, NEON, etc. instructions and of course the GPU.”

    Certainly; I’m a fan of vectorization. And transmitting the types is vital to easy targeting of such hardware. I think we disagree less than may be immediately visible.

    BE: “JS is not a “monoculture” in the sense of a single implementation as specification for a de-facto standard, e.g., IE in the early 2000s. It’s a multiply-implemented ISO standard with multiple open source implementations, a technical commons.”

    My bad writing. I intended to mean JS as the target language as a monoculture not being something I’d argue for, since my initial impressions are that it needs significant work to become suitably broad in scope. The kinds of things I had in mind are, for example, the support for more sophisticated source language type systems. Yes, they can be compiled away, but that’s suboptimal. As is restricting the scope for optimizations that can be performed once up front.

    BE: “Who is “we”?”

    Perhaps take it as the royal We ;-) Really, I just meant us as a community of technologists. I find partisan politics and commercial interests to be tedious so was making a bad attempt to speak broadly as a ‘room full of geeks’. (For the sake of clarity, I do not work for a company with any interest in the web.)

    BE: “never manage to support all popular languages equally well (that’s a Utopia).”

    I over-chopped that. Just wanted to pick up this phrase. Everyone is scared of the UNCOL boogey man. I would think it uncontroversial to say that there’s a spectrum. We’ll never get 100% perfect support for everything, but that’s not an excuse for us to not push for better. I’m not saying you disagree (or agree for that matter :-).

    BE: “Other vendors could not afford this asymptotic-to-utopia task, and after version 1 it would get ugly fast.”

    Now this is where we are perhaps parting ways. The effort need not be that huge, I think. We have existence proofs of better compiler targets (we also have bad examples such as the parrot vm attempt, which seemed quite wrong-headed) and it seems to me that some of the implementation work done for existing Javascript engines is already more complex than it would need to be. I could, however, be completely wrong.

    Holding that thought for a second..

    BE: “Since I am a realist, if I see a new monopoly emerge with enough power to displace JS, I’ll be the first to work productively — both to help the new language along if my help is wanted, and to fight the new monopoly power per the Mozilla mission. There’s no contradiction there.”

    As a hypothetical, would you support, if to do nothing more than provide some evidence one way or another, an exercise in trying to define a simple (in semantics and implementation), lower-level compiler target that attempts to address these points? Ideally a (very) liberally licensed open source one.

  28. niall
    entered 29 October 2011 @ 7:28 pm

    Sam Tobin-Hochstadt said: “But this is not a question of ideas and resources, it’s a fundamental problem in systems and language design. No one has created a universal runtime that “levels the playing field”. The JVM did not, .NET did not, LLVM (which is not a runtime but a compiler intermediate format) did not. And how could they? What platform could equally support Go, Smalltalk, Haskell, Prolog, Scheme, and JavaScript? From a different angle, why is JavaScript necessarily a worse platform to develop a language for than X86? JS is safe, comes with a garbage collector built-in, and modern JITs will make dumb compilers perform decently. Listing the drawbacks of X86 as a platform is left as an exercise to the reader. :)”

    Actually, x86 isn’t nearly as bad as its reputation! However, it is also true that it has corner cases that are significantly worse than most people realize :-) But, it has reasonably compact code, though the encoding is rather suboptimal, and rather excellent implementations. I suppose we could borrow some tricks from Morrisett et al and add types :-)

    I think, however, your comment mixes a few things up. The JVM was meant to support Java, not be a universal compiler target. .NET did rather better, but of course leaves room for improvement. LLVM was not meant to be a portable virtual machine, as you note. We can all compile to C, as so many of us have done at some point to get a compiler off the ground, but that has its own semantic impedance mismatches. I don’t think these are counter-examples showing that an interface designed as a compiler target for a broader range of languages is infeasible.

    What platform could equally support the languages you mention? Optimally and 100%? Well, I doubt we could do such a thing. Can it ever be done? I have no idea. However, I do know that not knowing how to build the ideal version is no excuse for not building something better than what we have right now. Perhaps I’m in the minority on that. But I’m encouraged by the examples we’ve seen so far of the platforms evolving because of such demands.

    From an engineering viewpoint, it seems clear that both JavaScript and x86 could be improved. And are being improved.

    But still, why not a better common runtime?

  29. entered 29 October 2011 @ 7:35 pm

    @Detrus: “I don’t know which approach is best but language performance should be the highest priority because it helps alleviate the stagnation caused by the standards circus.”

    There is tremendous progress being made both with regard to performance and with regard to JavaScript features. Have you followed ECMAScript.next’s development? It will be a great language.

    NaCl: works well for a certain class of programming languages and not well for others [1].

    Dart: Not even Google currently seems to know what to do with it. Why didn’t they tell us when they launched Dart? There was only a vague “doesn’t target JavaScript, targets the fragmented mobile landscape.” Pre-launch, there was talk about embedding a Dart VM in Chrome; during the launch, that was still “to be determined”. So Brendan’s “Dart is mostly a Lars Bak retention package as far as I can tell” sounds right. Dart would make a lot of sense as a Java replacement on Android, though.

    [1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/043719.html

  30. entered 29 October 2011 @ 7:54 pm

    Niall: I’m familiar with the evolution of the CLR, also with Steve Lucco’s c.v. and recent history. In case anyone is confused, I admire the SFI and CFI research that has culminated (as far as I know) in Native Client. But it’s bad for the web and a mismatch for browsers, and I predict it will not make it to browsers other than Chrome.

    The Emscripten paper is not up to date with all the latest optimizations. Talk to Alon to get the latest, but one reason for the byte-indexed HEAP array holding uint32 elements is precisely to handle coalescing and aliasing. Don’t assume, check with Alon and use the source, Luke :-P.

    “As a hypothetical, would you support, if only to provide some evidence one way or another, an exercise in trying to define a simple (in semantics and implementation), lower-level compiler target that attempts to address these points? Ideally a (very) liberally licensed open-source one.”

    No. I’d much rather see where the pain points are with Emscripten and other compile-to-JS exercises, since I give any from-scratch VM design very low odds of ever getting implemented in more than one browser. Boiling JS down as a target toward such a dream VM is the shorter path in the big evolving system that is the Web and its browser vendor apex omnivores.

    /be

  31. azakai
    entered 29 October 2011 @ 8:17 pm

    At first blush, it appears that Emscripten emulates memory in a similar fashion, using a Javascript array as the machine heap, and compiles branching to switching on a label to emulate the control flow.

    1. The switch-in-a-loop is the unoptimized mode in Emscripten. The optimized mode uses an algorithm (the ‘relooper’, detailed in the paper) that generates high-level native JS loops.

    2. Memory can be either a JS array, or a typed array with shared or non-shared buffer.

    And the document does answer my question as to the impact of optimizations. If the document is still current, Emscripten cannot handle all possible LLVM code, and one example is where an optimizer has converted 2 adjacent stores into a larger store. This is another example of the kind of thing I was curious about. My own preference would be to have a target that did allow such optimizations.

    The document is already somewhat outdated. The “typed arrays with shared buffer” mode can support such LLVM optimizations.

    However, in general such optimizations are nonportable due to endianness and other issues. So having a target that does allow such optimizations has inherent limitations on portability. There is no universal, perfect intermediate representation that is both portable and low-level enough to support all optimizations, pretty much by definition I think. So you need something less optimized and higher-level that will be fully optimized later.
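
    To make the “typed arrays with shared buffer” idea concrete, here is a minimal hand-written sketch (not actual Emscripten output; the HEAP names are only illustrative) of several typed-array views aliasing one buffer, and of why a coalesced store is endianness-sensitive:

        // One ArrayBuffer as the emulated heap, with several aliasing views.
        var buffer = new ArrayBuffer(16 * 1024 * 1024);
        var HEAPU8 = new Uint8Array(buffer);  // byte-addressed view
        var HEAP16 = new Int16Array(buffer);  // 16-bit view
        var HEAP32 = new Int32Array(buffer);  // 32-bit view

        // Two adjacent 16-bit stores at byte address p ...
        function storeSeparate(p, a, b) {
          HEAP16[p >> 1] = a;
          HEAP16[(p >> 1) + 1] = b;
        }

        // ... which an LLVM pass might coalesce into one 32-bit store.
        // This matches the separate stores only on little-endian hardware,
        // which is exactly the portability problem described above.
        function storeCoalesced(p, a, b) {
          HEAP32[p >> 2] = ((b & 0xFFFF) << 16) | (a & 0xFFFF);
        }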

    We have existence proofs of better compiler targets

    I’m not sure I agree. No target that I am aware of has all of

    1. Portable
    2. Safe
    3. Fast for statically-typed languages
    4. Fast for dynamically-typed languages

    LLVM has 3. The JVM and CLR have 1-3 but not 4. JS has all but 3, but is better at 3 than the JVM and CLR are at 4 (and improving faster as well).

    JS does have some limitations for compiling arbitrary code (no shared state, no 64-bit integers); however, in practice this is mostly not an issue with well-written portable code. And other platforms also have limitations (for example, Python on the JVM and CLR is not identical to CPython in behavior).

  32. niall
    entered 29 October 2011 @ 8:42 pm

    Brendan: “But it’s bad for the web and a mismatch for browsers, and I predict it will not make it to browsers other than Chrome.”

    I’m guessing that you are referring to Native Client here rather than SFI in general. Putting that aside, I think it’s worth considering some SFI ideas in contexts other than native binaries. Being inspired by such things, I’m sure someone has taken the obvious step of, for example, segmenting the JS heap and enforcing local-only accesses, with security checks on inter-heap accesses, when sharing an address space between JS programs from different sources. Hopefully the research can be relevant no matter the final implementation choices.

    Given that you are familiar with the CLR evolutions (and, as I assumed, with the earlier OVM work), I guess I am still surprised that you do not seem to consider the addition of a more powerful type system at a low level in the CLR, where the frontend can express the true intent, to be a good thing, rather than relying on inference and observation at run time. All too likely I’m misunderstanding your earlier comments.

    I should wrap up my comment spree — in any event, thanks for the debate and the tweeted clarification of versioning.

  33. niall
    entered 29 October 2011 @ 8:59 pm

    azakai: Thanks for the clarifications. Your comment snuck in while I was typing my last, so I’m subverting my own wrap-up to respond quickly. I did indeed note the relooper (I should have referred to it), but as the author (you, I’m guessing) notes, the algorithm doesn’t apply universally. While most code will be well behaved, it’s possible to optimize to still-portable code that would fail to be ‘relooped’ by this algorithm. Notwithstanding that, it’s still an obviously useful thing to do for most of the sources out there, and fine work to be encouraged.

    As I noted earlier, I’m not holding up JVM/CLR/LLVM/x86 as paragons of virtue, except to refer to some features that seem useful to absorb. I actually think a typed version of the OVM work would be a better example in some respects. There is also much to be learned from the lessons of using compressed ASTs, other typed SSA formats, ANDF (not that I’d recommend it), C--, and so on.

    Then again, I can hardly pass a windmill without tilting :-)

  34. Maks
    entered 29 October 2011 @ 9:00 pm

    Detrus: Your arguments are exactly those made by Francisco Tolmasky (in defense of Joe Hewitt); you have no idea how bizarre your talk of “…our browser overlords” is!

    Sure, there’s a large team of people working on Firefox, but they are not all employed by Mozilla! There is nothing to stop you “innovating” in the open source projects by making a contribution yourself. Look at the work being done with B2G, River Trail, etc.; there’s no reason why you can’t pitch in…

    But of course there is: it’s much easier to whinge from the sidelines and point at how much better proprietary platforms are than to actually do the (very) hard work of improving existing open ones. But if you are so enamoured with proprietary options, please go ahead and use them. Just don’t pretend that efforts like NaCl and Dart are anything but proprietary efforts, and it’s been pointed out time and time again that there is no way all major browser vendors are going to implement them.

    As Brendan has pointed out, so far the track record for JS opponents has been pretty bad. Almost weekly now, each feature that’s been touted as the reason to drop JS and move to something else (proprietary) has been shown to be doable in JS and existing web browser tech in general.

    Of course it’s good to make developers’ lives easier and to make the existing technology better and faster, but that’s exactly what is happening, both in the standards committee and out in the world of “independent” developers.

    TL;DR: if you want “Performance now please”, go do it yourself! Or go lock yourself into some vendor’s proprietary wares. If you want to see it sooner rather than later, go pitch in and help, or stop wasting others’ time by whinging from the sidelines about how you wish the world were a more closed, locked-in place.

  35. azakai
    entered 29 October 2011 @ 9:20 pm

    but as the author (you, I’m guessing) notes, the algorithm doesn’t apply universally. While most code will be well behaved, it’s possible to optimize to still-portable code that would fail to be ‘relooped’ by this algorithm.

    Yes, the algorithm is guaranteed to generate some native loop structure, but not necessarily the optimal one. For example, if the original code was a tangled mess of |goto| commands, there might be no efficient way to represent that with loops. However, I believe this is not a significant problem on most real-world code.
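
    To make the difference concrete, here is a small hand-written illustration (not actual compiler output) of the two shapes of generated code discussed above: the unoptimized switch-on-a-label emulation of control flow, and the ordinary native loop the relooper aims to recover:

        // Unoptimized shape: branching emulated by switching on a label in a loop.
        function countEmulated(n) {
          var label = 1, i = 0;
          while (true) {
            switch (label) {
              case 1: i = 0; label = 2; break;
              case 2: label = (i < n) ? 3 : 4; break;
              case 3: i = i + 1; label = 2; break;
              case 4: return i;
            }
          }
        }

        // Relooped shape: the same control flow as a native JS loop,
        // which JIT compilers handle far better.
        function countRelooped(n) {
          var i = 0;
          while (i < n) {
            i = i + 1;
          }
          return i;
        }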

  36. entered 29 October 2011 @ 10:18 pm

    “Brendan: “But it’s bad for the web and a mismatch for browsers, and I predict it will not make it to browsers other than Chrome.””

    Oops, my “it’s” dangled. I did indeed mean NaCl, not CFI enforcement in general. That’s a topic we continue to follow as researchers, and employ in Mozilla code when we can. Compiling Mozilla’s C++ with NaCl (Linus Upson says Chrome will do so at some point) is not practical right now. But we have made real strides, quantifiable in bug rates and bounties paid, using things like Gecko rendering object poisoning, an ad-hoc enforcement of CFI for certain C++ types where we control the allocator.

    On the question of low-level IR, typed instructions are definitely helpful — they may preserve crucial information from the high-level source language or an earlier phase that’s not in the “class info”, e.g. for explicit coercions. And of course we don’t want erasure if polymorphism is to be supported.

    But for a web-transported long-lived canonical low-form (ignoring generics which would take reified types), I’m skeptical. Over-lowering is future-hostile. We would rather have a higher-level intermediate in which types are optional. Again, such a “bytecode” looks so unlikely to me at this time (assuming it’s not evolved, minified JS) that I’ll quit speculating.

    /be

  37. entered 29 October 2011 @ 11:06 pm

    @azakai: I think to suggest that 64-bit integers are not necessary for well-written portable code is just a little dishonest. 64-bit integers get used all over the place. Emulating them in software is not exactly a recipe for great performance.

  38. azakai
    entered 30 October 2011 @ 8:06 am

    @Kevin Gadd:

    1. I agree this can be problematic. However, what I was saying was based on practical experience: This has not yet been a problem in any project I have compiled, which includes Python, Bullet, zlib, Doom, OpenJPEG, Poppler, FreeType, etc. etc. – plenty of real-world code.

    The only project where this looked like it might be a problem is in libAV, but I didn’t go through with compiling it (we ended up using the Android decoder instead), so I don’t know the answer there.

    The only case I can remember where I actually had to make a source code modification was in Doom – it uses 64-bit fixed-point math (to awesome effect). But the change was less than 10 lines of code; it was trivial.

    But again, I don’t disagree with you – this is potentially a problem, which is why I mentioned it as such. But in practice I have not yet been hit by it. I suspect most projects that aim for portability simply do not want to rely on 64-bit math being fast.

    2. I might surprise you here, but Emscripten doesn’t actually emulate 64-bit integers yet :) I haven’t had the need to actually write that, as I mentioned in point 1, so instead what the compiler does is use JS numbers. This means that 64-bit integer values up to 53 bits in size will work fine (as doubles) – and fairly quickly – but larger ones will hit rounding issues (a quick sketch of that limit follows at the end of this comment). Again, I am surprised this actually works in all the projects I’ve compiled, but it does so far…

    3. Btw, do you have an example of a real-world project that does rely on full 64-bit integers, preferably one where performance matters? I’ve been looking for one for testing purposes.
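
    For reference, a minimal sketch of the 53-bit limit mentioned in point 2 (plain JS, nothing Emscripten-specific):

        // JS numbers are IEEE-754 doubles: integers are exact only up to 2^53.
        var ok  = Math.pow(2, 53) - 1;   // 9007199254740991, exactly representable
        var bad = Math.pow(2, 53) + 1;   // not representable; rounds back down

        console.log(ok === 9007199254740991);  // true
        console.log(bad === Math.pow(2, 53));  // true: the +1 was silently lost

        // So 64-bit values whose magnitude stays under 2^53 (e.g. Doom's
        // fixed-point math in practice) survive the double representation,
        // while full 64-bit arithmetic would need real emulation.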

  39. Robert
    entered 30 October 2011 @ 12:35 pm

    And yet, having to code in javascript every day, I continue to hate it more and more.

  40. Robert O'Callahan
    entered 30 October 2011 @ 1:20 pm

    Detrus: “It seems like so much is happening because of the marketing.” All the stuff I mentioned (and more) really is happening, it’s shipping code. I don’t know how you can pretend it isn’t.

    azakai: I wanted to write a Chronomancer debugger front-end in XUL, but gave up because I couldn’t represent 64-bit addresses efficiently. So I would like real 64-bit ints in JS please :-)

  41. Robert O'Callahan
    entered 30 October 2011 @ 1:26 pm

    Tachyon doesn’t look all that interesting at the moment. The type analysis and specialization look similar to what Brian’s TI does (enabled in Firefox nightlies). Just being metacircular isn’t itself a win … IBM Research went a long way down that path with Jikes RVM (I was there) and it never really paid off IMHO.

  42. entered 30 October 2011 @ 1:42 pm

    JS should support 64-bit ints, via bignums if not directly. I will work on this with dherman and others.

    /be

  43. entered 30 October 2011 @ 3:33 pm

    Brendan: It seems like you’re missing my main point and arguing about minor ones (minor to me, at least). Sorry for not being clearer.

    I don’t know what kind of actual users would be asking for Dart/NaCl. Developers? If you don’t provide a performant runtime, people won’t build apps to take advantage of it, and end users won’t know what they’re missing. I myself am not asking for a particular technology, but for native performance sometime soonish, not in 10 years like it’s taking Java.

    If current JS can be optimized to Java/C speeds, how long do you expect it to take? You say Dart should be easier to build an optimized VM for, which also means faster to build right? The real question is how much faster. Maybe if it’s 2 years faster it’s worth considering?

    You have to use technology to get around the standards situation because it’s too fragile as a process. There was a hiatus of several years that was broken by Chrome and mobiles, but now if Chrome ruins the party, then what? If there were a fast runtime, devs could make and popularize their own video-tag-like features, as they did for DOM/Ajax and CSS/JS syntax, while browsers had other priorities.

    As for my minor points, I don’t get what the big surprise was with the leaked memo. NaCl had similar implications: they talked about it being spread as a plugin, first being tested on Chrome before being considered as a standard. Then you’d need some popular game, app or marketing gimmick to take advantage of the tech. Chrome already had its own web app store and demoed some NaCl/Unity game that would only work there. It would be a way to leapfrog standards. It’s close to Dart VM in principle.

    If the Dart memo wasn’t leaked, you’d just start looking at it from its official release, skip some politics, and judge it on its technical merit and the development time of its VM. You’re frustrated that big companies try to get around standards but many devs are frustrated at web capability advancing slowly. Slow progress justifies some consequentialist moves, even if they only underline that progress is too slow.

    Forked CoffeeScript or a new project in the same spirit would clean up significant aesthetic decisions in Dart. Call it DartScript https://github.com/jashkenas/coffee-script/issues/1765 but key things like significant whitespace, ->, optional parentheses, if/else/unless should work.

    And small scale JS/API additions like getter/setters, web workers have ugly syntax. Then there are suggested |>, .= things meant for CoffeeScript to clean up, and this http://brendaneich.com/brendaneich_content/uploads/JSLOL.010.png is scary. Getting pretty syntax and APIs from committees seems hopeless. You’re only considering pieces of CoffeeScript, but very little of JQuery was adopted on the API side? Maybe there’s some way to bundle popular projects so they’re perceived as part of the browser and standardize that?

    As for NaCl not being suited for the web, do you mean the binaries? I guess large binaries on simple sites would suck, but for large apps you use every day it’s fine, if they were cached. There was also a demo of Go where it compiled from source inside NaCl instantly. I’m sure Pepper is as bad as you say, but wasn’t the idea to standardize NaCl/Pepper after implementation because it’s so experimental?

  44. entered 30 October 2011 @ 3:49 pm

    Maks: NaCl/ChromeOS/Pepper/Dart are somewhere in between proprietary and open. Being experimental, there’s not much point in NaCl/Pepper settling on standards anyway.

    And fully proprietary platforms with wide reach, like Flash, have stagnated as well. Very little work on improving VM performance. No threading like web workers. They’ve recently prioritized 3D, hardware acceleration and mobiles, but just like browsers/JS they have lots of technical issues to overcome, and they will stay as far behind native OSs as browsers are if they don’t take some clean-break approach soon.

    This is strawman stuff: “whinging from the sidelines about how you wish the world were a more closed, locked-in place.” If NaCl/Golang were standardized, it would not create a world of closed-source evil. I was just wondering if the approach of statically compiled languages and a binary sandbox was a better one than VMs in general. Then I wondered if a language redesigned for performance would lead to an optimized VM faster. I wasn’t suggesting we should forget standards and open source, but that we consider separating from the legacy stack if new technology is significantly better.

    Also, I don’t see how I could do “performance now please” myself if large teams of experts can’t. I’m not a VM/language-design expert. What could I do, hack NaCl/Go into Firefox and distribute a plugin? No, that kind of thing has to be done by the browser overlords; workaround JS libs for performance are not possible.

  45. entered 30 October 2011 @ 8:37 pm

    Detrus: I’m gonna assume you’re not trolling, one more time, until I can’t stand it. After that, we’re done.

    “Brendan: It seems like you’re missing my main point and arguing about minor ones (minor to me, at least). Sorry for not being clearer.”

    No, you made significant claims and I responded. Don’t weasel out of them.

    “I don’t know what kind of actual users would be asking for Dart/NaCl. Developers?”

    No one. Why are you hand-waving? Stop making stuff up. No developers are asking us, or my Apple friends, or my Microsoft colleagues on TC39, for Dart, and no NaCl fans have asked Mozilla for the unaffordable free lunch of Pepper in perpetuity.

    Only one blogger blamed Mozilla for hurting the open web by not supporting NaCl — as if that would snowball Apple and Microsoft into supporting it too. Come on, think through the costs, estimated from KLOC and interface complexity counts (crude ones like numbers of classes, methods, method parameters).

    “If you don’t provide a performant runtime, people won’t build apps to take advantage of it”

    Stop right there. JS is performant for modern browsers, and cross-browser, and developers do not build for only one browser. The burden of proof is on you here.

    “… and end users won’t know what they’re missing. I myself am not asking for a particular technology, but for native performance sometime soonish, not in 10 years like it’s taking Java.”

    You can’t have native performance. Not even from NaCl. You are putting forth arrant spin here. Please stop or into the troll plonk-bucket you go.

    NaCl, especially on non-IA32, has non-trivial overhead. The best-case numbers are for IA32 and rely on segment registers.

    “If current JS can be optimized to Java/C speeds, how long do you expect it to take?”

    I never said to C-equivalent speed, so again: watch your rhetoric.

    In fact, JS using || hardware can go faster than sequential C, and C+OpenCL is unsafe compared to JS with ParallelArrays and WebGL.
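
    For a rough sense of that data-parallel style, here is a sketch in the spirit of the River Trail ParallelArray work (illustrative only; the exact API surface of the prototype may differ):

        // An elemental function with no shared mutable state, which is what
        // lets the engine farm the work out to cores or vector units safely.
        function square(x) { return x * x; }

        var pa = new ParallelArray([1, 2, 3, 4, 5, 6, 7, 8]);
        var squared = pa.map(square);  // may run in parallel; same result either way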

    “You say Dart should be easier to build an optimized VM for, which also means faster to build right?”

    Wrong. Dart is a different language, different runtime semantics, no shared heap. That means a new codebase, plus a cycle collector and another extension to the common cross-language bridge (to avoid O(n^2) language to language ad-hoc bridging). That is a multi-year mission, and Google’s way ahead (I first heard of Dart in May 2010, and it was already under way).

    “The real question is how much faster. Maybe if it’s 2 years faster it’s worth considering?”

    No. The cost is in addition to required JS performance improvements, and we can’t afford both. Neither can other vendors. No free lunch.

    “You have to use technology to get around the standards situation because it’s too fragile as a process.”

    What do you mean by “use technology”? Try to use concrete examples. What you wrote is almost empty of meaning.

    “There was a hiatus of several years that was broken by Chrome and mobiles, but now if Chrome ruins the party, then what?”

    There was no hiatus. When Chrome launched in Sept. 2008, Apple and Mozilla were within 30% of its debut performance figures on the common benchmarks. Apple actually kept up well until Crankshaft.

    “If there were a fast runtime, devs could make and popularize their own video-tag-like features, as they did for DOM/Ajax and CSS/JS syntax, while browsers had other priorities.”

    As the Emscripten H.264 demo, plus River Trail and a bit of WebGL extrapolation suggest, this is precisely where we are going. Again, NaCl+Pepper will never cross to other browsers. So why are you even wasting your time and mine rewriting history and assuming JS can’t get to || hardware first, enabling exactly what you sketch here, which I agree is the goal?

    “As for my minor points, I don’t get what the big surprise was with the leaked memo.”

    Google was playing open-web good cop in public, fairly preening about it. The memo showed the Dennis-Franz-naked-butt bad cop. Bully for you if you had that figured out already (I did too, due to confidential sources, so [and anyway] big deal). My argument on HN and to you is that Google lacks market power to get away with such “might makes right” moves. They’re wasteful and costly.

    “NaCl had similar implications: they talked about it being spread as a plugin, first being tested on Chrome before being considered as a standard.”

    It’s not gonna make it — plugins are dying on Mac and Windows. Partly because JS+HTML+CSS+new-standards mean you don’t need native code to do advanced graphics or multimedia.

    “Then you’d need some popular game, app or marketing gimmick to take advantage of the tech.”

    I know the pitch. Why are you making it? It’s clearly “in default”, behind on reality-payments, about to go into bankruptcy.

    “Chrome already had its own web app store and demoed some NaCl/Unity game that would only work there.”

    Yes, the only business case for NaCl is to get Lego Star Wars (demoed two Google I/Os ago, cleverly narrated as if it were “HTML5”) and the like into the Chrome store. But the Chrome store is a bit of a flop, from the numbers.

    “It would be a way to leapfrog standards. It’s close to Dart VM in principle.”

    This kind of might-makes-right gambit (force the other vendors to reverse-engineer or swallow Google-controlled, open-washed source) is also a flop. It can’t work without enough market power that it would work independent of technical and open-source wins. Google doesn’t have that market power. Deal with it!

    “If the Dart memo wasn’t leaked, you’d just start looking at it from its official release, skip some politics, and judge it on its technical merit and the development time of its VM.”

    Yes, but (don’t you dare weasel out of it), you implied previously on this very blog post’s comment thread (above, for all to see) that I should have already done that when I posted HN comments, before Dart was out.

    Stop rewriting history. Your rhetoric and broken chronology stink. I don’t care about apologies, but have the common decency to drop it when you’re this wrong.

    “You’re frustrated that big companies try to get around standards but many devs are frustrated at web capability advancing slowly.”

    And you are ignoring that the big companies trying to get around standards could have done more to advance the standards, making the devs less frustrated. This is so obvious I again suspect bad faith on your side.

    Prove me wrong. Why shouldn’t Google have taken a day to preview Dart to Ecma TC39 and propose changes to ES.next before our May 2011 proposal-cutoff? We could have done many things to make JS run Dart faster. Or perhaps make Dart more obviously unnecessary — oops.

    “Slow progress justifies some consequentialist moves, even if they only underline that progress is too slow.”

    No, consequentialism is never justified. Glad you cop to it here, though, and I wonder why you think it’s ok some of the time, but bad for the bad guys (like the villains in “Raiders of the Lost Ark”). You don’t get to pick which “good guys” employ it. Market power holders (not Google, not yet) may be good, bad, or a mix.

    “Forked CoffeeScript or a new project in the same spirit would clean up significant aesthetic decisions in Dart. Call it DartScript https://github.com/jashkenas/coffee-script/issues/1765 but key things like significant whitespace, ->, optional parentheses, if/else/unless should work.”

    Go for it. Oh wait, where’s that native Dart VM?

    And you again dodge the semantic shift. Syntax is not the issue. CoffeeScript’s semantics are exactly JS’s. Your fork has to change from a trans-compiler to a full Turing tarpit compiler with a JS runtime emulating Dart’s bignums, classes, optional/unsound types, etc. That’s a lot of work, and it will run slower without a native Dart VM.

    “And small scale JS/API additions like getter/setters, web workers have ugly syntax.”

    Web workers are an API from the WHATWG; they have no syntax. What in the world are you talking about?

    Getters and setters have ugly syntax? Give an example. I’ve never heard this. I hope you’re not talking about API or the old __defineGetter__ method here.

    “Then there are suggested |>”

    Uh, it’s spelled <| not |>, and see my es-discuss post of today: https://mail.mozilla.org/pipermail/es-discuss/2011-October/017735.html

    And again, you shifted your argument. Before, you had written “but it’s cleaner than many of the recent JS addition” — clearly “recent JS additions” is talking about actual implemented additions to JS, whether de-facto or de-jure standards. Now you’re throwing stones at provisional next-generation syntax? Again, you are weaseling out of your own words, changing your argument under fire. Stinky.

    I’ll deal with the new argument anyway: <| is not going to make it; we’ll use a keyword instead. So much for “ugly”. But the quality of your argumentation is so poor that I’m going to cut off my blow-by-blow reply here.

    My question to you is why you are acting like a troll, or a Google sock-puppet. You’re not going to get NaCl+Pepper in other browsers. Dart isn’t even in Chrome. You are in a state of denial about these present and likely future facts.

    Meanwhile, JS can run H.264 in realtime and we’re ||izing using the GPU. Now I have work to do, and you can save your next reply for someone more credulous.

    /be

  46. Sam Tobin-Hochstadt
    entered 31 October 2011 @ 4:05 am

    A few bits on VM and bytecode technology:

    First, Azakai makes the key point that no one has developed a fast, safe, language-agnostic runtime. There’s a reason that the new JS engine in IE9 didn’t compile to .NET, despite Microsoft’s heavy investment in both (including a research .NET JS compiler and runtime). This gets even harder when you consider languages that are more different from each other than JavaScript is from C# or Java.

    There are certainly things that would make JavaScript a better platform for building other languages, and many of those things are being considered on TC39. Others, like Typed Arrays and WebGL, are already shipping in browsers. I think exposing a bytecode format for the semantics of the JavaScript VM in a browser wouldn’t have much if any gain beyond that.

    Safe, language-agnostic, fast VMs are a really hard problem — developing them is the task of researchers not browser implementors.

    Second, there seems to be confusion about what will lead to fast performance and when. It may be possible to optimize Dart more than JavaScript, but Dart has basically no optimization advantages over Java. So expecting Dart to move much faster than Java in performance is a mistake. NaCl is a very different technology which is almost guaranteed to run at native speed, but Google’s plan currently seems to focus on PNaCl, where performance is somewhat less guaranteed. Either one is a totally different proposition from Dart.

  47. entered 31 October 2011 @ 9:52 am

    [...] source code is available for download on GitHub. To test it, you want to use the latest nightly build of Firefox . The compiler is also [...]

  48. entered 31 October 2011 @ 10:14 am

    “Safe, language-agnostic, fast VMs are a really hard problem — developing them is the task of researchers not browser implementors.”

    Add to that the design challenge of long-lived, compatible (version-free?) bytecode or AST encoding, also research, yet also wanted as part of the utopian package deal. I’m not saying it’ll never happen. I am suggesting JS could evolve into it more readily than a clean-slate replacement could take over. In a new “hyperweb” that is 10x better than today’s web? Sure. 10% or even 100% (2x) better? Not likely.

    /be

  49. entered 31 October 2011 @ 10:28 am

    [...] Shaver, former Mozilla vice president of technical strategy, highlighted Broadway in a blog post that he wrote last week about JavaScript performance. He compared various approaches that have been [...]

  50. Jeffrey
    entered 31 October 2011 @ 1:05 pm

    I don’t think the Flash Player will die. At least as long as most websites use it for video. Let’s not forget about Netflix (a significant portion of internet traffic), which uses Silverlight and some unknown software on DVRs, game consoles, smartphones, Roku and Chromebooks.

    Imagine if Microsoft hadn’t disbanded the Internet Explorer team for four years, and had kept improving it in speed and functionality (like users expected them to). Would there really have been a chance for alternative browsers? Even in this timeline, Internet Explorer still has about 40-50% of the market.

    Where would Firefox be without Netscape/AOL/Google’s money, promotions and developers? Probably underdeveloped and largely unknown to the general public. (Of course it wouldn’t even be recognizable as Firefox.)

    It seems that business interests are more important than morality, fairness and friendly competition, and sadly, on several occasions, more important than customers. This is why things like DRM and proprietary software still exist. Plugins as such will be around years, maybe decades, from now.

    I think it’s delusional to believe in a coming utopia where everything is open and everyone works together making the same platform several different times. If anything, I believe a thinning of the implementations will come. Like the format wars of the past, there can only be one true winner.

  51. entered 31 October 2011 @ 2:16 pm

    Brendan, I’m not sure who is trolling whom; I don’t know the particulars, I don’t pretend to be an expert, and I put plenty of question marks in my posts. You really misinterpreted my tone. No, I’m not talking in some programming language designed for techno-political discussion, which seems to be in the works at Mozilla. It feels like you’re twisting around every minor statement and extracting details that are not there.

    You in http://news.ycombinator.com/item?id=2982256: “If they use Dart and need native performance”. Me: “If current JS can be optimized to Java/C speeds”. You: “You can’t have native performance.”

    These are imprecise terms, but thanks for fixing up my rhetoric. A key question is still: how long do you expect it to take to optimize runtimes for current/standards-evolvable JS to their theoretical/practical/Java/not-C performance limit?

    “Yes, but (don’t you dare weasel out of it), you implied previously on this very blog post’s comment thread (above, for all to see) that I should have already done that when I posted HN comments, before Dart was out.”

    I implied? No, you implied that I implied, from “there’s a level of paranoia and anger expressed that seems overblown. Politics seems to be clouding what would be technical decisions were it not for the standards circus.” Did I say that thread should have been a technical discussion? No, what I actually implied was that there seems to be a lot of politics involved in standards. I expect internal discussions at proprietary platform vendors like iOS or Flash to involve a smaller proportion of politics relative to technical discussion. That is because they don’t have to deal with standards; there are fewer people to consider. It’s a captain-obvious statement that you read a lot into because it was vague.

    “And you again dodge the semantic shift. Syntax is not the issue. CoffeeScript’s semantics are exactly JS’s.”

    So DartScript’s semantics can’t be exactly Dart’s? I’m not talking about a CoffeeScript fork that can compile to either Dart or JS, only to Dart. Is this more difficult than Coffee to JS? And such an effort would have to assume a Dart VM could become widespread, through standards or wars, which I don’t assume. Dart got a poor reception from a coding-comfort/technical POV. I assume they didn’t plan for memos to leak, or for how that changes standards relations and their plans.

    “Prove me wrong. Why shouldn’t Google have taken a day to preview Dart to Ecma TC39 and propose changes to ES.next before our May 2011 proposal-cutoff? We could have done many things to make JS run Dart faster. Or perhaps make Dart more obviously unnecessary — oops.”

    The details of the standards process are not for me to know. You know, and you say it could be platform wars and/or retention packages. I could guess wildly: maybe Google wants ChromeOS to be as capable as iOS, with a massive framework that would put an end to the JS-library galore (as Closure was envisioned) and therefore make development smooth, and they expect to reach Java-ish performance sooner than you do and expected to convince you. They probably expected a reception sans ridicule (https://gist.github.com/1277224). Their timing? Late, or waiting for a better sweet-talking position. They didn’t say they’d stop dev of regular JS in the memo; I didn’t know they cut resources to that.

    Who cares about my guesses, I don’t know enough details, am too credulous and not paranoid enough.

    “As the Emscripten H.264 demo, plus River Trail and a bit of WebGL extrapolation suggest, this is precisely where we are going. Again, NaCl+Pepper will never cross to other browsers. So why are you even wasting your time and mine rewriting history and assuming JS can’t get to || hardware first, enabling exactly what you sketch here, which I agree is the goal?”

    The time waste stems from Google engineers, like those on NaCl and Dart (the memo), who shared their opinion that what they were working on had more potential (performance, language flexibility, NaCl’s support for legacy C++) than standards-based JS evolution. As a non-expert I was hopeful that NaCl could allow for a ubiquitous and performant platform, which browsers are not yet but could become through it. I’ve asked questions about this and read comments from what must have been sock-puppet fanboys saying that NaCl was indeed a viable technology, and other comments pointing out that vendors won’t adopt it, but not necessarily for technical reasons: reasons like legacy support, loss of platform control, platform wars, etc.

    “Don’t we need native apps/NaCl/Dart/Java/Flash/Silverlight/Unity?” – a quote from Emscripten slides, since it’s a common theme. I was hoping vendors could get over non-technical interests in some distant future and adopt what will be mature alternative technologies by then.

    You detailed technical reasons like Pepper and that JS performance should catch up earlier, cheaper, etc. Sorry we couldn’t have a more fruitful discussion.

    Sam Tobin-Hochstadt: “Dart to move much faster than Java in performance is a mistake”

    Do you mean the development of the Dart VM or the ultimate performance limit? I expected Java to be the approximate performance limit. From the memo I expected development of Dart VM to go faster than JS VM, assuming it was a technical conclusion.

    With many platforms in development making various promises, like Flash/Silverlight/NaCl/JS, it’s difficult to tell whose implied timeline/capability promises will pan out. Flash promised good performance on mobiles with various GPU/AOT acronyms two years ago, but it has only recently become tolerable, and it’s still unimpressive. Hopefully browser vendors can manage better.

  52. Sam Tobin-Hochstadt
    entered 31 October 2011 @ 2:48 pm

    Detrus: “Do you mean the development of the Dart VM or the ultimate performance limit? I expected Java to be the approximate performance limit. From the memo I expected development of Dart VM to go faster than JS VM, assuming it was a technical conclusion.”

    What I mean is that, given that Java took 15 years to go from being a “slow language” to a “fast language”, Dart will probably take a while as well. After all, it’s not as if the technology for Java VMs or JS VMs will apply directly, and each of those evolutions took a while despite the existence of fast predecessors such as Self and Smalltalk and Common Lisp.

  53. entered 31 October 2011 @ 6:43 pm

    Jeffrey: I’m flattered you varied my words (“utopia”, “deluded”), but you’re knocking down straw men. I’m the realist who is not predicting identical-to-native-C speed (apples to apples, no || hardware units employed), or a VM for all languages in the browser, so spare me lectures on self-interest generally ruling this world.

    However, morality is still an option for all of us. If it weren’t, a great many institutions would have failed forever, and many people who put under-paying vocations and commitment to service ahead of money would be out of work.

    Yes, AOL and Netscape before them helped Mozilla survive during the dark days, and we’re grateful. They didn’t have a good business case for doing so, note well! And Firefox took off as the AOL money was running out. Next you’ll be praising Google for partnering with us, but that was entirely in their interest. It really was a business deal, based on a similar one Safari had struck.

    You’re right that a new monopoly or even an 80% market share holder could force new standards, as I noted in this interview last week:

    http://siliconangle.com/blog/2011/10/31/qa-javascript-creator-brendan-eich-on-standards-the-influence-of-lisp-and-more/?utm_source=dlvr.it&utm_medium=twitter

    and as I’ve said on HN and here in earlier comments. But realists take note; we don’t have such a power in play yet. Possibly Google will eventually get there, but it’s an open question, and Apple would have to go into decline.

    For now, the standards game is active precisely because of the more balanced competition, not due to the inherent altruism of all humans. Robert O’Callahan is right to cite recent progress on all fronts.

    So, why should we stop now? Because Detrus wants the unobtainable by any means? You haven’t made a case, just a prediction that’s probably true in a long-enough time frame: that a new power will emerge and reshape the Internet. Mail me when it’s time to give up, ok?

    /be

  54. Jeffrey
    entered 31 October 2011 @ 8:41 pm

    Don’t accuse everyone who disagrees with you of being a shill or fanboy. If fact is on your side, the character or affiliation of the people opposing you shouldn’t matter. Still, I promise you I have no relationship with any corporate entity, professional or romantic.

    Also, please don’t misunderstand what I was saying as pull/push with the arguments you and others are having with each other. I just thought you and others should be more realistic about how the world works. You implied that Internet Explorer failed because of morals and also that plugins were dying.

    As much of a realist as you are in some cases, I thought you could use some more realism in those areas. I guess I also wanted to share a recent realization of mine: that business interests tend to be behind all the useful and interesting products or services around. I wasn’t praising Adobe, Microsoft, Netscape, AOL or Google, only recognizing their power and presence.

    Netscape/AOL didn’t fund you out of altruism; it probably had more to do with their deals with Microsoft, or PR, or perhaps some ridiculous pitch that fooled the CEO. It doesn’t matter that Firefox took off while the money was running out, since the product still wouldn’t exist without it. Plus, as you admit, it was in Google’s interest to partner with you. Yes, morals are everyone’s choice, but they don’t make the world turn.

  55. entered 31 October 2011 @ 10:07 pm

    Jeffrey: I did not accuse you of anything!

    I did not say anything about IE failing because of “morals” either. Could you quote my words that made you write that?

    Plugins are dying, though. This isn’t a matter of my opinion. The iOS devices, now Windows 8 Metro. Adobe is moving to targeting browsers (+ PhoneGap).

    Your last paragraph’s incoherent. AOL did not give Mozilla $2M over two years with any business plan whatsoever. It was a matter of executive conscience (Ted Leonsis, with Mitch Kapor playing Jiminy Cricket to save us from only $1M and premature lifeboat ethics and death at sea before Firefox was ready). In other words, morality.

    Before that, AOL bled money on Netscape the browser, but they bought Netscape mainly for the netscape.com portal. That didn’t pay off either.

    I think you are punching at shadows. I never said morality always wins, or might always makes wrong. My arguments for standards work are pragmatic and provisional. In particular, I believe de-facto standards forged by market power should be codified by de-jure standards. This applies currently to many -webkit CSS properties.

    It doesn’t apply to Dart or NaCl/Pepper, though — pretty obviously. For the umpteenth time, my problem with those as would-be cross-browser standards is their opportunity costs and (lesser issue, maybe non-issue) fragmentation effects.

    /be

  56. Jeffrey
    entered 31 October 2011 @ 11:24 pm

    I must have misread what you were saying. You haven’t attacked my character either. I’m not even sure how I got that impression. By the way here’s the quote:

    “That’s consequentialist or morally blind at best. Ignoring morality, it did not work so well last time with Microsoft, and it has been a mixed bag with Apple (where at least they’re honest about what’s open and for the web, and what is closed native sauce), although some of the fault lies with others on the CSS WG not standardizing -webkit-* quickly.”

    I assumed you were talking about Internet Explorer’s marketshare decline.

    I still think you might be painting a glossy picture of AOL’s executive conscience, but maybe you have clearer insight about this. As long as CEOs are allowed to make moral decisions for companies, one man (probably the CEO’s acquaintance) can make a difference. I wonder if Mitch Kapor could maybe talk the head of Adobe into open-sourcing their Flash Player.

    In the meantime, I still see video sites using Flash. It’s hard to believe that one day that will suddenly just change because the Flash Player is no longer available. If that happens, there will only be a bunch of really pissed-off customers ready to use another web browser. Realistically, I think Microsoft will backtrack on a lot of their ambitious choices before Windows 8’s release.

    By the way, sorry if my writing isn’t good. I still try though.

  57. entered 1 November 2011 @ 11:46 am

    My words there were directed at Detrus, who advocated consequentialism in reply.

    “I assumed you were talking about Internet Explorer’s marketshare decline.”

    No, I was talking about the IE monopoly stagnating the web when they had 95% share. My “didn’t work so well” was aimed at Detrus and others who seem to think Google will do better. Some of my friends at Google admit the company is too short-term in its focus to do right for the long run of the Web.

    I’m not saying “Google = bad” here, please note. They do good too. But as my insider friends concede, their approach is increasingly short-term and distorted. It will take many parties, certainly not just Mozilla, to steer a better long-term evolutionary path that’s still achievable by short-enough hops.

    “I still think you might be painting a glossy picture of AOL’s executive conscience ….”

    No, I’m telling it like it was. In 2003 when AOL cut Mozilla loose and helped set up the foundation, they had no business interest in us, and no idea of what Firefox could do. None of us did, although we knew the right path to follow. Picking the right path is the trick, then and now.

    /be