enterprise

Supporting enterprise deployments has always been challenging for Mozilla (and for enterprise deployers), but we’ve nonetheless had some success; one Forrester study from 2010 indicated that Firefox has 20% or more market share within whatever they consider to be enterprises. That success has involved assistance from enterprise IT and OS vendors, both to represent their needs and to contribute actual work. If we want to expand on that success, then we will need to push farther ahead still; we need to be humble enough to know that we need help with both understanding the problems and doing the work.

what do enterprises need?

To my mind, there are two different elements at play here: supporting enterprise users, and supporting enterprise IT.

Enterprise users ask for Firefox to operate fully in their environment. Examples of things that fit in this category include NTLM, proxy auto-discovery, SOCKS proxies, running on Windows XP, and smart card integration.
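To make the proxy items concrete: proxy auto-discovery ultimately hands the browser a proxy auto-config (PAC) file, which is itself a small piece of JavaScript. A minimal sketch, with hypothetical hostnames:

    // A PAC file: the browser calls FindProxyForURL for every request.
    // isPlainHostName and dnsDomainIs are standard PAC helper functions.
    function FindProxyForURL(url, host) {
      // Internal hosts connect directly, without a proxy.
      if (isPlainHostName(host) || dnsDomainIs(host, ".intranet.example.com"))
        return "DIRECT";
      // Everything else goes through the corporate proxy, with a SOCKS
      // proxy as a fallback if the first is unreachable.
      return "PROXY proxy.example.com:8080; SOCKS socks.example.com:1080";
    }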

Enterprise IT ask for a pretty different set of things, though they’re sometimes also the ones who present their users’ requests. Requests include deployment tools like MSI packaging, non-admin updating, preventing updating, pre-installing extensions, filtering/vetting/preventing extension installation and update, control over user settings via things like Group Policy, and running quickly from a network share. There are probably many more, since I am mostly inferring now, and details like “control what exactly via GPO?” contain many devils. One proposal was to just provide support for all the knobs that IE (then 7) supported; when I saw that one of the knobs was to disable tabbed browsing, I think I used the word “unlikely”.
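As a concrete example of the “control over user settings” item: Firefox already ships an AutoConfig mechanism that lets an administrator lock preferences at startup. A minimal sketch (the specific preferences and values shown are illustrative):

    // defaults/pref/autoconfig.js, shipped in the install directory:
    // points the browser at the AutoConfig file below.
    pref("general.config.filename", "mozilla.cfg");
    pref("general.config.obscure_value", 0); // plain text, no byte-shifting

    // mozilla.cfg, next to the Firefox binary; the first line must be a comment.
    // lockPref() fixes a value so the user can't change it; defaultPref()
    // changes the default but leaves the user free to override it.
    lockPref("app.update.enabled", false);
    lockPref("network.proxy.type", 2); // 2 = use a proxy auto-config URL
    lockPref("network.proxy.autoconfig_url", "http://proxy.example.com/proxy.pac");
    defaultPref("browser.startup.homepage", "http://intranet.example.com/");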

I don’t know how much of the enterprise IT checklist is necessary and how much is desirable. Similarly, I don’t know where the value curve and the difficulty curve cross, and it’s been very difficult to find out in detail.

when do enterprises need it?

Enterprise IT and internal app owners (but I think not very many enterprise users) also need varying amounts of time to absorb new versions of software into their environments. I’ve heard tell of 18-month cycles for that sort of thing, which boggles my mind a bit, and makes me weep for the people who are spending a year and a half doing that validation. Adapting to a world where you can get 4-8 browser updates in a year will be challenging for some organizations, though in truth they’ve already been in that world for some time: 3.6.4 with out-of-process plugins on Windows was probably a bigger content-compatibility risk than going from Firefox 4 to Firefox 5.

Chrome and Firefox both now use major version numbers to indicate “checkpoint that we pushed to our users”, rather than a specific amount or nature of product change. Again, though, we’ve been in that grey area for some time, with 3/3.5/3.6 being minor-with-major-characteristics sorts of releases.

Our policy prior to FF5 was that we supported releases for a maximum of 6 months after the next release. Some Linux distributors needed to support their customers for longer, and weren’t (yet?) comfortable with bumping along to the next “major”; a couple of them got together and took responsibility for backporting security patches to the branches in question. If an “enterprise-friendly” lifecycle is needed, then that would be one way for enterprises to share the load of supporting their community’s unusual-at-the-scale-of-the-web needs.

why is this so hard?

One aspect is that Mozilla is mostly not composed of enterprise IT staff. This means that we rely on prospective deployers to tell us what their specific needs are, and hopefully to contribute help in meeting them. We’ve tried on a few occasions to collect this information — what sets of features would lead to which deployments with what user impact? — but have had a lot of trouble getting that information into our product planning in a usable way. A surprising (to me) number of institutions will not talk on the record about what they need, which makes it pretty hard for them to join a community conversation about what is worth investing in, and what the results are likely to be.

how can we make progress?

As I mentioned above, we’ve made some slightly successful attempts in the past to engage with enterprise representatives to identify the specific things that are barriers to their deployments. If enterprise deployments are to have a meaningful impact on the future direction of the web, then we need to try another tactic. I don’t know what that is, but we are hearing more about enterprise needs now than ever before, so we have a new opportunity to figure it out.

(People like to point at the long, ugly tail of IE6 as an example of why we should want to make it easy for an administrator to deploy Firefox. Yet administrative ease is not enough, as those on IE6 didn’t even move to IE7 or IE8, which had all the administrative bells and whistles imaginable.)

We would also have benefitted from better communication about the release and update model, and what version numbers now mean (basically nothing). That would have given enterprises an opportunity to involve themselves in the process much earlier, and probably won us some advice about how to explain things even better.

To be successful, I think we’re also going to need enterprises to be more than just consumers of the software and tools. We need to build a framework for enterprises to contribute to the things they care about, and we need enterprises to actually make those contributions.

Enterprises may also need to change how they think about software rollout if they want to keep pace with browser development and the evolution of the web platform. This is similar to managing the update cycle of a hosted application like Google Apps or Office 365, which could help the conversation. Some organizations will not be able to adapt, and some will not be willing. If an organization stays on Debian stable for half a decade, they are just going to be left behind by basically all modern software. For less-severe cases, though, there is likely some meeting in the middle that’s possible. It could even lead to more agile deployment of their whole software stack, and more productive and happy users because of it!

Jay has written about this as well, and we’ll post more as we get set up for those conversations.

a three-dimensional platform

Firefox now includes, on all desktop platforms, support for a technology known as WebGL. WebGL allows web developers to use accelerated 3D graphics, including textures and shaders; it exposes the same capabilities used by games like Doom 3 and (optionally) World of Warcraft, and virtually every game that runs on the iPhone, OS X, Android or Linux.
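For a sense of what the API looks like from a page, here’s a minimal sketch that just acquires a context and clears the canvas; note that some browsers expose the context under the “experimental-webgl” name:

    // Minimal WebGL bootstrap: get a 3D context from a <canvas> and clear it.
    var canvas = document.createElement("canvas");
    var gl = canvas.getContext("webgl") ||
             canvas.getContext("experimental-webgl");
    if (!gl) {
      // No WebGL-capable browser or driver; fall back to a 2D path.
      throw new Error("WebGL not available");
    }
    gl.clearColor(0.0, 0.0, 0.0, 1.0); // opaque black
    gl.clear(gl.COLOR_BUFFER_BIT);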

Security professionals, including Context IS, have discovered bugs in the specification (related to cross-domain image loading) and in Firefox’s specific implementation. Both are being addressed, as with security problems in any other technology we ship, but recently the conversation has turned to inherent security characteristics of WebGL and whether it should be supported at all by browsers, ever.
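On the cross-domain image issue specifically, the direction under discussion is to require CORS approval before a cross-origin image can be used as a texture. A hypothetical sketch of what that could look like from a page, reusing the gl context from the snippet above:

    // Using a cross-origin image as a texture under a CORS-based fix:
    // the image server must opt in with an Access-Control-Allow-Origin header.
    var img = new Image();
    img.crossOrigin = "anonymous"; // request CORS approval for the load
    img.onload = function () {
      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
    };
    img.src = "http://other-origin.example.com/texture.png";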

I think that there is no question that the web needs 3D capabilities. Pretty much every platform has built or is building ways for developers to perform low-level 3D operations, giving them the capabilities they need to create advanced visualizations, games, or new user interfaces:

  • Adobe is building 3D for Flash in a project called “Molehill”, about which they say: “In terms of design, our approach is very similar to the WebGL design.”
  • Microsoft is doing something similar with Silverlight 5, where they’re bringing XNA Graphics to Silverlight 3D.

Adding new capabilities can expose parts of the application stack to potentially-hostile content for the first time. Graphics drivers are an example of that, as are font engines, video codecs, OS text-display facilities (!) and image libraries. Even improvements in existing capabilities can lead to new types of threats that need to be modelled, understood, and mitigated. We have a number of mitigations in place, including a driver whitelist that’s checked daily for updates; this seems similar to the driver-blocking model used in SL5, based on what information is available. Shaders are validated as legal GLSL before being sent to the driver (or to Direct3D’s HLSL compiler), to avoid problems with drivers mishandling invalid shader text. We’re also working with the ARB and driver vendors on extensions to OpenGL which will make the system even more robust against runaway shaders and the like.
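From a page author’s perspective, that validation surfaces through the ordinary compile path; a minimal sketch of defensive shader compilation:

    // Compile a shader and check the result. Invalid GLSL is rejected by
    // the browser's validator before any source text reaches the driver,
    // and shows up here as an ordinary compile failure the page can inspect.
    function compileShader(gl, type, source) {
      var shader = gl.createShader(type);
      gl.shaderSource(shader, source);
      gl.compileShader(shader);
      if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        var log = gl.getShaderInfoLog(shader);
        gl.deleteShader(shader);
        throw new Error("shader compile failed: " + log);
      }
      return shader;
    }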

Microsoft’s concern that a technology be able to pass their security review process is reasonable, and similar matters were the subject of a large proportion of the discussions leading to WebGL’s standardization; I also suspect that whatever hardening they applied to the low-level D3D API wrapped by Silverlight 3D can be applied to a Microsoft WebGL implementation as well. That Silverlight also runs on the Mac, where these capabilities must be mapped to OpenGL, makes me even more confident. The Microsoft graphics team seems to have done a great job of making the D3D shader pipeline robust against invalid input, for example. (The Windows Display Driver Model in Vista and Windows 7 is a great asset here, because it dramatically reduces the “blast radius” of a problem at the driver level. This likely explains the difference between default-enable and default-disable for WDDM/non-WDDM drivers in SL5’s 3D support. It’s not yet clear to me what the model will be for SL5 on OS X.)

It may be that we’re more comfortable living on top of a stack we don’t control all the way to the metal than OS vendors are, but our conversations with the developers of the drivers in question make us confident that they’re as committed as we and Microsoft are to a robust and secure experience for our shared users. Web developers, like all developers, need good 3D support, and — as with Flash and Silverlight — browser implementers will need to be careful and thoughtful about how to expose that functionality securely on all operating systems.