AMO and the quality bar

AMO has long occupied a special place in the Firefox software ecosystem. It’s the only site on the installation whitelist by default, the default server contacted for information about add-on updates, and where we send users who are looking for hot add-on leads.

That unique position means that there is a lot of value for some add-on developers in being hosted on AMO. Such hosting involves a review process, which I think reviewers and developers alike would agree is one of the most frustrating parts of the whole system. The intent of the review process is entirely on the side of the angels: help make sure that add-ons are good for users.

The devil, of course, is in the details here. At times, the review bar has been placed entirely too high, in my opinion: otherwise-fine add-on updates rejected because they cause a strict warning to appear in the JS console, for example. In other cases, we’ve had add-ons approved that send some data to a central server but don’t have a privacy policy listed. The most common and burdensome examples of the latter tend to be associated with “toolbar-building” services: the ostensible authors of the resulting toolbars typically know very little about what’s being collected or how it’s being managed, which makes for a predictably unsatisfying conversation with reviewers.

(There are other elements of the review process that are inconsistent and difficult, mostly related to needing to reject items for errors in things that the add-on authors can change after the fact without review, but which can’t be helpfully fixed by the reviewers. These are the “easy” implementation artifacts, though, and not really the topic of this post.)

The trade-offs here are painful. Adding a standard of “usefulness” or “implementation quality” to the checklist would not only dramatically slow the review process and require more specialized skills among our reviewers, but would also increase the variability between different reviewers’ decisions. Those are all things I don’t think we can afford to make worse, and both the history and the special position of AMO push me towards a much more laissez-faire stance: if the description accurately describes what the user will get when they install the add-on, especially as far as the collection and management of private information is concerned, then I think we should let the user decide whether they consider the functionality useful. Requiring “significant new functionality” would also work against the interests of a fair number of users: some popular add-ons simply duplicate functionality that is already present in the browser, such as preference settings, adding only an alternate means of accessing it.

At the same time, of course, I think it’s quite desirable to be able to point users at a more “filtered” view of the enormous add-ons space hosted on AMO. We currently have one such view, the recommended list, but that’s not really much of a solution to the broader problem. (It doesn’t try to be, really.)

A minimum rating threshold would be one way to narrow the default search results returned to a user, though such filtering depends on the reliability and resilience of the rating system. Our current one isn’t sufficient to prevent the sort of gaming and distortion that would plague us in such a world, but that’s not to say that a sufficiently robust one couldn’t be developed. (Not “perfectly robust”, mind; just enough to keep the damage well below the gain.)
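To make the threshold idea concrete: one standard way to harden a rating system against low-effort gaming is to rank by a conservative estimate of quality rather than the raw average, so a handful of planted five-star votes can’t outrank an add-on with hundreds of genuine ones. This is only an illustrative sketch of that general technique (a Wilson score lower bound over up/down votes), not anything AMO actually implemented; the data shapes and names here are invented for the example.

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Lower bound of the Wilson score interval for a proportion.

    More resistant to gaming than a raw average: 2/2 positive votes
    scores *lower* than 90/100, because the tiny sample widens the
    confidence interval around the observed rating.
    """
    if total == 0:
        return 0.0
    p = positive / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    spread = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - spread) / denom

def filter_by_threshold(addons, threshold=0.5):
    """Keep only add-ons whose conservative score clears the bar."""
    return [a for a in addons
            if wilson_lower_bound(a["up"], a["up"] + a["down"]) >= threshold]

# Hypothetical catalogue entries, purely for illustration.
addons = [
    {"name": "widget-a", "up": 90, "down": 10},  # broadly reviewed
    {"name": "widget-b", "up": 2,  "down": 0},   # tiny sample, easy to game
]
survivors = filter_by_threshold(addons)  # only "widget-a" clears the bar
```

The point of the lower bound is exactly the robustness trade-off discussed above: it doesn’t make gaming impossible, it just raises the number of fake votes needed until the damage stays well below the gain.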

A simpler approach would provide a single piece of metadata, set by reviewers or administrators using their judgment and likely after some multi-reviewer discussion. This wouldn’t scale as well as universal rating by users, but it would be more resistant to gaming and abuse (and easier to track and remedy if such nefariousness is detected).

This post is already too long, but you can read and write more about various possibilities for rating and approval schemes in the Remora Idea Dump. We’re thinking about and working on ways to help users find good add-ons, in a way that scales across our community, and I suspect it’s something that we’ll be working to improve for some time!