The Danger of Mediocrity

Addons make the difference between a regular web browser and a browser that just got “pimped” (you can pimp your email client, too :D). Having a wide variety of extensions and themes lets developers and product managers focus on the core features the majority of users need without having to simply say, “sorry, nope, can’t do that”.

Okay, so maybe your average extension isn’t going to turn your world upside-down the way West Coast Customs spending a week with your Honda would… fine. But addons for and by the people are a vital and important bridge between a non-bloated application core and end users who want personalized functionality.

Once we understand the importance of addons, and the role addons.mozilla.org (AMO) plays for Mozilla products, we can start to focus on improving it.

So it’s great to see healthy discussions about the future of the service, and how it should be modified to ensure we ship addons with a level of quality comparable to Firefox or Thunderbird.

I recently read David Baron’s excellent post about the danger of extensions and how an undersupported and relatively unregulated addon service could mean disaster for Mozilla products and Mozilla’s reputation in the community.

To say the least, I am now filled with uncertainty and doubt.

Though my doubt isn’t about the service itself, or whether the community will be able to recover from the first security hole found in an extension, or the fact that extensions sit outside the normal Mozilla review standards.

I’ve got FUD about whether these products can thrive if they cannot change and adapt quickly to new trends, web services or client-side tools available in other browsers.

Despite the theoretical holes in AMO, it’s there, it’s important and it’s popular — for better or worse. It has many great extensions, some good ones, and many poor extensions as well. It’s a distribution as diverse as the community; filled with the good, bad, and the ugly.

And the dangerous? Maybe. David’s points are valid — I share his concerns as well — but assuming problems with extensions will translate into plummeting consumer confidence in an application isn’t necessarily such a straight line. The risks we take with AMO would also need to be weighed against consumer confidence in a product that didn’t adapt and offer unique features not found anywhere else.

It’s clear — we’ll have to find the solution halfway. We need to improve the overall quality of addons by improving the review process, and making moves to encourage the same openness and exposure Firefox and Thunderbird get. Most of these changes start with an improved (and solidified) policy, which is currently a major focus.

The technical solution could include CVS integration, stricter review requirements, and Bugzilla integration. Ideally, we would have all of those, and everybody would develop, test and verify quality for their own extensions the way Mozilla does for Firefox and Thunderbird.

That all sounds great. Now, who’s going to do it? What would the practical schedule be for a new addon? A new extension version? When does the process of controlling the processes become simply overwhelming?

As much as I wish otherwise, the complete Mozilla review process isn’t a perfect fit for addons. It would delay addon releases, create a barrier between developers and the community, and create a lot of additional work for addon developers — who would most likely be satisfied keeping their extensions on their own sites, in their own repositories, with their own bug-reporting software.

So how do we properly review this stuff, then? Myk Melez brought up some good ideas about a modified rating system to gauge the overall “trustability” of a given extension. I thought that his approach would be a good one given the unique nature of the addon life cycle:

(Our current system) has a number of problems:

  1. it ignores other trust metrics that would refine our sense of each extension’s trustworthiness;
  2. there have never been enough reviewers, so extensions often wait days or weeks to get reviewed;
  3. generally only one reviewer reviews each extension, increasing the risk of human error;
  4. reviewers review extensions they aren’t personally interested in, so they exercise them less thoroughly than the ordinary users they approve them for;
  5. there’s little reward to being a reviewer, and the downside (being held responsible for approving a harmful extension) is significant.

An alternative approach is to publish each extension to the site the moment it is submitted but hide it from the majority of users until it reaches a trustworthiness threshold which takes into account all available trust metrics, including user feedback, code review, whether it is open-source, whether it is signed with a valid code-signing certificate, the trust score for other extensions by the same author, etc.

Until then, only specially designated “tester” users who have been apprised of the dangers and are willing to face them can access the extension. These testers, a much larger group than the current pool of reviewers, will provide feedback on such extensions like ordinary users, and that feedback will serve as one of the trust metrics through which an extension can reach the trustworthiness threshold required to make it available to the rest of the users.

This approach has several benefits over the current approach:

  1. testers try out extensions they’re interested in using, so they exercise them more thoroughly;
  2. multiple testers must look at and provide generally positive feedback for each extension before it satisfies its “user feedback” metric, so no one tester is solely responsible for deciding if an extension is good;
  3. each extension’s trust score comprises multiple trust metrics, increasing the quality of our decisions about each extension’s trustworthiness.

And, I strongly suspect, the mean time to publication will decrease significantly.

Myk’s idea is basically to properly weight the different metrics unique to the AMO process to determine whether or not an extension is trustworthy. It’s like an improved Digg mentality with a bit of moderation. While there is definitely more discussion needed in the next month, this would be a great place to start.
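To make the weighting idea concrete, here’s a minimal sketch of how several trust metrics could be combined into one score with a publication threshold. All metric names, weights, and the threshold here are hypothetical illustrations, not an actual AMO design:

```python
# Hypothetical sketch of a weighted trust score. Metric names, weights,
# and the threshold are illustrative only -- not a real AMO design.

# Each metric is assumed to be normalized to 0.0-1.0 before weighting.
WEIGHTS = {
    "user_feedback": 0.35,   # average tester/user rating
    "code_review": 0.25,     # has a reviewer looked at the code?
    "open_source": 0.15,     # source is published and inspectable
    "signed": 0.10,          # valid code-signing certificate
    "author_history": 0.15,  # trust earned by the author's other extensions
}

PUBLISH_THRESHOLD = 0.6  # below this, only "tester" users see the extension

def trust_score(metrics):
    """Combine normalized metrics (0.0-1.0 each) into one weighted score."""
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

def is_public(metrics):
    """An extension becomes visible to ordinary users past the threshold."""
    return trust_score(metrics) >= PUBLISH_THRESHOLD

# A new, unreviewed but open-source extension with decent tester feedback:
new_extension = {
    "user_feedback": 0.8, "code_review": 0.0, "open_source": 1.0,
    "signed": 0.0, "author_history": 0.5,
}
print(trust_score(new_extension), is_public(new_extension))
```

The appeal of this shape is that no single metric — and no single reviewer — decides the outcome; an extension can accumulate trust from several directions at once.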

Sometimes pushing the envelope can give you a paper cut.

19 thoughts on “The Danger of Mediocrity”

  1. Tomm Eriksen

    I totally agree with you on the need for more quality in the review process. But I think you can attack this problem from another side as well: through communication.

    I think many people don’t understand that the extensions are not of the same quality as the core product, and they often hold Mozilla responsible for extensions not being upgraded and breaking when upgrading etc.

    A lot of these misconceptions can be avoided if this is properly communicated to the user. Tell the user that extensions are made by people outside Mozilla, and if an extension developer fails to maintain his extension, there’s nothing you can do about that.

    I don’t mean a total “you’re on your own” kind of disclaimer, but some more information about the kind of relationship and control Mozilla has over extension quality.

    I can’t find a single word about this on addons.mozilla.org…

  2. Yes, communication is an area that definitely needs improving. It’s been getting better; I’ve seen some good signs — at least in the developer community (e.g. dbaron’s post) — that people are discussing and thinking about our situation.

    The policy area (or lack thereof) will have to be addressed very soon, so the lack of verbiage regarding addons policy should be taken care of in the next couple of weeks.

  3. Dan Veditz

    “it’s not … whether or not the community will be able to recover from the first security hole found in an extension” — we’ve already had an extension security hole that got a fair amount of publicity (GreaseMonkey) and the world has not come crashing down. There have been others that didn’t make as big a splash, but weren’t hidden either.

  4. To build on what Tomm said: I’ve seen people complaining long and loud about how mozilla 1.5 is an unstable memory hog; when it was pointed out that this might be due to extensions, the retort was “I’ve only installed official ones like [huge list of extensions from AMO]”

    Bad memory leaks in popular extensions (e.g. adblock) can have a major negative impact, as can bad interactions between extensions.

    I can see a lot of not-as-techy-as-they-think-they-are gadget freaks downloading Firefox, immediately installing 30-40 extensions “because they look cool”, and then bitching to the world about how Firefox is crappy because it’s slow and crashes a lot.

    Anyhoo. There definitely needs to be some outgoing comms that these things aren’t official Mozilla products; better QC on extensions is good (and will need a significant amount of work), but I think that some sort of (restrained) “caveat emptor” warning that they are not tested to the same level as Firefox itself is urgently needed.

  5. alanjstr

    You still have the problem of getting more, interested, trustworthy testers. How many testers frequently visit a Russian forum or need to convert to Cyrillic? How many really really really want to know if Abe Vigoda is still alive?

  6. Arteekay

    There needs to be some accountability on the part of the reviewers/testers as well, otherwise the trustworthiness of the extension is easily skewed in exactly the same way it is now. I would assume testers’ feedback would be one of the larger metrics in determining an extension’s eventual release to the general public, and weighting it lightly would only serve to make any reviews somewhat pointless. I’d hope to be able to very easily see some statistics for each tester: what extensions they have reviewed or approved/declined, etc.

  7. This should absolutely be done. The awful extension version compatibility system would be greatly improved by some sort of community review.

    At a crude level, having a dozen members of AMO saying that an extension works in a given release would probably suffice. Maybe this should have moderator/developer review, but there’s definitely room for improvement.

  8. Cameron

    I love it. Let’s do it. Who’s going to write the code for it, though?

    As for the stuff about policy, communication and documentation, let’s do that too. NOW.

  9. Boris

    Perhaps there should be provisions for annotating an extension with comments about known bugs that it causes? We have plenty of such bugs in bugzilla. As a further thought, perhaps extensions that are causing severe enough bugs should be removed until fixed (that sounds like the “hide by default” idea, and may be the same thing at heart).

    But frankly, given the number of crashes and leaks involving extensions that I’ve looked at in the past week (about half of the total number of crash bugs), I’m feeling pretty down on extensions right now.

  10. Frank (DesertFox)

    Just the kind of post I wanted to read.

    And your blog is ten times better because of the Chinese, which I can read 🙂

  11. Actually, I’ve been thinking a lot about this. I tried to improve some of the more popular extensions and I can compare the results. And I think that this problem needs to be solved by giving the authors an incentive and the means to improve the quality of their extensions (and by quality I mean code quality + interface quality + documentation quality).

    AMO doesn’t have the resources to test all extensions thoroughly of course, and no amount of unqualified testers will change that — some extensions behave mostly correctly from the user’s point of view but under the hood they are severely flawed. For some there simply aren’t that many testers (I agree with alanjstr on this; fortunately Xpoint Sidebar enjoys thorough testing outside AMO :). Furthermore, simply stating the issues doesn’t always help — many extension developers have only basic programming skills and would need tips on the direction to take.

    So my idea was to introduce a quality mark in addition to the current basic review: “this extension has been tested by an experienced developer and it looks good”. This testing would need to concentrate on the more popular extensions (probably on the author’s request) and can’t be per-release any more. I think it would be a good idea to give people an option to report bugs in the extension — they are using comments for it anyway. These reports would go to the extension author but they would also be evaluated on AMO — if something severe is reported, the extension might need a new review.

    Then we have to make sure that users will mostly install extensions with the quality mark — by explaining the meaning of the mark somewhere where people will see it, by putting extensions with the quality mark on top of all lists, by creating an additional “high-quality extensions” list, etc.

  12. Something else: while CVS integration is obviously impossible, AMO could at least provide easy access to source code of the extensions listed. Decompressing every extension submitted into a source code repository shouldn’t be that difficult, then LXR could be used to navigate it (ideally one would be able to get a changelog for the files). This would help testers a lot, and Gecko developers wouldn’t have to guess on whether some extension uses a particular feature.

  13. Wes

    This “trust” system seems like it will be abused like crazy. Someone wants their extension out and wants it out soon. What’s to stop them from flooding it with good comments?

    The current system works fine IMO. If Moz is really nervous, they should review extensions on AMO harder, and leave them to be distributed from personal websites until they are accepted. It’s not that hard to run a Google search to find non-AMO ones right now anyway. In fact, I rarely download from there anyway, because I don’t trust that it’s up to date.

  14. Pingback: Peer Pressure
  15. Hello,
    My computer crashed and I lost everything, only days after installing Firefox. That was in November and it has taken me until now, February, to come back to Firefox, to trust it enough (plus I trust that Windows XP is more secure). So I’m now beginning to wonder if it wasn’t one of the extensions that messed my computer up. I dunno, but I think that the extensions do need a better review process to protect users. Thank you, and thank you to all the developers and reviewers that work so hard.

  16. VanillaMozilla

    Tomm Eriksen wrote:
    “I can’t find a single word about this on addons.mozilla.org…”

    Actually, there is one word — “beta” (see if you can find it). And virtually no one sees it or pays any heed. I’m now telling people that extensions are beta or alpha software, that they will not necessarily work with future versions or with each other — not to mention possible security risks.

    The Support Forum is filled with loud complaints from people who glibly alter Firefox with extensions and themes, but who are offended, or at least disillusioned, by having to troubleshoot. It’s foolish not to inform people of possible quality problems.

    There’s another risk that hasn’t even been mentioned yet. Extensions are the only third-party software that can evade firewalls by default. They not only have all the potential power of virtually any malware, but they can also phone home with the explicit permission of the user. They have the potential for being the irresistible trojan. I don’t know what the actual risk is, but it will only take one to do serious damage.

    As for GreaseMonkey, this time the white hats found the problem first. We can’t count on always being so fortunate.

  17. Pingback: Mozilla News

Comments are closed.