Summary of ideas for an advanced vetting process
1) Have a subset of plugins that are “vetted”, “flagged”, “badged” or some verbiage to differentiate them as peer reviewed.
2) Do so via a peer review team for plugins, similar to the theme review team. There seemed to be some acceptance of my concept of the academic paper review process.
2a) Plugins utilize some sort of process for submission for peer review. Could be a form submission citing blog posts, code reviews done by devs outside the dot-org process, etc. Could also be based on a certain number of star ratings or downloads. Could also be based on the plugin author’s standing in the repository (aka is it a Mark Jaquith or a first-time plugin submitter). Could be some combination of these.
2b) Once plugins are deemed vet-worthy, perhaps a small group of people review the plugin and go through a theme-check-esque review process for the plugin, similar to peer review of an academic paper.
3) Assume a plugin is vetted. Now how will it be displayed, found / searched for and maintained? Some sort of tagging / badging system could be utilized. Ability to remove approval or badge would be important.
3a) Very difficult task here. How and where do we “promote” vetted plugins and give them an edge vs standard plugins. Make them sticky for standard plugin tags? Have the ability to limit searches to only vetted plugins? All of the above? Something else? This process needs quite a bit of brainstorming yet imo.
Might be wise to consult the vip team from .com to see how they go about this process, given the understanding that the goals are not the same.
One of the ideas we were tossing around for Renku (http://renku.me/) was a tagging/badge system. The idea we had was to have different people give their seal of approval to products. Given that it’s a commercial service, the idea here was “pay $x to get a review, and if you pass, you get a badge”.
For WP.org, I believe the same sort of system could be implemented, minus the commercial side of things. Rather than just having a general vetting, badges could be applied for different areas: say, a security badge, a usability badge, etc. These would all be handled through separate queues, with a team assigned to each type of badge. To start out, you could have just one badge (e.g.) and scale up from there.
That would also tie into the social aspect. Mark Jaquith, for example, is well-known for his security audits, so he’d be a great person for the security team. On the other hand, he might not be the best for the usability team (he’d be great regardless of the team; this is just an example), whereas someone from make/ui might be better.
The problem then of course comes down to: how do you handle a giant queue like this? Surely everyone will want their plugins reviewed. I think the best way would be that you can only be in one queue at once. Some plugins might want the security seal of approval more than a usability one, so this would help to spread out the load.
One other issue is, how do you handle updates? I think you could show the badge, but have it greyed out to indicate that the current version hasn’t been vetted fully, but that previous versions have been. A separate, expedited queue could be made for updates, where the diff of the versions is checked, rather than rechecking the entire plugin.
Regarding UI, I’d add a checkbox filter into the plugin search (both on wp.org and in the plugin installer) to filter to just plugins with a security badge, e.g. In the list, you could also have little icons next to plugins to see at a glance if they’ve been vetted. I’ll see about doing a mockup of this if it sounds good.
Really like this in contrast with the general “vetted” option. The SEO plugin I need for a project with 5 MM visits a day doesn’t necessarily have the same requirements as an SEO plugin I want for my mom’s blog.
Don’t know that I agree there need to be multiple “exposed” metrics for vetting code, aside from separating “prettiness” from “well engineered” – and you could even argue those should both be part of a vetting process.
End users don’t want to have to look for 4 badges. They want to know that it passed the “WordPress acceptable standard” or failed it (or weren’t vetted yet). A plug-in should be secure, clean up after itself, not break anything, and not fail under load (etc).
Do we really want to say “vetted for security! but, hey it may break WordPress”?
If it may break WordPress, it should be out of the repo altogether.
But if I need to send my mom to look for a plugin, I don’t care about scale, but I want the best UX possible so she doesn’t call me 30 times a day. If this is for a corporation website with a tech savvy webmaster, I want the best scalability I can get, even if it doesn’t come with the best UX.
My concern about this “vetted” approach is what will happen with all the plugins that don’t make this exclusive list. Maybe because they are in queue, maybe because they are _great_ for scale, but suck at UI, etc.
In the attempt of fixing this, we shouldn’t stagnate the repo making new devs frustrated.
I think there may be a way to make it clear that a “vetted” badge doesn’t mean other plug-ins failed a vetting test – just that they haven’t been vetted yet. But yes, it’s sticky, and risks discouraging new players.
This is, in large part, why I originally felt that this vetted repo needed to be independent of the official repo. A “third party” vetting / review site (even if it’s the usual suspects) would be understood to only review plug-ins they get around to reviewing. Making it part of the official repo, and influencing the browsing of said repo, makes this politically stickier.
If this ends up being a private effort, things get _so_ much easier. We can make a website for that in 4 hours, and each one of us gets to invite their ~5 most trusted persons in the community. Or only big name companies / freelancers / core contributors get to participate and rate plugins.
Here you won’t have 15 plugins a day, just a curated list with the best of the best.
The question is what to do with plugins that fail vetting. Leave them in the repo? How do you differentiate between not vetted vs failed?
Daniel – I agree, and I’m tempted to go forward with this as a starting point. But I do want to see the official repo improve, and don’t want to be the proverbial guy who took my toys and went home.
Shane – glad you raised this. It’s been in the back of my mind. Even for the official repo – what if it’s been in the repo, submitted for vetting, and failed? Do we have a “Failed Vetting” badge? Do we recommend it for removal? Perhaps it gets an “under review” badge, and we ask them to make changes?
I think that “failed” vetting brings on a key part of this debate. Are we trying to totally transform the repository or are we trying to showcase the cream of the crop?
If we’re going for complete transformation, a “failed” vet may include removal, or at least “hiding” (like is done with some old plugins now), from the repository.
If we’re just showcasing the cream of the crop, I think a soft fail is pretty acceptable, especially if the other two parts of this discussion are successful (meaning better plugins will get in to start with and have better reviews in general). That way a plugin that isn’t vetted can still be downloaded and used, and who knows, maybe non-vetted plugins are okay for a lot of normal sites.
I personally think there is still enormous value in broad plugin acceptance to the repository (even if it’s pretty strictly security benefits from not having loads of third party repositories and sketchy plugins – ie – like with themes already). So to me, a soft fail is preferred. If it’s a harder fail, then the process of vetting *really* needs to be thorough and very involved w/ the dev, otherwise it’ll be brutal to be denied a vetted status.
Perhaps the best approach is to display the outcome of the review process, and then, if it earned it, a vetting badge. The plugin could fail to earn vetting for a reason that many users might not care about (say scaling, for example) but shine in other aspects that matter.
@shane, I think that makes sense. A prominent “vetted” badge. A less prominent “tested” notice if it’s been tested but not vetted, with visual cues as to sections of pass/fail or scores or whatnot, in order to give some reasoning to a user as to what happened in that scenario.
Re the private effort conversation – it might be easier and it might help your clients out (fair enough), but 99% of WordPress users are going to keep on using the official repo / WordPress admin area for plugins. Far better if we can improve the official repo.
100% agree. But maybe a private effort can lead the way and prove that some strategy works and some other doesn’t.
It’s like when you propose a new feature to core. The standard(?) response is: go make a plugin. If that proves to have adoption and work well, it may get integrated in core.
One way of handling fails would be to have a “Vetting History” tab added somewhere on the plugin page. Hide it if there is no history but show it if there is a history. Show who/when vetted the plugin and the results. A plugin may not be “vetted” for minor reasons or a major reason, this lets all users see exactly why in an open/transparent manner. Especially if there are minor reasons something doesn’t pass I’d hate to see a failed badge on it.
This way if I see a plugin that has gone through the process but didn’t get the vetted badge I can see “OK it’s just not that great for load” or that there are huge security risks because nothing is escaped, no nonces, etc.
I keep thinking of the Twitter “verified” type badge with a different application. Not being verified doesn’t mean @pmgarman isn’t me, but if I have that verification it’s proven for sure it’s me.
I think we’re all on the same page with this (again, I’m just now playing catch up), but I don’t think the ultimate goal is to kill plugins.
No matter how experienced a developer you are, every time you build for a new platform, you have to cut your teeth on something. WordPress is no different. Keeping the doors wide open for new plugins – sort of the “Use at your own risk” tier for lack of a much friendlier term – is key.
But having this tier of vetted (or approved or stamped or labeled or starred or whatever arbitrary indicator is decided) is also important. And I don’t think we need to clutter the plugins page with 5 badges indicating security, code quality, UI, UX, and whatever else.
I think it needs to be simple (simple, not simplistic) – it either passes or it doesn’t. I don’t think it’s fair for us to complain about having higher standards of quality for plugins if we’re not willing to go all in.
By that, I mean that it’s not fair to say: “I want higher quality plugins – I’d aim for the badge of security and code quality but not be too concerned with UI/UX.”
I think it’s all or nothing.
Tom: How does a single binary yes/no help? It replaces a bad rating with a rating that tells a user it works for somebody but not necessarily for them, and it provides no additional context for them to decide if the rating is relevant to them. It’s also likely to be like Dmoz vs. Google; fiefdoms will emerge, and the plugins vetted will be those developed by friends of the vetting committee because of access. And this means many good plugins won’t get a reasonable chance to get vetted.
Instead let everyone who wants rate and review plugins, let everyone who reads reviews say if a review is helpful, and let people build their reputations based on the quality of their reviews. Then let the reviews of those with higher reputations have more weight than reviews by those with low or no reputation.
Finally, allow reviewers to freeform-rank attributes so that the rankings can, in aggregate, provide those who want to do more evaluation with more details about the plugin.
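The reputation-weighted idea above can be sketched in a few lines. Everything here is an illustrative assumption, not anything wordpress.org implements: the 0–1 reputation scale, the way helpful votes amplify a review’s weight, and the data shape are all made up for the sketch.

```python
# Sketch of reputation-weighted plugin ratings. The weighting formula
# and the (stars, reputation, helpful_votes) tuples are assumptions.

def weighted_rating(reviews):
    """Average star ratings, weighting each review by its author's
    reputation and by how many readers voted it helpful."""
    total, weight_sum = 0.0, 0.0
    for stars, reputation, helpful_votes in reviews:
        weight = reputation * (1 + helpful_votes)  # helpful votes amplify
        total += stars * weight
        weight_sum += weight
    return total / weight_sum if weight_sum else 0.0

reviews = [
    (5, 0.9, 12),  # trusted reviewer, voted helpful often
    (1, 0.1, 0),   # unknown reviewer, no helpful votes
]
print(round(weighted_rating(reviews), 2))  # the trusted review dominates
```

The point of the design is that a one-star drive-by from an unknown account barely moves the score, while a detailed review from a reviewer the community trusts carries real weight.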
@Mike – you’re right. I wasn’t intending to say that this particular rating should indicate whether or not it works across the board of hosts or that it’s compatible with every theme and that it encompasses every single issue we have around a plugin.
I meant only to suggest this around the code that goes into plugin. For example, we’ve been talking about:
UI guidelines (which I know are still up for discussion)
I meant that, as developers, if we’re having a discussion about plugin quality, then we need to target *all* of the criteria. It doesn’t make sense to me to have a discussion about improving the quality of the plugins, but only go in half-way.
If we truly care about the entire plugin experience (which I think everyone here does!), then we need to care about it on all fronts.
So this badge/indicator/whatever-its-called is meant only to say that it meets a certain set of rules for evaluation of development. It’s not a guarantee that it works on all hosts.
To me, that’s an entirely other rating (which is still something that needs to be covered).
Anyway, didn’t mean to be unclear in my initial comment.
Random question: Shouldn’t we vet plugin _versions_ instead of plugins in general? “The 1.3 version of Plugin XXX is vetted”. Because I’ve seen lots and lots of plugins f’k up really bad with an update.
It’s another great point. Especially with the “recommended” system. I think any badges / recommending has to be version specific…. which of course, might discourage people from upgrading their code… eek.
As I noted in my comment, I think it’s a good idea to link vetting to a specific version. I think the UI can show that it was previously vetted, but the current version hasn’t been yet. The question is, how do we convey that without causing user confusion?
And maybe let them install the vetted version… ? If there’s no way to know if the upgrade broke something, of what value is the past vetted version, really? I’d still need to audit it. Maybe show diff of changes since audit… ? Not useful to non-devs…
Something like “The current version (1.3.1) has not yet been vetted. Install 1.3 instead?”
That’s the idea. And maybe a “What Changed” tab that can include developer’s changelog and a tab with diffs for developer types? Maybe % of code change for non-devs?
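The “% of code change for non-devs” idea could be computed roughly like this. A real implementation would diff the plugin’s two SVN tags; this toy sketch just compares two lists of source lines with Python’s standard difflib.

```python
# Rough sketch of a "% of code changed between versions" indicator,
# using difflib on lists of source lines. The sample PHP lines are
# invented purely for illustration.
import difflib

def percent_changed(old_lines, new_lines):
    """Share of lines that differ between two versions, 0-100."""
    matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
    unchanged = sum(block.size for block in matcher.get_matching_blocks())
    total = max(len(old_lines), len(new_lines))
    return 0.0 if total == 0 else 100.0 * (total - unchanged) / total

old = ["<?php", "function init() {", "}", "register();"]
new = ["<?php", "function init() {", "    do_stuff();", "}", "register();"]
print(percent_changed(old, new))  # one added line out of five -> 20.0
```

A figure like “20% of the code changed since the last vetted version” is something a non-developer can act on, even without reading the diff itself.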
I don’t think that recommending older versions is aligned with the views of the repo. In most(?) cases an update is better / safer.
Daniel: I agree, and that’s why I think revetting updates should be a fairly high priority (and have a separate queue).
@otto42 , how many plugin updates does the repo get each day?
I think that having to vet versions can be a pretty monumental task. If it goes that route, I think it’d be important to get plugins to adopt the WP main project methodology for tagging. As in, version 1.0 and 1.1 are major versions, and would go in a vetting queue, but versions 1.0.1 or 1.1.2 would be minor and perhaps not require vetting. Still a bit of the honor system on a system like this, but I think that will be necessary no matter what. Plus, if someone was behaving badly and tagging major updates 1.x.x, then if they are flagged they could lose that status anyway. But vetting *every* update could be ridiculous. WP SEO I know off the top of my head sometimes updates multiple times in a week. Not feasible at scale.
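The major-vs-minor routing suggested above could look something like this sketch. The queue names and the rule (a change in either of the first two version components triggers a full review) are illustrative assumptions, not policy.

```python
# Sketch of routing plugin updates into review queues by version bump.
# Queue names are made up; the rule is: major or minor bump -> full
# vetting, patch-level bump -> expedited (diff-only) review.

def review_queue(old_version, new_version):
    old = old_version.split(".")
    new = new_version.split(".")
    if old[:2] != new[:2]:
        return "full-vetting"   # e.g. 1.0 -> 1.1
    return "expedited"          # e.g. 1.1.1 -> 1.1.2

print(review_queue("1.0", "1.1"))
print(review_queue("1.1.1", "1.1.2"))
```

As the comment notes, this still relies on the honor system: nothing stops an author from shipping major changes in a “patch” tag, so spot checks or flagging would be needed alongside it.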
Vetting every update would be the best solution, but I’m guessing it’s just not feasible. Vetting the plugin originally, but not the updates, has issues as discussed and is still a huge job.
So, what if we don’t vet the plugins – what if we vet the authors? Some sort of accreditation scheme where they have to demonstrate all the concepts required and earn an official ‘score’. You still let everyone in, so as to not discourage new plugin authors, but users can easily see the dev’s score, which is a signal of quality.
It’s not as good as vetting every version of every plugin, but it may be more feasible. Still loads of issues with it (what about old plugins from before they were accredited), but might be worth looking at.
We all have a few authors that we trust without even opening the code, but if we need to make it a general rule, I don’t trust people; I trust code. Even with my own code.
After all, anyone can have a really bad day.
While I love the idea of vetting authors rather than plugins (easier), the reality of it makes me uncomfortable.
Take a beginning dev. They submit a plugin for review. It goes into a 3-month loop as they get coached. Finally it passes. Are they vetted? If they tackle a new plugin which is far more complex and it fails, should it really get auto-vetting? I don’t think so. Even the best of us who are pushing boundaries will create a mess every so often. Peer review of the plugin will create quality. I work with some amazing devs, and sometimes they have off weeks (or months).
Second, the issue is that plugins are often created with a specific purpose and audience in mind. Some might be made to scale, others to improve a confusing process. How do we make these decisions transparent?
how many plugin updates does the repo get each day?
Well, it’s 17:14 now, and there have been 602 changes so far today. So, guesstimate would be around 800-1000 on average.
Not all changes result in a new version to a plugin. That’s harder to measure.
Otto. That metric is extra useless because many devs don’t use the repo svn to dev, as they prefer git. For example we dev elsewhere and then simply commit on version change. Any way to figure out the number of version changes?
No, I don’t have that information, nor know where to start even trying to figure it out. It’s not something we track.
However, if the majority of developers did as you describe, then that metric would actually be extra-useful, since most commits would be complete version updates. Since many people *do* dev via the SVN, the number is inflated.
I don’t think we’d be doing anyone any good vetting a plugin but not keeping track of the versions. Maybe have two types of “badges” on the plugin (not sure what exactly we are calling them): semi-transparent or grayscale vs. color, showing whether the current version is vetted or only a previous version was.
Another thing to think about is silent updates. Say that v1.1.0 gets pushed out and vetted. After being vetted, the dev decides to go into the 1.1.0 tag and make some nasty changes. Something would need to stop this from happening. Or a case like malicious SVN commits made by someone other than the dev? We will need to lock down vetted versions one way or the other.
I shared some of the plugin code review I currently do for my clients in the “Locking Down WordPress” Code Poet ebook. This was a security related response (as opposed to a performance related response), but I think it is worth sharing here:
If the client requests a plugin that I have never used before, I review the plugin files and the plugin developer(s). When I review the plugin files I specifically look for WordPress Plugin API hooks, actions, and filters, properly sanitized data and MySQL statements, unique namespace items, use of the Settings API for any plugin settings or options, and nonces instead of browser cookies. I review the developer to verify reasonable response times to support items and that the plugin is actively developed.
Some additions from my plugin review process:
1. Are scripts and styles enqueued correctly? Related: Is the plugin enqueueing yet ANOTHER version of jQuery?
2. Will front end scripts/styles only load in views where needed?
3. Verify the plugin can be activated, used and deactivated without throwing an exception or triggering an error.
4. Upon deactivation, does the plugin clear any database tables or rows added during use?
5. When uninstalled does the plugin remove any added database tables/rows and all files?
6. Does the use of the plugin cause memory leaks/spikes?
7. Does the use of the plugin increase the number of database queries required to load different views? If so, is the increase expected? If the plugin functionality was expected to increase database queries is the increased load reasonable?
Interesting. I think all those step could be implemented in an automatic tester like we have for themes.
+1 for an automated tester. I know there’s a WP Coding Standards sniffer, but I’m not sure if there’s any tool to check for any of these things. I think it *could* be done though.
Ryan, there is a Code Sniffer 1.3 plugin for WordPress standards: https://github.com/mrchrisadams/WordPress-Coding-Standards that, in conjunction with xDebug and some sort of profiler, would be helpful to start automating this…
That’s the one I was thinking of. That’ll check things like indentation, braces, etc, but it won’t check the things you noted above, which I think could be automated.
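As a sketch of what automation beyond style sniffing might look like, a scanner could grep plugin source for known anti-patterns. The pattern list below is purely illustrative (not an official theme-check-style ruleset), and the sample PHP snippet is invented.

```python
# Toy sketch of automating a few plugin checks beyond code style.
# The red-flag patterns are illustrative examples, not a real ruleset.
import re

RED_FLAGS = {
    "uses query_posts() instead of WP_Query": re.compile(r"\bquery_posts\s*\("),
    "echoes request data without escaping": re.compile(r"echo\s+\$_(GET|POST|REQUEST)"),
}

def scan(source):
    """Return descriptions of red flags found in a PHP source string."""
    return [desc for desc, pattern in RED_FLAGS.items() if pattern.search(source)]

sample = "<?php query_posts('cat=3'); echo $_GET['id'];"
print(scan(sample))  # flags both patterns in this invented snippet
```

Checks like these would only ever be a first pass: regexes can flag likely problems for a human reviewer, but they can’t confirm the kind of behavioral issues (memory spikes, query counts, cleanup on uninstall) in the list above.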
+1 for the points above.
I think there is *some* leeway in a few of these points. For example, not all plugins will need to clean up after themselves.
For example, I’ve got one plugin that, when activated, adds data to the end of comment content based on user input. If I were to back that stuff out upon plugin deactivation, it would literally remove data and break previous comments.
So I see this as kind of being a “Theme Check” style set of rules where there are :
Great list. Your having such a list and indicating you’ve done this for clients already also reinforces my sense that great developers like yourself are ALREADY doing this vetting… and we’re not capturing it. Sure you can vote, but 20 other people whose credibility means nothing to me got mixed in with your vote.
Yes. I have been thinking about solving the original problem you brought up – which seems more of an aside to the general Plugin Repo. A place where other developers/development companies can share the plugins they use, plugins created (that perhaps are not meant for non-developers) and even share code.
I would love to talk to you (Jake) and any other developers that have interest. Many of us doing client development work are doing double work, and I am interested in finding a way for us to work smarter.
I think an interesting question is… how are you documenting each one of these reviews so your team can always get back to them? Do you have a system / process for that?
Also, do you re-review each time the plugin gets an update?
This seems like something outside the repo to me and would totally make sense to collaborate on. Invite only to people who trust. Simple cpt called review – 1 per plugin. We are already doing the vetting consistently for projects.
Rachel / Shane – you really hit on the spirit of my initial tweet / idea.
I’m glad Otto flagged it, and he’s right that we should try to help improve the official repository, which (I’m going to say it again) is broken. I want clients and users to be able to see this list as much as other developers.
But in the meantime, upgrading the repo is a bit of a hornet’s nest politically, and incremental change is probably necessary and inevitable. A third party site can avoid this.
I even think that something like this would fit nicely into Code Poet. Code Poet seems like a way to create a social community around developers, and I think this would align quite well with that.
Good call Ryan, I sent Pete Davies a quick email on the topic.
I tend to think Code Poet is a little too geared toward smaller businesses (that may have a lot of plug-ins that would fail vetting) and a little too official (an Automattic project)…
I would also think about checking for things that aren’t necessarily “wrong”, but generally inadvisable. query_posts() comes to mind. Caching third-party API requests, etc.
To the thought of a third-party, independent ‘review’ repository – why not? Perhaps this can be hashed out and built at the dev day tomorrow?
My 2 cents
Which plugins to vet first – this is simple, we should be vetting the plugins with the highest download count. Why? Because these are the most risky plugins in the repository. More people are using these and there’s the potential for big headaches if a plugin with 500k downloads has a security issue. This is similar to how the Twitter verification process works, they obviously prioritize verifying someone like Obama compared to some random dude with 10 followers. If this happened though we’d have to prevent people from artificially boosting their download count, perhaps look at active installs instead.
Vet Random Plugins – This is to add a little fairness to the system. You could commit to vet every 10th plugin submission. This means no matter how popular your plugin is there’s still the chance of it being vetted.
Trusted Plugin Author Badges – The biggest problem with this vetting system is plugin updates. This would somewhat fix that. We could implement a badge that recognises “trusted plugin authors”. We’d have to figure out a way of determining this, but a relatively fair way would be if the author has a certain number of previously vetted plugins (3 for example). This way more people are going to trust their updates and even their new plugins, which may not be vetted yet.
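The trusted-author rule could be as simple as this sketch. The threshold of 3 comes from the comment above; the data shape is an assumption for illustration.

```python
# Sketch of the "trusted plugin author" badge rule: an author earns
# the badge after a threshold number of previously vetted plugins.
# The threshold and data shape are illustrative assumptions.

TRUSTED_THRESHOLD = 3

def is_trusted(author_plugins):
    """author_plugins: list of dicts with a 'vetted' flag."""
    vetted = sum(1 for plugin in author_plugins if plugin["vetted"])
    return vetted >= TRUSTED_THRESHOLD

history = [{"vetted": True}, {"vetted": True},
           {"vetted": False}, {"vetted": True}]
print(is_trusted(history))  # three vetted plugins meets the threshold
```

As later comments note, the badge should probably signal trust to users without exempting the author’s updates from review entirely, since even trusted authors have off weeks.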
Even every 10th (do we count upgrades??) is probably undoable, but I actually like the idea of vetting by popularity. Maybe start with any plug-in that has greater than 100 downloads per day?
I would say only new plugins (i.e. not updates). The number was just something random; we would obviously need to pick a number that works. The point was to introduce a random factor into the mix so that people can’t complain that their plugins aren’t being chosen to be vetted.
There’s a caveat to that. There are categories that are organically less popular than others. I.e.: a _great_ plugin that integrates WP with Pinboard may never get 100 downloads per day.
I think that’d be a good idea to start with until a significant portion of the repository has been vetted, but I don’t think that should be permanent.
What about prioritizing then? Maybe all plugins over 100k downloads are prioritized to be vetted first. Once those are done then a new method for selecting which plugins are vetted takes precedence. I’m pushing for this mainly as most users go by the thought of “hmm this has heaps of downloads, it must be good”
I think another thing to take into consideration is: once a plugin is reviewed, we should have some kind of way of displaying both if it was vetted AND if it wasn’t vetted (and why). This means while most plugins will still be accepted into the repository, people can see that a plugin was reviewed and NOT vetted; otherwise there’s no real transparency.
+1 for starting with high download plugins. Let’s get the high-profile, poor performers weeded out first.
Is vetting truly a yes / no? Perhaps ultimately it is a pass / fail, but includes a few scales (example zappos: http://cl.ly/Inf4) with a few key items like:
* Code Quality
* WP Way
* Usability
* Setup
As for failures, there are 2 types.
1 > prolonged reviews in which the dev team engages and gets the plugin to pass. We need to decide if that is a fail until it is a pass or just stay under review.
2 > fails which do not get the attention of the dev, in which they are either offered the option to remove the plugin or have a fail badge on it.
this does imply a willingness on the review team to engage, educate and help people come into compliance.
I am honestly more concerned about this process alienating people than a more stringent series of initial requirements.
As for “Trusted Plugin Author Badges”, that is a seriously great idea and fits into the profile that the team is currently working on.
I will say that having built in peer review is a wonderful thing and that no code deserves auto-vetting, I don’t care who it is. We are in an eternal process of growth and improvement. Some of the best coders I know rarely consider the ux implications of their code decisions.
This is just personal preference but I’m not sure that UX should be considered in the vetting process.
Simply because it’s a biased decision. Some people choose to follow “WordPress” style UI and some don’t. Would this mean that Jetpack fails the vetting process because they don’t follow WordPress-esque styling?
Some areas that are completely unbiased that everyone can agree on for vetting purposes:
Spammy / Scammy behaviour
Phone home tactics
I hear you. I totally disagree though. I am actively working with the core ux team to build a set of usability guidelines (a la apple) for plugins: http://make.wordpress.org/ui/2012/08/15/meeting-summary-for-814/. It is the wild west right now. I am not saying that we need to enforce “the wp way” 100% but rather that we can identify if this is a decent experience as a user or simply befuddling. Much of it is simply are people using common sense (as helen stated).
If it’s awesome code (secure, scales, respectful of privacy) but I couldn’t even figure out how to use it after activation, it is not worthy of a vetted badge. Part of that is the plugin interface and part of that is the documentation that should be reviewed.
Ok awesome, this would work if we had said guidelines. I was assuming that this would be on a personal “Hmm I don’t really like this interface” basis.
“no code deserves auto-vetting, I don’t care who it is” <– agreed
Shane – until we have real UX guidelines, I don't agree that it should be part of the vetting process, although perhaps a separate rating metric for general users.
It is "opinionated" without official guidelines, and more important, users are able to judge this on their own – it's what they see.
I really believe users can judge for themselves whether it looks good enough for their needs. My concern right now is that they have no idea what's lurking in the code, nor should they have to worry about this.
Wondering the effect that can have on a new WP dev to have his first plugin marked as not-vetted. Because we can argue that is a great opportunity to learn.. but…
Maybe we put a time limit on it? So 1 month (arbitrary time) after the review if they haven’t “fixed” the plugin then they’re given the not-vetted label. This would obviously have to be done in a tasteful manner though. The feedback would be a critical aspect here.
The plugin could have several vetted statuses attached to it also:
Previous version(s) vetted
Not-vetted or failed vetting?
“Not vetted” I’m less worried about – it’s a bonus, and we can explain how we pick which plug-ins we vet.
“Failed vetting” – this needs more exploration, but I think most developers would welcome the opportunity for free feedback from expert devs.
I was referring to failed vettings.
I think most developers would welcome the opportunity for free feedback from expert devs.
I’ll reply to this when I stop laughing… it may be some time.
Didn’t work that way for the theme review team. Lots and lots and lots of theme authors didn’t like having to go through the process. Of course, that may be a different situation, since a “fail” in that case meant the theme didn’t go in the directory. Many theme authors left, but since these tended to be spammy authors or just making pretty poor themes in general, no big loss. Eventually a large amount of theme authors improved their code, and started getting in. So the standards went up a bit. Rinse, repeat.
Nobody likes criticism, really. Even when you couch it in “expert advice” or something, people tend to react badly. Expect it.
Otto – I really think there’s a difference between “sorry, you’re rejected” and “to earn the vetted badge, here are some tips.” Maybe I am naive on this point, and sure, some will have bruised egos, but when they realize the tips are valuable and nothing has been taken away from them, and assuming the reviewers have a positive style, I think there will be some level of respect / appreciation. Think of it as catching a bug for another developer, and making a suggestion for a fix.
It’s not my case, but I know for a fact that there are some amazing project managers here. Can anyone take a stab at how much manpower do we need to achieve this? Not talking about creating the tools to do it. I mean to have a good chunk of the repo vetted.
@otto – I need some stats and could probably think this through.
How many plugins over 100k downloads are in the repo?
How many plugin in the repo which have been actively updated in the last 2 years
How many new plugin submission a day
How many updates a day
Anything else you think relevant.
How many plugins over 100k downloads are in the repo?
How many plugin in the repo which have been actively updated in the last 2 years
How many new plugin submission a day
20-30, on average.
How many updates a day
Probably around 800-1000 check-ins a day to the repo, on average.
Thanks otto – anyone know the pace by which the theme team reviews? how many themes / day on avg?
Even with a couple plugins a day it would take a year just to do the top plugins. We could probably setup a few hack days to hammer through the big ones?
They used to track that information themselves, however I don’t think anybody has updated the numbers in a while:
Hey guys, deeply sad I missed that the Google Hangout was happening.
I just watched the entire video and I’m a bit concerned about this “vetting” process where some people get to be trusted reviewers and some plugins get to be vetted. It sounds like what you are trying to build is the Yahoo directory of the ’90s, when we know that Google’s algorithmic search was a lot more scalable.
Can I propose that we implement a reputation system for community members and for plugins based on numerous signals instead of a manual selection process, and that we enable everyone to become a reviewer, so that the best reviewers’ reviews can bubble up to the top? We can then weight the reviews based on the reviewer’s reputation and how helpful the review is voted to be (like Amazon.com’s “was this review helpful?”).
This type of system would encourage companies like Jake’s 10up to capture their efforts when they rate plugins, because good reviews that are voted up would improve 10up’s brand on WordPress.org. This way lots of people will be motivated to write really helpful reviews, because it will help them raise their standing in the community and help them present themselves as having a high reputation to promote their services to prospective clients.
I’ve also got some thoughts about what plugin reviews might look like which I’ll post in topic #2.
I think we’re already headed to a review system that exposes the reviewers and (hopefully) allows for up / down ratings on reviews, which will take us along this path (see my reply to Matt, way up in this P2).
I also agree that a positive indirect consequence of this system would be more constructive involvement in the repository by community members, including 10up.
My original thinking was that this could happen in parallel with vetting, but my thinking is changing.
I’m not clear on what you mean in the last sentence, can you clarify? Also, I don’t see any comments from Matt on this page so I can’t find the comment you mention. Got a link?
I absolutely agree with you, Mike. The “vetting” idea actually really concerns me, both from an implementation and management perspective and from a user’s point of view.