[development] DevScore Module

Laura Scott pinglaura at gmail.com
Thu Mar 24 16:10:57 UTC 2011

On Mar 24, 2011, at 9:39 AM, Greg Knaddison wrote:

> But taken in aggregate across a bunch of sites it could be a
> reasonably useful metric. If we want to compare developer's skills or
> the quality of sites in a scalable automated way then we have to base
> it on some metrics that may not be perfect but are reasonably proxies
> for real measures of skill.

There are so many outside variables that can skew this that I don't see how it could work:

* Did the client have an adequate budget? Underbudgeted projects can have more problems, and that's not a measure of a developer's quality of work.

* How are security issues handled? Will a client want a module like this on their site when it's advertising bugs and potential security flaws on their site?

* Is the site being maintained? As new releases come out, maybe the client isn't interested in paying for those updates being deployed. Maybe the client is ignoring them altogether. Maybe the client has other priorities and doesn't give a crap anymore.

* Does the developer have exclusive control? Unless the developer is providing ongoing maintenance on a completed site, and did all the development in the first place, the site build is not necessarily the developer's -- all the more so if everything is now under someone else's control. Other developers may be brought on board, and their work reflects on the original developer's reputation. A well-built site could end up very troubled, dragging down the original developer's score.

* Some modules are maintained better than others. A good module today may end up being unsupported for any of myriad reasons. Does a poorly maintained module end up reflecting on the site developer who never touched the code of that module?

* What does number of nodes and members have to do with quality of work? Sheesh!

* How does this reveal bad practices? Will it reveal that the developer has hard-coded blocks into the theme? Or makes direct database queries from a template? Or that code that doesn't throw errors still fails to do what it's supposed to do? Or that all the custom work is uncommented and unreadable? Or that core was hacked?

* How is the difficulty of execution handled? Does this measure integration points with third-party systems? Does it measure migration of legacy data? Does it measure the difficulty of custom module development?

* If you're going to measure SEO, you need to know the site goals. Does the site even answer the needs of the client? Or did the developer say "Drupal doesn't do that" and give the client something else that happens to ping the metrics this kind of module might monitor? Getting a lot of traffic from the wrong audience is not good SEO, and you can't measure that in a module that just looks for structural components. And you can't blame or credit good or bad SEO on the developer if the developer is not developing content as well as the site software. (Example: A random post I made about Marilyn Monroe is still one of the highest traffic posts on my personal blog. A blunt measure would say "great!" But that traffic has nothing to do with what I blog about in general, and if it were a business site that off-topic traffic from an audience I'm not trying to reach would be next to useless, unless I were only after selling ad impressions.)
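To make the objection concrete, here's a toy sketch (every field name and weight below is invented for illustration -- nothing here is from an actual DevScore proposal) of what any automated score has to look like: a weighted sum of machine-readable site facts. The confounding is immediate: the same developer's build scores very differently depending on whether the client keeps paying for maintenance.

```python
# Hypothetical sketch of an automated "DevScore"-style metric.
# All field names and weights are invented for illustration.

def dev_score(site):
    """Naive weighted score built only from machine-readable facts."""
    return (
        2.0 * site["security_updates_applied"]
        - 3.0 * site["known_vulnerabilities"]
        + 0.001 * site["node_count"]  # measures volume, not quality
    )

# Two sites built identically by the same developer:
well_funded = {"security_updates_applied": 12,
               "known_vulnerabilities": 0,
               "node_count": 5000}

# Same build, but the client stopped paying for updates:
abandoned = {"security_updates_applied": 2,
             "known_vulnerabilities": 4,
             "node_count": 5000}

print(dev_score(well_funded))  # 29.0
print(dev_score(abandoned))    # -3.0
```

Identical workmanship, wildly different scores -- the metric is measuring the client's budget and priorities, not the developer's skill.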

IMHO, it's next to impossible to automate scoring of knowledge work this way, especially when what you're measuring has so many unknowns.

(Sorry for the rant.)
