[development] DevScore Module

Laura Scott pinglaura at gmail.com
Thu Mar 24 17:51:04 UTC 2011

On Mar 24, 2011, at 11:22 AM, Greg Knaddison wrote:

> On Thu, Mar 24, 2011 at 10:10 AM, Laura Scott <pinglaura at gmail.com> wrote:
>> On Mar 24, 2011, at 9:39 AM, Greg Knaddison wrote:
>>> But taken in aggregate across a bunch of sites it could be a
>>> reasonably useful metric. If we want to compare developers' skills or
>>> the quality of sites in a scalable automated way then we have to base
>>> it on some metrics that may not be perfect but are reasonable proxies
>>> for real measures of skill.
>> There are so many outside variables that can skew this, that I don't see how this could work:
>> * Did the client have an adequate budget? Underbudgeted projects can have more problems, and that's not a measure of a developer's quality of work.
> That's a problem on a specific site, but it wouldn't be a problem in
> broad aggregate which, I guess, is how this is meant to be used.

How many sites will a single developer build over time? Maybe 5-10 a year, and that's if they're relatively simple, against a baseline of roughly 2000 working hours. It will take many years to build up a "broad aggregate." Of course, shops can work much faster, but then you're no longer measuring one developer.
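A quick back-of-the-envelope makes the point. All figures here are illustrative assumptions in the same range as above (7 sites/year as the midpoint of 5-10, and a hypothetical 50-site sample for a stable average), not data from anyone's proposal:

```python
# Back-of-the-envelope: how long until one developer has a "broad aggregate"?
# All figures are illustrative assumptions, not numbers from any real module.
HOURS_PER_YEAR = 2000   # baseline working hours per year
SITES_PER_YEAR = 7      # midpoint of the 5-10 estimate for simpler sites
SAMPLE_NEEDED = 50      # hypothetical sample size for a stable per-developer average

hours_per_site = HOURS_PER_YEAR / SITES_PER_YEAR
years_to_sample = SAMPLE_NEEDED / SITES_PER_YEAR

print(f"~{hours_per_site:.0f} hours per site")
print(f"~{years_to_sample:.1f} years to accumulate {SAMPLE_NEEDED} sites")
```

Even with generous assumptions, the sample accumulates over most of a decade, during which the developer's actual skill level has presumably changed.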

>> * Does the developer have exclusive control? Unless the developer is providing ongoing maintenance on a completed site, and did all the development in the first place, the site build is not necessarily the developer's -- all the more so if everything now is under someone else's control. Other developers may be brought on board, and their work reflects on the original developer's reputation. A well-built site could end up very troubled, dragging down the original developer's score.
> Again, we don't know how this proposed system will work but your
> strawman could easily be fixed by taking the score at time of launch
> and crediting the initial developer with that score vs. later
> developers getting incremental scores.

That doesn't address who else has fingers in the code. And if you follow the "release early and often" motto, the initial release is almost certainly going to run into issues.

>> * What does number of nodes and members have to do with quality of work? Sheesh!
> It is a proxy for the relative size of the site. Visitors are another
> example. If the site is an e-commerce or donation focused site you
> could compare dollars in revenue.
> I think most folks agree that someone who can build a brochure site
> for the local ice cream shop has fewer skills than someone who builds
> a site meant to hold hundreds of thousands of nodes and users. The
> number of nodes/members gives a rough indication of which kind of site
> it is especially when taken in aggregate across several sites.

I think it's a mistake to equate volume with quality. It really depends upon the use cases.

>> * If you're going to measure SEO, you need to know the site goals. Does the site even answer the needs of the client? Or did the developer say "Drupal doesn't do that" and give the client something else that happens to ping the metrics this kind of module might monitor? Getting a lot of traffic from the wrong audience is not good SEO, and you can't measure that in a module that just looks for structural components. And you can't blame or credit good or bad SEO on the developer if the developer is not developing content as well as the site software. (Example: A random post I made about Marilyn Monroe is still one of the highest traffic posts on my personal blog. A blunt measure would say "great!" But that traffic has nothing to do with what I blog about in general, and if it were a business site that off-topic traffic from an audience I'm not trying to reach would be next to useless, unless I were only after selling ad impressions.)
> There are currently two or three modules that measure progress in SEO
> that are relatively popular. So...some of the rough measures of whether
> a site has achieved good SEO can be captured.

Only bad SEO can be partially measured, by noting what's missing. Good SEO can't be quantified programmatically unless you can measure the site's goals and conversion rates. See my example above. Maybe SEO is the wrong term here; something like Search Engine Validity would be more appropriate. Real SEO is more than having the right technical architecture.
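To make the asymmetry concrete, here is a minimal sketch of what "noting what's missing" can catch programmatically: structural elements whose absence is a red flag. The checklist and markup are hypothetical, not taken from any of the modules mentioned above, and note that a page can pass every one of these checks while drawing entirely the wrong audience:

```python
from html.parser import HTMLParser

class StructuralSEOCheck(HTMLParser):
    """Flags missing structural SEO elements. Absence is measurable;
    whether the content reaches the right audience is not."""
    def __init__(self):
        super().__init__()
        self.found = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("title", "h1"):
            self.found.add(tag)
        if tag == "meta" and attrs.get("name") == "description":
            self.found.add("meta description")

def missing_elements(html):
    # Hypothetical checklist; a real audit module would check far more.
    checker = StructuralSEOCheck()
    checker.feed(html)
    return {"title", "h1", "meta description"} - checker.found

page = "<html><head><title>Shop</title></head><body><p>Hi</p></body></html>"
print(missing_elements(page))  # flags what's absent, says nothing about audience fit
```

The function returns only negatives. There is no corresponding check that could confirm the Marilyn Monroe post is reaching the audience the site actually wants.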

>> IMHO, it's next to impossible to automate scoring this way of what is knowledge work, especially when what you're measuring has so many unknowns.
> It depends on your goal - if your goal is the perfect measure of
> knowledge work then of course it's impossible. If your goal is just
> something better than the other options then there's plenty of
> progress to be made.
> As always, we shouldn't let perfect be the enemy of progress.

On the other hand, it's a mistake to believe you can draw solid conclusions from bad or unreliable data.

If this were developed as a module for the developer, to help follow best practices, like your own security module, then it would be a different thing. As that, in fact, it could be very useful. But otherwise I feel it would be just another standardized test that measures little while giving people the belief that they know a lot more than they really do. (Like with our public schools.)
