[development] Drupal Administration survey II --
looking for volunteers to do interviews
kieran at civicspacelabs.org
Tue Aug 1 22:51:11 UTC 2006
On Aug 1, 2006, at 3:27 PM, Gary Feldman wrote:
> Kieran Lal wrote:
>> On Jul 31, 2006, at 10:54 AM, Gary Feldman wrote:
>>> Can you state some more specific goals for the survey?
>> To understand Drupal administrators' situation when they are
>> administering, their goals, and the tasks they are trying to
>> accomplish.
> That's still pretty vague. It could be addressed by a single
> question that just asks administrators to describe such things.
> But in order to come up with more specific (and useful) questions,
> it's good to have more specific goals.
> If you're really just trying to improve your abstract understanding
> and appreciation of Drupal administrators, contextual inquiry would
> be better than a survey, and better than a scripted interview
> (though both could be done in one visit).
I am struggling to get interviewers. If you think we can get people
to do contextual inquiry then I'd be happy to combine those results.
Survey data doesn't stand on its own. The fact that 900 Drupal
administrators were involved lends weight to those results.
The fact that ~900 people completed the survey (versus 200 for the
documentation survey) indicates the community thinks these are
important issues. I am as interested in the process of engaging the
community at large to help with improving the user experience as I am
in actually changing the Drupal software user experience.
>> The survey was part of a larger effort to improve Drupal
>> administration. The survey specifically helped to identify tasks
>> Drupal administrators were trying to accomplish so we could
>> improve their ability to complete those tasks.
> Forgive my nitpicking, but unless there's something more
> complicated about the way the survey was done than indicated by the
> results, the survey started out with its own list of identified
> tasks. There are about thirty in question 7. Or did the survey
> actually ask those questions with no (or just a couple) of tasks
> listed, and then somebody took the prose results and organized them
> into a manageable number of tasks?
We did interviews initially, then took the tasks identified in
those interviews and offered a selection of tasks on a Likert scale,
as requested by Charlie Lowe, who taught audience analysis at Purdue.
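For anyone curious how figures like "terminology was difficult for 30%
of respondents" fall out of Likert responses, here is a small sketch.
The task names and scores below are made up for illustration; this is
not the actual survey tooling.

```python
from collections import Counter

# Hypothetical Likert responses (1 = very easy .. 5 = very difficult)
# for two illustrative tasks; the data is invented, not survey results.
responses = {
    "Manage terminology": [5, 4, 5, 2, 3, 5, 1, 4, 5, 2],
    "Administer site structure": [4, 5, 3, 5, 2, 5, 4, 1, 3, 5],
}

for task, scores in responses.items():
    counts = Counter(scores)
    # Treat ratings of 4 and 5 as "difficult" / "very difficult"
    pct_difficult = 100 * (counts[4] + counts[5]) / len(scores)
    print(f"{task}: {pct_difficult:.0f}% found it difficult")
```

The same tally, run per task over the real responses, is what yields
the percentages quoted later in this thread.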
> Question 8 appears to have identified a handful of tasks not on the
> original list, although it's not clear to me whether those are all
> tasks that administrators do or things they want (e.g. does "group
> tasks logically" mean that administrators have their own tasks that
> somehow need to be grouped? or more likely, do they want Drupal's
> administration tasks to be grouped logically?) That's good as far
> as it goes, as long as it's understood that they're giving their
> own perceptions and conclusions, which don't necessarily reflect
Agreed, survey responses are not as accurate as contextual inquiry,
which is not as accurate as direct observation, which is not as
accurate as logging everything that every Drupal site does. I think a
feedback module in the core distribution that voluntarily sent back
analysis would be great!
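No such module exists in core; as a rough sketch of the idea, the
opt-in reporting could look like the following. The function names,
endpoint, and payload fields are all invented for illustration and are
not part of any real Drupal API.

```python
import json
import urllib.request

def build_report(task_counts):
    """Serialize anonymized admin-task usage counts for submission.
    (Hypothetical payload shape, invented for this sketch.)"""
    return json.dumps({"admin_task_counts": task_counts}).encode("utf-8")

def send_usage_report(opted_in, task_counts,
                      endpoint="https://example.org/feedback"):
    """POST the report to a collection endpoint, but only if the
    site administrator has explicitly opted in."""
    if not opted_in:  # strictly voluntary: do nothing by default
        return None
    req = urllib.request.Request(
        endpoint,
        data=build_report(task_counts),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

The key design point is that reporting is off unless explicitly
enabled, so sites share usage data only by choice.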
>> Surveys serve a narrow purpose. They allow broad participation
>> from the community as a whole and they help provide feedback to
>> the Drupal core development process. I would use different user
>> experience techniques to evaluate some of these measures. For
>> example, I use analysis of search terms on Drupal.org and comments
>> in the Drupal handbook to track what people are interested in and
>> what they are having trouble learning.
> Narrow in the sense that they need to be focused? Or in the sense
> that they're limited in what they can do? I agree with both, which
> is why I'd like to see good, focused goals. I also agree that a
> combination of various types of user data is good.
They need to be focused on Drupal administrators and administration.
>>> Another way to phrase my question is what decisions do you hope
>>> to make based on the results of the survey?
>> I can't make decisions for the larger community, but I would hope
>> that developers, consulting firms, and Drupal site owners would
>> choose to put their resources to improving the most difficult and
>> important tasks identified in the survey.
> Ah, finally some specific questions: What's difficult to do?
> What's important to work on?
> For the stuff that's difficult, the next question is what makes it
> difficult? For some specific tasks, there might be useful survey
> questions. For example, terminology was difficult for 30% of the
> respondents, so it might be interesting to ask how hard is it to
> find a definition and once you've found it, how hard is it to
> understand. But for something like administering the structure of
> a site (32% found it difficult), is it because people have hard
> things they want to do with the structure? Or is it because they
> didn't structure it well in the first place? Or is it because
> they're unaware of features that would make it easier? There might
> be some good survey questions to ask for those, especially if you
> bring to bear the collective knowledge about Drupal administrators
> (which is much greater than my own), but I can't think of any such
I think we had significant and successful improvements from the last
survey. Once a problem area is identified, we are capable of coming
to good consensus solutions through Drupal community analysis on the
>> I would use the results to direct where CivicSpace makes its
>> investments in improving the user experience of administering
>> Drupal. I would encourage and validate others' efforts to do
>> likewise. For example, in the last survey we identified that
>> making your theme work across all browsers was the most difficult
>> Drupal administration task. If that result was validated again in
>> this survey I'd probably post emails and contact consulting firms
>> and customers encouraging them to fund Drupal theming improvements.
> Is that the item that reads "Manage inconsistency in themes"?
> Should that be "inconsistency in browsers"?
I used the terminology that came up in the interviews. Next time I
would spend more time figuring out what that meant before adding it
to the survey.
> Regardless, this is a good example of the limits of a survey (if I
> understand it correctly). This may well be something that's high
> on their list, but it's not obvious how that applies to Drupal
> (especially when the quickest solution might be to just wait for IE
> 7 to become popular, assuming MS fully and correctly supports CSS
> 2.1 with it). It might be that the most productive thing Drupal
> could do here would be to recommend some other open source tools
> that focus on this particular problem.
>> In the survey I identified categorization as being the third most
>> "Very Difficult task". I didn't understand why. I conducted a
>> small follow-up survey of a dozen people to understand why
>> categorization was important. What I learned was that for non-
>> profits and advocacy groups it was very important that they be
>> able to communicate the structure of their organization and the
>> goals of their organization through their website categorization.
>> I also learned that these users treated categorization as three
>> distinct tasks: managing categories, navigating by categories,
>> and organizing by categories. That led to a review of over 20
>> taxonomy modules, and we built a taxonomy garden to make it easier
>> for Drupal administrators to understand how to use categories and
>> the available modules. You can see the results of that work here:
>> http://drupal.org/node/47822 Managing categories
>> http://drupal.org/node/47623 Navigating by categories
>> http://drupal.org/node/47527 Organizing content by categories
> I'm not really sure what to make of these. From the titles, I was
> expecting something more in the way of strategies than module
> descriptions. In this context, it might be most helpful to
> understand the mappings between the user data and the modules. For
> example, how does Taxonomy XML fit in? Are there specific problems
> in the user data that it solves? Or is the point merely that the
> taxonomy modules were organized in a way that corresponds to the
> user's distinctions?
> My conclusions:
> In trying to put this all together in my mind, I'm wondering if there
> is a need for a second survey to assign priorities to user tasks and
> problems (which is the way I would describe the results of the
> first survey, instead of the more general "get a better
> understanding"). If so, then what were the problems or
> deficiencies of that survey?
Not enough volunteer time was available to summarize it.
> Or would it be more valuable, to pick the top 3-5 issues from the
> first survey, and collect data around those?
Well, how about we get some more volunteers to do the 10 interviews
first? Then we can plan tertiary studies later ;-)
> These are the sorts of questions that would help me construct a
> survey (or decide to use another method for collecting user data at
> the moment). And no, I'm not pretending to be a survey
> statistician or other expert; I'm coming from a background in
> requirements gathering and management.
Understood. The feedback is useful, but let's get some interviews
done first and then try to design the survey from what we learn in
those interviews.