
From The Art Of Community by O'Reilly (http://www.artofcommunityonline.org) by Jono Bacon

Hooks ’n’ Data

So far we’ve discussed the importance of gathering feedback and measurements from your community, and that the focal point is the goals that we decided on in our strategic plan. The next step is to build into each goal a feedback loop that can deliver information about our progress on the goal.

This feedback loop is composed of two components—hooks and data:

Hooks

A hook is a medium or resource from which we can slurp out useful information about our goal. As an example, if our goal was to reduce crime in a neighborhood, a hook could be local crime reports from the police. The reason I call them hooks is that they are the protruding access points from which we can extract interesting information.

Data

If a hook is the medium that provides useful information, data is the information itself. Using our previous example of a goal to reduce crime in a neighborhood, the hook (local crime reports) could provide data such as “10 crimes this month.” The data is composed of two attributes: the content itself and the unit of measurement. The kind of unit determines how the data can be displayed (e.g., numerical units are great for graphs).

To help understand this further, let’s look at an example. In the Ubuntu community, my team has worked to help increase the number of people who become new developers. In our strategic plan we created an objective to increase the number of community developers and fleshed it out with goals for improving developer documentation, awareness, and education. Each goal had the expected set of actions. For us to effectively track progress on the objective, we needed data about developer growth.

Fortunately, we have access to a system called Launchpad (http://www.launchpad.net), which is where all Ubuntu developers do their work. This system was an enormous hook that we could use to extract data. To do this we gathered several types of data:

  • The current number of developers (e.g., 50 developers).
  • How long new contributions from prospective developers took to be mentored by existing developers (e.g., 1.4 weeks).
  • How many of these new contributions are outstanding for mentoring (e.g., 23 contributions).

Launchpad had all of this information available. Using some computer programs created by Daniel Holbach, we could extract the data. This allowed us to track not only the current number of developers but also how quickly progress was being made: we knew that if the number of developers was regularly growing, we were making progress. We could also use this data to assess the primary tool that new developers use to participate in Ubuntu: the queue of new contributions to be mentored. When a new developer wants to contribute, she adds her contribution to this queue. Our existing developers then review the item, provide feedback, and if it is suitable, commit it.

By having data on the average time something sits on that queue as well as the number of outstanding items, we could (a) set reasonable expectations, and (b) ensure that the facility was working as well as possible.

In this example, Launchpad was a hook. Using it to physically grab the data we needed required some specialist knowledge: a script was written in Python that used the Launchpad API to gather the data, which was then formatted as HTML for viewing.
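To make this more concrete, here is a rough sketch of the kind of script that can pull such data out of Launchpad. It is not Daniel’s actual code: it assumes the launchpadlib Python library, and the team name and the attribute queried are placeholders for illustration rather than the exact ones the Ubuntu team uses.

# A rough sketch, not the real script: count the members of a Launchpad team
# using the launchpadlib library. The team name 'ubuntu-core-dev' and the
# 'members' attribute are assumptions for illustration; consult the Launchpad
# API documentation for the exact collections your project needs.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('hooks-and-data-example', 'production')

team = lp.people['ubuntu-core-dev']   # hypothetical team to measure
developer_count = len(team.members)   # current number of approved developers

print('Current number of developers:', developer_count)

The same pattern of logging in, pulling a collection, and counting or averaging it also covers the mentoring-queue data, and the output can then be written to HTML or CSV for display.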

Launchpad was an obvious hook, but not the only one. Although Launchpad could provide excellent numbers, it could not give us personal perspectives and opinions. What were the thoughts, praise, concerns, and other views about our developer processes and how well they worked? More specifically, how easy was it to get approved as an Ubuntu developer? To gather this feedback, our hook was a developer survey designed for prospective and new developers. We could direct this survey to another hook: the list of the most recently approved developers and their contact details. This group of people would be an excellent source of feedback, as they had just been through the developer approval process and it would be fresh in their minds.

With so many hooks available to communities, I obviously cannot cover the specific details of how to use them. This would turn The Art of Community into War and Peace—complete with tragic outcome (at least for the author). Fortunately, the specifics are not of interest, as all hooks can be broadly divided into three categories:

Statistics and automated data

Hooks in this category primarily deal with numbers, and numbers can be automatically manipulated into statistics.

Surveys and structured feedback

These hooks primarily deal with words and sentences and methods of gathering them.

Observational tests

These hooks are visual observations that can provide insight into how people use things.

Let’s take a walk through the neighborhood of each of these hooks and learn a little more about them.

Statistics and Automated Data

People have a love/hate relationship with statistics. Gregg Easterbrook in The New Republic said, “Torture numbers, and they’ll confess to anything.” Despite the cynicism that surrounds statistics, they turn up insistently on television, in newspapers, on websites, and even in general pub and restaurant chitchat. The problem with the general presentation of statistics is that the numbers are often used to make the point itself instead of being an indicator of a wider conclusion.

Statistics are merely indicators. They are the metaphorical equivalent to the numbers and gauges on the dashboard of a car: no single reading can advise on the health of the car. The gauges, along with the sound of the car itself, the handling, look and feel, and smell of burning rubber all combine to give you an indication that your beloved motor may be under the weather. Despite the butchered reputation of statistics, they can offer us valuable insight into the status quo of our community. Statistics can provide hard evidence of how aspects of your community are functioning.

Many hooks can deliver numerical data. A few examples:

  • Forums and mailing lists can deliver the number of posts and number of members.
  • Your website can deliver the number of visitors and downloads.
  • Your meeting notes can deliver the number of participants and number of topics discussed.
  • Your development tools can deliver the number of lines of code written, number of commits made to the source repository, and number of developers.
  • Your wiki can deliver the number of users and number of pages.

For us to get the most out of statistics, we need to understand the mechanics of our community and which hooks can deliver data from those mechanics. We will discuss how to find hooks from these mechanics later in this chapter.

The risks of interpretation

Although statistics can provide compelling documentation of the current status quo of your community, they require skill to be interpreted properly. A great example of this is forum posts. Many online communities use discussion forums, the online message boards in which you can post messages to a common topic (known in forums parlance as a thread). Within most forums there is one statistic that everyone seems to have something of a love affair with: the total number of posts made by each user.

It’s easy to see how people draw this conclusion. If you have three users, one with 2 posts, one with 200 posts and one with 2,000 posts, it’s tempting to believe that the user with 2,000 posts has more insight, experience, and wisdom. Many forums leap aboard this perspective and provide labels based upon the number of posts. As an example, a forum could have these labels:

  • 0–100 posts: New to the Forum
  • 101–500 posts: On the Road to Greatness
  • 501–1,500 posts: Regular Hero
  • 1,501–3,000 posts: Dependable Legend
  • 3,001+ posts: Expert Ninja

As an example, if I had 493 posts, this would give me the “On the Road to Greatness” label, but if I had 2,101 posts, I would have the “Dependable Legend” label. These labels and the number-of-posts statistic are great for pumping up the members, but they offer little insight in terms of quality.

Quantity is rarely an indicator of quality; if it were, spammers would be the definition of email quality. When you are gathering statistics, you will be regularly faced with a quantity versus quality issue, but always bear in mind that quality is determined by the specifics of an individual contribution as opposed to the amalgamated set of contributions. What quantity really teaches us is experience. No one can deny that someone with 1,000 forum posts has keen experience of the forum, but it doesn’t necessarily reflect on the quality of his opinion and insight.

Plugging your stats into graphs

Stats with no presentation are merely a list of numbers. When articulated effectively, though, statistics can exhibit the meaning that we strive for. This is where graphs come into play. Graphs are an excellent method of displaying lots of numerical information and avoiding boring the pants off either (a) yourself, or (b) other people.

Let’s look at an example. Earlier we talked about a project to increase the number of community developers in Ubuntu, and one piece of data we gathered was the current number of community developers who had been approved. This is of course a useful piece of information, and as the number climbs it helps indicate that we are achieving our goals. What that single number does not teach us, though, is how quickly we are achieving our goal. Imagine that we had 50 developers right now and we wanted to increase that figure by 20% a year. This would mean we would need to find five developers in the next six months, which works out at approximately one developer per month. If we want to encourage this consistency of growth, we need not only to look at the number of current developers once, but also to track it over time so we can see if we are on track to achieve our 20% target.

Using this example in the Ubuntu world, we could use Launchpad to take a regular snapshot of the number of current developers, plot it on a graph, and draw a line between the dots. This could give us a growth curve of new developers joining the project.
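To illustrate the kind of graph this produces, here is a short sketch using Python and matplotlib. The snapshot numbers are invented for illustration, not real Ubuntu figures.

# Plot monthly snapshots of the developer count against a 20%-per-year
# growth target. The snapshot numbers below are invented for illustration.
import matplotlib.pyplot as plt

snapshots = [("Jan", 50), ("Feb", 50), ("Mar", 51),
             ("Apr", 52), ("May", 53), ("Jun", 54)]

months = [month for month, count in snapshots]
actual = [count for month, count in snapshots]

# 20% of 50 developers is 10 per year, i.e. roughly 0.83 extra per month.
target = [50 + (50 * 0.20) * i / 12 for i in range(len(snapshots))]

plt.plot(months, actual, marker="o", label="Approved developers")
plt.plot(months, target, linestyle="--", label="20% yearly growth target")
plt.ylabel("Developers")
plt.legend()
plt.savefig("developer-growth.png")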

Another handy benefit of graphs is to show the impact of specific campaigns on your community. On my team at Canonical we have a graph that shows the current number of bugs in Ubuntu. On the graph is a line that shows the current number of bugs for each week. As you can imagine, the line that connects these numbers shows a general curve of our bug performance. This line is generally fairly consistent. Each cycle, we have a special event called the Ubuntu Global Bug Jam (http://wiki.ubuntu.com/UbuntuGlobalJam) in which our community comes together to work on bugs. Our local user groups organize bug-squashing parties, and there are online events and other activities that are all based around fixing bugs. Interestingly, each time we do the event, we see a drop in the number of bugs on our graph for the days that the Global Bug Jam happens. This is an excellent method of assessing the impact of the event on our bug numbers.


TECHNICAL TIP

You may be wondering how you can gather data from various hooks and display them in a graph automatically. I just wanted to share a few tips. If this seems like rocket science to you, I recommend that you seek advice from someone who is familiar with these technologies. Gathering data from hooks is hugely dependent on the hook. Fortunately, many online services offer an application programming interface (API) that can be used by a program to gather the data. This will require knowledge of programming. Many programming languages, such as Python and Perl, make it simple to get data through the API.

Another approach with hooks is to screen scrape. This is the act of downloading a web page and figuring out which text on the page contains the data. This is useful if an API is not available. For graphing, there are many tools that can help once the data is available. These include Cricket (http://cricket.sourceforge.net/), and of course you could load the data into a spreadsheet with a comma-separated values (CSV) file if required.
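As a small example of the screen-scraping and CSV approach, the sketch below uses only the Python standard library to download a page, pull one number out of it with a regular expression, and append a dated row to a CSV file that a spreadsheet or graphing tool can read. The URL and the text pattern are placeholders rather than a real service.

# Screen-scrape one statistic from a web page and append it, with today's
# date, to a CSV file. The URL and the regular expression are placeholders.
import csv
import re
import urllib.request
from datetime import date

URL = "https://example.org/community/stats"   # placeholder page

page = urllib.request.urlopen(URL).read().decode("utf-8")

# Assume the page contains text such as "1234 registered members".
match = re.search(r"([\d,]+) registered members", page)
members = int(match.group(1).replace(",", "")) if match else 0

with open("members.csv", "a", newline="") as csv_file:
    csv.writer(csv_file).writerow([date.today().isoformat(), members])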


Surveys and Structured Feedback

Surveys are an excellent method of taking the pulse of your community. For us, they are simple to set up, and for our audience, they are simple to use. I have used surveys extensively in my career, and each time they have provided me and my teams with excellent feedback. Over the next few pages I want to share some of the experience I have picked up in delivering effective surveys.

The first step is to determine the purpose of the survey. What do you want to achieve with it? What do you need to know? Every survey needs to have a purpose, and it is this purpose that will help you craft a useful set of questions that should generate an even more useful set of data.

NOTE

You should avoid surveys just for the purpose of creating a survey. Only ever create a survey if there is a question in your head that is unanswered. Surveys are tools to help you understand your community better: use them only when there is a purpose. Examples of this could include understanding the perception of a specific process, identifying common patterns in behavior in communication channels, and learning which resources are used more than others.

Again, your goals from your strategic plan are a key source of purpose for your surveys. As an example, if your goal is “increase the number of contributors in the community,” you should break down the workflow of how people join your community, and produce a set of questions that test each step in this workflow. You can use the feedback from the answers to gauge whether your workflow is effective and use the data as a basis for improvements.

Choosing questions

When deciding on questions, you should be conscious of one simple fact: everyone hates filling in surveys. When someone does agree to participate in your survey, you need to be able to gather that person’s feedback as quickly and easily as possible. This should take no longer than five minutes. As such, I recommend you use no more than 10 questions. This will give the respondent an average of 30 seconds to answer each question. The vast majority of surveys have questions with multiple-choice ratings for satisfaction. Most of you will be familiar with these: you are provided with a satisfaction scale between 1 (awful) and 5 (excellent) and are expected to select the appropriate satisfaction grade for each question. Surveys like this are simple and effective.


THE VARIANCE OF THE VOTE

Ratings are a funny beast, and everyone interprets them differently. A great example of this is the employee performance reviews that so many of us are familiar with. In one organization I have worked at, the scale ranged from 1 (unacceptable) to 5 (outstanding). I did a small straw poll of how different people interpreted the grading system, and the views varied tremendously:

  • Some felt that if 1 is unacceptable and 5 is outstanding, then 3 would be considered acceptable, and if staff completed their work as contractually expected, a 3 would be a reasonable score.
  • Some others felt that meeting contractually agreed upon standards would merit a 5 on the scale, and that 3 would indicate significant, if tolerable, lapses.
  • Interestingly, some people informed me that they would never provide a 5, as they felt there was always room for improvement.

When people fill in your survey, you will get an equally varied set of expectations around the ratings. You should factor this variation of responses into your assessment of the results. One way to do this is to add up the responses from each person and increase or reduce them proportionally so that each person’s total adds up to the same number of points. But this may not be valid if someone legitimately had a wonderful or horrific experience across the board.
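To illustrate that proportional adjustment, here is a tiny sketch with invented ratings: each respondent’s answers are scaled so that every respondent’s ratings add up to the same total, which dampens the difference between harsh and generous graders.

# Scale each respondent's ratings so that every respondent's total is the
# same, smoothing out harsh versus generous grading. Ratings are invented.
responses = {
    "generous grader": [4, 5, 4, 5],
    "harsh grader":    [2, 3, 2, 2],
}

common_total = 12.0   # the total every respondent is scaled to

normalized = {}
for person, ratings in responses.items():
    factor = common_total / sum(ratings)
    normalized[person] = [round(rating * factor, 2) for rating in ratings]

for person, ratings in normalized.items():
    print(person, ratings, "total:", round(sum(ratings), 2))

As noted above, treat this as a rough correction only: it will also flatten the case where someone genuinely had an outstanding or terrible experience across the board.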


When writing your questions, you need to ensure that they are simple, short, and specific enough that your audience will not have any uncertainty about what you are asking. When people are confronted with unclear questions in surveys, they tend to simply give up or pick a random answer. Obviously both of these are less-than-stellar outcomes. Let’s look at an example of a bad question:

Do you like our community?

Wow, how incredibly unspecific. Which aspect of the community are we asking about? What exactly does “like” mean? Here is an example of a much better question:

Did you receive enough help and assistance from the mailing list to help you join the community successfully?

This is more detailed, easier to understand, and therefore easier to answer. It’s no coincidence that the results are more immediately applicable to making useful changes in the community. Using the previous example of a survey to track progress on the goal of increasing the number of contributors, here are some additional example questions:

How clear was the New Contributor Process to you?

How suitable do you feel the requirements are to join the community?

How useful was the available documentation for joining the community?

How efficiently do you feel your application was tended to?

Each of these asks a specific question about your community and the different processes involved.

Showing off your survey reports

Earlier, when we talked about statistics, we also explored the benefits of using graphs for plotting numerical feedback. We could feed the data directly into the graph, and the findings are automatically generated. This makes the entire process of gathering statistics easy: we can automate the collection of the data from the hook (such as regularly sucking out the data) and then the presentation of the data (regularly generating the graph). Unfortunately, this is impossible when dealing with feedback provided in words, sentences, and paragraphs. A person has to read and assess the findings and then present them in a report. It is this report that we can present to our community as a source for improving how we work.

Readers have priorities when picking up your report. No one wants to read through reams and reams of text to find a conclusion: they want to read the conclusion up front and optionally read the details later. I recommend that you structure your survey findings reports as follows:

  1. Present a broad conclusion, a paragraph that outlines the primary revelation that we can take away from the entire survey. For example, this could be “developer growth is slower than expected and needs to improve.” It is this broad conclusion that will inspire people to read the survey. Do bear one important thing in mind, though: don’t turn the conclusion into an inaccurate, feisty headline just for the purposes of encouraging people to read the survey. That will just annoy your readers and could lead to inaccurate buzz that spirals out of your control, both within and outside your community.
  2. Document the primary findings as a series of bullet points. These findings don’t necessarily need to be the findings for each question, but instead the primary lessons to be learned from the entire survey. It is these findings that your community will take as the meat of the survey. They should be clear, accurate, and concise.
  3. You should present a list of recommended actions that will improve on each of the findings. Each of these actions should have a clear correlation with the findings that your survey presented. The reader should be able to clearly identify how an action will improve the current situation. One caveat, though: not all reports can present action items. Sometimes a factual finding does not automatically suggest an action item; it may take negotiation and discussion for leaders to figure out the right action.
  4. Finally, in the interest of completeness, you should present the entire set of data that you received in the survey. This is often useful as an addendum or appendix to the preceding information. This is a particularly useful place to present non-multiple-choice answers (written responses).

When you have completed your survey and documented these results, you should ensure they are available to the rest of your community. Sharing these results with the community is (a) a valuable engagement in transparency, (b) a way of sharing the current status quo of the community with everyone, and (c) an opportunity to encourage others to fix the problems or seek the opportunities that the survey uncovers.

To do this, you should put the report on your website. Ensure you clearly label the date on which the results were taken. This will make it clear to your readers that the results were a snapshot of that point in the history of your community. If you don’t put a date, your community will assume the results are from today. When you put the results online, you should notify your community through whatever communication channels are in place, such as mailing lists, online chat channels, forums, websites, and more.


Documented Results Are Forever

Before we move on, I just want to ensure we are on the same page (pun intended) about documenting your results. When you put the results of your survey online, you should never go back and change them. Even if you work hard to improve the community, the results should be seen as a snapshot of your community. You should ensure that you include with the results the date that they were taken so this is clear.


Observational Tests

When trying to measure the effectiveness of a process, an observational test can be one of the most valuable approaches. This is where you simply sit down and watch someone interact with something and make notes on what the person does. Often this can uncover nuances that can be improved or refined. This is something that my team at Canonical has engaged in a number of times. As part of our work in refining how the community can connect bugs in Ubuntu to bugs that live upstream, I wanted to get a firm idea of the mechanics of how a user links one bug to another. I was specifically keen to learn if there were any quirks in the process that we could ease. If we could flatten the process out a little, we could make it easier for the community to participate.

To do this, we sat down and watched a contributor working with bugs. We noted how he interacted with the bug tracker, what content he added, where he made mistakes, and other elements. This data gave us a solid idea of areas of redundancy in how he interacted with a community facility.

What Jorge, a member of my team, did here was user-based testing, more commonly known as usability testing. This is a user-centered design method that helps evaluate software by having real people use it and provide feedback. By simply sitting a few people in front of your software and having them try it out, usability testing can provide valuable feedback on a design before too much is invested in coding a bad solution.

Usability testing is important for two reasons. The most obvious is that it gets us feedback from a lot of real users, all doing the same thing. Even though we aren’t necessarily looking for statistical significance, recognizing usage patterns can help the designer or developer begin thinking about how to solve the problem in a more usable way. The second reason is that usability testing, when done early in the development cycle, can save a lot of community resources. Catching usability problems in the design phase can save development time normally lost to rewriting a bad component. Catching usability problems early in a release cycle can preempt bug submissions and save time triaging. This is on top of the added benefit that many users may never experience such usability issues, because they are caught and fixed so early.

Open source is a naturally user-centered community. We rely on user feedback to help test software and influence future development directions. A weakness of traditional usability testing is that it takes a lot of time to plan and conduct a formal laboratory test. With the highly iterative and aggressive release cycles some open source projects follow, it is sometimes difficult to provide a timely report on usability testing results. Some examples of projects that overcame problems in timing and cost appear in the accompanying sidebar (“Examples of Low-Budget, Rigorous Usability Tests”) by Celeste Lyn Paul, a senior interaction architect at User-Centered Design, Inc. She helps make software easier to use by understanding the user’s work processes and designing interactive systems to fit their needs. She is also involved in open source software and leads the KDE Usability Project, mentors for the OpenUsability Season of Usability program, and serves on the Kubuntu Council.


Examples of Low-Budget, Rigorous Usability Tests

There are some ways you can make usability testing work in the open source community. Throughout my career in open source, I have run a number of usability tests, and not all have been the conventional laboratory-based testing you often think of when you hear “usability test.” These three examples help describe the different ways usability testing can be conducted and how it can fit into the open source community.

My first example is the usability testing of the Kubuntu version of Ubiquity, the Ubuntu installer. This usability test was organized as a graduate class activity at the University of Baltimore. I worked with the students to design a research plan, recruit participants, run the test, and analyze the results. Finally, all of the project reports were collated into a single report, which was presented to the Ubuntu community. The timing of the test was aligned with a recent release and development summit, and so even though the logistics of the usability test spanned several weeks, the results provided to the Ubuntu community were timely and relevant.

Although this is the rarer case of how to organize open source usability testing, involving university students provides three key benefits. The open source project benefits from a more formal usability test, which is otherwise difficult to obtain; the university students get experience testing a real product, which looks good on a curriculum vitae; and the university students get exposure to open source, which could potentially lead to interest in further contribution in the future.

My second example involves guerilla-style usability testing over IRC. I was working with Konstantinos Smanis on the design and development of KGRUBEditor. Unlike most software, which is usually in the maintenance phase, we had the opportunity to design the application from scratch. While we were designing certain interactive components, we were unsure which of the two design options was the more intuitive. Konstantinos coded and packaged dummy prototypes of the two interactive methods while I recruited and interviewed several people on IRC, guiding them through the test scenario and recording their actions and feedback. The results we gathered from the impromptu testing helped us make a decision about which design to use. The IRC testing provided a quick and dirty way of testing interface design ideas in an interactive prototype. However, this method was limited in the type of testing we could do and the amount of feedback we could collect. Remote usability testing provides the benefit of anytime, anywhere, anyone, at the cost of the high-bandwidth communication with the participant and the control over the testing environment that a lab provides.

My final example is the case of usability testing with the DC Ubuntu Local Community (LoCo). I developed a short usability testing plan that had participants complete a small task taking approximately 15 minutes. LoCo members brought a friend or family member to the LoCo’s Ubuntu lab at a local library. Before the testing sessions, I worked with the LoCo members and gave them some tips on how to take their guest through the test scenario. Then, each LoCo member led their guest through the scenario while I took notes about what the participant said and did. Afterward, the LoCo members discussed what they saw in testing, and with assistance, came up with a few key problems they found in the software.

The LoCo-based usability test was a great way to involve nontechnical members of the Ubuntu community and provide them an avenue to directly contribute. The drawback to this method is that it takes a lot of planning and coordination: I had to develop a testing plan that was short but provided enough of a task to get useful data, find a place to test (we were lucky enough to already have an Ubuntu lab), and get enough LoCo members involved to make testing worthwhile.

—Celeste Lyn Paul, Senior Interaction Architect, User-Centered Design, Inc.


Although Celeste was largely testing end-user software, the approach that she took was very community-focused. The heart of her approach involved community collaboration, not only to highlight problems in the interface but also to identify better ways of approaching the same task. These same tests should be made against your own community facilities. Consider some of the following topics for these kinds of observational tests:

  • Ask a member to find something on your website.
  • Ask a prospective contributor to join the community and find the resources they need.
  • Ask a member to find a piece of information, such as a bug, message on a mailing list, or another resource.
  • Ask a member to escalate an issue to a governance council.

Each of these different tasks will be interpreted and executed in different ways. By sitting down and watching your community performing these tasks, you will invariably find areas of improvement.

Measuring Mechanics

The lifeblood of communities, and particularly collaborative ones, is communication. It is the flow of conversation that builds healthy communities, but these conversations can and do stretch well beyond mere words and sentences. All communities have collaborative mechanics that define how people do things together. An example of this in software development communities is bugs. Bugs are the defects, problems, and other it-really-shouldn’t-work-that-way annoyances that tend to infiltrate the software development process.

Every mechanic (method of collaborating) in your community is like a conveyor belt. There is a set of steps and elements that comprise the conversation. When we understand these steps in the conversation, we can often identify hooks that we can use to get data. With this data we can then make improvements to optimize the flow of conversation.

Let’s look at our example of bugs to illustrate this. Every bug has a lifeline, and that lifeline is broadly divided into three areas: reporting, triaging, and fixing. Each of these three areas has a series of steps involved. Let’s look at reporting as an example. These are the steps:

  1. The user experiences a problem with a piece of software.
  2. The user visits a bug tracker in her web browser to report that problem.
  3. The user enters a number of pieces of information: a summary, description, name of the software product, and other criteria.
  4. When the bug is filed, the user can subscribe to the bug report and be notified of the changes to the bug.

Now let’s look at each step again and see which hooks are available and what data we could pull out:

  1. There are no hooks in this step.
  2. When the user visits the bug tracker in her web browser, the bug tracker could provide data about the number of visitors, what browsers they are using, which operating systems they are on, and other web statistics.
  3. We could query the bug tracker for anything that is present in a bug report: how many bugs are in the tracker, how many bugs are in each product, how many bugs are new, etc.
  4. We could gather statistics about the number of subscribers for each bug and which bugs have the most subscribers.

So there’s a huge range of possible hooks in just the bug-reporting part of the bug conveyor belt. Let’s now follow the example through with the remaining two areas and their steps and hooks:

The following are the triaging steps:

  1. A triager looks at a bug and changes the bug status.
  2. The triager may need to ask for additional information about the bug.
  3. Other triagers add their comments and additional information to help identify the cause of the bug.

Triaging hooks:

  1. We could use the bug tracker to tell us how many bugs fall into each type of status. This could give us an excellent idea of not only how many bugs need fixing, but also, when we plot these figures on a graph, how quickly bugs are being fixed.
  2. Here we can see how often triagers need to ask for further details. We could also perform a search of what kind of information is typically missing from bug reports so we can improve our bug reporting documentation.
  3. The bug tracker can tell us many things here: how many typical responses are needed to fix a bug, which people are involved in the bug, and which organizations they are from (often shown in the email address, e.g., billg@microsoft.com).

Fixing steps:

  1. A community member chooses a bug report in the system and fixes it. This involves changing and testing the code and generating a patch.
  2. If the contributor has direct access to the source repository, he commits the patch. Otherwise, the patch is attached to the bug report.
  3. The status of the bug is set to FIXED.

Fixing hooks:

  1. There are no hooks in this step.
  2. A useful data point is to count the number of patches either committed or attached to bug reports. Having the delta between these two figures is also useful: if you have many more attached patches, there may be a problem with how easily contributors can get commit access to the source repository.
  3. When the status is changed, we can again assess the number of changes and plot them on a timeline to identify the rate of bug fixes that are occurring.

In your community, you should sit down and break down the conveyor belt of each of the mechanics that forms your community. These could be bugs, patches, document collaboration, or otherwise. When you break down the process and identify its steps and hooks, this helps you take a peek inside your community.
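As one concrete example of mining these hooks, the sketch below assumes a simple CSV export from the bug tracker (an invented format used for illustration, not a real API) and counts bugs per status as well as the delta between committed and merely attached patches.

# Count bugs per status and compare committed versus merely attached patches,
# reading from an assumed CSV export with columns: id,status,patch_state.
import csv
from collections import Counter

with open("bugs.csv", newline="") as csv_file:
    bugs = list(csv.DictReader(csv_file))

status_counts = Counter(bug["status"] for bug in bugs)
patch_counts = Counter(bug["patch_state"] for bug in bugs if bug["patch_state"])

print("Bugs per status:", dict(status_counts))
print("Patches committed:", patch_counts["committed"])
print("Patches only attached:", patch_counts["attached"])
# A large "only attached" figure may mean contributors struggle to get
# commit access to the source repository.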

Gathering General Perceptions

Psychologically speaking, perception is the process in which awareness is generated as a result of sensory information. When you walk into a room and your nose tells you something, your ears tell you something else, and your eyes tell still more, your brain puts the evidence together to produce a perception. Perception occurs in community, too, but instead of physical senses providing the evidence, the day-to-day happenings of the community provide the input. When this evidence is gathered together, it can have a dramatic impact on how engaged and enabled people feel in that community.

Even in the closed and frightening world of a prison community, with its constant threat of random violence and tyranny, there are shared perceptions, interestingly between staff and prisoners. Professor Alison Liebling, a world expert on prisons, discovered common cause between staff and prisoners in her Measuring the Quality of Prison Life study, which took place between 2000 and 2001. Liebling invited staff and prisoners to reflect on their best rather than worst experiences and identified broad agreement between staff and prisoners on “what matters” in prison life. She discovered that “staff and prisoners produced the same set of dimensions, suggesting a moral consensus or shared vision of social order and how it might be achieved.” Her work provided a model that described and monitored that which previously appeared impossible to measure: “respect, humanity, support, relationships, trust, and fairness,” which had remained hidden under the traditional radar of government accountability.

Perception plays a role in many communities, particularly those online. Some years back I was playing with a piece of software (that shall remain nameless). I spent quite some time setting it up and was more than aware of some of the quirks that were involved in its installation. In the interest of being a good citizen, I thought it could be useful to keep a notepad and scribble down some of the quirks, what I expected, and how the software did and did not meet my expectations. I thought that this would provide some useful real-world feedback about a genuine user installing and using the software. I carefully gathered my notes and when I was done I wrote an email to the software community’s mailing list with my notes. I strived to be as constructive and proactive in my comments as possible: my aim here was not to annoy or insult, but to share and suggest.

And thus the onslaught began....

Email after email of short-tempered, antagonistic, and impatient responses came flowing in my general direction. It seemed that I struck a nerve. I was criticized for providing feedback on the most recent stable release and not the unreleased development code in the repository(!), many of my proposed solutions were shot down because they would “make the software too easy” (like that is a bad thing!), and the tone was generally defensive. Strangely, I was not perturbed, and I still took an interest in the software and community, but as I dug deeper I found more issues. The developer source repository was very restrictive; the comments in bug reports were equally defensive and antagonistic; the website provided limited (and overly terse) information; and the documentation had statements such as “if you don’t understand this, maybe you should go somewhere else.” Well, I did. When each of these pieces of evidence combined in my brain, I developed a somewhat negative perception of the community. I felt it was rude, restrictive, cliquey, and unable to handle reasonably communicated constructive criticism. It was perception that drove me to this conclusion, and it was perception that caused me to focus on another community in which my contributions would be more welcome and my life there would be generally happier.

Throughout the entire experience there was no explicit statement that the community was “rude, restrictive, cliquey, and unable to handle reasonably communicated constructive criticism.” This was never written, spoken of, or otherwise shared.

Measuring perception involves two focus points. On one hand you want to understand the perception of the people inside your community, but you also want to explore the perception of your community from the outside. This is particularly important for attracting new contributors.

To measure both kinds of perception, our hooks are people, and we need to have a series of conversations with different people inside and outside our projects to really understand how they feel. As an example, imagine you are a small software project and you have a development team, a documentation team, and a user community. You should spend some time having a social chitchat with a few members in each of those teams. This will help paint a picture for you. Some of the most valuable feedback about perception can happen with so-called “corridor conversations.” These are informal, social, ad hoc conversations that often happen in bars, restaurants, and the corridors of conferences. These conversations typically have no agenda, there are no meeting notes, and they are not recorded. The informal nature of the conversation helps the community member to relax and share her thoughts with you.

Perception of you

Another important measurement criterion is the perception of you as a person. As a leader you are there to work with and represent your community. Your community will have a perception of you that will be shared among its members. You want to understand that perception and ensure it fairly reflects your efforts. Perception of community leaders is complex, particularly when a leader works for a company to lead the community. As an example, as part of my current role at Canonical as the Ubuntu community manager, I work extensively with our community in public, running public projects. There are, however, some internal activities that I focus on. I help the wider company work with the community. I work on Canonical projects that are currently under a nondisclosure agreement (NDA). There is also the work I do with my own team, such as building strategy, reviewing objectives, conducting performance reviews, making weekly calls, and more. Many of these internal activities are never seen by the wider community, and as such the community may not be privy to the genuine work that helps the community but is not publicized.

Gathering feedback about your performance is hard work. It is difficult to gather constructive, honest, and frank feedback, because most people find it impossible to deliver that content to someone directly. Even if you are entirely open to feedback, you need to ensure that the people who are speaking to you feel there will be no repercussions if they offer criticism. You need to work hard to foster an atmosphere of “I welcome your thoughts on how I can improve.” Due to the difficulty of gathering frank feedback, you may want to rely on email to gather it. When we have physical conversations or even discussions on the phone, body language, vocal tone, and enunciation make those conversations feel much more personal. The visceral connection may make it intimidating for your respondent to provide frank and honest feedback (particularly if that involves criticism). Email removes these attributes in the conversation, and this can make gathering this feedback easier.

Transparency in Personal Feedback

In the continuing interest of building transparency, an excellent method is to be entirely public in letting your community share their feedback about you. As an example, you could write a blog entry asking for feedback and encouraging people to leave comments on the entry, and allow anonymous comments. This is a tremendously open gesture toward your community. It could also be viewed as a tremendously risky gesture. There is a reasonable likelihood that someone could share some negative thoughts about you there, and others may agree. (But that’s also feedback you need to collect!)
