Democracy is great, right? We'd all prefer to have direct participation in the decisions that affect our lives, from which multimillionaire will represent us common folk to which features we'd like to see in the next version of Microsoft Office. (Please, please, PowerPoint team, just copy Keynote's auto-align feature already.) The more voting we do, the more we feel that civilization has advanced, and the better the quality of the products or politicians we get.

Polls Are Valuable But Inadequate
In recent years, application development teams have grown increasingly open-minded, and in many cases even enthusiastic, about voting or polls as a prioritization mechanism. Worried that your requirements rely too heavily on interviews with a potentially unrepresentative sample of users? Take a quick poll to get a more accurate estimate of real demand for the work you might do.

As important as polls can be, they're not perfect. Even if you don't go crazy with how you use the results, the quality of these findings depends a great deal on the questions you ask. Even if your survey question kung fu is great, other risks exist, such as the unfortunate tendency of people with no opinion to answer the question anyway because they don't want to appear foolish.

One peril that holds special relevance to application development (or product development in general) is the missing part of the sample. By their nature, polls omit the customers you think you should have but don't yet have. 

Polls Provide A Megaphone For Customers You Already Know
From first-hand experience on development teams, and second-hand experience as a researcher, I've seen the "missing customer" problem in action. But how bad is this problem? Since I have practically no ability to resist simulation exercises, I whipped together a simple model of the costs of relying exclusively on polls. (Click here to see the spreadsheet I developed if you want to scrutinize the calculations or play around with the numbers yourself.) I tried to keep the scenario relatively simple but still reflective of the real choices that development teams face. 

  1. You lead a development team making prioritization decisions for your customers. These could be users for a system IT is deploying or customers of a product company. For simplicity's sake, I'll use the word "customers" to describe them both.
  2. You have four categories of customers you are trying to serve:
    • Category A ("Away With You!"). You wish you didn't have to serve this constituency. Currently, you have 100 of these customers.
    • Category B ("Best To Keep Them Around"). While they're not your core user base, you're still obliged to serve their needs. Again, you have 100 of this type of customer.
    • Category C ("Core Market"). Fairly self-explanatory. Again, 100 of these customers.
    • Category D ("Dearly Like To Win"). The people whom you've not yet reached. For this exercise, I assigned 200 customers to this category. We can pretend that the whole reason for pursuing them is their greater numbers. 
  3. You're currently pondering which of three features to build.
  4. Each constituency wants these features with greater or lesser intensity. For example, while 75% of the Category A customers want a feature, only 10% of Category D customers want it. The customers in Categories A and D have diametrically opposed needs, and between these two extremes lie the enhancement requests of the Categories B and C customers.
  5. You run a poll, asking these customers what you should build next. Multiply the number of customers in a category by the percentage who want a feature, and you know how many votes that customer segment will give that feature. For instance, 50% of the 100 core customers (Category C) want Feature #2, so they give it 50 votes.
  6. You prioritize the results based on the number of votes each feature received.

OK so far? While the Category D ("Dearly Like To Win") customers are missing, some of the other customers want the same features, albeit with greater or lesser interest than the Category D people. This kind of overlap happens in requirements and prioritization exercises all the time: for example, while salespeople might be interested in an improved pipeline report in a CRM system, sales managers are even happier to see it developed.

Using the numbers I plugged into the model, here are the votes:

  • Feature #1: 90.
  • Feature #2: 130.
  • Feature #3: 155.

Prioritization now seems easy-peasy: Build Feature #3, then #2, then #1.

If You Want To Serve A New Audience, Be Prepared To Invert Priorities
While intrigued by the results, your management is still anxious about reaching the Category D ("Dearly Like To Win") customers. Therefore, to triangulate around the truth, you invest in another mechanism for collecting data from all four customer categories. Let's posit, for reasons that will be clear later, that this second mechanism is a serious game.

This second tool turns out to be amazingly successful, capturing the exact demand for each feature from within each customer segment. (In reality, no tool is perfect, but our point here is to compare the poll results to the actual demand.) We'll use the same method for calculating votes, only in this case, it includes how people in Category D would have voted, given the opportunity. Here are the results:

  • Feature #1: 240.
  • Feature #2: 190.
  • Feature #3: 175.

Now that we understand what the Category D customers want, our prioritization is exactly the reverse of what it would have been, based on voting alone. And no, I did not construct this model to engineer this result. 
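If you'd rather see the model as code than as a spreadsheet, here's a minimal sketch. The per-segment demand percentages below are illustrative assumptions, not necessarily the spreadsheet's actual inputs; they were chosen only so that the vote totals match the ones reported in this post (and they honor the details given earlier: 75% of Category A versus 10% of Category D wanting the same feature, and 50% of Category C wanting Feature #2).

```python
# A sketch of the prioritization model described above. Demand percentages
# are illustrative assumptions chosen to reproduce the totals in the text.

SEGMENT_SIZES = {"A": 100, "B": 100, "C": 100, "D": 200}

# DEMAND[segment][feature] = percent of that segment wanting the feature.
DEMAND = {
    "A": {"F1": 10, "F2": 30, "F3": 75},  # diametrically opposed to D
    "B": {"F1": 40, "F2": 50, "F3": 30},
    "C": {"F1": 40, "F2": 50, "F3": 50},  # 50% of core want Feature #2
    "D": {"F1": 75, "F2": 30, "F3": 10},  # the customers polls never reach
}

def tally(segments):
    """Votes per feature: segment size times the percent wanting it."""
    votes = {"F1": 0, "F2": 0, "F3": 0}
    for seg in segments:
        for feature, pct in DEMAND[seg].items():
            votes[feature] += SEGMENT_SIZES[seg] * pct // 100
    return votes

poll = tally(["A", "B", "C"])          # Category D never sees the poll
actual = tally(["A", "B", "C", "D"])   # the serious game reaches everyone

print(poll)    # {'F1': 90, 'F2': 130, 'F3': 155}
print(actual)  # {'F1': 240, 'F2': 190, 'F3': 175}
```

Ranking by the poll gives F3 > F2 > F1; ranking by actual demand gives F1 > F2 > F3, the inversion described above.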

Prioritization Shouldn't Be Completely By The Numbers
There are other reasons for expanding your inputs into the prioritization process beyond polls. Votes express demand without explaining it. If 1,193 customers vote for a feature, what have you really learned? Do all of them desire it with the same ardor? Do they want to see it designed in the same way?

Context is everything, which is why serious games are an important supplement to other techniques for requirements and prioritization. Read the chat logs in a Buy A Feature session. Talk to participants during a session of Prune The Product Tree. These exercises strike the happy medium between the breadth of measurement that polls provide and the depth of understanding you get from interviewing a few stakeholders.

[P.S. Please, feel free to rip apart the spreadsheet on which I based this analysis.]