As executives, we’ve all been pummeled with advice on the importance of strategic planning, setting goals and making tough choices. (At dPrism, we’ve done our share of the pummeling). Less discussed is how to make tough choices as a group: How can a top table of senior leaders collectively make the best decisions on what to pursue without inciting a circular firing squad? After all, each person has his or her own number-one priority—and typically each requires the consent and resources of others around the table.
We’ve found some success lately with paired-comparison analyses, also known as pairwise comparisons. This is a method for determining the relative importance of a number of options with differing criteria—like ranking preferences for apples, oranges and bananas—based on the observation that it’s easier to decide between two options than to rank a half dozen. I won’t go into the science behind comparison theory, nor will I try to explain the Bradley-Terry model
of probability that underlies its most common applications. But I will say that an online pairwise-comparison tool we recently developed at dPrism has served us well. We used it in a recent meeting, and it went so well that it could serve as a template for making pairwise comparisons work in your organization.
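For readers who do want a peek under the hood, the Bradley-Terry model can be fit in a few lines of Python using the classic iterative (minorization-maximization) update. This is an illustrative sketch of the general technique, not dPrism's tool; the fruit items and win counts are made up for the example.

```python
# Illustrative Bradley-Terry fit via the classic iterative update.
# Items and win counts below are made-up examples, not real data.
wins = {  # wins[(a, b)] = number of times a was preferred over b
    ("apples", "oranges"): 3, ("oranges", "apples"): 1,
    ("apples", "bananas"): 4, ("bananas", "apples"): 1,
    ("oranges", "bananas"): 3, ("bananas", "oranges"): 1,
}
items = sorted({i for pair in wins for i in pair})
strength = {i: 1.0 for i in items}  # initial strength estimates

for _ in range(100):  # iterate until the strengths settle
    new = {}
    for i in items:
        total_wins = sum(w for (a, b), w in wins.items() if a == i)
        denom = sum(
            (wins.get((i, j), 0) + wins.get((j, i), 0)) / (strength[i] + strength[j])
            for j in items if j != i
        )
        new[i] = total_wins / denom
    norm = sum(new.values())  # normalize so strengths sum to 1
    strength = {i: s / norm for i, s in new.items()}

ranking = sorted(items, key=strength.get, reverse=True)
print(ranking)  # strongest preference first
```

Each item's strength is its estimated probability weight of winning a head-to-head vote; the ranking simply orders items by that weight.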
We had to choose among seven product development priorities to accomplish in the remainder of 2017—in addition to the already-approved backlog of other items. The enterprise had capacity to do two or three of the seven items. Everyone around the table had a horse in the race. While everyone in this group was smart, respectful and eager to work as a team, their own priorities were all over the map. Had we simply put each option up for a vote, the group would have been deadlocked, or the loudest voice would have won, and the priorities would never have been truly agreed upon.
After a healthy discussion of each of the proposals, we posed the question: Which priorities should the product team explore further, to estimate the effort, time and cost involved?
We then emailed each team member a link to our comparison tool, which they opened on their smartphones. The tool showed them a pair of options and asked them to “vote” for the one they preferred.
The site kept sending them pairs of options (about 10 total) until they had ranked each one against the others. The results were combined, tallied and displayed for the group.
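As a sketch of what “combined and tallied” can mean here, the few lines of Python below score each item by the percent of head-to-head votes it won. The feature names and vote records are hypothetical, not the actual meeting data.

```python
# Minimal sketch: tally pairwise votes into a "percent favored" score.
# The vote records below are hypothetical, not the actual meeting data.
from collections import Counter

votes = [  # each record: (winner, loser) from one person's choice on one pair
    ("Feature A", "Feature B"), ("Feature B", "Feature C"),
    ("Feature A", "Feature C"), ("Feature C", "Feature B"),
    ("Feature B", "Feature A"), ("Feature A", "Feature B"),
]

wins = Counter(winner for winner, _ in votes)
appearances = Counter(item for pair in votes for item in pair)

# Percent of head-to-head votes each item won.
support = {
    item: round(100 * wins[item] / appearances[item])
    for item in appearances
}
for item, pct in sorted(support.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {pct}% support")
```

Displaying these percentages side by side is enough for a group to see at a glance whether a clear favorite has emerged or, as in our meeting, whether support is clustered near 50 percent.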
The tool ranked the results automatically, according to the percent of respondents who favored each item relative to the others. Interestingly, most of the seven items were ranked within five points of one another (at or around 50 percent support). We quickly pointed out that this meant there was no clear consensus on any of the items. But rather than this being a sign of failure for the pairwise-comparison approach, it led to a terrific discussion in which people spoke honestly about why they ranked one priority over another. One person even conceded to “voting against my own interests,” as the group then re-prioritized the list and agreed on the final ranking.
The meeting showed how pairwise comparisons can be a superior way for a group to rank a list of disparate priorities. But a few specific factors also contributed to this success:
- We gamified the process. The team had fun choosing between pairs using a tool on their smartphones and then seeing the results.
- We didn’t make it a death match. The question was not “should this project live or die?” but rather: Which should be further explored and scoped? True, the ones that didn’t make the top cut probably would not be getting attention any time soon, but this was a gentler and less provocative way to present the options.
- The rankings were a starting point, not an end point. Because the rankings were so close to each other, the group did the actual prioritizing through the discussion that followed.
Have you tried pairwise comparisons? Let us know your experiences.