Let’s stop pretending voting with stickers is a good technique for adults to effectively prioritize among options. We know it’s not. Sticker-dot voting is not reliable because of the bandwagon effect, vote splitting, choice overload, and how easy it is to mess up or cheat. It may be quick and fun, but I wouldn’t trust sticky dots with deciding anything important – not even pizza toppings.
If you agree, no need to read on. Here is the new group decision-making tool to use instead: Feedback Frames
Feedback Frames are a non-electronic gadget I invented for secret rate-voting with instant visual results. Participants drop one token per frame on a scale of agreement to record their opinion, and sign to validate it. You can get them at FeedbackFrames.com.
Still need to be convinced that sticker dots are not to be trusted for recording opinions? Let’s dive in.
Problem 1: The Bandwagon Effect
The bandwagon effect is a psychological phenomenon whereby people do something primarily because other people are doing it, regardless of their own beliefs, which they may ignore or override. Call it “groupthink,” “conformity,” or “peer pressure,” it’s part of human social nature to follow the crowd. We can’t help but notice where others have already placed their dots, and give these options more attention. Where the first dots land, others are biased to follow.
If you want smart results, based on the “wisdom of the crowd,” voting needs to be secret and independent, which is impossible with sticker voting.
To avoid the bandwagon effect, you need to have secret ballots, either using an online voting app, paper ballots that need to be tallied – or Feedback Frames.
Problem 2: Vote Splitting
Vote splitting, also known as the “spoiler effect,” happens when votes are divided among similar candidates, giving an unfair advantage to a dissimilar candidate that faces no such split.
For example, if you were voting on which fruit to snack on, with the main contenders being Granny Smith apple, Gala apple, or an orange, the orange is more likely to win. Even if the majority of participants prefer apples, their votes get split between the two tasty varieties of Granny Smith and Gala.
Some facilitators might say this is why you need to group similar options to avoid splitting the vote. But then we end up voting for a generic category and not the specific options, which can make all the difference. When it comes to the category of “apples,” I’d want to know if this means my favourite Gala or the consistently disappointing Red Delicious (we won’t even talk about crab apples).
If instead we use a 5-star rating scale, as with Feedback Frames, apple lovers can give five out of five stars to all their favourites without worrying about vote splitting.
The opinion-scale results are clear and reliably represent the participants’ true collective preferences. No vote splitting, and no need for grouping or multiple rounds of voting.
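The fruit example above can be checked with a quick simulation. This is a minimal sketch with made-up ballots (the voter counts and ratings are illustrative assumptions, not real data): under plurality-style dot voting the orange wins because the apple vote splits, while averaging 5-point ratings lets an apple variety come out on top.

```python
from collections import Counter

# Hypothetical ballots: 6 of 10 voters prefer apples (split between two
# varieties), 4 prefer the orange. All numbers are illustrative.
apple_fans = [{"granny_smith": 5, "gala": 4, "orange": 2}] * 3 \
           + [{"granny_smith": 4, "gala": 5, "orange": 2}] * 3
orange_fans = [{"granny_smith": 3, "gala": 3, "orange": 5}] * 4
ballots = apple_fans + orange_fans

# Plurality (dot-style) vote: each voter backs only their top choice.
plurality = Counter(max(b, key=b.get) for b in ballots)
print(plurality.most_common(1))  # orange wins with only 4 of 10 votes

# Rating vote: average each option's score across all ballots.
options = ballots[0].keys()
averages = {o: sum(b[o] for b in ballots) / len(ballots) for o in options}
print(max(averages, key=averages.get))  # an apple variety comes out on top
```

The same six apple lovers who split under plurality voting both rate the two apple varieties highly, so neither variety is penalized for the other's presence on the ballot.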
Problem 3: Choice Overload
Dotmocracy is like a multiple-choice survey question. Participants need to read and consider all the options before selecting their favourites to dot. But the human mind can only retain about three to nine units of information at one time. With over a dozen options, trying to critically decide where to stick your dot can be overwhelming, especially if the options are even a bit detailed. Thus participants suffer from the psychological phenomenon of “choice overload,” a.k.a. “overchoice.” With too many options, we lean on the crutch of social proof to help us decide, which goes back to the problem of the bandwagon effect.
Rather than trying to consider and compare all the options, it’s much easier for participants to just rate options one by one. Just like reviewing a product on Amazon, I don’t need to try on every pair of bunny slippers just to give a rating of the cozy ones I bought.
Even if each participant only rates a handful of options, with enough participants you can scale up to an unlimited number of options being thoroughly rated and prioritized.
Problem 4: Results Cannot be Validated
When counting stickers from a dotmocracy vote, there is no way to confirm the truth of the results. For example, you cannot tell:
- Did one person cheat and stick extra dots on their favorite option, or pull some dots off an option they didn’t like?
- Did a sticky note temporarily fall on the floor, or get added late, and thus get missed by some participants?
- Were some participants confused by wording or maybe couldn’t read the handwriting?
With the sophisticated Feedback Frames system, these challenges are not a concern.
Each vote is validated with a unique signature, to detect and deter cheating.
There is a dedicated sixth column for “Not Sure” where participants can record their confusion, e.g., with bad handwriting or poor wording.
Overall, results are judged not by the number of votes but by the voting pattern, so there is no mistaking an option that lacked attention for one that participants disliked.
It’s Time to Move On
I used to be the Dotmocracy Guy (I literally wrote the Handbook). But I always knew it was imperfect, and tried many ways to upgrade the technique. For almost a decade I was stumped on how to avoid the bandwagon effect. Sure, I could use online apps or electronic voting keypads, but I consistently found budget and technical difficulties to be a challenge in the government meetings I facilitate.
In 2014, I started to prototype Feedback Frames and I’ve never looked back. In 2018, I sold over 1,500 units to government staff, professional facilitators, educators, and consultants across North America and around the world. I hope you will give them a try!