Let’s stop pretending that voting with stickers is an effective way for adults to prioritize options.
We know better.
The chief reasons sticker-dot voting is unreliable: the bandwagon effect, vote splitting, choice overload, and how easy it is to cheat. It may be a quick and fun method, but I wouldn’t trust sticky dots to decide anything – not even pizza toppings.
If you agree, there’s no need to read on.
Here is the new tool your group should be using to collaboratively prioritize among options: Feedback Frames.
Feedback Frames are a low-tech gadget that I invented for secret rate-voting with instant visual results. Participants drop one token per frame on a scale of agreement to record their opinion, and sign to validate it. That’s it! A simple, effective, and time-saving method that gets results.
Problem 1: The Bandwagon Effect
The bandwagon effect is a psychological phenomenon whereby people do something primarily because other people are doing it, regardless of their own beliefs, which they may ignore or override.
Call it groupthink, conformity, or peer pressure – it’s part of human social nature to follow the crowd. We can’t help but notice where others have already placed their dots, and we give those options more credence. Where the first dots land, others are sure to follow.
Do you want smart results based on the wisdom of the crowd? Then voting needs to be secret and independent, which is impossible with sticker voting.
To avoid the bandwagon effect, you need secret ballots: an online voting app, paper ballots that must be tallied, or a box of Feedback Frames!
Problem 2: Vote Splitting
Vote splitting, also known as the spoiler effect, occurs when votes are divided among similar candidates, giving an unfair advantage to a candidate with no close rival.
For example, if you were voting on which fruit to snack on, with the contenders being Granny Smith apple, Gala apple, or an orange – the orange is most likely to win. Even if the majority of participants prefer apples, votes get split between the two tasty varieties of Granny Smith and Gala.
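To see the split in action, here’s a quick sketch with made-up ballots, showing how a one-dot-per-person tally lets the orange take a plurality win even though most voters prefer apples:

```python
from collections import Counter

# Hypothetical ballots: each voter places one dot on a single favourite.
# Four of seven voters prefer apples, but their dots split across two varieties.
ballots = [
    "Granny Smith", "Granny Smith",   # apple fans, variety one
    "Gala", "Gala",                   # apple fans, variety two
    "Orange", "Orange", "Orange",     # orange fans
]

tally = Counter(ballots)
winner, votes = tally.most_common(1)[0]
print(winner, votes)  # prints "Orange 3": a plurality win despite the apple majority
```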
Some facilitators might say this is why you need to group similar options to avoid splitting the vote. But then we end up voting for a generic category and not the specific options, which can make all the difference.
When it comes to the category of “apples,” I’d want to know if this means my favourite Gala or the consistently disappointing Red Delicious (we won’t even talk about crab apples).
If instead, we are using a 5-star rating scale, like with Feedback Frames, apple lovers can vote five out of five stars for all their favourites without worrying about vote splitting.
The opinion-scale results are clear and reliably represent the participants’ true collective preferences. No vote splitting, and no need for grouping or multiple rounds of voting.
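As a rough illustration – again with made-up numbers – averaging independent 5-star ratings lets both apple varieties score well at once, so the apple-loving majority is no longer split:

```python
from statistics import mean

# Hypothetical 5-star ratings: every voter rates every option independently,
# so apple lovers can give top marks to both varieties at once.
ratings = {
    "Granny Smith": [5, 4, 5, 4, 2, 2, 3],
    "Gala":         [5, 5, 5, 5, 3, 2, 3],
    "Orange":       [1, 3, 2, 3, 5, 5, 5],
}

# Average each option's ratings and list them from highest to lowest.
averages = {option: mean(scores) for option, scores in ratings.items()}
for option, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{option}: {avg:.2f}")
# Gala: 4.00, Granny Smith: 3.57, Orange: 3.43 - both apples outrank the orange
```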
Problem 3: Choice Overload
Dotmocracy is like a multiple-choice survey question. Participants need to read and consider all the options before selecting their favourites to dot.
The human mind can only retain about three to nine units of information at one time. With over a dozen options, trying to critically decide where to stick your dot can be overwhelming, especially if the options are even a bit detailed. That’s when participants suffer from the psychological phenomenon of “choice overload,” or overchoice.
With too many options, we lean on the crutch of social proof to help us decide, which goes back to the problem of the bandwagon effect.
Rather than trying to consider and compare all the options at once, it’s far easier for participants to rate options one by one. Just like reviewing a product on Amazon: I don’t need to try every pair of bunny slippers to rate the cozy pair I bought.
Think about it: even if each participant rates only a handful of options, with enough participants you can scale up to an unlimited number of options being thoroughly rated and prioritized.
Problem 4: Results Cannot Be Validated
When counting stickers from a dotmocracy vote, there is no way to confirm the accuracy of the results.
For example, you cannot tell:
- If one person cheated by sticking extra dots on their favourite option, or by pulling dots off an option they didn’t like.
- If a sticky note temporarily fell on the floor, or was added late, and got missed by participants.
- If participants got confused by the wording or had difficulty deciphering the handwriting.
With the simple, yet sophisticated Feedback Frames system, these potential risks do not exist.
Each vote is validated with a unique signature, to detect and deter cheating.
There is a dedicated sixth “Not Sure” column where participants can flag confusion, e.g., over unclear wording or hard-to-read handwriting.
Overall, results are judged not by the number of votes but by the voting pattern, so an option that received little attention is never mistaken for one that participants disliked.
It’s Time to Move On…
I used to be the Dotmocracy Guy (I literally wrote the Handbook). I always knew it was imperfect, and tried many ways to upgrade the technique. For almost a decade I was stumped on how to avoid the bandwagon effect.
Sure, I could use online apps or electronic voting keypads, but I consistently found budget and technical difficulties to be a challenge in the government meetings I facilitate.
In 2014, I started to prototype Feedback Frames and I never looked back.
In 2018, I sold over 1,500 units to government staff, professional facilitators, educators, and consultants across North America and around the world. I hope you will give them a try and let the experience speak for itself.