This is what nuanced collective opinions look like…
Each Feedback Frame has one idea. Each token represents one participant’s rating of that idea. The relative levels of tokens in the columns tell a story about all the opinions in the room.
Below are a few examples of result patterns from real meetings.
Watch the process in action (90 sec. video)
Score: 4.3 / 5 | Consent: 100%
Score: 3.1 / 5 | Consent: 65%
Score: 3.1 / 5 | Consent: 65%
Score: 1.8 / 5 | Consent: 21%
Score: 3.6 / 5 | Consent: 85%
Score: 2.7 / 5 | Consent: 50%
Instead of the standard “agreement” scale, a 5-star template lets facilitators easily define their own scale criteria, e.g. “Importance,” “Urgency,” or “Level of Difficulty.”
Use different colours of tokens for different types of stakeholders, to gain further insights into opinion trends.
The number of ideas rated in a meeting is limited only by the number of Feedback Frames on hand. Enter the raw results into a spreadsheet for sorting, theming, and deeper analysis.
Example of 22 statements from an adult training workshop:
Example of a results data table with 32 statements from middle school students:
Sharing the raw results as a data table (including photos) is ideal for transparency.
Results from multiple sessions, following a consistent framework, can be combined into a single table for wider comparative analysis.
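As a sketch of that combining step, the per-session tables could be stacked and sorted like this (the column layout and session labels here are illustrative assumptions, not the tool's own export format):

```python
# Per-session result rows: (statement, score, consent).
session_a = [("Idea 1", 4.3, 1.00), ("Idea 2", 3.1, 0.65)]
session_b = [("Idea 1", 3.6, 0.85), ("Idea 2", 2.7, 0.50)]

# Tag each row with its session, then stack into one combined table.
combined = [("A", *row) for row in session_a] + \
           [("B", *row) for row in session_b]

# Sort for comparative analysis, e.g. highest consent first.
for session, statement, score, consent in sorted(combined, key=lambda r: -r[3]):
    print(session, statement, score, f"{consent:.0%}")
```

Because every session follows the same framework, the same sort, filter, and theming steps apply across the whole combined table.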
The “Score” numbers in these examples are calculated as a weighted average, where Strong Agreement = 5, Agreement = 4, Neutral = 3, Disagreement = 2, Strong Disagreement = 1, and “Not sure” tokens are excluded.
The “Consent” number is based on the percentage of tokens not in disagreement (excluding “Not sure”):
Consent = 1 – ( ([Strong Disagreement] + [Disagreement]) / ([Strong Agreement] + [Agreement] + [Neutral] + [Disagreement] + [Strong Disagreement]) )
These and other formulas can be instantly calculated after entering results into a spreadsheet.
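The same two formulas can be sketched outside a spreadsheet. This minimal example assumes the 1–5 weighting described above and uses made-up token counts for one Feedback Frame:

```python
# Token counts for one Feedback Frame (illustrative numbers).
counts = {
    "Strong Agreement": 6,
    "Agreement": 8,
    "Neutral": 3,
    "Disagreement": 2,
    "Strong Disagreement": 1,
    "Not sure": 2,  # excluded from both formulas
}

# Assumed weights: Strong Agreement = 5 down to Strong Disagreement = 1.
weights = {
    "Strong Agreement": 5,
    "Agreement": 4,
    "Neutral": 3,
    "Disagreement": 2,
    "Strong Disagreement": 1,
}

rated = {k: counts[k] for k in weights}  # drop "Not sure"
total = sum(rated.values())

# Score: weighted average on the 1-5 scale.
score = sum(weights[k] * n for k, n in rated.items()) / total

# Consent: share of rated tokens not in disagreement.
consent = 1 - (rated["Disagreement"] + rated["Strong Disagreement"]) / total

print(round(score, 1), round(consent * 100))  # prints: 3.8 85
```

In a spreadsheet the equivalent would be a SUMPRODUCT of counts and weights divided by the count of rated tokens.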