This is what nuanced collective opinions look like…

Each Feedback Frame has one idea. Each token represents one participant’s rating of that idea. The relative levels of tokens in the columns tell a story about all the opinions in the room.

Below are a few examples of result patterns from real meetings.

Watch the process in action (90 sec. video)


United Agreement

Score:  4.3 / 5
Consent:  100%

Mixed Opinions

Score:  3.1 / 5
Consent:  65%

Supermajority in Strong Opposition

Score:  1.8 / 5
Consent:  21%

Majority in Tepid Agreement/Acceptance

Score:  3.6 / 5
Consent:  85%

Weak Opinions & Significant Confusion

Score:  2.7 / 5
Consent:  50%


Alternative Rating Scale


Instead of the standard “agreement” scale, a 5-star template allows facilitators to easily define the scale criteria, e.g. “Importance,” “Urgency,” or “Level of Difficulty.”

Colour Tokens


Use different colours of tokens for different types of stakeholders to gain further insight into opinion trends.

Compare Dozens of Ideas in a Spreadsheet

The number of ideas rated in a meeting is limited only by the number of Feedback Frames on hand. Enter the raw results into a spreadsheet for sorting, theming, and deeper analysis.
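The spreadsheet step above can be sketched in code. This is a minimal illustration, not the tool’s own software: the idea names and token counts are hypothetical example data, and the column keys (SA/A/N/D/SD) are shorthand for the five ratings.

```python
# Hypothetical raw results: one row per idea, token counts per rating.
# SA = Strong Agreement, A = Agreement, N = Neutral,
# D = Disagreement, SD = Strong Disagreement.
rows = [
    {"idea": "Extend library hours", "SA": 8, "A": 4, "N": 2, "D": 1, "SD": 0},
    {"idea": "Paid parking",         "SA": 1, "A": 2, "N": 3, "D": 5, "SD": 4},
    {"idea": "New bike racks",       "SA": 5, "A": 6, "N": 3, "D": 1, "SD": 0},
]

WEIGHTS = {"SA": 5, "A": 4, "N": 3, "D": 2, "SD": 1}

def score(row):
    """Weighted average of all rated tokens for one idea."""
    total = sum(row[k] for k in WEIGHTS)
    return sum(WEIGHTS[k] * row[k] for k in WEIGHTS) / total

# Sort ideas from highest to lowest score, as a spreadsheet would.
for row in sorted(rows, key=score, reverse=True):
    print(f"{row['idea']:<22} {score(row):.2f}")
```

Sorting by the computed score surfaces the most-supported ideas first; the same table could instead be sorted by consent or grouped by theme.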


Example of 22 statements from an adult training workshop:

Example of a results data table with 32 statements from middle school students:

Sharing the raw results as a data table (including photos) is ideal for transparency.

Results from multiple sessions, following a consistent framework, can be combined into a single table for wider comparative analysis.


The “Score” numbers in these examples are calculated as an average, where:

  • Strong Agreement = 5
  • Agreement = 4
  • Neutral = 3
  • Disagreement = 2
  • Strong Disagreement = 1
  • Not Sure (?) = not counted, but taken into consideration

The “Consent” number is based on the percentage of tokens not in disagreement (excluding “Not sure”):

= 1 − ( ([Strong Disagreement] + [Disagreement]) / ([Strong Agreement] + [Agreement] + [Neutral] + [Disagreement] + [Strong Disagreement]) )

These and other formulas can be instantly calculated after entering results into a spreadsheet.
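The two formulas above can be sketched as a short script. The token counts here are hypothetical example data; the calculation follows the definitions in this section: Score is a weighted average of the rated tokens, and Consent is the share of rated tokens not in disagreement, with “Not Sure” excluded from both.

```python
# Hypothetical tallies of tokens for one idea.
tallies = {
    "Strong Agreement": 6,
    "Agreement": 5,
    "Neutral": 2,
    "Disagreement": 1,
    "Strong Disagreement": 1,
    "Not Sure": 1,  # excluded from both calculations
}

# Rating weights, as defined in the Score formula.
WEIGHTS = {
    "Strong Agreement": 5,
    "Agreement": 4,
    "Neutral": 3,
    "Disagreement": 2,
    "Strong Disagreement": 1,
}

def score(tallies):
    """Weighted average of rated tokens; 'Not Sure' is not counted."""
    rated = {k: v for k, v in tallies.items() if k in WEIGHTS}
    total = sum(rated.values())
    return sum(WEIGHTS[k] * v for k, v in rated.items()) / total

def consent(tallies):
    """Share of rated tokens not in disagreement; 'Not Sure' excluded."""
    rated = {k: v for k, v in tallies.items() if k in WEIGHTS}
    total = sum(rated.values())
    against = rated["Disagreement"] + rated["Strong Disagreement"]
    return 1 - against / total

print(f"Score: {score(tallies):.1f} / 5")    # 3.9 / 5
print(f"Consent: {consent(tallies):.0%}")    # 87%
```

With 15 rated tokens and 2 in disagreement, this idea scores 3.9 / 5 with 87% consent; in a spreadsheet the same two formulas would sit in columns beside each idea’s row of tallies.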