When More is Less in Crowd-sourcing Innovation

In most organizations, there is an omnipresent quest for more. With crowd-sourcing innovation programs, the first impulse is to seek more ideas. Is more really better? What about idea quality? Does ‘more ideas’ work against idea quality? And if so, then what do you do about it?

In 2011, I launched the employee open innovation program at UnitedHealth Group (Fortune 6). In the first year we ran 11 events (all time-based challenges), and by 2015 we were running 150+ events (75% challenges). The difficulty with the quality of input was immediately apparent. This is a common problem across the dozen programs I have led and a central theme in discussions with other program leaders. The issue is that idea quality is a function of effort, both for the ideators and also for the subsequent filtering/selection process. And with crowds, when the effort increases, activity decreases.

Put differently, pushing for higher quality ideas decreases the number of ideas that will get submitted.

This raises a central question for crowd-sourcing programs: which matters more, quantity or quality?

The goal needs to be quality. Quantity matters, but not in the way most leaders think. Nearly every idea campaign has a limit to the number of ideas the organization is capable of acting on. In the case of our annual CEO challenge, that number was around 20. And though I have seen programs take action on many more than 20, every organization has an absolute limit. So what does it matter whether we get 50, 500, or 5,000 ideas? Beyond a certain point, more ideas only diminishes our ability to evaluate and filter, because all organizations have limited capacity, even with excellent crowd-sourcing tactics in place. So the focus needs to shift to improving quality at the point of input.

How do you improve quality?

There are dozens of ways to improve idea quality in crowd-sourcing. I will not address them all, but here are three strategies I depend on to produce better results.

Form fields:

The default for any idea challenge is a title and a description (usually, “What’s your idea?” or “Describe your solution”). This yields the maximum idea count but also introduces the broadest range of quality. In my experience, some people are simply better than others at articulating their ideas and the surrounding context and need. By increasing the number of fields, we get better information about each concept, and it puts ideas on a more level playing field when filtering and selecting.

For example, our “standard” form became the following questions (the wording varied as we iterated, but this will give you the gist). This format is fundamentally an elevator pitch. (Full disclosure: these questions were inspired by a Quora post I read years ago, and though I’ve seen similar posts since, I don’t know who deserves original attribution.)

Problem: What is the problem, opportunity, or unmet need? (We abbreviate this as POU; yes, pronounced “poo.”)
User/Audience: Who has the POU? (Who is the user, customer, etc.? Most people struggle to be specific enough with this answer.)
Status quo: What are they doing now? (What is the user doing without the solution in place?)
Solution: What is your solution? (How will the solution work? What are the key features and functionality? Simply stating “build an app” isn’t good enough.)
Value proposition: How is the solution better? (Why will the user see this as valuable? Why would they expend resources — time, money, reputation — to use the solution rather than stick with the status quo?)
Add more fields, improve quality, reduce idea count. It’s a simple and reliable formula.
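As a sketch, the five-question form above can be expressed as a simple schema with basic validation, the kind of check an idea platform could run before accepting a submission. The field names, prompts, and minimum length below are illustrative assumptions, not the actual configuration of any program described here.

```python
# Illustrative sketch: the five-question elevator-pitch form as a schema
# with minimal validation. Field names and the length threshold are
# assumptions for demonstration only.

FORM_FIELDS = [
    ("problem", "What is the problem, opportunity, or unmet need?"),
    ("audience", "Who has the POU?"),
    ("status_quo", "What are they doing now?"),
    ("solution", "What is your solution?"),
    ("value_proposition", "How is the solution better?"),
]

MIN_LENGTH = 40  # assumed minimum characters; nudges ideators past one-liners


def validate_submission(submission: dict) -> list:
    """Return a list of human-readable issues; an empty list means the idea passes."""
    issues = []
    for field, prompt in FORM_FIELDS:
        answer = submission.get(field, "").strip()
        if not answer:
            issues.append(f"Missing answer: {prompt}")
        elif len(answer) < MIN_LENGTH:
            issues.append(f"Too brief ({len(answer)} chars): {prompt}")
    return issues


if __name__ == "__main__":
    # A typical low-effort draft: "build an app" fails the solution check.
    draft = {"problem": "Claims take too long", "solution": "build an app"}
    for issue in validate_submission(draft):
        print(issue)
```

The point of a gate like this is not bureaucracy; it front-loads a small amount of effort so that every idea arrives with enough substance to be compared fairly against the others.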


Team submissions:

Having ideas generated by teams consistently produces better results. Make it a requirement to post as a team. Run lunch-and-learn workshops or open-to-anyone collaboration sessions. Teams do not need to self-form: have 30 random employees show up in a room, organize them any way you like (or use a tool like Collaboration.ai if you want to supercharge the process), and unleash them for one to two hours. The meetings can be facilitated using human-centered design techniques to improve the process, or not. The minimum requirement is simply to get people together.


Idea coaches:

By far one of the leanest yet most successful strategies for improving quality is the introduction of an idea coach. Assigning a trained, capable mentor to read each idea and give submission feedback absolutely produces better quality. An idea coach is not supposed to pass judgment on feasibility or otherwise engage too deeply with the concept. Instead, the coach is a feedback mechanism that reflects understanding: how well did the ideator communicate the concept? When the ideator describes the problem, does the coach understand the submission, and if not, why not? This technique is a quality-control counterpoint to the first strategy above, which adds complexity to the submission form.

Obviously, all three of these approaches can be combined along with other strategies.

When does ‘more’ matter?

The side effect (and, I would argue, the most significant benefit) of an open innovation program is engagement. The harder you make the process, the more participation will naturally decline. Unless you create other actions that users can take to stay engaged, your program will suffer as you improve idea quality. For metrics-driven organizations, where more equals better, getting fewer ideas will seem like a contradiction. For example, every year we ran a CEO challenge. Year after year, we implemented more and more strategies to increase quality, which also reduced the total volume of ideas. When your CEO and executives want to see the year-over-year number of ideas and you show them a smaller number each time, that causes some head-scratching. It takes some explaining around strategy and intended results before it makes sense.

BTW, we also increased total engagement year over year, but that’s a different story.

There are many more tactics for increasing quality. And that is the point. Be strategic. Do not act as if a crowd of people and a digital submission tool is the magic bean to grow breakthrough ideas. Ideation is by its very nature raw and unrefined. Quality comes from processes designed to produce quality after ideation. But there is no reason that a little more quality cannot be achieved at the beginning, even at the cost of losing some of the ideas.

Gregory Hicks

Greg Hicks has designed, advised, and launched enterprise innovation programs in over a dozen organizations, including corporations, public education, and higher education. This work spans structure, frameworks, methods, culture, and deep experience in leveraging crowd-sourcing to achieve scale. He also serves as an advisor to early-stage tech start-ups. He is skilled at crowd-sourcing, facilitation, Business Model Generation (business model canvas, value proposition canvas), human-centered design, and business strategy, and has developed two proprietary tools for Unlabel Innovation: the Innovation Program Handbook assessment and Innovation IQ, an innovation project assessment, risk manager, health check, and decision-making tool.



