Crowdsourcing: Caution with Community Ratings (Rivella Example)
By: Universität Luzern
Ideas that receive the most positive ratings on crowdsourcing platforms are not necessarily the best. A study shows why this is the case and what a company should look out for when evaluating ideas.
Companies are increasingly relying on online innovation platforms to generate and evaluate ideas for new products and services. Swiss beverage company Rivella also used one when it came time to launch a new flavor in 2012. More than 800 ideas were submitted, and it quickly became clear what the community wanted: a health-oriented ginger-flavored beverage. But upon closer inspection, those responsible realized that only a handful of people were making “a lot of noise” about this flavor. In the end, the consensus at Rivella was that the ginger flavor would be a flop on the market, and those in charge decided on a different idea.
Over 30,000 ideas examined
According to Reto Hofstetter, professor of business administration at the University of Lucerne, this is a typical example of social bias. To understand how such bias can distort results, he conducted a study. Over 14 months, his team examined 87 crowdsourcing projects on Atizo, one of Europe’s leading innovation platforms. A total of 31,114 ideas from 18 Swiss companies were analyzed. Since sorting and evaluating these proposals is very time-consuming – an average of 358 ideas are received per competition – Atizo lets community members rate and comment on ideas immediately. The study showed that these likes and comments matter, because companies use this evaluation system to decide which ideas are rewarded.
Market success not guaranteed
However, it turned out that positive comments and likes tend to be reciprocated – regardless of whether the idea is actually liked. This is a well-known phenomenon in social media. In addition, users can network as “friends” on Atizo, and the researchers found that ideas from friends are commented on and rated positively more often than ideas from people with whom one is not networked. In a further step, the researchers investigated whether the crowd can actually predict which products will be successful on the market. For this purpose, they surveyed the companies one year after the conclusion of each ideas competition to find out which of the crowdsourced ideas had been successfully implemented. Reto Hofstetter: “The results showed no correlation between the ideas preferred by the crowd and those that actually led to successful products.”
In summary, the study does not advise against crowdsourcing. However, it does suggest that companies should look beyond likes and mutual positive reviews and find more effective ways to evaluate the ideas generated.
Prof. Dr. Reto Hofstetter led the study, “Should You Really Produce What Consumers Like Online? Empirical Evidence for Reciprocal Voting in Open Innovation Contests,” together with co-authors Dr. Suleiman Aryobsei, Manager at A.T. Kearney, and Prof. Dr. Andreas Herrmann, Professor of Marketing and Director of the Institute for Customer Insight at the University of St. Gallen. The results were published in the “Journal of Product Innovation Management.” A summary was published in the “Harvard Business Review.”