As companies struggle to keep up with the competition, innovation is a critical success factor. To increase the rate of innovation, managers often turn to the crowd (outsiders, such as customers) to help generate ideas for new products and services. If successful, managers face another dilemma: how best to deal with the inflow of ideas? How can they effectively evaluate large numbers of ideas to pick the most promising ones for further consideration? For this, they often turn to the crowd again, this time to help with idea evaluation. But how can we best design systems to identify winning ideas?

Our journal article, “Rate or Trade? Identifying Winning Ideas in Open Idea Sourcing,” shows that relatively simple tools such as rating scales with multiple attributes (for example, novelty and feasibility) work quite well compared with baseline measures of idea quality collected from expert panels. More complex tools like prediction markets seem unable to deliver on their promises. As we show in our work, one key reason is that these tools are simply too difficult to use. Using prediction markets to predict who will win the next soccer World Cup, or how much revenue a new movie will generate, works well because users are familiar with the underlying concepts: you know what a soccer team is, and you know what winning a tournament entails. Judging new product ideas, by contrast, is much more complicated, and even more so when those ideas are generated through crowdsourcing and are potentially poorly written. As a result, users can easily become cognitively overwhelmed by the complex market mechanism, such that simpler mechanisms outperform it.

The most surprising finding of this study is how much of the potential benefit of some modern IT systems can be undermined if they turn out to be too complex to use. The promising insight, however, is that information technology continues to improve, increasing our ability to tailor systems to the task at hand and thus make them easier to use.
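To make the rating-scale mechanism concrete, here is a minimal sketch of how crowd ratings on multiple attributes might be aggregated to rank ideas. The attribute names (novelty, feasibility) follow the article, but the data, the 1–7 scale, and the equal-weight averaging are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch: aggregating multi-attribute crowd ratings to rank ideas.
# The ideas, ratings, and weighting scheme below are illustrative assumptions.

from statistics import mean

# Each idea collects ratings on a 1-7 scale for several attributes.
ratings = {
    "idea_a": {"novelty": [6, 7, 5], "feasibility": [4, 5, 4]},
    "idea_b": {"novelty": [3, 4, 3], "feasibility": [6, 6, 7]},
    "idea_c": {"novelty": [5, 5, 6], "feasibility": [5, 6, 5]},
}

def idea_score(attr_ratings):
    """Average each attribute's ratings, then average across attributes."""
    return mean(mean(r) for r in attr_ratings.values())

# Rank ideas from highest to lowest aggregate score.
ranking = sorted(ratings, key=lambda idea: idea_score(ratings[idea]), reverse=True)
print(ranking)  # ['idea_c', 'idea_a', 'idea_b']
```

In practice, a winning-idea baseline from an expert panel could then be compared against such a crowd-derived ranking to assess the mechanism's accuracy.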
“Rate or Trade? Identifying Winning Ideas in Open Idea Sourcing”: Ivo Blohm, Christoph Riedl, Johann Füller, Jan Marco Leimeister. Information Systems Research, published online March 2016.
Information technology (IT) has created new patterns of digitally mediated collaboration that allow open sourcing of ideas for new products and services. These novel sociotechnical arrangements afford fine-grained manipulation of how tasks can be represented and have changed the way organizations ideate. In this paper, we investigate differences in behavioral decision-making resulting from IT-based support of open idea evaluation. We report results from a randomized experiment with 120 participants comparing IT-based decision-making support using a rating scale (representing a judgment task) and a preference market (representing a choice task). We find that the rating scale-based task invokes significantly higher perceived ease of use than the preference market-based task and that perceived ease of use mediates the effect of the task representation treatment on the users’ decision quality. Furthermore, we find that the understandability of the ideas being evaluated, which we assess through the ideas’ readability, and the perception of the task’s variability moderate the strength of this mediation effect, which becomes stronger with increasing perceived task variability and decreasing understandability of the ideas. We contribute to the literature by explaining how perceptual differences of task representations for open idea evaluation affect the decision quality of users and translate into differences in mechanism accuracy. These results enhance our understanding of how crowdsourcing as a novel mode of value creation may effectively complement traditional work structures.
Access the full article