CrowdRank
A Simple Ranking Algorithm for Crowdsourced Rating Systems with Uneven Participation
Keywords:
ranking, scoring, crowdsourcing, margin of error, confidence level, confidence interval, statistics
Abstract
Public rating systems are difficult to score well. Voting systems tend to favor whatever is already popular, while averaging systems show high variance when an item has too few raters. CrowdRank assigns scores based on a "minimal defensible score" criterion: the margin of error of each item's average rating is computed from its sample size (number of submissions), and the lower end of the resulting confidence interval is used as the item's official score.
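The abstract does not state the exact error formula, so the following is only a sketch of the idea it describes: score an item by the lower bound of a confidence interval around its mean rating, so that items with few submissions are penalized until more ratings arrive. The function name `crowdrank_score` and the default z-value of 1.96 (a 95% confidence level under a normal approximation) are assumptions, not taken from the paper.

```python
import math

def crowdrank_score(ratings, z=1.96):
    """Sketch of a "minimal defensible score": the lower end of a
    confidence interval around the mean rating.

    Assumes a normal approximation: score = mean - z * standard error.
    The z default of 1.96 (95% confidence) is an assumption, not a
    value specified by the paper.
    """
    n = len(ratings)
    if n < 2:
        # Too few submissions to estimate a margin of error;
        # treat the score as not yet defensible (an assumption).
        return 0.0
    mean = sum(ratings) / n
    # Sample variance (Bessel-corrected), then standard error of the mean.
    variance = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    margin = z * math.sqrt(variance / n)
    return mean - margin
```

With this sketch, an item rated 5 and 4 by only two people scores below an item rated 4.4 on average by twenty people, because the two-rater item's wide margin of error pulls its lower bound down further.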
Published
2020-01-20
Section
Letters