An evaluation framework for software crowdsourcing
Authors: Wenjun Wu, Wei-Tek Tsai, Wei Li
Affiliation:
1. State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China
2. School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85281, USA
3. Department of Computer Science and Technology, TNLIST, Tsinghua University, Beijing 100084, China
Abstract: Software crowdsourcing has recently become an emerging area of software engineering, yet few papers have presented a systematic analysis of its practices. This paper first presents a framework for evaluating software crowdsourcing projects with respect to software quality, cost, diversity of solutions, and the competitive nature of crowdsourcing. Specifically, competitions are evaluated using the min-max relationship from game theory, in which one party tries to minimize an objective function while the other party tries to maximize the same function. The paper then defines a game-theoretic model to analyze the primary factors in these min-max competition rules that affect both participation and software quality. Finally, using the proposed evaluation framework, the paper examines two crowdsourcing processes, Harvard-TopCoder and AppStori. The framework reveals sharp contrasts between the two processes, as participants exhibit drastically different behaviors when engaging in these projects.
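For concreteness, the min-max relationship described in the abstract can be sketched as a standard game-theoretic program. The sketch below is generic and assumes a shared objective function f over abstract strategy sets X and Y; the paper's concrete objective and strategy spaces are not specified in this abstract.

% One party (e.g., the requester) chooses x in X to minimize f,
% while the opposing party (e.g., a competing participant) chooses
% y in Y to maximize the same objective:
\[
  \min_{x \in X} \; \max_{y \in Y} \; f(x, y)
\]
% At a saddle point (x^*, y^*), neither party can improve by
% deviating unilaterally:
\[
  f(x^*, y) \;\le\; f(x^*, y^*) \;\le\; f(x, y^*)
  \quad \text{for all } x \in X,\; y \in Y.
\]

Analyzing how the competition rules shape f and the feasible strategy sets is, per the abstract, how the model connects rule design to participation and software quality.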
Keywords: crowdsourcing, software engineering, competition rules, game theory
Published in Frontiers of Computer Science; indexed by SpringerLink and other databases.