Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (12): 3913-3923.DOI: 10.11772/j.issn.1001-9081.2021101709
• Frontier and comprehensive applications •
Jia XU1,2, Jing LIU1, Ge YU3, Pin LYU1,2, Panyuan YANG1
Received: 2021-10-08
Revised: 2021-11-19
Accepted: 2021-11-25
Online: 2022-12-21
Published: 2022-12-10
Contact: Pin LYU
About author: XU Jia, born in 1984, Ph. D., associate professor, senior member of CCF. Her research interests include educational data analysis and personalized recommendation.
Jia XU, Jing LIU, Ge YU, Pin LYU, Panyuan YANG. Review of peer grading technologies for online education[J]. Journal of Computer Applications, 2022, 42(12): 3913-3923.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2021101709
Reference | Model | Reliability | Bias | Peer grades | Relative grades | Social information | Grader ability | Year |
---|---|---|---|---|---|---|---|---|
Ref. [ | PG1, PG2, PG3 | √ | √ | √ | × | × | × | 2013 |
Ref. [ | PG4, PG5 | √ | √ | √ | × | × | × | 2015 |
Ref. [ | PG6(2017), PG7(2017) | √ | √ | √ | × | √ | × | 2017 |
Ref. [ | PG6(2019), PG7(2019) | √ | √ | √ | √ | × | × | 2019 |
Ref. [ | CD-PG1, CD-PG2 | √ | √ | √ | √ | × | √ | 2021 |
Tab. 1 Comparison of different probabilistic graphical models
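The reliability-and-bias idea shared by these graphical models can be sketched in a few lines: each observed peer grade is treated as the assignment's true score shifted by the grader's bias and perturbed by noise whose precision is the grader's reliability, and the three quantities are estimated by alternating updates. The sketch below is a deliberately simplified EM-style illustration of that idea, not the exact PG1-PG7 or CD-PG models from the cited papers; all names and the smoothing constant are our own.

```python
def estimate_true_grades(grades, iters=50):
    """Alternately update true scores, grader biases, and grader
    reliabilities, in the spirit of the models compared in Tab. 1.

    grades: dict mapping (grader, assignment) -> observed peer grade.
    Returns (score, bias, reliability) dicts.
    """
    graders = {g for g, _ in grades}
    assignments = {a for _, a in grades}
    score = {a: 0.0 for a in assignments}
    bias = {g: 0.0 for g in graders}
    rel = {g: 1.0 for g in graders}          # precision of each grader

    for _ in range(iters):
        # 1) true score: reliability-weighted mean of de-biased grades
        for a in assignments:
            num = den = 0.0
            for (g, a2), z in grades.items():
                if a2 == a:
                    num += rel[g] * (z - bias[g])
                    den += rel[g]
            score[a] = num / den
        # 2) bias: mean residual of each grader against current scores
        for g in graders:
            res = [z - score[a] for (g2, a), z in grades.items() if g2 == g]
            bias[g] = sum(res) / len(res)
        # 3) reliability: inverse residual variance (smoothed to avoid
        #    division by zero for perfectly consistent graders)
        for g in graders:
            res = [z - score[a] - bias[g]
                   for (g2, a), z in grades.items() if g2 == g]
            var = sum(r * r for r in res) / len(res)
            rel[g] = 1.0 / (var + 1e-3)
    return score, bias, rel
```

In the full models these quantities are random variables with priors and the updates are derived via Gibbs sampling or EM; the sketch keeps only the alternating structure that makes reliability and bias jointly estimable.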
Estimation type | Main method or strategy | Advantages | Disadvantages |
---|---|---|---|
Ordinal estimation | Matrix factorization[ | Learns a utility function via matrix factorization, so ranking results are easily converted into numerical results; fast to solve and easy to scale to peer-grading scenarios with many graders | Because each grader evaluates only a few assignments, the preference matrix is very sparse, which degrades the final ranking; grader reliability is not analyzed, a notable limitation |
 | Fuzzy group decision making[ | Defining graders' fuzzy preference relations over assignments quantifies different degrees of preference | The defined quantitative preference relations may introduce errors that harm estimation accuracy; grader reliability is not considered |
 | Bayesian methods[ | Infers assignment quality while also explicitly estimating grader reliability | Strongly dependent on hyperparameters; slow to solve |
 | Pairwise comparison-based[ | Accounts for grader reliability, giving high estimation accuracy; the model is fast to solve | Does not handle assignments of similar quality well |
Cardinal estimation | Weighted summation[ | Easy to understand and fast to solve; different studies quantify grader-reliability weights from different factors (graders' scores, learning engagement, grader-teacher score differences, etc.), achieving high accuracy | Because the studies were not evaluated on a common dataset, the relative merit of the weighting factors cannot be judged |
 | Probabilistic graphical models[ | Model both the grading reliability and the grading bias of graders, estimating true grades with high accuracy | Strongly dependent on hyperparameters; iterative solving is time-consuming |
Tab. 2 Comparison of methods or strategies of true grade estimation for subjective assignments
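Among the ordinal methods, the pairwise-comparison strategy is typically built on classic comparison models such as Bradley-Terry [70]: each assignment gets a latent strength, and the probability that one beats another in a peer ranking is its strength divided by the pair's total. A minimal sketch of the standard minorization-maximization (MM) update for this model (our own illustration, without the grader-reliability extensions the surveyed methods add on top):

```python
def bradley_terry(wins, items, iters=200):
    """Estimate latent quality strengths from pairwise 'better-than'
    judgments with the Bradley-Terry model via the MM update
    p_i <- W_i / sum_j n_ij / (p_i + p_j).

    wins: list of (winner, loser) pairs extracted from peer rankings.
    Returns dict item -> strength (higher means ranked better).
    """
    p = {i: 1.0 for i in items}
    win_count = {i: 0 for i in items}
    for w, _ in wins:
        win_count[w] += 1
    # number of comparisons between each unordered pair of items
    pairs = {}
    for w, l in wins:
        key = tuple(sorted((w, l)))
        pairs[key] = pairs.get(key, 0) + 1

    for _ in range(iters):
        new_p = {}
        for i in items:
            denom = 0.0
            for (a, b), n in pairs.items():
                if i in (a, b):
                    j = b if i == a else a
                    denom += n / (p[i] + p[j])
            new_p[i] = win_count[i] / denom if denom > 0 else p[i]
        # normalize so the overall scale stays fixed across iterations
        total = sum(new_p.values())
        p = {i: v * len(items) / total for i, v in new_p.items()}
    return p
```

The resulting strengths give an ordinal ranking directly; the cited works additionally weight each comparison by the grader who produced it, which is exactly where grader reliability enters.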
Platform | Grader assignment strategy | Grading training | Grading feedback | True-grade estimation strategy |
---|---|---|---|---|
Chinese University MOOC (iCourse) | Random | Learners compare their grading with teacher-graded samples | — | Mean of peer grades after dropping the minimum and maximum |
XuetangX | Random | — | — | The grade submitted by the last member of the group is taken as the final grade |
CNMOOC | Random | Learners browse teacher-graded samples | √ | Mean of peer grades |
Coursera | Random | Learners' grades are compared with teachers' grades | √ | Sum of the per-rubric-item medians |
edX | Random | Learners' grades are compared with teachers' grades | — | Median of peer grades |
Moodle | Manual or random | Learners' grades are compared with teachers' grades | √ | Weighted mean over all rubric items (each item weighted differently) |
CrowdGrader | Random | — | √ | Weighted sum of peer grades, using each grader's grading accuracy as the weight |
Peerceptiv | Random | — | √ | Weighted sum of peer grades, using each grader's grading accuracy as the weight |
Udacity | — | — | — | — |
Tab. 3 Comparison of peer grading modules of representative online education platforms or systems
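The estimation strategies in the last column are simple aggregations and can be stated directly in code. A small sketch of three of them (function names are ours, not platform APIs):

```python
from statistics import mean, median

def trimmed_mean(grades):
    """iCourse-style: drop one minimum and one maximum, then average."""
    g = sorted(grades)
    return mean(g[1:-1]) if len(g) > 2 else mean(g)

def accuracy_weighted(grades, accuracies):
    """CrowdGrader/Peerceptiv-style: weight each peer grade by its
    grader's grading accuracy and normalize by the total weight."""
    total = sum(accuracies)
    return sum(g * w for g, w in zip(grades, accuracies)) / total

def sum_of_rubric_medians(rubric_grades):
    """Coursera-style: per rubric item, take the median over graders,
    then sum the item medians.

    rubric_grades: list of per-grader lists, one grade per rubric item.
    """
    return sum(median(item) for item in zip(*rubric_grades))
```

edX's strategy is simply `median(grades)`; the contrast with the weighted variants shows why grader-accuracy modeling matters only on platforms that track it.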
1 | 网易,高等教育出版社. 中国大学MOOC[EB/OL]. [2021-06-26].. |
NetEase, Inc., Higher Education Press. Chinese University MOOC[EB/OL]. [2021-06-26].. | |
2 | 北京慕华信息科技有限公司. 学堂在线[EB/OL]. [2021-06-26].. |
MOOC-CN Information Technology (Beijing) Co., Ltd. XuetangX[EB/OL]. [2021-06-26].. | |
3 | Coursera, Inc. Coursera[EB/OL]. [2021-06-26].. |
4 | edX LLC. edX[EB/OL]. [2021-06-26].. |
5 | CARAGIANNIS I, KRIMPAS G A, VOUDOURIS A A. Aggregating partial rankings with applications to peer grading in massive online open courses[C]// Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems. Richland, SC: International Foundation for Autonomous Agents and MultiAgent Systems, 2015: 675-683. |
6 | PARÉ E P, JOORDENS S. Peering into large lectures: examining peer and expert mark agreement using peerScholar, an online peer assessment tool[J]. Journal of Computer Assisted Learning, 2008, 24(6): 526-540. 10.1111/j.1365-2729.2008.00290.x |
7 | 郑庆华,董博,钱步月,等. 智慧教育研究现状与发展趋势[J]. 计算机研究与发展, 2019, 56(1): 209-224. 10.7544/issn1000-1239.2019.20180758 |
ZHENG Q H, DONG B, QIAN B Y, et al. The state of the art and future tendency of smart education[J]. Journal of Computer Research and Development, 2019, 56(1): 209-224. 10.7544/issn1000-1239.2019.20180758 | 
8 | 王如. 基于信息整合方法的网上同伴互评评阅人推荐系统分析与设计[D]. 昆明:昆明理工大学, 2016: 13. |
WANG R. Analysis and design of online peer assessment reviewer recommendation system based on information integration method[D]. Kunming: Kunming University of Science and Technology, 2016: 13. | 
9 | 方慧. 众包系统中基于参与者互评的数据质量控制研究[D]. 南京:南京邮电大学, 2019: 30. |
FANG H. Research on data quality control of peer grading in crowdsourcing system[D]. Nanjing: Nanjing University of Posts and Telecommunications, 2019: 30. | |
10 | 许嘉,李秋云,刘静,等. 基于概率图模型的主观题同行互评系统的开发与实践[J]. 中国教育信息化, 2021(10): 87-91. |
XU J, LI Q Y, LIU J, et al. Development and practice of peer assessment system for subjective questions based on probability graph model[J]. The Chinese Journal of ICT in Education, 2021(10): 87-91. | |
11 | 马志强,王雪娇,龙琴琴. 基于同侪互评的在线学习评价研究综述[J]. 远程教育杂志, 2014, 32(4): 86-92. 10.3969/j.issn.1672-0008.2014.04.014 |
MA Z Q, WANG X J, LONG Q Q. A literature review of online peer assessment[J]. Journal of Distance Education, 2014, 32(4): 86-92. 10.3969/j.issn.1672-0008.2014.04.014 | |
12 | 李红霞,赵呈领,疏凤芳,等. 促进学习的评价:在线开放课程中同伴互评投入度研究[J]. 电化教育研究, 2021, 42(4): 37-44. |
LI H X, ZHAO C L, SHU F F, et al. Assessment for learning: a study of engagement of peer assessment in MOOC [J]. e-Education Research, 2021, 42(4): 37-44. | |
13 | TOPPING K. Peer assessment between students in colleges and universities[J]. Review of Educational Research, 1998, 68(3): 249-276. 10.3102/00346543068003249 |
14 | SADLER P M, GOOD E. The impact of self- and peer-grading on student learning[J]. Educational Assessment, 2006, 11(1): 1-31. 10.1207/s15326977ea1101_1 |
15 | FALCHIKOV N. Learning Together: Peer Tutoring in Higher Education [M]. London: RoutledgeFalmer, 2001: 79-81. |
16 | GEHRINGER E F. A survey of methods for improving review quality[C]// Proceedings of the 2014 International Conference on Web-Based Learning, LNCS 8699. Cham: Springer, 2014: 92-97. |
17 | WANG W Y, AN B, JIANG Y C. Optimal spot-checking for improving evaluation accuracy of peer grading systems[J]. IEEE Transactions on Computational Social Systems, 2020,7(4): 940-955. 10.1109/tcss.2020.2998732 |
18 | STRIJBOS J W, SLUIJSMANS D. Unravelling peer assessment: Methodological, functional, and conceptual developments[J]. Learning and Instruction, 2010, 20(4): 265-269. 10.1016/j.learninstruc.2009.08.002 |
19 | SADLER D R. Beyond feedback: developing student capability in complex appraisal[J]. Assessment and Evaluation in Higher Education, 2010, 35(5): 535-550. 10.1080/02602930903541015 |
20 | NICOL D, THOMSON A, BRESLIN C. Rethinking feedback practices in higher education: a peer review perspective[J]. Assessment and Evaluation in Higher Education, 2014, 39(1): 102-122. 10.1080/02602938.2013.795518 |
21 | RACE P. Practical pointers on peer assessment[R]. SEDA PAPER, 1998: 113-122. |
22 | ZHENG L Q, CHEN N S, CUI P P, et al. A systematic review of technology-supported peer assessment research: an activity theory approach[J]. International Review of Research in Open and Distributed Learning, 2019, 20(5): 168-191. 10.19173/irrodl.v20i5.4333 |
23 | SHAH N B, BRADLEY J K, PAREKH A, et al. A case for ordinal peer-evaluation in MOOCs[EB/OL]. [2021-06-26].. |
24 | SHAH N B, BALAKRISHNAN S, BRADLEY J, et al. When is it better to compare than to score?[EB/OL]. (2014-06-25) [2021-06-26].. |
25 | HAN Y, WU W J, YAN Y T, et al. Human-machine hybrid peer grading in SPOCs[J]. IEEE Access, 2020, 8: 220922-220934. 10.1109/access.2020.3043291 |
26 | GARCIA-LORO F, MARTIN S, RUIPÉREZ-VALIENTE J A, et al. Reviewing and analyzing peer review Inter-Rater Reliability in a MOOC platform[J]. Computers and Education, 2020, 154: No.103894. 10.1016/j.compedu.2020.103894 |
27 | PANADERO E, ROMERO M, STRIJBOS J W. The impact of a rubric and friendship on peer assessment: effects on construct validity, performance, and perceptions of fairness and comfort[J]. Studies in Educational Evaluation, 2013, 39(4): 195-203. 10.1016/j.stueduc.2013.10.005 |
28 | LAI C L, HWANG G J. An interactive peer-assessment criteria development approach to improving students’ art design performance using handheld devices[J]. Computers and Education, 2015, 85: 149-159. |
29 | BECKER A. Student-generated scoring rubrics: examining their formative value for improving ESL students’ writing performance[J]. Assessing Writing, 2016, 29: 15-24. |
30 | WICHMANN A, FUNK A, RUMMEL N. Leveraging the potential of peer feedback in an academic writing activity through sense-making support[J]. European Journal of Psychology of Education, 2018, 33(1): 165-184. 10.1007/s10212-017-0348-7 |
31 | LUO H, ROBINSON A C, PARK J Y. Peer grading in a MOOC: reliability, validity, and perceived effects[J]. Journal of Asynchronous Learning Networks, 2014, 18(2): No.429. 10.24059/olj.v18i2.429 |
32 | FORMANEK M, WENGER M C, BUXNER S R, et al. Insights about large-scale online peer assessment from an analysis of an astronomy MOOC[J]. Computers and Education, 2017, 113: 243-262. 10.1016/j.compedu.2017.05.019 |
33 | LI L. Using game-based training to improve students’ assessment skills and intrinsic motivation in peer assessment[J]. Innovations in Education and Teaching International, 2019, 56(4): 423-433. 10.1080/14703297.2018.1511444 |
34 | PANADERO E, ALQASSAB M. An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading[J]. Assessment and Evaluation in Higher Education, 2019, 44(8): 1253-1278. 10.1080/02602938.2019.1600186 |
35 | VANDERHOVEN E, RAES A, MONTRIEUX H, et al. What if pupils can assess their peers anonymously? a quasi-experimental study[J]. Computers and Education, 2015, 81: 123-132. 10.1016/j.compedu.2014.10.001 |
36 | CARNELL B. Aiming for autonomy: formative peer assessment in a final-year undergraduate course[J]. Assessment and Evaluation in Higher Education, 2016, 41(8): 1269-1283. 10.1080/02602938.2015.1077196 |
37 | ROTSAERT T, PANADERO E, SCHELLENS T. Anonymity as an instructional scaffold in peer assessment: its effects on peer feed-back quality and evolution in students’ perceptions about peer assessment skills[J]. European Journal of Psychology of Education, 2018, 33(1): 75-99. 10.1007/s10212-017-0339-8 |
38 | HOWARD C D, BARRETT A F, FRICK T W. Anonymity to promote peer feedback: pre-service teachers’ comments in asynchronous computer-mediated communication[J]. Journal of Educational Computing Research, 2010, 43(1): 89-112. 10.2190/ec.43.1.f |
39 | HAN Y, WU W J, PU Y J. Task assignment of peer grading in MOOCs[C]// Proceedings of the 2017 International Conference on Database Systems for Advanced Applications, LNCS 10179. Cham: Springer, 2017: 352-363. |
40 | CAPUANO N, CABALLÉ S, MIGUEL J. Improving peer grading reliability with graph mining techniques [J]. International Journal of Emerging Technologies in Learning, 2016, 11(7): 24-33. 10.3991/ijet.v11i07.5878 |
41 | OHASHI H, ASANO Y, SHIMIZU T, et al. Adaptive balanced allocation for peer assessments[J]. IEICE Transactions on Information and Systems, 2020, E103-D(5): 939-948. 10.1587/transinf.2019dap0004 |
42 | XU Y H, WANG R. Peer reviewer recommendation in online social learning context: integrating information of learners and submissions[C]// Proceedings of the 19th Pacific Asia Conference on Information Systems. Atlanta, GA: Association for Information Systems, 2014: No.295. |
43 | 何升,邓伟林,肖体斌. MOOC中基于二分图推荐的同伴互评系统优化[J]. 计算机应用研究, 2016, 33(5): 1399-1402. 10.3969/j.issn.1001-3695.2016.05.027 |
HE S, DENG W L, XIAO T B. Peer review system optimization based on bipartite graph recommendation in MOOC[J]. Application Research of Computers, 2016, 33(5): 1399-1402. 10.3969/j.issn.1001-3695.2016.05.027 | |
44 | ANAYA A R, LUQUE M, LETÓN E, et al. Automatic assignment of reviewers in an online peer assessment task based on social interactions[J]. Expert Systems,2019,36(4):No.e12405. 10.1111/exsy.12405 |
45 | LU J Y, LAW N. Online peer assessment: effects of cognitive and affective feedback[J]. Instructional Science, 2012, 40(2): 257-275. 10.1007/s11251-011-9177-2 |
46 | CHENG K H, LIANG J C, TSAI C C. Examining the role of feedback messages in undergraduate students’ writing performance during an online peer assessment activity[J]. The Internet and Higher Education, 2015, 25: 78-84. 10.1016/j.iheduc.2015.02.001 |
47 | ZONG Z, SCHUNN C D, WANG Y Q. Learning to improve the quality of peer feedback through experience with peer feedback[J]. Assessment and Evaluation in Higher Education, 2021, 46(6): 973-992. 10.1080/02602938.2020.1833179 |
48 | MOFFITT R L, PADGETT C, GRIEVE R. Accessibility and emotionality of online assessment feedback: using emoticons to enhance student perceptions of marker competence and warmth[J]. Computers and Education, 2020, 143: No.103654. 10.1016/j.compedu.2019.103654 |
49 | PATCHAN M M, SCHUNN C D, CORRENTI R J. The nature of feedback: How peer feedback features affect students’ implementation rate and quality of revisions[J]. Journal of Educational Psychology, 2016, 108(8): 1098-1120. |
50 | LEIJEN D A J. A novel approach to examine the impact of web-based peer review on the revisions of L2 writers[J]. Computers and Composition, 2017, 43: 35-54. 10.1016/j.compcom.2016.11.005 |
51 | WU Y, SCHUNN C D. From feedback to revisions: effects of feedback features and perceptions[J]. Contemporary Educational Psychology, 2020, 60: No.101826. 10.1016/j.cedpsych.2019.101826 |
52 | XIAO Y K, ZINGLE G, JIA Q J, et al. Detecting problem statements in peer assessments[C]// Proceedings of the 13th International Conference on Educational Data Mining. Massachusetts: International Educational Data Mining Society, 2020: 704-709. |
53 | ZINGLE G, RADHAKRISHNAN B, XIAO Y K, et al. Detecting suggestions in peer assessments[C]// Proceedings of the 12th International Conference on Educational Data Mining. Massachusetts: International Educational Data Mining Society, 2019: 474-479. |
54 | RICO-JUAN J R, GALLEGO A J, CALVO-ZARAGOZA J. Automatic detection of inconsistencies between numerical scores and textual feedback in peer-assessment processes with machine learning[J]. Computers and Education, 2019, 140: No.103609. 10.1016/j.compedu.2019.103609 |
55 | RICO-JUAN J R, GALLEGO A J, VALERO-MAS J J, et al. Statistical semi-supervised system for grading multiple peer-reviewed open-ended works[J]. Computers and Education, 2018, 126: 264-282. 10.1016/j.compedu.2018.07.017 |
56 | 赵鸣铭,王聪,李敏. 互助学习环境下可抗恶意评价的同伴互评算法[J]. 计算机应用研究, 2020, 37(8): 2305-2309. |
ZHAO M M, WANG C, LI M. Peer grading algorithm against malicious evaluation for collaborative learning[J]. Application Research of Computers, 2020, 37(8): 2305-2309. | |
57 | PIECH C, HUANG J, CHEN Z H, et al. Tuned models of peer assessment in MOOCs[C]// Proceedings of the 6th International Conference on Educational Data Mining. Massachusetts: International Educational Data Mining Society, 2013: 153-160. |
58 | MI F, YEUNG D Y. Probabilistic graphical models for boosting cardinal and ordinal peer grading in MOOCs[C]// Proceedings of the 29th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2015: 454-460. 10.1609/aaai.v29i1.9210 |
59 | XIONG Y, SCHUNN C D. Reviewer, essay, and reviewing-process characteristics that predict errors in web-based peer review[J]. Computers and Education, 2021, 166: No.104146. 10.1016/j.compedu.2021.104146 |
60 | JAMES S, LANHAM E, MAK-HAU V, et al. Identifying items for moderation in a peer assessment framework[J]. Knowledge-Based Systems, 2018, 162: 211-219. 10.1016/j.knosys.2018.05.032 |
61 | LIN Y R, HAN S C, KANG B H. Machine learning for the peer assessment credibility[C]// Companion Proceedings of the 2018 Web Conference. Republic and Canton of Geneva: International World Wide Web Conferences Steering Committee, 2018: 117-118. 10.1145/3184558.3186957 |
62 | STELMAKH I, SHAH N B, SINGH A. Catch me if I can: detecting strategic behaviour in peer assessment [C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2021: 4794-4802. 10.1609/aaai.v35i6.16611 |
63 | DÍEZ J, LUACES O, ALONSO-BETANZOS A, et al. Peer assessment in MOOCs using preference learning via matrix factorization[EB/OL]. [2021-06-26].. |
64 | LUACES O, DÍEZ J, ALONSO-BETANZOS A, et al. A factorization approach to evaluate open-response assignments in MOOCs using preference learning on peer assessments[J]. Knowledge-Based Systems, 2015, 85: 322-328. 10.1016/j.knosys.2015.05.019 |
65 | CAPUANO N, LOIA V, ORCIUOLI F. A fuzzy group decision making model for ordinal peer assessment[J]. IEEE Transactions on Learning Technologies, 2017, 10(2): 247-259. 10.1109/tlt.2016.2565476 |
66 | CAPUANO N, CABALLÉ S, PERCANNELLA G, et al. FOPA-MC: fuzzy multi-criteria group decision making for peer assessment[J]. Soft Computing, 2020, 24(23): 17679-17692. 10.1007/s00500-020-05155-5 |
67 | WATERS A E, TINAPPLE D, BARANIUK R G. BayesRank: a Bayesian approach to ranked peer grading[C]// Proceedings of the 2nd ACM Conference on Learning @ Scale. New York: ACM, 2015: 177-183. 10.1145/2724660.2724672 |
68 | RAMAN K, JOACHIMS T. Methods for ordinal peer grading[C]// Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2014: 1037-1046. 10.1145/2623330.2623654 |
69 | LIN L, TAN C W. Peer-grading at scale with rank aggregation[C]// Proceedings of the 8th ACM Conference on Learning @ Scale. New York: ACM, 2021: 327-330. 10.1145/3430895.3460980 |
70 | BRADLEY R A, TERRY M E. Rank analysis of incomplete block designs: I. the method of paired comparisons[J]. Biometrika, 1952, 39(3/4): 324-345. 10.1093/biomet/39.3-4.324 |
71 | LUCE R D. Individual choice behavior: a theoretical analysis[J]. American Statistical Association, 2005, 67(293): 1-15. |
72 | MALLOWS C L. Non-null ranking models. I[J]. Biometrika, 1957, 44(1/2): 114-130. 10.1093/biomet/44.1-2.114 |
73 | THURSTONE L L. The method of paired comparisons for social values[J]. The Journal of Abnormal and Social Psychology, 1927, 21(4): 384-400. 10.1037/h0065439 |
74 | PLACKETT R L. The analysis of permutations[J]. Journal of the Royal Statistical Society. Series C (Applied Statistics), 1975, 24(2): 193-202. 10.2307/2346567 |
75 | DE ALFARO L, SHAVLOVSKY M. CrowdGrader: a tool for crowdsourcing the evaluation of homework assignments[C]// Proceedings of the 45th ACM Technical Symposium on Computer Science Education. New York: ACM, 2014: 415-420. 10.1145/2538862.2538900 |
76 | WALSH T. The PeerRank method for peer assessment[C]// Proceedings of the 21st European Conference on Artificial Intelligence. Amsterdam: IOS, 2014: 909-914. |
77 | PAGE L, BRIN S, MOTWANI R, et al. The PageRank citation ranking: bringing order to the Web[R/OL]. (1998-01-29) [2021-06-26].. |
78 | GARCÍA-MARTÍNEZ C, CEREZO R, BERMÚDEZ M, et al. Improving essay peer grading accuracy in massive open online courses using personalized weights from student’s engagement and performance[J]. Journal of Computer Assisted Learning, 2019, 35(1): 110-120. 10.1111/jcal.12316 |
79 | DARVISHI A, KHOSRAVI H, SADIQ S. Employing peer review to evaluate the quality of student generated content at scale: a trust propagation approach[C]// Proceedings of the 8th ACM Conference on Learning @ Scale. New York: ACM, 2021: 139-150. 10.1145/3430895.3460129 |
80 | LI P, YIN Z R, LI F Y. Quality control method for peer assessment system based on multi-dimensional information[C]// Proceedings of the 2020 International Conference on Web Information Systems and Applications, LNCS 12432. Cham: Springer, 2020: 184-193. |
81 | YUAN Z, DOWNEY D. Practical methods for semi-automated peer grading in a classroom setting[C]// Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization. New York: ACM, 2020: 363-367. 10.1145/3340631.3394878 |
82 | YANG S H, LONG B, SMOLA A, et al. Like like alike: joint friendship and interest propagation in social networks[C]// Proceedings of the 20th International Conference on World Wide Web. New York: ACM, 2011: 537-546. 10.1145/1963405.1963481 |
83 | CHAN H P, KING I. Leveraging social connections to improve peer assessment in MOOCs[C]// Proceedings of the 26th International Conference on World Wide Web Companion. Republic and Canton of Geneva: International World Wide Web Conferences Steering Committee, 2017: 341-349. 10.1145/3041021.3054165 |
84 | WANG T Q, LI Q, GAO J, et al. Improving peer assessment accuracy by incorporating relative peer grades[C]// Proceedings of the 12th International Conference on Educational Data Mining. Massachusetts: International Educational Data Mining Society, 2019: 450-455. |
85 | XU J, LI Q Y, LIU J, et al. Leveraging cognitive diagnosis to improve peer assessment in MOOCs[J]. IEEE Access, 2021, 9: 50466-50484. 10.1109/access.2021.3069055 |
86 | 王超,刘淇,陈恩红,等. 面向大规模认知诊断的DINA模型快速计算方法研究[J]. 电子学报, 2018, 46(5): 1047-1055. 10.3969/j.issn.0372-2112.2018.05.004 |
WANG C, LIU Q, CHEN E H, et al. The rapid calculation method of DINA model for large scale cognitive diagnosis[J]. Acta Electronica Sinica, 2018, 46(5): 1047-1055. 10.3969/j.issn.0372-2112.2018.05.004 | 
87 | 上海交通大学教育技术中心. 好大学在线[EB/OL]. [2021-06-26].. |
Education Technology Center of SJTU. CNMOOC[EB/OL]. [2021-06-26].. | |
88 | DOUGIAMAS M. Moodle — open-source learning platform[EB/OL]. [2021-06-26].. |
89 | CrowdGrader LLC. CrowdGrader[EB/OL]. [2021-06-26].. |
90 | University of Pittsburgh’s Learning Research and Development Center. Peerceptiv[EB/OL]. [2021-06-26].. |
91 | Udacity, Inc. Udacity[EB/OL]. [2021-06-26].. |
92 | VOZNIUK A, HOLZER A, GILLET D. Peer assessment dataset[J]. Journal of Learning Analytics, 2016, 3(2): 322-324. 10.18608/jla.2016.32.18 |
93 | TENÓRIO T, BITTENCOURT I I, ISOTANI S, et al. Dataset of two experiments of the application of gamified peer assessment model into online learning environment MeuTutor[J]. Data in Brief, 2017, 12: 433-437. 10.1016/j.dib.2017.04.032 |
94 | ASHENAFI M M. Online peer-assessment datasets[EB/OL]. (2019-12-30) [2021-06-26].. |
95 | 王娟,王丽清,马文倩,等. 群智协同激励机制研究综述[J]. 计算机工程与应用, 2020, 56(6): 1-9. |
WANG J, WANG L J, MA W Q, et al. Survey on incentive mechanisms for crowd-based cooperative computing[J]. Computer Engineering and Applications, 2020, 56(6): 1-9. | |
96 | ASHENAFI M M, RICCARDI G, RONCHETTI M. Predicting students’ final exam scores from their course activities[C]// Proceedings of 2015 IEEE Frontiers in Education Conference. Piscataway: IEEE, 2015: 1-9. 10.1109/fie.2015.7344081 |
97 | ASHENAFI M M, RONCHETTI M, RICCARDI G. Predicting student progress from peer-assessment data[C]// Proceedings of the 9th International Conference on Educational Data Mining. Massachusetts: International Educational Data Mining Society, 2016: 270-275. |
98 | CORBETT A T, ANDERSON J R. Knowledge tracing: modeling the acquisition of procedural knowledge[J]. User Modeling and User-Adapted Interaction, 1994, 4(4): 253-278. 10.1007/bf01099821 |
99 | PIECH C, BASSEN J, HUANG J, et al. Deep knowledge tracing[C]// Proceedings of the 28th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2015: 505-513. |
100 | ZHANG J N, SHI X J, KING I, et al. Dynamic key-value memory networks for knowledge tracing[C]// Proceedings of the 26th International Conference on World Wide Web. Republic and Canton of Geneva: International World Wide Web Conferences Steering Committee, 2017: 765-774. 10.1145/3038912.3052580 |