Collaborative Document Evaluation: An Alternative Approach to Classic Peer Review

Authors: J. Beel, B. Gipp

Abstract:

Research papers are usually evaluated via peer review, but peer review has limitations as an instrument for evaluating research. In this paper, Scienstein and the new concept of 'collaborative document evaluation' are presented. Scienstein is a project to evaluate scientific papers collaboratively, based on ratings, links, annotations, and classifications contributed by the scientific community via the internet. The critical success factors of collaborative document evaluation are analyzed: the scientists' motivation to participate as reviewers, the reviewers' competence, and the reviewers' trustworthiness. It is shown that if these factors are ensured, collaborative document evaluation may prove to be a more objective, faster, and less resource-intensive approach to evaluating scientific documents than the classic peer review process. Further advantages are identified: collaborative document evaluation supports interdisciplinary work, allows continuous post-publication quality assessment, and enables the implementation of academic recommendation engines. In the long term, it seems possible that collaborative document evaluation will gradually substitute for peer review and decrease the need for journals.
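
The core mechanism described above can be made concrete with a small sketch: community ratings are aggregated, with each rating weighted by the reviewer's competence and trustworthiness, so that input from competent, trusted reviewers carries more influence. This is a minimal illustrative model in Python; the Review fields, the multiplicative weighting, and all example scores are assumptions for illustration, not an aggregation formula specified by the paper.

```python
from dataclasses import dataclass

@dataclass
class Review:
    """One community rating of a paper (hypothetical structure)."""
    rating: float           # e.g. 1.0 (poor) to 5.0 (excellent)
    competence: float       # assumed reviewer competence in [0, 1]
    trustworthiness: float  # assumed reviewer trust score in [0, 1]

def aggregate_score(reviews: list[Review]) -> float:
    """Weighted average of ratings, where each weight is the product
    of the reviewer's competence and trustworthiness."""
    weights = [r.competence * r.trustworthiness for r in reviews]
    total = sum(weights)
    if total == 0:
        raise ValueError("no usable reviews")
    return sum(r.rating * w for r, w in zip(reviews, weights)) / total

# Two competent, trusted reviewers outweigh one untrusted low rating:
score = aggregate_score([
    Review(rating=4.5, competence=0.9, trustworthiness=0.8),
    Review(rating=4.0, competence=0.7, trustworthiness=0.9),
    Review(rating=1.0, competence=0.2, trustworthiness=0.1),
])
print(round(score, 2))  # ≈ 4.22
```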

Keywords: Peer Review, Alternative, Collaboration, Document Evaluation, Rating, Annotations.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1070317

