Title
Academic integrity: differences between computing assessments and essays
Abstract
There appears to be a reasonably common understanding about plagiarism and collusion in essays and other assessment items written in prose text. However, most assessment items in computing are not based in prose. There are computer programs, databases, spreadsheets, and web designs, to name but a few. It is far from clear that the same sort of consensus about plagiarism and collusion applies when dealing with such assessment items; and indeed it is not clear that computing academics have the same core beliefs about originality of authorship as apply in the world of prose. We have conducted focus groups at three Australian universities to investigate what academics and students in computing think constitute breaches of academic integrity in non-text-based assessment items; how they regard such breaches; and how academics discourage such breaches, detect them, and deal with those that are found. We find a general belief that non-text-based computing assessments differ in this regard from text-based assessments, that the boundaries between acceptable and unacceptable practice are harder to define than they are for text assessments, and that there is a case for applying different standards to these two different types of assessment. We conclude by discussing what we can learn from these findings.
Year: 2013
DOI: 10.1145/2526968.2526971
Venue: Koli Calling
Field: Academic integrity, sort, Psychology, Originality, Pedagogy, Focus group, Collusion
DocType: Conference
Citations: 8
PageRank: 0.68
References: 15
Authors: 5
Name            Order   Citations   PageRank
Simon           1       3204        0.39
Beth Cook       2       23          1.85
Judy Sheard     3       4446        0.95
Angela Carbone  4       8           0.68
Chris Johnson   5       51          7.03