Abstract |
---|
Replications play an important role in verifying empirical results. In this paper, we discuss our experiences performing a literal replication of a human subjects experiment that examined the relationship between a simple test for consistent use of mental models and success in an introductory programming course. We encountered many difficulties in achieving comparability with the original experiment, due to a series of apparently minor differences in context. Based on this experience, we discuss the relative merits of replication, and suggest that, for some human subjects studies, literal replication may not be the most effective strategy for validating the results of previous studies. |
Year | DOI | Venue |
---|---|---|
2008 | 10.1145/1368088.1368115 | ICSE, Leipzig |
Keywords | Field | DocType |
---|---|---|
computer science education,programming,software engineering,human subjects experiment,introductory programming course,literal replication,mental models,empirical,experience report,human subjects,replication | Permission,Software engineering,Computer science,Human–computer interaction,Comparability | Conference |
ISSN | ISBN | Citations |
---|---|---|
0270-5257 | 978-1-60558-079-1 (e-ISBN) | 34 |
PageRank | References | Authors |
---|---|---|
1.63 | 7 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Jonathan Lung | 1 | 34 | 1.96 |
Jorge Aranda | 2 | 345 | 17.79 |
Steve Easterbrook | 3 | 2654 | 165.58 |
Gregory V. Wilson | 4 | 34 | 1.63 |