Abstract
---
Flawed problem comprehension leads students to produce flawed implementations. However, testing alone is inadequate for checking comprehension: if a student develops both their tests and implementation with the same misunderstanding, running their tests against their implementation will not reveal the issue. As a solution, some pedagogies encourage the creation of input-output examples independent of testing, but seldom provide students with any mechanism to check that their examples are correct and thorough.

We propose a mechanism that provides students with instant feedback on their examples, independent of their implementation progress. We assess the impact of such an interface on an introductory programming course and find several positive impacts, some more neutral outcomes, and no identified negative effects.
Year | DOI | Venue |
---|---|---|
2019 | 10.1145/3291279.3339416 | Proceedings of the 2019 ACM Conference on International Computing Education Research |
Keywords | Field | DocType
---|---|---|
automated assessment, examplar, examples, testing | Software engineering, Computer science, Knowledge management, Implementation, Comprehension, Executable | Conference

ISBN | Citations
---|---|
978-1-4503-6185-9 | 0

PageRank | References | Authors
---|---|---|
0.34 | 0 | 2
Name | Order | Citations | PageRank
---|---|---|---|
John Wrenn | 1 | 0 | 1.35 |
Shriram Krishnamurthi | 2 | 2446 | 178.81 |