Title
An empirical comparison between monkey testing and human testing (WIP paper)
Abstract
Android app testing is challenging and time-consuming because fully testing all feasible execution paths is difficult. Apps are typically tested in two ways: by humans or by automated tools. Prior work compared different automated tools, but some fundamental questions remain unexplored, including (1) how automated testing behaves differently from human testing, and (2) whether automated testing can fully or partially substitute for human testing. This paper presents a study that explores these open questions. Monkey has been considered one of the best automated testing tools due to its usability, reliability, and competitive coverage metrics, so we applied Monkey to five Android apps and collected their dynamic event traces. Meanwhile, we recruited eight users to manually test the same apps and gathered their traces. By comparing the collected data, we found that (i) on average, the two methods generated similar numbers of unique events; (ii) Monkey created more system events while humans created more UI events; (iii) Monkey could mimic human behavior when an app's UI is full of clickable widgets that trigger logically independent events; and (iv) Monkey was insufficient for testing apps that require information comprehension and problem-solving skills. Our research sheds light on future work that combines human expertise with the agility of Monkey testing.
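To illustrate the kind of automated setup the abstract describes, the sketch below drives the Android Monkey tool through adb from Python and saves its verbose output for later trace analysis. It is a minimal illustration only; the package name, event count, and random seed are hypothetical placeholders, since the paper does not report its exact configuration.

    # Illustrative sketch: run Monkey against one app via adb and keep its
    # verbose event log. Package name, event count, and seed are hypothetical.
    import subprocess

    def run_monkey(package: str, num_events: int = 500, seed: int = 42) -> str:
        """Run Monkey against one app and return its verbose event log."""
        cmd = [
            "adb", "shell", "monkey",
            "-p", package,        # restrict injected events to the app under test
            "-s", str(seed),      # fixed seed so the pseudo-random run is repeatable
            "-v", "-v", "-v",     # maximum verbosity: each injected event is logged
            str(num_events),      # total number of events to inject
        ]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        log = run_monkey("com.example.app")   # hypothetical package name
        with open("monkey_trace.log", "w") as f:
            f.write(log)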
Year
2019
DOI
10.1145/3316482.3326342
Venue
Proceedings of the 20th ACM SIGPLAN/SIGBED International Conference on Languages, Compilers, and Tools for Embedded Systems
Keywords
Empirical, Monkey testing, human testing
Field
Empirical comparison, Industrial engineering, Computer science, Parallel computing
DocType
Conference
ISBN
978-1-4503-6724-0
Citations
1
PageRank
0.36
References
0
Authors
3
Name | Order | Citations | PageRank
Mostafa Mohammed | 1 | 1 | 0.36
Haipeng Cai | 2 | 1 | 0.69
Na Meng | 3 | 110 | 9.72