Abstract
---
The Automated Turing Test (ATT) is almost a standard security technique for addressing the threat of undesirable or malicious bot programs. In this paper, we motivate an interesting adversary model, cyborgs, which are either humans assisted by bots or bots assisted by humans. Since there is always a human behind these bots, or a human can always be available on demand, the ATT fails to differentiate such cyborgs from humans. The notion of "telling humans and cyborgs apart" is novel, and it can be of practical relevance in network security. Although it is a challenging task, we have had some success in telling cyborgs and humans apart automatically.
Year | DOI | Venue
---|---|---
2006 | 10.1007/978-3-642-04904-0_26 | Security Protocols Workshop

Keywords | DocType | Citations
---|---|---
turing test, network security | Conference | 8

PageRank | References | Authors
---|---|---
1.49 | 4 | 1
Name | Order | Citations | PageRank
---|---|---|---
Jianxin Jeff Yan | 1 | 876 | 63.76