Abstract |
---|
Developing trust in intelligent agents requires understanding the full capabilities of the agent, including the boundaries beyond which the agent is not designed to operate. This paper focuses on applying formal verification methods to identify these boundary conditions in order to ensure the proper design for the effective operation of the human-agent team. The approach involves creating an executable specification of the human-machine interaction in a cognitive architecture, which incorporates the expression of learning behavior. The model is then translated into a formal language, where verification and validation activities can occur in an automated fashion. We illustrate our approach through the design of an intelligent copilot that teams with a human in a takeoff operation, in which a contingency scenario involving an engine-out may be executed. The formal verification and counterexample generation enable increased confidence in the designed procedures and behavior of the intelligent copilot system. |
Year | DOI | Venue
---|---|---
2018 | 10.1007/978-3-319-77935-5_2 | Lecture Notes in Computer Science

Keywords | Field | DocType
---|---|---
Formal verification, Intelligent agents, Human-machine teams | Autonomous agent, Intelligent agent, Formal language, Verification and validation, Software engineering, Computer science, Counterexample, Cognitive architecture, Formal verification, Executable | Conference

Volume | ISSN | Citations
---|---|---
10811 | 0302-9743 | 0

PageRank | References | Authors
---|---|---
0.34 | 14 | 5

Name | Order | Citations | PageRank |
---|---|---|---|
Siddhartha Bhattacharyya | 1 | 0 | 0.34 |
Thomas C. Eskridge | 2 | 118 | 12.33 |
Natasha Neogi | 3 | 1 | 1.41
Marco M. Carvalho | 4 | 128 | 18.44 |
Milton Stafford | 5 | 0 | 0.34 |