Title
Engineering trust alignment: Theory, method and experimentation
Abstract
In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions. However, in such open systems the agents do not necessarily use the same, or even similar, trust models, leading to semantic differences between the trust evaluations of different agents. Hence, to make successful use of communicated trust evaluations, the agents need to align their trust models. We show that currently proposed solutions, such as common ontologies or ontology alignment methods, introduce additional problems, and we propose a novel approach. We show how a trust alignment can be formed by considering the interactions the agents share, and we describe a mathematical framework that formulates precisely how these interactions support each agent's trust evaluations. We show how this framework can be used in the alignment process and explain how an alignment should be learned. Finally, we demonstrate the alignment process in practice, using a first-order regression algorithm to learn an alignment and testing it in an example scenario.
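The abstract describes learning a translation between two agents' trust evaluations from the interactions they share. The sketch below is illustrative only: it uses plain least-squares regression over numeric evaluations as a simplified stand-in for the first-order (relational) regression the paper employs, and all names and data are hypothetical.

# Illustrative sketch, not the paper's algorithm: aligning one agent's trust
# evaluations to another's over shared interactions, using simple least-squares
# regression in place of first-order (relational) regression.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SharedInteraction:
    """An interaction observed by both agents, each attaching its own evaluation."""
    own_evaluation: float    # how the receiving agent A evaluated the interaction, e.g. in [0, 1]
    other_evaluation: float  # the evaluation agent B communicated for the same interaction

def learn_alignment(shared: List[SharedInteraction]) -> Tuple[float, float]:
    """Fit other -> own as a linear map (slope, intercept) by least squares."""
    xs = [s.other_evaluation for s in shared]
    ys = [s.own_evaluation for s in shared]
    n = len(shared)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    return slope, mean_y - slope * mean_x

def translate(other_evaluation: float, alignment: Tuple[float, float]) -> float:
    """Re-express a communicated evaluation in the receiving agent's own terms."""
    slope, intercept = alignment
    return slope * other_evaluation + intercept

if __name__ == "__main__":
    # Hypothetical shared history: agent B rates on a stricter scale than agent A.
    history = [
        SharedInteraction(own_evaluation=0.9, other_evaluation=0.7),
        SharedInteraction(own_evaluation=0.6, other_evaluation=0.4),
        SharedInteraction(own_evaluation=0.3, other_evaluation=0.1),
    ]
    alignment = learn_alignment(history)
    print(translate(0.5, alignment))  # B's 0.5, translated into A's terms (0.7 here)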
Year
2012
DOI
10.1016/j.ijhcs.2012.02.007
Venue
Int. J. Hum.-Comput. Stud.
Keywords
engineering trust alignment, ontology alignment method, interactions support trust evaluation, open system, agents share, trust model, mathematical framework, trust alignment, open multi-agent systems trust, alignment process, trust evaluation, regression, trust
Field
Ontology (information science), Ontology alignment, Computer science, Knowledge management, Human–computer interaction, Open system (systems theory)
DocType
Journal
Volume
70
Issue
6
ISSN
1071-5819
Citations
7
PageRank
0.44
References
30
Authors
3
Name
Order
Citations
PageRank
Andrew Koster1478.27
W. Marco Schorlemmer2111385.18
Jordi Sabater-Mir357341.11