Abstract |
---|
We are interested in securing the operation of robot swarms composed of heterogeneous agents that collaborate by exploiting aggregation mechanisms. Since any given robot type plays a role that may be critical in guaranteeing continuous and failure-free operation of the system, it is beneficial to conceal individual robot types and, thus, their roles. In our work, we assume that an adversary gains access to a description of the dynamic state of the swarm in its non-transient, nominal regime. We propose a method that quantifies how easy it is for the adversary to identify the type of any of the robots, based on this observation. We draw from the theory of differential privacy to propose a closed-form expression of the leakage of the system at steady-state. Our results show how this model enables an analysis of the leakage as system parameters vary; they also indicate design rules for increasing privacy in aggregation mechanisms. |
Year | DOI | Venue |
---|---|---|
2016 | 10.1007/978-3-319-73008-0_41 | Springer Proceedings in Advanced Robotics |
Field | DocType | Volume |
---|---|---|
Differential privacy, Swarm behaviour, Computer science, Adversary, Robot, Distributed computing | Conference | 6 |
ISSN | Citations | PageRank |
---|---|---|
2511-1256 | 0 | 0.34 |
References | Authors |
---|---|
0 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Amanda Prorok | 1 | 97 | 9.17 |
Vijay Kumar | 2 | 7086 | 693.29 |