Title
Governing AI Safety Through Independent Audits
Abstract
Highly automated systems are becoming omnipresent. They range in function from self-driving vehicles to advanced medical diagnostics and afford many benefits. However, there are assurance challenges that have become increasingly visible in high-profile crashes and incidents. Governance of such systems is critical to garner widespread public trust. Governance principles have been previously proposed offering aspirational guidance to automated system developers; however, their implementation is often impractical given the excessive costs and processes required to enact and then enforce the principles. This Perspective, authored by an international and multidisciplinary team across government organizations, industry and academia, proposes a mechanism to drive widespread assurance of highly automated systems: independent audit. As proposed, independent audit of AI systems would embody three 'AAA' governance principles: prospective risk Assessments, operation Audit trails and system Adherence to jurisdictional requirements. Independent audit of AI systems serves as a pragmatic approach to an otherwise burdensome and unenforceable assurance challenge.
Year
2021
DOI
10.1038/s42256-021-00370-7
Venue
NATURE MACHINE INTELLIGENCE
DocType
Journal
Volume
3
Issue
7
Citations
1
PageRank
0.35
References
0
Authors
20