Title
Safe Baby AGI
Abstract
Out of fear that artificial general intelligence (AGI) might pose a future risk to human existence, some have suggested slowing or stopping AGI research to allow time for theoretical work to guarantee its safety. Since an AGI system will necessarily be a complex closed-loop learning controller that lives and works in semi-stochastic environments, its behaviors are not fully determined by its design and initial state, so no mathematico-logical guarantees can be provided for its safety. Until actual running AGI systems exist that can be thoroughly analyzed and studied (and there is as yet no consensus on how to create them), any proposal concerning their safety can only be based on weak conjecture. As any practical AGI will unavoidably start in a relatively harmless baby-like state, subject to the nurture and education that we provide, we argue that our best hope of achieving safe AGI is to provide it with proper education.
Year
2015
DOI
10.1007/978-3-319-21365-1_5
Venue
Artificial General Intelligence (AGI 2015)
Keywords
Artificial intelligence, Nurture, Nature, AI safety, Friendly AI
Field
Nature versus nurture, Computer science, Artificial general intelligence, Artificial intelligence, Conjecture, Learning controller
DocType
Conference
Volume
9205
ISSN
0302-9743
Citations
0
PageRank
0.34
References
5
Authors
3
Name                Order  Citations  PageRank
Jordi Bieger        1      1          22.84
Kristinn Thorisson  2      658        94.55
Pei Wang            3      2          15.09