Originally shared by Gideon Rosenblatt

Speaking in Cambridge on Friday (20 February), Murray Shanahan, professor of cognitive robotics at Imperial College London, said that in order to nullify the existential threat AI could pose to mankind, any "human-level AI" - or artificial general intelligence (AGI) - should also be "human-like".

Shanahan suggested that if the forces driving us towards the development of human-level AI are unstoppable, then there are two options: either a potentially dangerous AGI is developed, based on a ruthless optimisation process with no moral reasoning, or an AGI is created based on the psychological and perhaps even neurological blueprint of humans.

"Right now my vote is for option two, in the hope that it will lead to a form of harmonious co-existence (with humanity)," Shanahan said.

http://www.ibtimes.co.uk/ai-should-be-human-like-capable-empathy-avoid-existential-threat-mankind-1489168