May 19, 2016

"The possibility that a malevolent artificial intelligence might pose a serious threat to humankind has become a hotly debated issue."

Technology Review: Various high-profile individuals, from the physicist Stephen Hawking to the tech entrepreneur Elon Musk, have warned of the danger. via Emerging Technology from the arXiv

'Which is why the field of artificial intelligence safety is emerging as an important discipline. Computer scientists have begun to analyze the unintended consequences of poorly designed AI systems, of systems built on faulty ethical frameworks, or of ones that do not share human values.

'But there’s an important omission in this field, say independent researchers Federico Pistono and Roman Yampolskiy from the University of Louisville in Kentucky. “Nothing, to our knowledge, has been published on how to design a malevolent machine,” they say.

'That’s a significant problem because computer security specialists must understand the beast they are up against before they can hope to defeat it.

'Today, Pistono and Yampolskiy attempt to put that right, at least in part. The key point they make is that a malevolent AI is most likely to emerge only in certain environments, so they set out the conditions in which such a system could arise. Their conclusions will make for uncomfortable reading for one or two companies.'

"Unethical Research: How to Create a Malevolent Artificial Intelligence" by Federico Pistono and Roman V. Yampolskiy here
