April 15, 2017

"A growing field of research proves that artificial intelligence can be fooled...seeing one thing where humans would see something else entirely."

BBC: As machine learning algorithms increasingly find their way into our roads, our finances and our healthcare system, computer scientists hope to learn more about how to defend them against these “adversarial” attacks – before someone tries to bamboozle them for real. By Aviva Hope Rutkin

'“It’s something that’s a growing concern in the machine learning and AI community, especially because these algorithms are being used more and more,” says Daniel Lowd, assistant professor of computer and information science at the University of Oregon. “If spam gets through or a few emails get blocked, it’s not the end of the world. On the other hand, if you’re relying on the vision system in a self-driving car to know where to go and not crash into anything, then the stakes are much higher.”

'Whether a smart machine malfunctions, or is hacked, hinges on the very different way that machine learning algorithms see the world. Researchers have found that adding a tiny, carefully crafted perturbation to an image, a change too small for a person to notice, can make a machine see a panda as a gibbon, or read a school bus as an ostrich.

'In one experiment, researchers from France and Switzerland showed how such perturbations could cause a computer to mistake a squirrel for a grey fox, or a coffee pot for a macaw.'
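The article doesn't name the technique behind these perturbations. One widely known method is the fast gradient sign method (FGSM), and the sketch below shows the core idea in Python with PyTorch; the model choice, image shape and parameter values are illustrative assumptions, not details from the article.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Any pretrained image classifier will do for this sketch; ResNet-18 is
# an arbitrary choice, not a model named in the article.
model = models.resnet18(pretrained=True).eval()

def fgsm_perturb(image, label, epsilon=0.01):
    # `image`: a (1, 3, 224, 224) tensor with values in [0, 1];
    # `label`: the correct class index as a LongTensor of shape (1,).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel by at most `epsilon` in the direction that
    # increases the classification loss. The change is imperceptible
    # to a person but can often flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

A small epsilon keeps the altered image visually identical to the original, which is exactly why these attacks worry researchers: the panda still looks like a panda to us, but not to the machine.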
