September 27, 2016

"Quoc [Le] and his colleagues at Google rolled out a new translation system that uses massive amounts of data and increased processing power to build more accurate translations."

Science: The new system, a deep learning model known as neural machine translation, effectively trains itself—and reduces translation errors by up to 87%. By Catherine Matacic

“This … demonstrates like never before the power of neural machine translation,” says Yoshua Bengio, a computer scientist at the University of Montreal in Canada, who helped invent one of the critical components of the new system several years ago, but who was not involved in the current work.

Neural machine translation has been a latecomer to the game of deep learning, a method of making predictions about everything from effective marketing pitches to potential drug candidates. It works by feeding large sets of data through layers of interconnected processors. The processors—modeled after the brain’s networks of neurons—are first trained by humans on actual translations and then let loose on new sets of data. Well-calibrated processors can pick up on subtle cues in the data, transform them, and send them to the next level for further processing and translation. Deep learning is what allows Apple’s “personal assistant,” Siri, to pick up on (most) human speech, and it’s what lets Facebook’s image recognition software use small visual cues to pick out things like individual faces.
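The idea of data flowing through stacked layers of "processors" can be sketched in a few lines. This is a minimal toy illustration, not Google's system: the weights below are made-up numbers standing in for values a real network would learn from training data.

```python
import math

def layer(inputs, weights, biases):
    """One layer: each unit takes a weighted sum of its inputs,
    adds a bias, and applies a tanh nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two stacked layers: 3 inputs -> 2 hidden units -> 1 output.
# All weights here are hypothetical, chosen only for illustration.
hidden_w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
hidden_b = [0.0, 0.1]
out_w = [[1.0, -1.0]]
out_b = [0.0]

x = [0.2, 0.7, -0.1]       # raw input data
h = layer(x, hidden_w, hidden_b)   # first layer transforms the input
y = layer(h, out_w, out_b)         # next layer processes that result
print(y)
```

Training adjusts the weights so that outputs match known examples; once calibrated, the same layered pass is "let loose" on new data.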

But many people, Quoc says, think that translating language requires deeper cognitive abilities. “For instance, it takes us a fraction of a second to recognize an image or understand … audio. But it takes more than 1 second even for me to translate an English sentence to Chinese.”

The new method, reported today on the preprint server arXiv, uses a total of 16 processors to first transform words into a value known as a vector. What is a vector? “We don’t know exactly,” Quoc says. But it represents how related one word is to every other word in the vast dictionary of training materials (2.5 billion sentence pairs for English and French; 500 million for English and Chinese). For example, “dog” is more closely related to “cat” than “car,” and the name “Barack Obama” is more closely related to “Hillary Clinton” than the name for the country “Vietnam.” The system uses vectors from the input language to come up with a list of possible translations that are ranked based on their probability of occurrence.
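The "dog is closer to cat than to car" idea can be made concrete with toy word vectors and cosine similarity. The three-dimensional vectors below are invented for illustration; real systems learn vectors with hundreds of dimensions from billions of sentence pairs.

```python
import math

# Hypothetical word vectors (made up for this example).
# Related words get similar directions in the vector space.
vectors = {
    "dog": [0.9, 0.8, 0.1],
    "cat": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.95],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(y * y for y in b)))
    return dot / norm

print(cosine(vectors["dog"], vectors["cat"]))  # high: closely related
print(cosine(vectors["dog"], vectors["car"]))  # lower: less related
```

With learned vectors like these as input, the system can score candidate translations and rank them by probability.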

"Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" by Quoc V. Le et al. (PDF) here

"A Neural Network for Machine Translation, at Production Scale" by Quoc V. Le & Mike Schuster here
