engadget (Jon Fingas): The paper tackles the problem of teaching AI to recognize objects using simulated images, which are easier to work with than photos (since you don't need a human to tag items) but poorly suited to preparing for real-world situations.
'The trick, Apple says, is to use the increasingly popular technique of pitting neural networks against each other: one network trains itself to improve the realism of simulated images (in this case, using photo examples) until they're good enough to fool a rival "discriminator" network. Ideally, this pre-training would save massive amounts of time and account for hard-to-predict situations that don't always turn up in photos.
'This doesn't mean that Apple is suddenly an open book. It could take years before it's clear how transparent Apple has become with its scientific findings. However, this is a big step -- if also a necessary one. AI is an increasingly competitive field, and Apple's past reluctance to contribute to scientific knowledge may have scared away potential hires who wanted their discoveries recognized. If papers like these become relatively commonplace, Apple might have an easier time attracting the talent it needs for self-driving car platforms, Siri and other AI-based projects.'
The paper: "Learning from Simulated and Unsupervised Images through Adversarial Training" by Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, and Russ Webb.
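The adversarial setup the quote describes — a refiner network nudging simulated data until a discriminator can no longer tell it from real data — can be sketched in one dimension. This is a toy illustration only, not the paper's SimGAN method (which refines images with a convolutional network and adds a self-regularization loss, among other details); the linear refiner, logistic discriminator, learning rates, and distributions below are all illustrative assumptions.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_toy_refiner(steps=3000, batch=256, lr=0.05, decay=0.1, seed=0):
    """Toy 1-D adversarial refinement: 'simulated' samples from N(2, 1) are
    shifted by a learned offset b until a logistic discriminator can no
    longer tell them apart from 'real' samples drawn from N(0, 1)."""
    rng = random.Random(seed)
    b = 0.0           # refiner parameter: refined = simulated + b
    w, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(w * x + c)
    for _ in range(steps):
        real = [rng.gauss(0.0, 1.0) for _ in range(batch)]
        sim = [rng.gauss(2.0, 1.0) for _ in range(batch)]
        # --- discriminator step: push D(real) -> 1, D(refined) -> 0 ---
        grad_w = decay * w  # small weight decay damps the two-player oscillation
        grad_c = 0.0
        for x in real:
            d = sigmoid(w * x + c)
            grad_w -= (1 - d) * x / batch
            grad_c -= (1 - d) / batch
        for s in sim:
            y = s + b
            d = sigmoid(w * y + c)
            grad_w += d * y / batch
            grad_c += d / batch
        w -= lr * grad_w
        c -= lr * grad_c
        # --- refiner step: shift refined samples so D(refined) -> 1 ---
        grad_b = sum((1 - sigmoid(w * (s + b) + c)) * w for s in sim) / batch
        b += lr * grad_b  # non-saturating "fool the discriminator" update
    return b

offset = train_toy_refiner()
# The refiner learns to shift the simulated distribution (mean 2) onto the
# real one (mean 0), so the learned offset should land near -2.
```

The same dynamic, scaled up from a single scalar offset to a convolutional network over images, is what lets the refined simulated images become realistic enough to fool the rival discriminator.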