"Rectified activation units (rectifiers) are essential for
state-of-the-art neural networks. In this work, we study
rectifier neural networks for image classification from two
aspects. First, we propose a Parametric Rectified Linear
Unit (PReLU) that generalizes the traditional rectified unit.
PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to
train extremely deep rectified models directly from scratch
and to investigate deeper or wider network architectures.
Based on our PReLU networks (PReLU-nets), we achieve
4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the
ILSVRC 2014 winner (GoogLeNet, 6.66%). To our
knowledge, our result is the first to surpass human-level performance (5.1%) on this visual recognition challenge."
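In numpy terms, the two contributions can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names and the fully-connected weight shape are assumptions, though the PReLU definition (identity for positive inputs, a learned slope for negative inputs) and the initialization standard deviation sqrt(2/n_l) follow the paper's formulas; in the paper the same rule is applied to convolutional fan-in.

```python
import numpy as np

def prelu(y, a):
    """Parametric ReLU: f(y) = y for y > 0, and a*y otherwise.
    Unlike ReLU's fixed zero slope, `a` is a learnable parameter
    (the paper learns one coefficient per channel)."""
    return np.where(y > 0, y, a * y)

def he_init(fan_in, fan_out, rng=None):
    """Rectifier-aware initialization: zero-mean Gaussian weights
    with standard deviation sqrt(2 / fan_in), which keeps the
    variance of activations stable across rectified layers."""
    rng = rng or np.random.default_rng(0)
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
```

Note that with a = 0 PReLU reduces to ReLU, and with a small fixed a it reduces to Leaky ReLU, which is the sense in which it generalizes the traditional rectified unit.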
"While our algorithm produces a superior result on this
particular dataset, this does not indicate that machine vision
outperforms human vision on object recognition in general.
On recognizing elementary object categories (i.e., common objects or concepts in daily life) such as the Pascal VOC task, machines still make obvious errors in cases that are
trivial for humans. Nevertheless, we believe that our results show the tremendous potential of machine algorithms
to match human-level performance on visual recognition."