Paper: Rethinking the Usage of Batch Normalization and Dropout in the Training of Deep Neural Networks

"In this work, we propose a novel technique to boost training efficiency of a neural network. Our work is based on an excellent idea that whitening the inputs of neural networks can achieve a fast convergence speed"

"we propose to implement an IC layer by combining two popular techniques, Batch Normalization and Dropout"

"we should not place Batch Normalization before ReLU since the nonnegative responses of ReLU will make the weight layer updated in a suboptimal way, and we can achieve better performance by combining Batch Normalization and Dropout together as an IC layer."


