Lin et al. (2015)[1] present impressive results on training and using neural networks with far fewer multiplications (and therefore potentially faster execution). Their method binarizes the weights in the forward pass and quantizes the error signals in the backward pass, so that most multiplications are replaced by sign changes and bit shifts. Experiments on three standard image recognition benchmarks show not only comparable but even slightly improved performance relative to standard neural networks.
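
The following is a minimal NumPy sketch of the two ingredients described above, assuming a single fully connected layer; the helper names (stochastic_binarize, quantize_pow2, hard_sigmoid) are illustrative and not taken from the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_sigmoid(x):
    # Clip (x + 1) / 2 into [0, 1]; used as the binarization probability.
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

def stochastic_binarize(w):
    # Forward pass: sample each weight to +1 or -1 with probability hard_sigmoid(w),
    # so the binary weight equals the real-valued weight in expectation.
    p = hard_sigmoid(w)
    return np.where(rng.random(w.shape) < p, 1.0, -1.0)

def quantize_pow2(x, eps=1e-8):
    # Backward pass: round magnitudes to the nearest power of two so that
    # multiplying by x could be replaced by a bit shift in fixed-point hardware.
    sign = np.sign(x)
    exponent = np.round(np.log2(np.abs(x) + eps))
    return sign * 2.0 ** exponent

# Toy usage: forward with binarized weights, backward with power-of-two
# quantized inputs, update applied to the kept real-valued weights.
W = rng.normal(scale=0.1, size=(4, 3))   # real-valued weights kept for updates
x = rng.normal(size=(2, 4))              # a batch of 2 inputs

Wb = stochastic_binarize(W)              # multiplications become sign flips
y = x @ Wb                               # forward pass

grad_y = rng.normal(size=y.shape)        # stand-in for the upstream gradient
grad_W = quantize_pow2(x).T @ grad_y     # gradient uses quantized values
W -= 0.01 * grad_W                       # update the real-valued weights
```

Keeping a real-valued copy of the weights for the update, while using only their binarized version in the forward pass, is what lets small gradient steps accumulate despite the coarse quantization.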

References

  1. Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio. 2015. Neural Networks with Few Multiplications.