Appropriateness of Numbers of Receptive Fields in Convolutional Neural Networks Based on Classifying CIFAR-10 and EEACL26 Datasets

Authors

  • Vadim Romanuke, Polish Naval Academy

DOI:

https://doi.org/10.2478/ecce-2018-0019

Keywords:

Convolutional neural networks, Convolutional layers, Filters, Performance, Receptive fields

Abstract

The topical question studied in this paper is how many receptive fields (filters) a convolutional layer of a convolutional neural network should have. The goal is to find a rule for choosing the most appropriate numbers of filters. The benchmark datasets are the principally diverse CIFAR-10 and EEACL26, classified with a common network architecture of three convolutional layers whose numbers of filters are varied. The heterogeneity and sensitivity of CIFAR-10, together with the infiniteness and scalability of EEACL26, are believed to be sufficient for generalizing the appropriateness of filter numbers. The appropriateness rule is drawn from the top accuracies obtained over 10 × 20 × 21 parallelepipeds of filter-number configurations, one for each of three image sizes. They show that, once the number of filters of the first convolutional layer is set (greater for the more complex dataset), the appropriate numbers of filters for the remaining convolutional layers are integer multiples of that number. The multipliers form a sequence similar to a progression, e.g., 1, 3, 9, 15 or 1, 2, 8, 16. Such a rule of progression, however, does not by itself give the number of filters for the first convolutional layer.
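As a rough illustration of the rule described in the abstract, the Python sketch below derives per-layer filter counts from a chosen first-layer count and a multiplier progression. The function name, the example first-layer count of 32, and the choice of the 1, 3, 9 multipliers are illustrative assumptions, not values prescribed by the paper; the first-layer count still has to be chosen separately, e.g., according to the dataset's complexity.

def filters_per_layer(first_layer_filters, multipliers=(1, 3, 9)):
    """Derive filter counts for each convolutional layer.

    Rule from the abstract: pick the first layer's filter count
    (greater for the more complex dataset), then set the remaining
    layers to integer multiples of it, with multipliers forming a
    progression such as 1, 3, 9, 15 or 1, 2, 8, 16.
    """
    return [first_layer_filters * m for m in multipliers]

# Hypothetical example: 32 filters in the first layer with the
# 1, 3, 9 progression for a three-convolutional-layer network.
print(filters_per_layer(32, (1, 3, 9)))  # [32, 96, 288]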



Published

01.12.2018

How to Cite

Romanuke, V. (2018). Appropriateness of Numbers of Receptive Fields in Convolutional Neural Networks Based on Classifying CIFAR-10 and EEACL26 Datasets. Electrical, Control and Communication Engineering, 14(2), 157–163. https://doi.org/10.2478/ecce-2018-0019