[1] Lin M, Chen Q, Yan S. Network in network[J]. arXiv preprint arXiv:1312.4400, 2013.
[2] Zeiler M D, Fergus R. Visualizing and understanding convolutional networks[C]//European conference on computer vision. Springer, Cham, 2014: 818-833.
[3] Iandola F N, Han S, Moskewicz M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size[J]. arXiv preprint arXiv:1602.07360, 2016.
[4] Jin J, Dundar A, Culurciello E. Flattened convolutional neural networks for feedforward acceleration[J]. arXiv preprint arXiv:1412.5474, 2014.
[5] Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2818-2826.
[6] Sifre L, Mallat S. Rigid-motion scattering for texture classification[J]. arXiv preprint arXiv:1403.1687, 2014.
[7] Chollet F. Xception: Deep learning with depthwise separable convolutions[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 1251-1258.
[8] Howard A G, Zhu M, Chen B, et al. Mobilenets: Efficient convolutional neural networks for mobile vision applications[J]. arXiv preprint arXiv:1704.04861, 2017.
[9] Sandler M, Howard A, Zhu M, et al. Mobilenetv2: Inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 4510-4520.
[10] Zhang X, Zhou X, Lin M, et al. Shufflenet: An extremely efficient convolutional neural network for mobile devices[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 6848-6856.
[11] Ma N, Zhang X, Zheng H T, et al. Shufflenet v2: Practical guidelines for efficient cnn architecture design[C]//Proceedings of the European conference on computer vision (ECCV). 2018: 116-131.
[12] Zhang T, Qi G J, Xiao B, et al. Interleaved group convolutions[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 4373-4382.
[13] Xie G, Wang J, Zhang T, et al. Interleaved structured sparse convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8847-8856.
[14] Sun K, Li M, Liu D, et al. Igcv3: Interleaved low-rank group convolutions for efficient deep neural networks[J]. arXiv preprint arXiv:1806.00178, 2018.
[15] Tan M, Le Q V. MixNet: Mixed depthwise convolutional kernels[J]. arXiv preprint arXiv:1907.09595, 2019.
[16] Chen C F, Fan Q, Mallinar N, et al. Big-little net: An efficient multi-scale feature representation for visual and speech recognition[J]. arXiv preprint arXiv:1807.03848, 2018.
[17] Chen Y, Fang H, Xu B, et al. Drop an octave: Reducing spatial redundancy in convolutional neural networks with octave convolution[J]. arXiv preprint arXiv:1904.05049, 2019.
[18] Gennari M, Fawcett R, Prisacariu V A. DSConv: Efficient Convolution Operator[J]. arXiv preprint arXiv:1901.01928, 2019.
[19] Shang W, Sohn K, Almeida D, et al. Understanding and improving convolutional neural networks via concatenated rectified linear units[C]//International Conference on Machine Learning. PMLR, 2016: 2217-2225.
[20] Han K, Wang Y, Tian Q, et al. GhostNet: More features from cheap operations[J]. arXiv preprint arXiv:1911.11907, 2019.
[21] Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 4700-4708.
[22] Jin X, Yang Y, Xu N, et al. Wsnet: Compact and efficient networks through weight sampling[C]//International Conference on Machine Learning. PMLR, 2018: 2352-2361.
[23] Zhou D, Jin X, Wang K, et al. Deep model compression via filter auto-sampling[J]. arXiv preprint, 2019.
[24] Huang G, Sun Y, Liu Z, et al. Deep networks with stochastic depth[C]//European conference on computer vision. Springer, Cham, 2016: 646-661.
[25] Veit A, Wilber M J, Belongie S. Residual networks behave like ensembles of relatively shallow networks[C]//Advances in neural information processing systems. 2016: 550-558.
[26] Teerapittayanon S, McDanel B, Kung H T. Branchynet: Fast inference via early exiting from deep neural networks[C]//2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016: 2464-2469.
[27] Graves A. Adaptive computation time for recurrent neural networks[J]. arXiv preprint arXiv:1603.08983, 2016.
[28] Figurnov M, Collins M D, Zhu Y, et al. Spatially adaptive computation time for residual networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1039-1048.
[29] Wu Z, Nagarajan T, Kumar A, et al. Blockdrop: Dynamic inference paths in residual networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8817-8826.
[30] Wang X, Yu F, Dou Z Y, et al. Skipnet: Learning dynamic routing in convolutional networks[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 409-424.
[31] Almahairi A, Ballas N, Cooijmans T, et al. Dynamic capacity networks[C]//International Conference on Machine Learning. PMLR, 2016: 2549-2558.
[32] Wu B, Wan A, Yue X, et al. Shift: A zero FLOP, zero parameter alternative to spatial convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9127-9135.
[33] Chen W, Xie D, Zhang Y, et al. All you need is a few shifts: Designing efficient convolutional neural networks for image classification[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 7241-7250.
[34] He Y, Liu X, Zhong H, et al. Addressnet: Shift-based primitives for efficient convolutional neural networks[C]//2019 IEEE Winter conference on applications of computer vision (WACV). IEEE, 2019: 1213-1222.
[35] Chen H, Wang Y, Xu C, et al. AdderNet: Do we really need multiplications in deep learning?[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 1468-1477.
[36] You H, Chen X, Zhang Y, et al. Shiftaddnet: A hardware-inspired deep network[J]. Advances in Neural Information Processing Systems, 2020, 33: 2771-2783.
[37] Li D, Wang X, Kong D. Deeprebirth: Accelerating deep neural network execution on mobile devices[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2018, 32(1).
[38] Ding X, Zhang X, Ma N, et al. Repvgg: Making vgg-style convnets great again[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 13733-13742.
[39] Li D, Hu J, Wang C, et al. Involution: Inverting the inherence of convolution for visual recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 12321-12330.
[40] Denton E L, Zaremba W, Bruna J, et al. Exploiting linear structure within convolutional networks for efficient evaluation[C]//Advances in Neural Information Processing Systems. 2014: 1269-1277.