Special Session 134: Recent advances in wavelet analysis, PDEs and dynamical systems - part II

Optimistic Sample Size Estimate for Deep Neural Networks

Yaoyu Zhang
Shanghai Jiao Tong University
People's Republic of China
Abstract:
Estimating the sample size required for a deep neural network (DNN) to accurately fit a target function is a crucial problem in deep learning. In this talk, we introduce a novel sample size estimation method, inspired by the phenomenon of condensation, which we term the optimistic estimate. This method quantitatively characterizes the best possible performance achievable by a neural network. Our findings suggest that increasing the width and depth of a DNN preserves its sample efficiency, whereas adding unnecessary connections significantly degrades it. This analysis provides theoretical support for the common practice of scaling up network width and depth rather than adding connections.