Special Session 55: Sparse signal learning and its applications in data science

Neural Network Approximation of Continuous Functions in High Dimensions with Applications to Inverse Problems

Santhosh Karnik
Michigan State University
USA
Co-Author(s): Rongrong Wang and Mark Iwen
Abstract:
The success of neural networks in a variety of inverse problems has fueled their adoption in disciplines ranging from medical imaging to seismic analysis. However, current theory predicts that network size should scale exponentially in the dimension of the problem, and so it cannot explain why the seemingly small networks used for such high-dimensional inverse problems work as well as they do in practice. To reduce this gap between theory and practice, we provide a general method for bounding the complexity required for a neural network to approximate a Hölder (or uniformly) continuous function on a high-dimensional set with a low-complexity structure. Many sets of interest in high dimensions admit low-distortion linear embeddings into lower-dimensional spaces. We exploit this fact to show that the size of a neural network needed to approximate a Hölder (or uniformly) continuous function on a low-complexity set in a high-dimensional space grows exponentially with the dimension of its low-distortion embedding, not with the dimension of the ambient space. The result is a general theoretical framework which can be used to explain the observed empirical success of smaller networks in a wider variety of inverse problems than current theory allows.
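The key mechanism in the abstract is that a low-complexity set in a high-dimensional ambient space can be mapped into a much lower-dimensional space by a linear map that nearly preserves distances. The sketch below is a minimal numerical illustration of that idea using a random Gaussian projection (in the spirit of the Johnson-Lindenstrauss lemma); it is not the authors' construction, and the choice of dimensions, the circle-shaped low-complexity set, and the scaling of the projection are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: points on a low-complexity set (a curve of
# intrinsic dimension 1) sitting in a high-dimensional ambient space.
D = 1000   # ambient dimension
d = 20     # embedding dimension (illustrative choice)
n = 200    # number of sample points

# Sample points on a planar circle, then rotate the plane into R^D.
t = rng.uniform(0, 2 * np.pi, size=n)
curve_2d = np.stack([np.cos(t), np.sin(t)], axis=1)    # shape (n, 2)
basis = np.linalg.qr(rng.standard_normal((D, 2)))[0]   # orthonormal (D, 2)
X = curve_2d @ basis.T                                 # shape (n, D)

# Random Gaussian projection: a standard way to obtain a
# low-distortion linear embedding of a low-complexity set.
A = rng.standard_normal((d, D)) / np.sqrt(d)
Y = X @ A.T                                            # shape (n, d)

# Compare all pairwise distances before and after projection.
def pdist(Z):
    diff = Z[:, None, :] - Z[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    return dists[np.triu_indices(len(Z), k=1)]

ratios = pdist(Y) / pdist(X)
print(f"distance ratios: min={ratios.min():.3f}, max={ratios.max():.3f}")

Running this, the printed ratios cluster near 1, i.e. the 20-dimensional image retains the geometry of the set even though the ambient dimension is 1000. The abstract's complexity bound is then driven by the embedding dimension d rather than the ambient dimension D.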