Abstract: |
Deep neural networks have demonstrated great success in many applications, especially for problems with high-dimensional data. Nevertheless, most existing statistical theories suffer from the curse of dimensionality and cannot explain this success. To bridge the gap between theory and practice, we exploit the low-dimensional structures of data sets and establish theoretical guarantees with a fast rate that depends only on the intrinsic dimension of the data. Autoencoders are a powerful tool for discovering such low-dimensional structures. In this work, we analyze the approximation and generalization errors of autoencoders and their application to operator learning. Our results provide fast rates that depend on the intrinsic dimension of the data set and show that deep neural networks are adaptive to low-dimensional data structures.