Abstract: |
The ridgelet transform was developed to study neural network parameters: it describes how the parameters of a network are distributed. Mathematically, it is defined as a pseudo-inverse operator of the neural network. Namely, given a function $f$ and a network $NN[\gamma]$ with parameter $\gamma$, the ridgelet transform $R[f]$ for the network $NN$ satisfies the reconstruction formula $NN[R[f]]=f$. For depth-2 fully-connected networks on a Euclidean space, the ridgelet transform is known in closed form, so we can describe how the parameters are distributed. However, for many modern neural network architectures, no closed-form expression has been known. In this talk, I will introduce a systematic method for deriving generalized neural networks and their corresponding ridgelet transforms from group-equivariant functions, and present an application to deep neural networks.
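As an illustrative sketch of the depth-2 case (using the standard integral-representation formulation; the activation $\sigma$ and the admissible function $\rho$ are assumptions not specified above), the network and its ridgelet transform may be written as
$$ NN[\gamma](x) = \int_{\mathbb{R}^m \times \mathbb{R}} \gamma(a,b)\, \sigma(a \cdot x - b)\, \mathrm{d}a\, \mathrm{d}b, \qquad R[f](a,b) = \int_{\mathbb{R}^m} f(x)\, \overline{\rho(a \cdot x - b)}\, \mathrm{d}x, $$
so that the reconstruction formula $NN[R[f]] = f$ holds (up to normalization) whenever the pair $(\sigma, \rho)$ satisfies an admissibility condition.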
|