Abstract: |
Detecting differences between distributions and building classifiers on them, given only finite samples, are important tasks in a number of scientific fields. Optimal transport (OT) has emerged as one of the most natural ways to measure the distance between distributions and has gained significant importance in machine learning. OT has some drawbacks, however: it can be slow to compute, and it often fails to exploit the reduced complexity that arises when the family of distributions is generated by simple group actions.
If we make no assumptions on the family of distributions, these drawbacks are difficult to overcome. However, when the measures are generated by push-forwards of elementary transformations, forming a low-dimensional submanifold in the Wasserstein space, we can address both issues on a theoretical and computational level. In this talk, we will show how to embed the space of distributions into a Hilbert space via linearized optimal transport, and how linear techniques can be used to classify different families of distributions generated by elementary transformations and perturbations. The proposed framework significantly reduces both the computational effort and the required training data in supervised learning settings. We demonstrate the algorithms on pattern recognition tasks in imaging and provide some medical applications.
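
To illustrate the general idea (this is a minimal sketch, not the speakers' implementation): fix a reference measure, compute the optimal transport plan from the reference to each sample distribution, take its barycentric projection as an approximate Monge map, and feed the flattened displacement fields to an ordinary linear classifier. The sketch below assumes the POT library (`ot`) and scikit-learn; the reference size, toy data, and classifier choice are illustrative assumptions.

    # Sketch of a linearized-OT embedding followed by a linear classifier.
    import numpy as np
    import ot                          # Python Optimal Transport (POT)
    from sklearn.svm import LinearSVC

    def lot_embedding(ref, samples):
        """Embed each point cloud via the barycentric projection of its
        OT plan from a common reference `ref` of shape (m, d)."""
        m = ref.shape[0]
        a = np.full(m, 1.0 / m)                  # uniform weights on reference
        feats = []
        for X in samples:                        # X: (n_i, d) point cloud
            b = np.full(X.shape[0], 1.0 / X.shape[0])
            M = ot.dist(ref, X)                  # squared Euclidean cost matrix
            G = ot.emd(a, b, M)                  # optimal coupling, shape (m, n_i)
            T = (G @ X) / a[:, None]             # barycentric projection ~ Monge map
            feats.append((T - ref).ravel())      # displacement field as feature vector
        return np.vstack(feats)

    # Toy usage: two families of point clouds differing by a translation.
    rng = np.random.default_rng(0)
    ref = rng.normal(size=(64, 2))
    clouds = [rng.normal(loc=(0, 0), size=(80, 2)) for _ in range(20)] + \
             [rng.normal(loc=(2, 0), size=(80, 2)) for _ in range(20)]
    labels = np.array([0] * 20 + [1] * 20)

    F = lot_embedding(ref, clouds)
    clf = LinearSVC(max_iter=10000).fit(F, labels)   # linear separation in embedding space
    print("training accuracy:", clf.score(F, labels))

Because the embedding is computed once per distribution against a single reference, only N transport problems are solved instead of the N^2 pairwise ones needed for a Wasserstein distance matrix.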