Abstract: |
Most machine learning algorithms are built directly on the concept of expectation. In particular, analyzing and computing the expected value of a loss or utility function lies at the heart of the underlying optimization problem. In most cases, however, a strong assumption is imposed on the data: the samples are required to be i.i.d., while real-world data are often far from satisfying this condition. Indeed, this gap has become a well-known open and major issue in machine learning. In this talk we adopt a more realistic assumption: the data samples only need to be "nonlinearly i.i.d.", i.e., i.i.d. under nonlinear expectations. Under this weaker condition we obtain a highly robust algorithm. The theoretical foundation is the law of large numbers and the central limit theorem in the framework of sublinear expectations.
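To make the idea concrete, here is a minimal, illustrative Python sketch (not taken from the talk) of a max-mean style estimator of the kind suggested by the law of large numbers under sublinear expectations: the data are split into groups, group means are computed, and their maximum and minimum serve as rough estimates of the upper and lower expectations. The function name max_mean_estimate, the group count, and the synthetic data are assumptions made purely for illustration.

```python
import numpy as np

def max_mean_estimate(samples, num_groups=10):
    """Illustrative max-mean sketch: split the sample into groups,
    average within each group, and take the max / min of the group
    means as estimates of the upper and lower expectations."""
    groups = np.array_split(np.asarray(samples, dtype=float), num_groups)
    group_means = np.array([g.mean() for g in groups])
    return group_means.max(), group_means.min()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Data that are not identically distributed: each block is drawn
    # with a different mean, mimicking distribution uncertainty.
    data = np.concatenate([rng.normal(mu, 1.0, 100) for mu in (-0.5, 0.0, 0.5)])
    upper, lower = max_mean_estimate(data, num_groups=15)
    print(f"upper expectation ~ {upper:.3f}, lower expectation ~ {lower:.3f}")
```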