| Abstract: |
| Understanding how humans respond to incentives, both individually and collectively, is central to effective policy design. In the context of stochastic differential games, mean field games (MFGs) are typically used to capture interactions among fully non-cooperative (egocentric) players, whereas mean field control (MFC) models are used to study fully cooperative (altruistic) players. To capture the whole spectrum of behaviors, mixed-individual MFGs introduce a parameterized blend of egocentric and altruistic objectives. However, in practical settings policymakers cannot directly observe intrinsic altruism levels or other private cost parameters, such as the cost of effort. We address this challenge by developing an inverse learning framework for mixed-individual MFGs. We demonstrate the feasibility and accuracy of the method through numerical experiments, showcasing recovery of latent altruism levels under noisy observations. Our results highlight the potential of inverse MFG techniques to infer behavioral structure in large populations, with implications for incentive design and data-driven policy analysis. |
|