A common experimental goal is to detect dependency among multiple time series based on a limited number of observations. Mutual information is an information-theoretic measure that is often used to detect such dependency. Although the mutual information of independent processes is theoretically zero, estimates derived from finite data sets are imprecise and generically nonzero, so one must decide whether a nonzero estimate indicates genuine dependence. A rigorous approach is to use a statistical significance test to assess the null hypothesis that the processes are independent. In this talk, we present a significance test for mutual information that is accurate for finite (small) data sets. The key development is a method for generating and uniformly sampling surrogates from the set of all sequences that exactly match the n-th order properties of the observed data. Examples using coupled chaotic maps demonstrate the effectiveness of the test.
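
The abstract does not spell out the surrogate construction for general n, so the following Python sketch illustrates only the simplest case, n = 1: a random permutation of one series exactly preserves its marginal (first-order) statistics while destroying any coupling to the other series. The function names, the histogram plug-in estimator of mutual information, and the coupled logistic-map demo are illustrative assumptions, not the authors' method.

import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in mutual information estimate (in nats) from a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def mi_significance_test(x, y, n_surrogates=1000, bins=8, seed=0):
    """One-sided surrogate test of the null hypothesis that x and y
    are independent. Each permutation of y exactly preserves its
    first-order statistics while destroying any dependence on x."""
    rng = np.random.default_rng(seed)
    observed = mutual_information(x, y, bins)
    null = np.array([
        mutual_information(x, rng.permutation(y), bins)
        for _ in range(n_surrogates)
    ])
    p_value = (1 + np.sum(null >= observed)) / (1 + n_surrogates)
    return observed, p_value

if __name__ == "__main__":
    # Hypothetical demo: coupled logistic maps, y weakly driven by x.
    f = lambda u: 4 * u * (1 - u)
    n, c = 500, 0.2
    x, y = np.empty(n), np.empty(n)
    x[0], y[0] = 0.4, 0.6
    for t in range(n - 1):
        x[t + 1] = f(x[t])
        y[t + 1] = (1 - c) * f(y[t]) + c * f(x[t])
    mi, p = mi_significance_test(x, y)
    print(f"MI = {mi:.3f} nats, p = {p:.3f}")

Because every surrogate shares the estimator's finite-sample bias, the test compares the observed mutual information against the null distribution rather than against zero, which is what makes the procedure meaningful for small data sets.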