Atomistic simulation is now routinely used to shed light on the atomic-scale mechanisms that underlie experimentally observed phenomena. This approach is useful because statistical mechanics tells us that the probability with which a system adopts a particular microscopic configuration is related to the free energy of the corresponding state. This relationship is both a blessing and a curse: on the one hand, the ability to calculate free energies is what makes these simulations useful; on the other, because the system is unlikely to adopt the high-energy configurations at the tops of energetic barriers, it is also unlikely to cross such barriers during a short simulation. A large number of algorithms have been put forward to solve this problem, many of which are based on adding a bias that forces the system to adopt particular configurations. There are many ways in which this bias can be generated, but for the most part it is constructed as a function of a small number of collective variables (CVs). In the majority of applications of these methods, CVs are selected based on what the simulator knows about the chemical reaction or physical process under study, which works well in many cases but rather poorly in others. Over the past few years we have therefore tried to develop an alternative approach, in which one builds a low-dimensional representation that describes the spatial relationships between frames taken from a short trajectory. This representation is constructed using an algorithm called sketch-map, which is itself based on the multidimensional scaling algorithm. In this talk I will explain how we adaptively construct a bias as a function of such collective coordinates and how such biases can be used to enhance sampling. I will then provide examples where this technique has been used to enhance the sampling on a model potential energy surface and to bias the sampling of a short protein model.
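To make the barrier-crossing problem and the idea of an adaptively constructed bias concrete, the sketch below runs overdamped Langevin dynamics on a hypothetical one-dimensional double-well potential and periodically deposits Gaussian hills at the visited position, metadynamics-style. This is an illustrative toy, not the sketch-map-based method described in the abstract; all function names and parameter values (hill height, width, deposition stride) are assumptions chosen for the example.

```python
import math
import random

# Hypothetical 1D model potential: a double well, V(x) = (x^2 - 1)^2,
# with minima at x = -1 and x = +1 and a barrier of height 1 at x = 0.
def potential(x):
    return (x * x - 1.0) ** 2

def force(x):
    # -dV/dx
    return -4.0 * x * (x * x - 1.0)

class GaussianBias:
    """Adaptive bias built as a sum of Gaussian hills (metadynamics-style)."""

    def __init__(self, height=0.15, width=0.2):
        self.centers = []
        self.height = height
        self.width = width

    def energy(self, x):
        return sum(self.height * math.exp(-((x - c) ** 2) / (2 * self.width ** 2))
                   for c in self.centers)

    def force(self, x):
        # -d(bias)/dx: pushes the walker away from previously visited centers.
        return sum(self.height * (x - c) / self.width ** 2
                   * math.exp(-((x - c) ** 2) / (2 * self.width ** 2))
                   for c in self.centers)

    def deposit(self, x):
        self.centers.append(x)

def run(steps=20000, dt=5e-4, kT=0.1, deposit_every=200, seed=1):
    """Overdamped Langevin dynamics with a periodically updated bias."""
    random.seed(seed)
    x = -1.0  # start in the left well
    bias = GaussianBias()
    noise = math.sqrt(2.0 * kT * dt)
    crossed = False
    for i in range(steps):
        f = force(x) + bias.force(x)
        x += dt * f + noise * random.gauss(0.0, 1.0)
        if i % deposit_every == 0:
            bias.deposit(x)
        if x > 0.8:  # reached the right well
            crossed = True
    return crossed, bias
```

At kT = 0.1 the unbiased walker essentially never crosses the barrier of height 1 on this timescale; as hills accumulate in the starting well they flatten it and drive the system over the barrier, which is the essential mechanism the abstract's bias construction exploits.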