Humans may be alone in the extent to which we can imagine scenarios we've never experienced and evaluate potential futures before they come to be. This capacity for prospection plays an integral role in problem-solving and planning. Forward models that generate rollouts through state space feature prominently in reinforcement learning (e.g., Monte Carlo methods of value-function estimation), but such models typically lack biological plausibility and are often computationally intractable for realistic problems.
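To make the rollout idea concrete, here is a minimal sketch of Monte Carlo value estimation with a forward model. All names (`env_step`, `policy`, `horizon`, etc.) are illustrative, not drawn from any particular framework: the agent repeatedly simulates trajectories from a state and averages the discounted returns.

```python
def rollout_value(env_step, state, policy, gamma=0.99, horizon=50):
    """Estimate the return of `state` by simulating one trajectory
    through a forward model `env_step(state, action) -> (state, reward, done)`."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        action = policy(state)
        state, reward, done = env_step(state, action)
        total += discount * reward
        discount *= gamma
        if done:
            break
    return total

def mc_value(env_step, state, policy, n_rollouts=100):
    """Monte Carlo value estimate: average return over many simulated rollouts."""
    returns = [rollout_value(env_step, state, policy) for _ in range(n_rollouts)]
    return sum(returns) / len(returns)
```

The computational burden mentioned above is visible here: the cost grows with both the rollout horizon and the number of rollouts needed for a low-variance estimate, which is part of why exhaustive simulation is implausible as a model of biological prospection.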
Building on active inference (which sees cognition as a dynamical, surprise-minimizing system), I seek to understand how recruitment of the body enables us not only to learn, but also to learn from, an internal generative model. Specifically, I'm interested in an important open question in cognitive science and artificial intelligence: how are the contours of foresight shaped by embodied interaction? Relatedly, how might the results of imagined trajectories be compressed into useful representations? What techniques help us interrogate our world model to balance epistemic and homeostatic demands? I pursue these questions through two interlinked methodological lines: the development of a novel computational model of prospective mental simulation, and the implementation of behavioral experiments to guide and test its predictions. My planning and forecasting experiments are implemented in VR to enable the capture of fine-grained motion, gaze, and other physiological signals often correlated with latent model-based processes.
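The trade-off between epistemic and homeostatic demands can be illustrated with the expected free energy functional from the active inference literature, which scores a candidate policy by combining information gain about hidden states with the expected fit to preferred observations. The sketch below is a generic discrete-state toy, not my model; the array shapes and variable names are assumptions for illustration.

```python
import numpy as np

def expected_free_energy(q_states, likelihood, log_pref_obs):
    """Score one policy's predicted state distribution under active inference.

    q_states:     predicted hidden-state distribution under the policy, shape (S,)
    likelihood:   observation model p(o|s), shape (O, S), columns sum to 1
    log_pref_obs: log preferences over observations, shape (O,)

    Returns G = -(epistemic value + pragmatic value); lower is better.
    """
    q_obs = likelihood @ q_states  # predicted observation distribution
    # Pragmatic (homeostatic) value: expected log preference of predicted observations.
    pragmatic = q_obs @ log_pref_obs
    # Epistemic value: mutual information I(O; S) under the predictive distribution,
    # i.e. how much an observation is expected to reduce uncertainty about the state.
    h_obs = -np.sum(q_obs * np.log(q_obs + 1e-16))
    h_obs_given_s = -q_states @ np.sum(likelihood * np.log(likelihood + 1e-16), axis=0)
    epistemic = h_obs - h_obs_given_s
    return -(epistemic + pragmatic)
```

Under this score, a policy that promises informative observations is preferred over an uninformative one even when both satisfy preferences equally, which is one formal way of cashing out the epistemic/homeostatic balance described above.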
This work has potential applications in several critical domains. Computational agents leveraging biologically inspired simulation dynamics may be more effective collaborators, able to communicate legible beliefs, risks, and hypothesized futures. Additionally, by better understanding how individuals reason about the future, we might confront our own limitations in preparing for long-term, out-of-distribution phenomena such as climate change.