PROJECT SUMMARY

Deficits in spatial navigation are associated with a number of mental disorders, including anxiety, autism, depression, and schizophrenia. Previous work investigating these deficits has relied on relatively coarse behavioral measures, such as error rates and completion times, whose relation to the underlying neural computations is unclear. Moreover, by using desktop Virtual Reality (VR) tasks, in which participants navigate a virtual world while sitting at a computer, many spatial navigation experiments ignore one of the defining features of spatial navigation: that it involves movement of the body through space. Because of these limitations in measurement and ecological validity, the computational mechanisms underlying spatial navigation deficits in mental illness are poorly understood.

The objective of this proposal is to develop and test new computational models and behavioral paradigms that can pick apart the cognitive computations underlying spatial navigation. Our central hypothesis is that people perform spatial navigation tasks using a mixture of two interacting processes: Path Integration, based on integrating body-based cues about one's own rotations and translations (e.g., sensory inputs about the movement of one's limbs), and Landmark Navigation, based on processing environmental cues (e.g., visual landmarks). Work in this grant will develop computational models of both processes, including how estimates from the two processes are combined. To test these models, we make use of a new technology known as immersive VR. In immersive VR, a virtual environment is rendered on a head-mounted display while participants walk freely in a real room. Participants thus experience body-based cues identical to those of real navigation, but with visual cues that are under experimental control. Using immersive VR, we will create two behavioral tasks in which participants either turn (Rotation Task) or walk (Translation Task) to a remembered heading or location in the dark.
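Path Integration, as described above, is often formalized as dead reckoning: the navigator maintains an internal position and heading estimate by accumulating self-motion signals. A minimal sketch of that computation, with purely illustrative inputs (the proposal's actual models are to be developed):

```python
import math

def path_integrate(steps, heading=0.0, x=0.0, y=0.0):
    """Integrate a sequence of (turn_radians, distance) self-motion cues.

    Each step first rotates the internal heading estimate by the body-based
    turn signal, then translates the position estimate along that heading.
    """
    for turn, dist in steps:
        heading += turn                 # update heading from rotation cue
        x += dist * math.cos(heading)   # update position from translation cue
        y += dist * math.sin(heading)
    return x, y, heading

# Illustrative check: walking a 1-m square with three 90-degree left turns
# brings the integrated position estimate back to the starting point.
x, y, h = path_integrate([(0.0, 1.0), (math.pi / 2, 1.0),
                          (math.pi / 2, 1.0), (math.pi / 2, 1.0)])
```

In the Rotation and Translation Tasks, errors in the remembered heading or location would correspond to noise accumulating in exactly these rotation and translation updates.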
In the No Feedback condition (Aim 1), participants will complete these movements with no visual feedback, allowing us to quantify Path Integration. In the Feedback condition (Aim 2), participants will receive a brief flash (300 ms) of visual information that is potentially offset relative to the true location, allowing us to test how they combine Path Integration with Landmark Navigation from visual cues. Finally, in both experiments we will test whether performance is predictive of more general spatial navigation ability, using measures of real-world navigation ability (Aim 3).

The innovation in our proposal lies both in our experiments, which leverage new immersive VR technology to induce mismatch between the Path Integration and Landmark Navigation systems, and in our models, which provide precise, concise, and quantitative descriptions of behavior. Beyond the foundational work in this grant establishing the paradigms and models, our experiments are readily translatable to patients and animals, allowing for insight into the clinical implications and neural mechanisms of deficits in spatial navigation.
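One standard candidate model for how the two estimates might be combined in the Feedback condition is reliability-weighted (Bayesian) cue combination, in which each cue is weighted by its inverse variance. A minimal sketch, assuming independent Gaussian noise on each cue; the specific numbers below are hypothetical, not results:

```python
def combine_cues(mu_pi, var_pi, mu_lm, var_lm):
    """Reliability-weighted average of two independent Gaussian estimates.

    mu_pi, var_pi: estimate and variance from Path Integration.
    mu_lm, var_lm: estimate and variance from Landmark Navigation
    (e.g., from the brief visual flash).
    """
    w_pi = (1.0 / var_pi) / (1.0 / var_pi + 1.0 / var_lm)
    mu = w_pi * mu_pi + (1.0 - w_pi) * mu_lm      # combined estimate
    var = 1.0 / (1.0 / var_pi + 1.0 / var_lm)     # combined variance
    return mu, var

# Illustrative case: a noisy Path Integration estimate of 90 degrees and a
# more precise landmark flash at 100 degrees; the combined estimate is
# pulled toward the more reliable landmark cue.
mu, var = combine_cues(90.0, 16.0, 100.0, 4.0)
```

Under this model, the experimentally induced offset between the flash and the true location lets the relative weighting of the two systems be read off directly from where participants' responses land between the two cues.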