My research focuses on how our brains anticipate future events, make decisions under uncertainty, and learn from experience. I combine large-scale neural data analysis, behavioral analysis, and computational modeling to study how distributed brain networks support goal-directed behavior. Applying machine learning, dynamical systems theory, information theory, reinforcement learning, and statistical modeling, I investigate how complex biological systems process information, adapt to changing environments, and optimize their actions. Beyond fundamental neuroscience, my work connects to broader questions in artificial intelligence, adaptive systems, and the modeling of decision processes.
I studied how premotor and posterior parietal areas support anticipatory movements. Using large-scale simultaneous neural recordings together with dynamical systems and advanced statistical analyses, I examined how the brain anticipates future actions once environmental structure becomes predictable.
I investigated how premotor circuits represent uncertainty during social interaction. Using information-theoretic analyses of neural population activity, I studied how cooperative decision strategies emerge when agents must coordinate their actions to maximize reward.
I examined how the brain evaluates information during trial-and-error learning. By combining reinforcement learning models with behavioral and neural data, I investigated how frontal and cingulate regions encode information gain as choices and outcomes unfold.