Laura Globig

Bio:
Dr. Laura Globig studies responsible AI through the lens of reward-driven cognition, asking how the incentives that shape human decision-making extend to our interactions with AI systems, and how they can be redesigned to improve societal outcomes. Her research highlights both risks and opportunities. On the risk side, she shows that because AI provides affirmation without imposing social costs, users may learn distorted social norms that could spill over into human–human contexts. On the opportunity side, she finds that people often prefer AI to human advisors in politically charged domains, suggesting that responsibly designed systems can circumvent identity-based biases and broaden access to balanced information. Her cross-cultural work further underscores that perceptions of AI alignment vary across societies, emphasizing the need for globally sensitive approaches. Together, these projects illustrate how responsible AI can be advanced by understanding and redesigning reward dynamics to promote accuracy, prosociality, and equity.

Abstract:
When seeking information, people consider not only what to learn but also whom to learn it from. Prior research shows that source selection is shaped not only by accuracy but also by social identity: individuals actively discount outgroup sources and prefer ingroup sources, constraining information acquisition and narrowing exposure. This bias represents a critical barrier to informed decision-making and demands new solutions.

The rapidly evolving information ecosystem may provide one such solution. Whereas information used to be transmitted exclusively by humans—whether in person or via media—it is now increasingly supplemented by artificial intelligence (AI). 

Across three studies, we investigated how AI interacts with source selection preferences and modeled the mechanisms underlying these decisions. We find that individuals prefer seeking information from AI rather than from humans, particularly when the latter belong to their outgroup. Strikingly, they also prefer AI over ingroup sources when tasks are identity-relevant, suggesting awareness of potential human biases. Computational modeling indicates that this preference reflects a process bias rather than a prior inclination toward AI.

These findings highlight the promise of AI systems, when designed and deployed responsibly, to mitigate identity-driven distortions in information seeking and facilitate more balanced knowledge acquisition.