We are working on the following research projects:

The digital manipulation economy.

Did you know that fake social media accounts, likes, shares, and even AI-driven bot armies are for sale? To understand this digital manipulation market, we developed the Cambridge Online Trust and Safety Index (https://cotsi.org/), which tracks the daily price of SMS verifications, a cornerstone of the global manipulation marketplace, across more than 190 countries and over 500 online platforms. In the first paper of this project, published in Science in December 2025, we demonstrated that the price of manipulation is related to, among other things, the price of SIM cards. We are currently developing predictive models of inauthentic online activity, and more broadly working to understand the impact of the digital manipulation market on human behaviour in social environments.

Propaganda and the war in Ukraine.

Russia’s war against Ukraine is now in its thirteenth year, having begun with the 2014 annexation of Crimea and the outbreak of the Donbas War. With the start of the war came a wave of propaganda aimed at convincing Ukrainians, Russians, and people around the world that the Ukrainian nation state is a myth, and that Russia’s historical ties with Ukraine justify its territorial claims. We study this propaganda and its impact from a range of angles. For example, we have examined the history of Russian-Ukrainian relations and the effects of Russia’s propaganda post-2014, both in Dr Roozenbeek’s 2020 PhD dissertation and in his 2024 book Propaganda and Ideology in the Russian-Ukrainian War. We have also looked at how people’s social media sharing behaviour changed after the 2022 invasion, and are working on several ongoing projects, for example on the spread of misinformation and media effects on Telegram.

The social science of generative AI.

As (generative) AI tools become part of our daily routines, understanding how they interact with humans is critical. We have published research on whether AI models have social identity biases similar to humans’, on how to stop AI browser agents from taking part in research surveys and political polls, and on how to align large language models to bridge political divides and improve decision-making. Ongoing work explores AI alignment with pluralistic human values, the design and testing of AI-based interventions, and, more broadly, the use of language models to simulate human psychology and behaviour.

The political psychology of social media.

Why people consume and share content in online environments, and how doing so impacts their beliefs, attitudes, and behaviours, are important scientific and policy questions. To address them, we conduct large-scale analyses of human behaviour and human-computer interaction in social networks. For example, we have found that what people share on social media is not stable over time, but is strongly shaped by political crises and other major news events: while people ordinarily seem to prefer sharing content containing “outgroup hostility” (negative language about disliked outgroups), there is a strong preference for “ingroup solidarity” (positive language about one’s own group) immediately after major political events. We have also written extensively about how people evaluate (mis)information across different countries, how people cluster into like-minded online communities, and whether “unfollowing” toxic or polarising individuals on social networks can reduce polarisation. We are actively conducting studies to understand these phenomena in countries around the world, and to uncover the causal (psychological) mechanisms that drive people’s behaviour in social networks.

Measuring psychological phenomena.

Understanding people’s attitudes, beliefs, and behaviours requires measuring them accurately, which is easier said than done. We are working on several projects to test the validity of psychological measurement tools, including scales used to measure authoritarianism, and have developed new scales for assessing susceptibility to manipulation and belief in misinformation. We are currently investigating whether language models can be used to develop better and more adaptive psychometric scales.

Intervention and implementation science.

Addressing societal challenges is a complex task. We have developed and implemented numerous practical interventions aimed at understanding and countering phenomena such as misinformation (for example the Bad News and Harmony Square games and Google’s “Prebunking” initiative) and the cheap, ubiquitous availability of fake activity online (e.g., the Cambridge Online Trust and Safety Index). We also work on understanding what makes these interventions effective rather than merely efficacious (in other words, how we can take them from the petri dish to the pharmacy) and how to predict their impact in the real world, and we conduct large-scale intervention trials in the lab, in the field, and on social media.