Postdoctoral researcher in AI/multi-agent modelling of the dynamics of disinformation in social media (M/F/X)

The Amsterdam AI, Media and Democracy Lab

Artificial Intelligence (AI) is expected to play a crucial role in the future of social media. AI can contribute to new ways of informing and engaging with citizens, but to achieve this goal it must address the pressing problems presented by the spread of disinformation, polarisation and fake news.
The Netherlands AI, Media and Democracy Lab (AI4DEM)
https://www.aim4dem.nl/ aims to model how rapid developments in AI will transform the media and democracy landscape. AI4DEM was set up as an interdisciplinary collaboration between three top academic institutions in the Amsterdam area (UvA, HvA and CWI), together with many companies, media organisations and societal partners.
The Intelligent and Autonomous Systems group at CWI

The proposed position will be based in the Intelligent and Autonomous Systems (IAS) group at Centrum Wiskunde & Informatica (CWI). Based at Science Park, Amsterdam, CWI is the national research institute for Mathematics and Computer Science in the Netherlands. The IAS group (https://www.cwi.nl/en/groups/intelligent-and-autonomous-systems/) studies distributed intelligence and autonomy in complex cyber-physical systems, and applies these to concrete areas of societal relevance, including smart energy systems, distributed logistics, financial markets and online social networks. IAS researchers have extensive experience in areas such as complex networks, multi-agent system design, automated markets, algorithmic game theory and automated negotiation.
Background problem for the postdoc position

The postdoctoral researcher will work closely with staff researchers in the IAS group and across the AI4DEM consortium, focusing on AI/multi-agent models for the dynamics and prevention of disinformation and polarisation. This involves studying complex social networks, formed of both humans and automated agents, and modelling how agents in the network can be influenced by the spread of fake news and, in turn, influence others. Such interactions can give rise to complex system dynamics, for example "cascade effects" in which a particular piece of disinformation spreads rapidly through a social network. The work also involves studying how different parameters influence such dynamics, and how game-theoretic methods can be designed to prevent the spread of disinformation in social networks.
While disinformation has always been a problem in social media and online news, recent advances in large language models (LLMs) have brought increasing urgency to addressing these challenges. The ability to effortlessly generate vast amounts of content, both informative and persuasive, can transform media dynamics drastically. By posing as human users or content creators, AI agents can, for example, make users believe misleading or false news, or lead to the creation of filter bubbles in which users' own biases are reinforced. Moreover, they can also corrupt online decision-making or voting systems by creating the impression that some biased point of view is more popular or accepted than it really is.
Relevant research topics

Some specific directions that may be relevant for this position include:
- Multi-agent models for the spread of disinformation and polarisation on online media platforms. Such models can capture how individual agent behaviours lead to complex effects, such as cascade dynamics in the spread of disinformation through social networks, or polarisation (e.g. Schelling-style segregation models).
- Game-theoretic methods for incentivising agents in social networks. New methods from algorithmic game theory and mechanism design can be employed to reward truthful information sharing, and to identify and penalise agents that aim to influence others by spreading disinformation.
- Combining machine learning, network science and game theory to construct models that explain the dynamics of opinion formation and the spread of (dis)information in social networks, especially when populated by both humans and LLM-based agents impersonating humans.
- Study of the dynamics of large online deliberation and decision-making platforms in the presence of strategic and potentially malicious agents. Here, we aim to develop links with the international multi-agent research community, such as the subcommunity working on the International Computational Social Choice Competition: https://compsoc.algocratic.org/
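To give a flavour of the first research direction above, the sketch below simulates the classic independent cascade model, a standard baseline for how an item of (dis)information can spread through a network. This is purely illustrative and not part of the project description; the toy graph, the single activation probability and the function names are assumptions made for the example.

```python
import random

def independent_cascade(graph, seeds, prob, rng=None):
    """One run of the independent cascade model.

    graph: dict mapping each node to a list of neighbours.
    seeds: nodes that initially share the (dis)information item.
    prob:  probability that an active node convinces a neighbour
           (a single global value, for simplicity).
    Returns the set of nodes that ended up sharing the item.
    """
    rng = rng or random.Random()
    active = set(seeds)      # nodes that have shared the item
    frontier = list(seeds)   # newly activated nodes, still influencing
    while frontier:
        next_frontier = []
        for node in frontier:
            for neigh in graph[node]:
                # each activation attempt succeeds independently
                if neigh not in active and rng.random() < prob:
                    active.add(neigh)
                    next_frontier.append(neigh)
        frontier = next_frontier
    return active

# Toy network: a ring of 10 users plus one "long-range" link,
# seeded with a single user who posts the item.
graph = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
graph[0].append(5)

spread = independent_cascade(graph, seeds={0}, prob=0.9,
                             rng=random.Random(42))
print(f"{len(spread)} of 10 users reached")
```

Running such a simulation many times over different network topologies and probabilities is one simple way to study when local sharing decisions tip over into the network-wide cascades mentioned above.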